The Cioni brothers’ Strada, an AI-enabled, browser-based cloud platform that looks to revolutionize creative workflows, edges closer to general release.
We've talked about Strada before, and later today (9.00 am PT, 5.00 pm GMT) it's officially launching its beta program with an announcement on YouTube that you can register for here.
"Let's keep doing more of the same" is a slogan least likely to be adopted by the Cioni brothers' post-production startup, Strada. The carefully chosen name means "Street" in Italian. It hints at a structure that is not about stationary objects but about moving things around, connecting and facilitating productive activities, and delivering results. Whatever else it is, it's definitely not doing things the same old way.
And that's entirely appropriate for the current era. With rates of change completely off the scale, the only way to build a new company in the content creation business is to take flexibility as the central tenet. And you won't find anything more flexible than the cloud.
Strada is crafting a new platform. "Platform" is a vague term, but in essence it means a set of foundations you can build on. The new company is building its platform in the cloud and, crucially, will use this structure to harness AI's seemingly endless new capabilities. It is a next-gen platform built with next-gen technology.
Strada is massively parallel, thanks to the cloud. You can think of it as an almost unlimited number of simultaneous processes helping creators shoot, edit, and finish their productions. Inevitably, these processes talk to each other, but Strada's underlying orchestration just "gets things done." To users, every connected (and authorized!) device - smartphone, tablet, laptop - is a "window" onto the process. What you see through that window is your choice, but whichever window you look through will always be up to date.
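To make that "window" idea a little more concrete, here is a minimal sketch assuming a simple publish/subscribe channel. The types, the WebSocket endpoint and the field names are invented for illustration; this is not Strada's actual API.

```typescript
// Sketch: clients subscribe to shared project state and re-render whenever it
// changes, so every device is a "window" onto the same, always-current state.
// All names here (ProjectState, the endpoint, etc.) are hypothetical.

interface ProjectState {
  revision: number; // monotonically increasing version of the project state
  clips: { id: string; name: string; status: "uploading" | "ready" }[];
}

type Listener = (state: ProjectState) => void;

class ProjectWindow {
  private socket: WebSocket;
  private listeners: Listener[] = [];
  private latest?: ProjectState;

  constructor(projectId: string) {
    // Hypothetical endpoint; any realtime channel (WebSocket, SSE, WebRTC) would do.
    this.socket = new WebSocket(`wss://example.invalid/projects/${projectId}`);
    this.socket.onmessage = (event) => {
      const incoming: ProjectState = JSON.parse(event.data);
      // Ignore stale updates so every "window" converges on the newest revision.
      if (!this.latest || incoming.revision > this.latest.revision) {
        this.latest = incoming;
        this.listeners.forEach((fn) => fn(incoming));
      }
    };
  }

  // Each device registers its own render callback; a phone, tablet and laptop
  // can all show different views of the same underlying state.
  onChange(fn: Listener): void {
    this.listeners.push(fn);
    if (this.latest) fn(this.latest);
  }
}

// Usage: a laptop might render a full bin view, a phone just a progress badge.
const projectView = new ProjectWindow("demo-project");
projectView.onChange((state) => {
  const ready = state.clips.filter((c) => c.status === "ready").length;
  console.log(`rev ${state.revision}: ${ready}/${state.clips.length} clips ready`);
});
```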
The whole point is that it's bigger than the individual processes that it's made from. It's like a universal viewer on an intelligent, optimized production ecosystem. And how is all of this tied together into a cohesive whole? Not, perhaps, the way you'd think.
That's because Strada looks for how media is related within a project. It might be timecode or filenames, but equally, and increasingly likely in the future, it looks for correlations: conceptual relationships that, taken together, build a more complete picture of the project as a whole.
All of which sounds eye-squintingly abstract. So, let’s look at a concrete example.
In this example, the main character in the film wants to be able to speak both English and his native Armenian. Switching between languages adds to the film's color and context, and it's easier to relate anecdotes in their original tongue. But it's a potential nightmare to edit.
Strada already has phenomenal multi-camera chops. It can play several simultaneous views without rendering multiple camera angles together into a single clip. You might think there's nothing new there, but in an astonishing demonstration of what's going on under the hood, and only a hint at what will be possible in the future, Strada has automatically transcribed and translated the conversation. The only thing syncing the clips together is the dialogue itself. This is huge.
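To see roughly how dialogue alone can line up two camera angles, here is a toy sketch assuming each clip already has a word-level transcript with timestamps. It illustrates the general idea, not Strada's actual algorithm.

```typescript
// Toy illustration: if both angles have word-level transcripts, the time offset
// between the same spoken words gives the alignment - no timecode or filenames.

interface Word {
  text: string;
  time: number; // seconds from the start of the clip
}

function estimateOffset(a: Word[], b: Word[]): number | null {
  // Index clip B's words by text so matches are cheap to find.
  const bByText = new Map<string, number[]>();
  for (const w of b) {
    const key = w.text.toLowerCase();
    if (!bByText.has(key)) bByText.set(key, []);
    bByText.get(key)!.push(w.time);
  }

  // Collect an offset wherever the same word appears in both transcripts.
  const offsets: number[] = [];
  for (const w of a) {
    const matches = bByText.get(w.text.toLowerCase());
    if (matches && matches.length > 0) {
      offsets.push(matches[0] - w.time); // naive: first match only
    }
  }
  if (offsets.length === 0) return null;

  // The median offset is robust to a few false word matches.
  offsets.sort((x, y) => x - y);
  return offsets[Math.floor(offsets.length / 2)];
}

// Example: camera B started recording about 2.5 s after camera A.
const camA: Word[] = [
  { text: "welcome", time: 10.0 },
  { text: "to", time: 10.4 },
  { text: "Yerevan", time: 10.8 },
];
const camB: Word[] = [
  { text: "welcome", time: 7.5 },
  { text: "to", time: 7.9 },
  { text: "Yerevan", time: 8.3 },
];
console.log(estimateOffset(camA, camB)); // ≈ -2.5 → shift camera B by 2.5 s
```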
It's important because it shows how it's possible to orchestrate complex, multiple elements within a project into a cohesive and manageable whole. And this is just the start, because AI is at the heart of Strada's platform. It's not the generative AI we've come to know over the last few months, but AI that's good at the tedious, repetitive, task-oriented stuff - the stuff we'd rather not have to do. Before long (and to some extent, already), AI will work not only on dialogue but also on the visual content of footage. And that's important for several reasons that will define the future of post-production.
Lift the hood on a conventional NLE, and what you see is a database: strictly managed tables of clips and their external properties like file names, length, frame rate, and, if you're lucky, a manually entered content description. Strada's approach is different and will increasingly use the intrinsic properties of content to allow post-production professionals to organize and ultimately craft their media into valuable artistic work.
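To illustrate the contrast, here is a small hypothetical sketch: a conventional clip record holds only external properties, while a content-aware record adds fields derived from the media itself, which can then be searched directly. All field names are invented for illustration, not Strada's schema.

```typescript
// External properties only: roughly what a traditional clip database stores.
interface ClipRow {
  fileName: string;
  durationSeconds: number;
  frameRate: number;
  description?: string; // manually entered, if you're lucky
}

// Intrinsic, content-derived properties layered on top (hypothetical shape).
interface ContentAwareClip extends ClipRow {
  transcript: string;        // from speech-to-text
  spokenLanguages: string[]; // e.g. ["en", "hy"]
  detectedLabels: string[];  // e.g. ["interior", "two people", "kitchen"]
}

// Search by what's *in* the footage rather than what the file is called.
function findByContent(clips: ContentAwareClip[], query: string): ContentAwareClip[] {
  const q = query.toLowerCase();
  return clips.filter(
    (c) =>
      c.transcript.toLowerCase().includes(q) ||
      c.detectedLabels.some((label) => label.toLowerCase().includes(q))
  );
}

const library: ContentAwareClip[] = [
  {
    fileName: "A003_C012.mov",
    durationSeconds: 94,
    frameRate: 23.976,
    transcript: "My grandmother told this story every winter...",
    spokenLanguages: ["hy", "en"],
    detectedLabels: ["interior", "kitchen", "two people"],
  },
];
console.log(findByContent(library, "grandmother").map((c) => c.fileName));
```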
We spoke to Peter and Michael Cioni before Strada's Beta launch.
"What's coming online is that we're coupling AI and workflow together in a totally new way that people haven't done before,” said Michael, making the point that Strada is made for the AI era: AI isn't "bolted on," but is part of the product's fabric.
The keystones of Strada's platform - the cloud, artificial intelligence, automated media workflows and so on - have become familiar over the last decade, but it is only recently that, collectively, they have reached the threshold where real-time cloud production is not just possible but fast. One essential part of this has taken place under our noses: the incredible power of the browser as a virtualized computing platform. The "write once, run anywhere" essence of browser computing brings superpowers to savvy digital media developers. Essentially, today's browsers sever the remaining ties between video editing applications and traditional, physical computers.
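As a small, generic demonstration of how capable the browser has become as a media platform, the following sketch decodes a frame from a video file and analyzes its pixels using only standard web APIs. It illustrates browser-based media processing in general, not Strada's code, and "clip.mp4" is a placeholder.

```typescript
// Decode a frame and measure its average brightness entirely in the browser,
// with no native application involved - just standard <video> and <canvas> APIs.

async function averageBrightness(src: string): Promise<number> {
  const video = document.createElement("video");
  video.crossOrigin = "anonymous";
  video.src = src;
  video.muted = true;

  // Wait until the first frame is available to draw.
  await new Promise<void>((resolve, reject) => {
    video.addEventListener("loadeddata", () => resolve(), { once: true });
    video.addEventListener("error", () => reject(new Error("decode failed")), { once: true });
  });

  // Draw the current frame into a canvas and read its pixels back.
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("no 2D context");
  ctx.drawImage(video, 0, 0);

  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  let sum = 0;
  for (let i = 0; i < data.length; i += 4) {
    // Rough luma from the RGBA bytes.
    sum += 0.2126 * data[i] + 0.7152 * data[i + 1] + 0.0722 * data[i + 2];
  }
  return sum / (data.length / 4);
}

averageBrightness("clip.mp4").then((luma) => console.log("average luma:", luma));
```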
Michael again: "We're probably the first people ever to have multi-asset sync in the cloud in a web browser. We can even provide log files for viewing media in a browser correctly. That's a big deal! And it shows how far browser technology has come. We're doing color, analytics, and transcription all in the cloud. What we're starting to do is make the traditional NLE technology stack look very limited. People said you can't do this in a browser, but that's precisely what we're doing."
Strada's Beta launch is in two phases. The first, happening today, is targeted at a small group motivated to get their hands on a very early version and willing to provide detailed feedback. The second release - to be announced - will be a public one where anyone can participate.
Keep watching Strada. Over the next few years - or perhaps even months - it may well change how you think about media production.