Disguise and Cuebric reckon it can, and they have a new AI solution that they say can generate plug-and-play virtual production environments with almost indecent speed.
Essentially, what's happened here is that Cuebric has been integrated with the Disguise platform. This means creatives can now use AI to create the shape and depth of 2.5D environments and then import them into Disguise. The result, says the company, is a plug-and-play scene that can be up and running on an LED stage in, as the headline says, under three minutes.
So, 2.5D. To create 2.5D scenes, users can either add purely generative content or import images from elsewhere into the Cuebric platform. Cuebric then uses AI rotoscoping and inpainting to segment the images into layers, transforming them from 2D to 2.5D based on the depth of objects. Once the Cuebric 2.5D scenes are imported into Disguise’s Designer software, each layer is depth-mapped with an auto-generated mesh. This means the plates are not limited to flat planes: 3D shapes can be built into each of them, resulting in a more realistic parallax effect that holds up no matter how the camera moves on set.
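Neither company documents its internals, but the broad idea of depth-based layering is easy to sketch. The snippet below is a minimal, hypothetical illustration in Python/NumPy: it bins a per-pixel depth map (the kind a monocular depth estimator or a segmentation pass would produce) into a handful of RGBA plates, then scales each plate's on-screen shift by its distance so near plates move more than far ones. All function and parameter names are invented for illustration; this is not Cuebric's or Disguise's actual pipeline.

```python
import numpy as np

def split_into_depth_layers(image, depth, n_layers=3):
    """Bin a depth map (0.0 = near, 1.0 = far) into n_layers slices
    and cut the image into RGBA plates, one plate per slice."""
    edges = np.linspace(0.0, 1.0, n_layers + 1)
    layers = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # Include the far edge in the last bin so every pixel lands somewhere.
        mask = (depth >= lo) & ((depth < hi) | (i == n_layers - 1))
        alpha = (mask * 255).astype(np.uint8)          # cut-out matte for this slice
        rgba = np.dstack([image, alpha])               # (H, W, 4) plate
        mean_depth = float(depth[mask].mean()) if mask.any() else (lo + hi) / 2
        layers.append((rgba, mean_depth))
    # Composite far plates first, near plates last.
    return sorted(layers, key=lambda layer: -layer[1])

def parallax_shift(mean_depth, camera_dx, focal=1.0):
    """Horizontal pixel shift for a plate at mean_depth when the camera
    translates by camera_dx: nearer plates shift more than distant ones."""
    return focal * camera_dx / max(mean_depth, 1e-3)

# Toy example: a 4x4 grey image whose left half is near, right half far.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
depth = np.full((4, 4), 0.2)
depth[:, 2:] = 0.9
for rgba, d in split_into_depth_layers(img, depth, n_layers=2):
    print(f"plate at depth {d:.2f} shifts {parallax_shift(d, 10.0):.1f}px")
```

The depth-weighted shift is what sells the illusion: when the on-set camera translates, nearby plates sweep across the frame faster than the background, just as real foreground objects would. Disguise's auto-generated meshes go a step further than these flat cards by giving each plate its own relief.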
As well as faster setup, the integration allows for faster iteration. Using Disguise and Cuebric, users can easily make changes to virtual environments during production, allowing for a more dynamic and creative process right where it's needed most: on set.
“Real-time environments look spectacular on-camera, yet often require many hours of artistic and technical build,” says Addy Ghani, VP of Virtual Production at Disguise. “Thanks to our partnership with Cuebric, there’s now another option. Using generative AI, artists can build 2.5D plates, helping them go from concept to camera in minutes so they can tell unique stories in a way that works for them.”