Brett Ineson, President and CTO of Animatrik Film Design, on how the evolution of motion and performance capture affects storytelling, from Hollywood blockbusters to AAA games and immersive metaverse experiences.
Performance capture technology is transforming the way characters are brought to the screen. The data captured is getting more detailed all the time, enabling more of an actor’s original performance to translate to film, TV, games, and immersive metaverse experiences.
Motion capture pioneers like Andy Serkis and Terry Notary have been bringing memorable characters to life for some time now, with Serkis’ portrayal of Gollum in 2002’s The Lord of the Rings: The Two Towers rightly considered a groundbreaking moment for the technology. But both body and facial capture have evolved significantly in the two decades since. Experiments in the field brought us CG characters who stole the show, from the distinctive aliens of the Oscar-nominated District 9 to the bounty hunter Rocket Raccoon.
And it’s this advancement of technology that enables much of modern-day cinema. James Cameron famously held off from making Avatar: The Way of Water until the tools had caught up with his vision for the film, including groundbreaking underwater capture techniques.
In the past, while an actor could puppeteer a digital character through mocap, the movement of the 3D asset was largely down to VFX teams, often building on the look created by physical prosthetics. But with the latest technology, it’s possible to capture more nuanced movements for a more realistic portrayal, while transposing likeness has become easier thanks to more accurate data.
We’ve come a long way since Neill Blomkamp’s cult classic District 9 hit cinemas. Its alien protagonist was brought to life by capturing the performance of actor Jason Cope — months after location filming had wrapped — and syncing it with Sharlto Copley’s portrayal of Wikus Van De Merwe, so that the transition from human to alien appeared seamless on screen.
Unlike the traditional workflow, where each piece of the performance was captured separately and then blended together, the latest technology enables the recording of body movements along with facial and voice capture for a level of synchronization that wasn’t previously possible.
The evolution of motion and performance capture brought more than just pinpoint accuracy — it transformed workflows to the extent that now it’s possible to truly replicate human performance in a digital character. This means that more of the actor’s original performance reaches the audience, without as much need for prosthetics or VFX to fill the gaps. So actors can now appear on screen as characters with completely different physical characteristics, delivering some of the most memorable performances that go far beyond just lending their voice to the role. Just look at Thanos, Caesar, or Smaug.
From film and TV to games, characters are known for their signature moves just as they are for their famous lines. The latest performance capture technology enables creators to harness the subtleties in motion that actors bring to their roles.
Digital characters need to be believable rather than a jarring presence on screen, especially when they’re interacting with real-world characters and sets. As a result, casting for mocap-based roles is becoming a niche of its own.
The latest developments have opened up new possibilities, including de-aging actors, such as Harrison Ford in the latest Indiana Jones adventure, or even resurrecting actors who are no longer alive, as in Rogue One: A Star Wars Story. From origin stories to long-awaited sequels, this means that actors aging out of their roles no longer puts limits on storytelling — creators have power over time itself.
And beyond cinema, the ability to accurately recreate the motion of well-known characters brings opportunities for licensing digital doubles and their signature moves in the metaverse. The popularity of virtual concerts and real-time experiences featuring world-class performers like Ariana Grande and Justin Bieber shows a real appetite for these types of events, which are accessible to a much wider audience. But the level of immersion that can be achieved hinges on data — how much, how fast, and how accurate it is.
Rich synchronized data is the key to creating digital characters, helping actors and creators shape the future of on-screen storytelling through movement. Performance capture technology is all about bringing the subtleties of the actor’s performance to the screen, and as it continues to evolve, creators can keep pushing the boundaries of what’s possible.