CGI and VFX transformed media; now generative AI is doing the same
by Team Papercup
September 15, 2023
4 min read
Generative AI isn’t new, but technological leaps in the last decade have culminated in a wholesale explosion of interest and adoption in 2023. Its trajectory is not unlike that of CGI and VFX, which, when they burst onto the scene (in flames, as oversized reptiles and slow-motion bullets), transformed storytelling possibilities in the 1980s and 90s. Now generative AI, with its ability to produce new text, speech, images and video by running machine learning models trained on large data sets, is doing something similar. AI dubbing, automatic video segmentation, and NeRFs (Neural Radiance Fields), which generate 3D scenes from 2D images, are the technological game-changers of the new age.
Just as CGI and VFX were received with varying degrees of understanding, excitement and skepticism, so too is generative AI. But if the creative industries’ total integration of the former is anything to go by, the future of the latter is promising. And the parallels don’t end there: both CGI and generative AI require humans to be the arbiters of what actually works and what doesn’t, even in the realm of make-believe, and both have created entirely new industries that employ millions of people to do so.

Why now?

The media and entertainment sector’s renewed focus on AI has been largely driven by a series of seismic changes in the industry. Remember when Netflix lost subscribers in Q2 of 2022, spooked its investors and sent its stock plummeting? Although Netflix has more than regained that ground since, its Q2 ‘22 stock drop precipitated a general shift in focus from subscriber numbers to revenue, and the fallout sent shock waves through the industry. The media titans tightened their budgets, reorganised departments, hired or re-hired CEOs and canned completed shows that didn’t promise good returns. Then came the streaming race to produce as much good original content as possible, and the talent crunch that ensued. All this is to say that generative AI, with its promise of increased efficiency, suddenly went from ‘interesting’ to ‘essential’.

The world of CGI developed similarly. When Alfred Hitchcock made Vertigo in 1958, he felt that for audiences to really get inside the mind of his tortured protagonist, something more than the standard effects was needed. Essentially, Hitchcock’s creative ambition had exceeded human capability. Computer-generated images stepped in, and Vertigo’s trippy credits became the first instance of CGI in film.

CGI and VFX snowballed. The seventies and eighties gave us The Exorcist’s spinning head of nightmares and the futuristic cityscapes of Blade Runner. The nineties yielded classics like Jurassic Park and Terminator 2: Judgment Day. The success of these films cemented CGI’s place in film-making and paved the way for its application across different genres: Pixar’s Toy Story, Avatar and Alfonso Cuarón’s Gravity.

AI and VFX faced similar challenges

This illustrious IMDb history of CGI gives a false impression of how it was received back in the day. It was widely criticised as a threat to established roles in the arts, per Bryan Curtis in the New Yorker, January 2016:

“It wasn’t until 1993, with Jurassic Park, that practical effects and CGI came into mortal conflict. Steven Spielberg dispatched two teams to figure out how to create realistic-looking dinosaurs. One team was to use go-motion animation — a technique that dates to the nineteen-twenties — and another was to use CGI. Some months later, the CGI group presented Spielberg with footage of a tyrannosaur marauding across the screen. George Lucas wept. The effects man Phil Tippett, who was leading the go-motion team, said, ‘I think I’m extinct.’”

Except Tippett wasn’t extinct. Instead, his creative role evolved, and he went on to be responsible for some of modern entertainment’s most recognisable CGI creations. Both generative AI and CGI have been criticised for eclipsing creative roles, rather than being viewed as tools or mediums that allow artists to explore new creative terrain. The existential furore that currently surrounds AI and the creative industries is likely to give way to an understanding that AI tools can augment capabilities rather than eclipse them, as well as democratise access to top-tier tools that were previously out of reach for creatives because of cost or scarcity.

Runway ML, for example, is a suite of AI tools for artists, filmmakers and creators, powered by models trained on large quantities of video data. As laid out in this Variety article: “VFX artists have been using Runway to complete manual tasks that previously took days and costly professional software and equipment. For example, its inpainting tool removes objects or backgrounds in the video editing process.”
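
Runway’s models are proprietary, but the core idea of inpainting (reconstructing masked pixels from their surroundings) can be sketched with classical tools. The snippet below runs OpenCV’s cv2.inpaint over every frame of a clip; the file names and mask rectangle are hypothetical stand-ins, and learned video models produce far more convincing fills than this classical method.

```python
# Frame-by-frame object removal with classical inpainting.
# Illustrative only: the paths and mask region are hypothetical, and
# production tools like Runway use learned video models instead.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")            # hypothetical source clip
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("output.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Mask is 255 wherever pixels should be reconstructed from their context.
mask = np.zeros((h, w), dtype=np.uint8)
mask[100:220, 300:460] = 255                   # hypothetical object location

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Fill the masked region from surrounding pixels (Telea's method).
    out.write(cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA))

cap.release()
out.release()
```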

Speech technology, as the newest frontier of generative AI, is being used to preserve the irreplaceable spirit of cult characters: Respeecher cloned the voice of James Earl Jones (Star Wars’ Darth Vader), and AI was used to recreate the voice of Anthony Bourdain in Roadrunner, the documentary about his life.

In the wider media space, speech technology is being used to localise content into other languages, driven by the high costs and long lead times of traditional dubbing with voice actors in a studio.

Papercup is a principles-first AI company, helping media companies and content creators leverage AI in a way that enhances their current operations and produces the best results. Papercup’s machine learning technology uses data from real voice actors to produce our AI voices. Our human-in-the-loop process, whereby professional translators check the translated content for accuracy, ensures the dubbed audio is of top quality; it’s why brands like Sky News, Bloomberg, and Insider trust us with their content libraries.
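
To make the shape of that workflow concrete, here is a schematic sketch of a human-in-the-loop dubbing pipeline. Every name and function below is a hypothetical placeholder rather than Papercup’s actual system; the point is the review gate that sits between machine translation and speech synthesis.

```python
# Schematic human-in-the-loop dubbing pipeline. Every name here is a
# hypothetical placeholder, not Papercup's actual API; the point is the
# human review gate between machine translation and speech synthesis.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float              # seconds into the source video
    end: float
    source_text: str          # transcript of the original speech
    translation: str = ""
    approved: bool = False    # set by a human translator after review

def machine_translate(text: str, target_lang: str) -> str:
    # Placeholder: call a machine translation model or service here.
    return f"[{target_lang}] {text}"

def synthesize(text: str, voice_id: str) -> bytes:
    # Placeholder: call a text-to-speech model trained on voice-actor data.
    return b""

def dub(segments: list[Segment], target_lang: str, voice_id: str) -> list[bytes]:
    # 1. Machine-translate every segment.
    for seg in segments:
        seg.translation = machine_translate(seg.source_text, target_lang)
    # 2. Human review gate: a translator checks each translation for
    #    accuracy and flips `approved`; unapproved segments go no further.
    reviewed = [s for s in segments if s.approved]
    # 3. Synthesize speech only for the approved translations.
    return [synthesize(s.translation, voice_id) for s in reviewed]
```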

“With generative AI, video creators now have a set of new tools to help their craft, be it expediting the editing process through new automated workflows or using synthetic speech to generate voiceovers in different languages for secondary and tertiary characters in movies. AI is not necessarily replacing existing artistic roles but allowing the media owners that utilise it to do their lesser tasks more efficiently.” — Amir Jirbandey, Head of Growth at Papercup.

At the intersection of CGI and generative AI, NVIDIA recently announced updates to NVIDIA Omniverse, a development platform for building and connecting 3D tools and applications based on the Universal Scene Description framework, known as OpenUSD.

“These 3D tools and applications enabled by Omniverse can help teams tackle their toughest creative challenges. By tapping into the power of Omniverse and AI, artists can seamlessly work with larger, more complex assets and scene files in real time, helping them accelerate and add new capabilities to their production pipelines.” — Rick Champagne, Director of Global Media and Entertainment Industry Marketing at NVIDIA.
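
For a sense of what OpenUSD looks like in practice, the sketch below authors a trivially small scene with the open-source USD Python bindings (installable as usd-core); the scene contents are purely illustrative.

```python
# Minimal OpenUSD example: author a tiny scene and save it as .usda.
# Requires the open-source USD Python bindings (pip install usd-core).
# The scene itself is illustrative; Omniverse layers tooling on top of
# this same format so different applications can share and edit scenes.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scene.usda")      # human-readable USD file
world = UsdGeom.Xform.Define(stage, "/World")  # a transform prim

# A sphere prim parented under /World, with a radius attribute.
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)

stage.SetDefaultPrim(world.GetPrim())
stage.Save()
```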

AI as a force for good 

With global attention on generative AI, governments and tech companies are focused on learning lessons from the rollout of earlier technological innovations like CGI, the Internet and smartphones, building rules, regulations and best practices to ensure AI is a force for good: one that empowers and augments creative capabilities rather than superseding them.

In the words of Jurassic Park’s Ray Arnold, “Hold onto your butts”: creativity is on the loose.

——

Want to learn more about how to advance your AI capabilities? Head over to NVIDIA.

Want to see how AI dubbing works? Speak to the Papercup team.
