
10 August 2021

Can the film industry be trusted with AI technology?

After a director’s controversial use of AI to recreate the voice of the late chef Anthony Bourdain, the industry is walking into an ethical minefield.

By Ed Lamb

Films and TV shows love to envisage what’s coming next – from Blade Runner to The Terminator; Back to the Future to 2001: A Space Odyssey. Often these visions turn out to be little more than fantasy (much as we were all hoping for Marty McFly’s hoverboard and self-tying shoes in 2015). But some of these far-fetched ideas do, eventually, appear in reality. The rapid development of artificial intelligence, which can now fly drones, drive cars, write music and even paint in the style of Van Gogh, means that some aspects of these distant futures are just around the corner.

July brought the UK release of the documentary Roadrunner: A Film About Anthony Bourdain, directed by Morgan Neville. The film proved controversial: a few of its lines, seemingly narrated by Bourdain, were in fact an AI simulation of his voice, created from archive material. Bourdain’s widow has expressed disapproval of the simulation, while Neville has happily defended what he calls a “modern storytelling technique”. It is true that Bourdain wrote the words, but he never said them aloud.

Neville’s decision to use AI is an uncomfortable one, raising a number of moral conundrums. Bourdain took his own life in 2018, so the idea of resurrecting his voice is particularly macabre. What’s more, had Neville not admitted to having used AI in a magazine interview, there’d be no way for us as viewers to know these lines were never spoken by the chef. AI is now advanced enough to deceive us.

[See also: Anthony Bourdain was food’s first rockstar – and so much more]

The presence of a deceased star in a film made after their death has not always been seen as controversial: when the actress Carrie Fisher, who died in 2016, appeared three years later in Star Wars: The Rise of Skywalker (2019), fans were impressed by the technology. The difference in this case was that scenes unused in previous films were repurposed – it was still Fisher’s face and voice, but her hair, costume and movement had been simulated. Fisher was always meant to be part of the film, and, crucially, there was transparency about the techniques used to make it happen.


But the case of the Bourdain film reveals an industry walking into an ethical minefield, failing to consider the practical consequences of increasingly sophisticated technologies. The use of AI to simulate voice and action distances performers from their work; when we hear or see something that is based on a real person’s activity but was not actually produced by them, it becomes harder to credit the performance to the performer.

Ironically, these philosophical questions are raised in films and TV plotlines themselves – just think of HBO’s Westworld or certain episodes of Black Mirror. A storyline from the animated Netflix sitcom BoJack Horseman involves the titular character having all his scenes in a movie replaced with AI simulations.

Far more troubling is the issue of consent: you can’t agree to something if you’re dead. Even if Bourdain’s family had approved of recreating his voice, there is no way he could have given his blessing.

The use of AI to create “deepfakes”, digitally altered images that make someone appear as someone else, has already posed huge problems in pornography. Thanks to “deepfake” technology, someone only needs to get hold of a few of your selfies to give you the starring role in a sex scene – no consent needed. “Deepfakes” are also used to spread misinformation: watching the comedian Jordan Peele ventriloquising Barack Obama using AI is amusing, until we remember that, in 2019, a social media video was doctored to make the Democratic House speaker Nancy Pelosi look like she was slurring her words.

AI is already being mishandled in the entertainment industry. Using AI responsibly is particularly difficult because the technology develops so rapidly and its capabilities are so hard to predict. As legislation lags behind real-world usage, the attitudes of creators such as Neville are of paramount importance.

As we become more familiar with the capability of AI, an emerging popular morality around AI ethics must reinforce legislative and institutional standards higher up. We live in a world where AI plays an increasingly prominent role – but though its dominance might seem inevitable, it is crucial not to be taken in by the myth that AI development is a runaway train over which we have no control. As John Tasioulas, the director of the newly established Institute for Ethics in AI at Oxford University, has stated: “AI is not a matter of destiny, but instead involves successive waves of highly consequential human choices.”

It’s time for an industry that so often studies the future of humanity to start considering its own. In Westworld, an AI is asked “Are you real?” The reply comes: “If you can’t tell, does it matter?” We ought to answer: yes, it does.

[See also: The doctored video of Nancy Pelosi shared by Trump is a chilling sign of things to come]
