Culture
12 April 2024

The dangers of AI art

The appetite for AI-generated work reveals a pessimistic view of the future: one that tells us that our collective imagination has already been stretched to its limits.

By Sarah Manavis

When we talk about the danger of deepfakes and artificial intelligence (AI), the focus has almost entirely been on misinformation and malice. We are warned to be wary of what we see online – that something which seems believable might actually be a crafty manipulation. There’s a push to outlaw abuses such as unauthorised deepfake pornography, including the images of Taylor Swift that circulated in January. The threat of the technology in these circumstances is obvious and indisputable – black and white.

But one grey area beginning to receive attention is the danger of AI to creatives. We hear about artists whose work has been unknowingly used to train generative AI and those who have used it to enhance their work. We have seen the emergence of overwhelmingly bad AI-generated art: formulaic music, predictable stories and garish graphics. The question has largely been how to draw a line around the rights of creatives – or whether a line can or should be drawn at all.

Last week the estate of the late comedian George Carlin settled a lawsuit with a pair of American podcasters, Will Sasso and Chad Kultgen, who appeared to have used a deepfake of Carlin’s voice earlier this year in an episode entitled “George Carlin: I’m Glad I’m Dead”. The hour-long episode featured a character who sounded identical to Carlin performing a new stand-up set, which the creators variously claimed was the product of an AI trained on Carlin’s old material and a deepfake of his voice reading a script written entirely by Kultgen. The Carlin estate argued that, regardless, the work could be clipped out of context, making it appear that Carlin had said or felt things he never had, and that it violated his copyright. Sasso and Kultgen ultimately agreed to take down the podcast and remove any versions of it they had posted online. They also agreed to refrain from using Carlin’s likeness in any future content.

The settlement comes at a moment in which artists are pushing for legal protection against AI. Only a few days earlier, 200 popular musicians – including Billie Eilish, Stevie Wonder and the estate of Frank Sinatra – signed an open letter asking technology companies to agree not to develop AI tools that could undermine artists and songwriters, and to create protections that secure the work of human creatives. At the end of March, Tennessee became the first US state to make it illegal to replicate an artist’s voice without their consent. In just a few weeks, the UK’s Society of Authors will vote to withhold consent for the use of its members’ works in AI development. Underpinning each of these motions is the assertion that the work of a human artist must not be used to create an endless stream of knock-offs.

The ethical issues for artists are becoming more apparent. But the Carlin case reveals a growing appetite for fake art from artists who can no longer produce any more. This first sparked discussion in 2021, when a documentary about the chef Anthony Bourdain used an undisclosed deepfake of his voice to read out a distressing email he had sent a friend shortly before his death by suicide. The Beatles even released their “final song” in October, using AI to extract John Lennon’s voice from an old demo he had discarded. It’s no coincidence that several estates of dead musicians signed this month’s open letter, nor that the Tennessee legislation has been named the “Elvis Act”.


This eagerness to milk dead artists for “more” of their work shows how the rise of AI has flattened what we see as good art – even what we consider talent. Many people would rather be stuck in a loop of falsely “new”, generic works generated solely from art that already exists, where “collaboration” merely means manipulating well-known works to suit current tastes. Those who experiment with AI might argue that they are creating something exciting and fresh, using their talents and the work of famous artists to produce something great. But this is a greedy excuse for mediocrity that obscures the reality: there is a plentiful supply of artists out there with the ability to truly innovate.

The stymieing constraints of AI art reveal a pessimistic view of the future. They tell us that imagination and creativity have already been stretched to their limits, and that our only job now is to endlessly tinker within those margins. They shut us off from the idea of something we haven’t conceived before, and suggest that, even if such a thing is possible, the best way forward is to shun anything that feels different or new.

But art is at its best when it gives us all the surprise and flaws of humanity. Artificial intelligence will never give us “more” of it – only an unsatisfying imitation.

