Among my many great business ideas is the sing-nav, a car satnav that incorporates directions into the music you’re listening to. Rather than a robotic voice harshly interrupting Chris de Burgh, an AI synthesises De Burgh’s voice and fits the directions into his lyrics: “lady in red… turn left at the next junction… cheek to cheek”. This is a great idea, and if you patent it immediately, you need never work again.
That said, you might get an angry call from De Burgh himself, who would rightly be concerned that if the AI can sing the directions, it can sing the whole thing, and maybe release a new album to his fans.
Similar calls are already being made across the music industry. Yesterday, the Financial Times reported that Universal Music Group (UMG), the world’s largest owner of music rights (including those to “The Lady in Red”), has written to streaming platforms including Spotify and Apple to ask them to prevent its music from being used as training data for AI-generated tracks. UMG has already served a number of takedown notices on AI clones of its artists’ work.
This suggests that two things are going to happen. First, we are on the brink of a huge wave of copyright lawsuits. Photographers, illustrators and publishers already use copyright search services to find people using their work without permission and send them a bill. OpenAI’s GPT-4 was trained on 300 billion words written by other people, and I wouldn’t be surprised (especially as OpenAI has received $11bn in investment) if some of those people start to ask for their share of the proceeds.
But the other, and perhaps more concerning, implication of this is that the real growth in AI applications is happening in the creative industries. Last month the media company Buzzfeed began publishing articles written by AI; a Chinese advertising company plans to replace its copywriters and graphic designers with generative AI models.
This isn’t happening because these are the most valuable jobs to replace – there would be more money in replacing doctors or engineers than the barely paid writers of mild internet humour – but because they’re the cheapest and easiest jobs to replace.
For years it’s been assumed that machines would take over the jobs that people generally don’t want to do, such as cleaning or driving. In 2019, Elon Musk stated that a million self-driving Teslas would be on the roads by 2020; Toyota, Honda and others made similar promises, but despite an estimated $100bn in investment in self-driving cars, the robo-taxis haven’t arrived.
“Anything to do with AI in the real world is hard,” explains Michael Wooldridge, professor of computer science at Oxford University and director of AI research at the Turing Institute, “because the real world is fuzzy, and it’s difficult. It’s terribly hard to make AI systems work in the real world.” The complexity is practically endless. A driverless car can draw on data from a million hours of training, and it is still highly likely that on a given drive it will encounter something it has never seen before, with results that might be disastrous for people and expensive for companies.
In the closed system of a simulation, however, AI excels, because it can work within set parameters, use a set amount of data, and make all the mistakes it likes. This is why so much early work on AI focused on chess, before taking on more complex games such as Go. Large language models such as GPT-4 “hallucinate” new things from large data sets, particularly the game-like structures we have always used to simulate and interpret the world: language, music and mathematics.
The implication is that, rather than taking over the physical work we don’t enjoy, AI will be driven by market forces to replace the mental work we find most satisfying: gaining knowledge and using it creatively. We could end up with an economy in which software writes the novels and people drive the taxis.
Wooldridge, however, is more optimistic that AI will “free us from drudgery” and empower people to do their knowledge work more effectively. But he acknowledges that in some cases – such as warehouse work – software has already taken over management while people are reduced to being “human robots” whose actions are dictated by technology. “We need to think about what it is that we value in jobs,” he says.