
Science & Tech
6 August 2024

When the AI bubble bursts

The boom in AI investment since ChatGPT’s launch has been accompanied by constant claims that the technology will transform our world. Will it?

By Will Dunn

In October 1852 Charles Darwin wrote to Josiah Wedgwood III, his cousin, brother-in-law and financial adviser, informing him that he had, “with groans & sighs”, decided to sell all of his shares in the London and North Western Railway. Darwin himself, when he first boarded a train, had declared himself “altogether disappointed” with the new mode of transport, but in the years that followed he joined thousands of other people who put their savings into railway shares. These shares were soaring in value: everyone thought the new age of steam would replace the horse-drawn world, and innovation in the market itself accelerated their prices. But Railway Mania, as the craze became known, could not support this exuberance indefinitely, and prices collapsed just as quickly. By the time he sold, Darwin and his wife Emma had lost nearly £800 (£94,000 in today’s money) on their LNWR shares alone.

Great scientists are clearly not immune to irrational exuberance. In the previous century, Isaac Newton lost a fortune equivalent to millions of pounds on the South Sea Bubble. Both were victims of a pattern that has repeated through history: an opportunity is discovered, everyone decides it’s the future, and the market balloons with hope. Asset prices cease to have anything to do with intrinsic value and are decided instead by the persistence of a narrative that the future is here, and its fruit can be enjoyed right now.

This happened with radio stocks in the 1920s and internet companies in the dot-com bubble. It is the norm: one study of two centuries of technological innovations found that they are accompanied by an investment bubble three times out of four.

Every time this process occurs, intelligent people line up to reassure each other that this time is different. The huge growth of AI investments since the launch of ChatGPT in November 2022 has been accompanied by endless arguments that, this time, the technology really will transform our world. Our last prime minister nodded and smiled as a tech billionaire told him it would eliminate all jobs.  

To a certain extent, the ChatGPT bubble is different. It is bigger and faster than previous tech booms, and underpinned by something more serious: a grand-scale land-grab of other people’s productivity and property, justified by a dangerous and dehumanising philosophy. The billionaires who currently profit from this bubble will do everything they can to keep it afloat.

The dot-com crash began on a Friday – 10 March 2000 – but it wasn’t named as such until some time later. A week later, the New York Times declared “technology-heavy Nasdaq bounces back” as part of a new “surge” in stock prices. Internet companies were still attracting huge valuations without making any profit. Almost everyone believed the boom was still under way, but it had already become a crash: by October 2002, tech stocks had declined by almost 80 per cent from their peak.

It may be that 24 July 2024 comes to be remembered in similar terms. The Nasdaq-100 – an index of 100 publicly traded companies which includes Apple, Intel, Nvidia, Microsoft, Alphabet and other Big Tech names – lost a trillion dollars in market value as investors looked at a new round of company reports and asked when exactly the world-changing AI revolution was going to show up as earnings. On 2 August the flow of investors’ money out of the tech-heavy American stock market was “becoming a stampede”, Bloomberg News reported, as signs of a slowing US economy sent money away from riskier investments.


This was the end of a period of spectacular growth that has in recent years been largely based on the AI narrative. From November 2022 to July 2024 the market value of Nvidia, which makes chips used for running large language models such as ChatGPT, increased by nearly $2.5trn – hundreds of billions more than the value of the entire FTSE 100 index of Britain’s largest companies. By March of this year, tech stocks were priced as confidently (relative to their sales) as they had been at the height of the dot-com boom.

In recent months, however, financial institutions – some of which had previously been rather excited about the AI boom – have begun to caution that such valuations will not last for ever. In June, Goldman Sachs published a report arguing that the “truly transformative changes” promised by the industry “won’t happen quickly and few – if any – will likely occur within the next 10 years”.

Even the venture capital (VC) industry has become more sceptical. In late June, David Cahn, a partner at one of Silicon Valley’s most successful VC firms, Sequoia Capital, calculated that for companies to make any profit at all on what they have already spent, the technology will have to generate $600bn in revenue for each year of spending at the current rate.

To put that into context, the entire US oil and gas extraction industry in 2022 – a year in which America was the world’s biggest oil producer, at more than 11 million barrels of crude per day – turned over $513bn. So at the current rate of investment, companies are effectively betting that generative AI – a system for making weird pictures, lazy sales emails and bad essays – is going to generate more money than all of the oil and gas wells in the largest economy in the world.
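For readers who want to see where the $600bn comes from, Cahn’s back-of-the-envelope arithmetic can be sketched in a few lines. The specific factors below – Nvidia’s annualised data-centre revenue, a doubling to cover the full cost of running AI data centres, and another doubling for end-user margins – are assumptions drawn from Sequoia’s published reasoning, not figures stated in this article:

```python
# A hedged reconstruction of the $600bn figure. All three inputs below are
# assumptions based on Sequoia's published note, not figures from this article.

nvidia_run_rate = 150e9    # assumed annualised Nvidia data-centre revenue, in dollars
datacentre_multiplier = 2  # GPUs taken as roughly half the total data-centre cost
margin_multiplier = 2      # end users assumed to need roughly 50% gross margins

required_annual_revenue = nvidia_run_rate * datacentre_multiplier * margin_multiplier
print(f"${required_annual_revenue / 1e9:.0f}bn a year")  # -> $600bn a year
```

On those assumptions, every dollar of chip spending implies four dollars of end-user revenue before anyone in the chain turns a profit – which is what makes the comparison with the oil industry’s $513bn turnover so stark.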

This would be wildly optimistic even if AI was already revolutionising whole industries, but among those who use generative AI for their work, the magic is increasingly wearing off. In a survey of 2,500 workers in the US, UK, Australia and Canada, published last week, 77 per cent said the generative AI tools they had been asked to use actually added to their workload; nearly half said they had no idea how to produce the productivity gains their employers expected from AI. In a Morgan Stanley investor note, one client, a large pharmaceutical company, was reported to have dropped the AI tools it had licensed for 500 workers; an unnamed executive said the slides the software generated looked like “middle-school presentations”.

The leading company in the boom, ChatGPT maker OpenAI, could lose up to $5bn this year, according to a report by the Information. Worse still, one of OpenAI’s main competitors – the Facebook and WhatsApp owner, Meta – isn’t playing by the same rules. It has made its software open source: “It is,” explains Gary Marcus, a cognitive scientist and AI expert who was an early realist about the generative AI boom, “as if Uber was trying to make money and Lyft was giving away rides for free. They have a competitor that is literally giving the stuff away for free… How are you going to make that work?”

Moore’s law, which held that computer chips would double in complexity every two years without getting much more expensive, was the principle that guided the dot-com boom: new computing power was arriving, pretty much for free, and it would change everything. But Moore’s law wasn’t a law; it had no logical or mathematical proof. It was a magazine column. It was just a good observation, true enough that an industry continued to make it happen – more or less – for decades afterwards.

The AI bubble had its own version. Between 2019 and late 2022, the technology improved exponentially as more and more data was used to train the large language models behind software such as ChatGPT. The industry claimed that the technology was once again on a steeply rising path: today, a chatty impersonation of a human; tomorrow, a machine far better at reasoning and understanding than any human (in AI parlance, artificial general intelligence, or AGI). This assumption, however, was based on something much less solid than a better computer chip.

The reason AI models improved so rapidly in a few years was that they had incorporated so much more of the data needed to “train” the systems to generate new words, images and sounds. “We saw rapid progress,” explains Marcus, “as the industry raced from using a tenth of the internet, to a quarter of the internet to half of the internet, to almost everything that’s available. And now they’ve run out… these companies have essentially used the entire internet.”

This is the fundamental difference with the AI boom: large language models need to be fed vast amounts of words and pictures to be able to synthesise new words and pictures that people will accept as convincing or real. Much of this information is free for the taking – anyone can download Wikipedia, for example. Great libraries of out-of-copyright works exist. Much of what’s available on the internet, however, still belongs to someone else. The AI industry appears to be gearing up to claim a right to take it.  

In March, Mira Murati, the chief technology officer of OpenAI, was asked in an interview what data was used to train its AI video generator, Sora. Were YouTube videos used? Murati looked away from the interviewer’s gaze and pouted oddly for an uncomfortable moment. “I’m actually not sure about that,” she replied. This is a bit like the chief product officer of the world’s leading sausage company not knowing if any pork went into their product.

Murati is not the only OpenAI executive to make such a claim: when the company’s chief operating officer, Brad Lightcap, was asked in May if YouTube videos were used to train Sora, he, too, declined to answer, replying instead that “an entirely different social contract” was needed with all the people whose intellectual property can be accessed using the internet. “We’re looking at this problem, it’s really hard,” he concluded.

It is a problem, but it is not hard: either the AI boom has been sustained by companies helping themselves to the property of millions of other people, or it hasn’t. The social contract, as defined since the mid-17th century by Thomas Hobbes and John Locke, has always been pretty clear that one of the fundamental reasons to have laws and a government is to create a state in which people can’t just help themselves to other people’s things without expecting consequences.

The AI industry’s latest gambit is to pretend that computers somehow access a different plane of reality in which property rights do not apply. In June, Microsoft’s head of AI, Mustafa Suleyman, told CNBC that “the social contract” of anything that could be accessed on the internet “since the Nineties, has been that it is fair use. Anyone can copy it… that has been ‘freeware’, if you like.”

It is very hard to believe that Suleyman is unaware of the decades of high-profile legal actions against companies such as Napster, and against individuals who have helped themselves to other people’s property using the internet. It makes no difference where the thing is found; countries around the world have “theft by finding” laws that state that property doesn’t stop belonging to someone just because someone else can access it.

There is a deeper philosophical foundation here. Silicon Valley is doing its best to convince itself that it is not stealing from people because personhood is nothing special; personhood is something they can make more of. The most succinct version of this philosophy was tweeted by the OpenAI founder, Sam Altman, in December 2022: “i am a stochastic parrot, and so r u”. A “stochastic parrot” is a random generator of language that is nevertheless convincing. It is, as Maya Angelou famously advised, worth believing someone when they express a belief such as this. The head of the world’s fastest-moving tech company appears to have been suggesting that you and everyone who matters to you are no more human than his chatbot.

The other factor that distinguishes the AI bubble from previous speculative booms is its proximity to power. No other investment fad has played so well on the promise of danger and disruption that its technology might offer. It’s true that during Railway Mania, many of Britain’s politicians were themselves investors in the railways, but at no point did parliament agree that the railways might prove an existential threat to humanity.

“I don’t know if they [tech leaders] believe what they’re saying,” says Marcus, “but I think that they’ve convinced a lot of the public, and even some people in the field, that artificial general intelligence is imminent, that it’s going to change everything… they get power by suggesting that they are close to this magical thing. They get treated like saints, or kings, they take meetings with prime ministers.”

This is the most salient aspect – beyond what it might do to the average portfolio – of the popping of the AI bubble: that the billionaire class will simply not allow it to happen. Almost all of the wealth gained by the world’s richest people last year came from AI stocks. As a consumer you may question whether you really want large language models deciding the outcome of your job application, writing the TV programmes you watch or taking your customer service call, but the tech industry has no intention of allowing you to make any such choice. The bubble will not deflate without a fight.


