
There is no chance the government will regulate AI

This government is planning light-touch regulation of a technology our politicians do not understand.

By Will Dunn

Editor's note: On 16 May, ChatGPT boss Sam Altman faced questions in a US Senate committee hearing about the impacts of artificial intelligence (AI). He warned that government regulation of AI is "crucial" if we are to avert risks. This article was originally published on 29 March and has been repromoted following these comments.

What if the internet saw you as an individual? The business model we call "surveillance capitalism" watches as you use the internet, recording what you pay attention to and what you buy, then calculates what other information you might find compelling, what other products you might want. For all the hype about its power, it's a passive and fairly general system: you are sorted into a bucket by what can be known about you, and there are lots of other people in that bucket, and you all see the same stuff.

What if your behaviour was responded to actively, intelligently? Every time this article was opened, it could be rewritten by a language model tuned to the kind of sentence construction you (and only you) find most persuasive. Every advert you saw could be rewritten and redesigned in an instant, tailor-made to stimulate your hunger or greed or fear.

This is rapidly becoming possible, and if the energy cost of the system is less than the profit to be made from it, that is how the internet will begin to look in the near future: everyone will see a different version, and it will be more addictive and persuasive than ever before. It is one of the ways in which artificial intelligence might, according to Goldman Sachs, change the jobs of 300 million people. That might be really good or really bad. The one certainty is that our government will play no effective role in regulating it.

To give the impression that something is being done, the Department for Science, Innovation and Technology released a white paper on the regulation of AI on 29 March. It will have no effect whatsoever on the changes overseas technology companies decide to impose on our society.

We can say this with confidence because it has already happened. Have a guess: since Facebook arrived in the UK in October 2005, how many new laws have been implemented to regulate the use of social media? In nearly two decades – during which time Facebook and its ilk have polarised politics, undermined local newspapers and enabled an epidemic of stress, anger, misinformation and bullying – how many new laws has the UK imposed on their use?

The answer is none. Not a single law has been implemented to govern a technology that has demonstrably affected how our society works and the health of the people in it. British primary legislation contains just six brief mentions of social media.


In September 2022 a coroner concluded that the death from self-harm of Molly Russell, 14, in 2017 could be attributed at least partly to "the negative effects of online content", specifically the hundreds of images and videos related to depression and self-harm that were served to her by the recommendation algorithms of social media platforms. After the inquest, Matt Hancock told the press: "I have for some time been very concerned that we act now" about the effects of social media on the mental health of its users, especially children. This was an admission of failure: as secretary of state for digital, culture, media and sport, and then health secretary, it had been Hancock's job to act, and he didn't.

It took five years from Molly Russell's death for a new law, the Online Safety Bill, even to be written. It has not been passed, and Facebook's parent company, Meta, has already said it won't comply with some important parts of it.

And why would it, when government ministers themselves deride any attempt to regulate technology? When the Cabinet Office banned TikTok – a platform described by the US security services as a "Trojan horse" for China – on government devices, Grant Shapps, the secretary of state for energy security and net zero, immediately shared on his own TikTok account a clip from The Wolf of Wall Street, in which Leonardo DiCaprio shouts: "I'm not f***ing leaving." Hancock, too, continues to post extensively on TikTok and Facebook, using his taxpayer-funded time to provide free content to the platforms he failed to regulate.


The problem, a reader might conclude, is that Hancock and Shapps are feckless, trivial narcissists who are fundamentally incapable of government. That is a problem, but the deeper issue is that the machinery of government moves at a completely different pace to technology companies. The business model of Silicon Valley requires a company to aggressively pursue scale without slowing down to do anything so old-fashioned as making a profit, let alone considering the moral or legal implications of its work. Regulation is glacial by comparison: the Computer Misuse Act has hardly been updated for 30 years.

This is a particular problem when it comes to AI, not only because AI promises to create an internet that is even more effective at guiding people's thoughts and preferences, but because it is by its nature even less accountable. Deep learning systems (of which ChatGPT is one example) have been making surprising discoveries for almost a decade, including accurate medical diagnoses and the solutions to maths problems, but researchers aren't able to say how they arrived at their conclusions. The chance that politicians of the calibre of Kwasi Kwarteng and Nadine Dorries, signatories to the UK's National AI Strategy, could hope to effectively regulate systems that are opaque even to the people who built them was always zero.

Nor, as the National AI Strategy reveals, is this the government's intention. Its stated aim is to "build the most pro-innovation regulatory environment in the world", presumably because if the UK did achieve successful and meaningful regulation of AI, the companies that build it would simply go elsewhere. As in other areas of business, technology companies can threaten to withhold the economic benefits of their work from governments that see them as necessary to "keep up with the pace of change", as Dorries, the digital secretary at the time, put it.

This is especially true of the UK because our government can't afford the other incentives – such as massive state-led investment in research – offered by other countries. The US government (which is already legislating cautiously around machine learning) will spend $1.8bn on non-defence-related AI this year; in his recent Budget, Jeremy Hunt announced a prize that will pay 0.06 per cent of that.

AI could be a huge benefit to the UK, a social and economic disaster, or a damp squib. We don't know. What we do know is that the plan, from the beginning, has been for the government to apply only a light touch to something that is already moving too quickly for it to grasp.

