The Research Brief: What the AI summit should be focusing on

Your weekly dose of policy thinking.

By Spotlight

Welcome to the Research Brief, where Spotlight, the New Statesman’s policy section, brings you the pick of recent publications from the government, and the think tank, charity and NGO world. See more editions of the Research Brief here.

What are we talking about this week? Artificial Intelligence for Public Value Creation, a new paper by the Institute for Public Policy Research (IPPR).

Who? The IPPR – that’s a progressive, centre-left think tank founded in 1988. It was once associated with the “modernising” wing of the Labour Party (ie, Blairite reformers), but in the Corbyn years it contributed to the slightly more radical side of that era’s economic and domestic policy debates. I suppose you could throw it in with that ill-defined, loose grouping known as the “soft left”. Carsten Jung, a former Bank of England economist and now an IPPR senior economist, has led on this report, along with Bhargav Srinivasa Desikan, a senior researcher.

So what are they saying about AI? The report coincides with the AI safety summit taking place today (1 November) and tomorrow. The government describes it as a “major event”, to be held at the very apt venue of Bletchley Park (where Alan Turing and his fellow codebreakers cracked the Enigma code during the Second World War). Hosting a major international summit certainly won’t do any harm to the UK’s $1trn tech industry, which ranked first in Europe on 2022 data. Nor will it do any harm to Rishi Sunak’s tech-bro image: the conference will wrap up with an “in conversation with” chat between the Prime Minister and Elon Musk – yes, really. Popcorn at the ready.

Yikes. Exactly. But the report looks much deeper than that, of course. It argues that the summit’s focus on AI safety and harms is too narrow and lacking in ambition. Instead, we should be thinking about how this exciting new technology can deliver “public value creation” and push artificial intelligence in a “socially beneficial direction”.

[See also: How AI is speeding up diagnosis]

Tell me more. First, the report identifies several areas that could be given a boost by artificial intelligence. In healthcare, it gives the example of improving diagnostics and “DeepMind’s AlphaFold model”, which is “predicting complex types of protein folding and is already being used for new drug discovery”; in education, AI could provide “widespread access to lifelong learning”; and across the economy it could augment (note: not replace) jobs, increasing productivity. On the latter point it gives an example of how AI could automate “administrative tasks such as patient record-keeping and medication scheduling, allowing nurses to focus more on direct patient care”. It also says AI could deliver more efficient transport networks and power grids, and better urban planning processes.

You mean no existential panic? How do I sign up? The IPPR says we should start with three “pillars”, which could “ensure that the transformative potential of AI can be realised”. In keeping with the new trend for industrial strategy and mission-driven government, it calls for – you guessed it – a “mission-driven industrial strategy for AI”. Part of that would involve tax incentives, subsidies and public infrastructure to facilitate growth and innovation in the sector. The US Inflation Reduction Act even gets a mention as a model of best practice.

The second “pillar” is regulation, or “supervisory capacity for monitoring risks from AI”, via what could be called an “Advanced AI Monitoring Hub”. That body would use real-time datasets from AI firms in much the same way that the Financial Conduct Authority and the Bank of England monitor lenders, insurers and institutional investors. It would work alongside the Competition and Markets Authority and the broadcasting regulator Ofcom to track risks, particularly in areas defined as “systemically important AI infrastructure”.

The third and final “pillar” is about ensuring competition through an “anti-trust approach”, so that “dominant firms act in the public interest, do not stifle innovation and ensure small business can thrive”. It says access for open-source developers should be encouraged too, and properly regulated.

What else? It’s not all a techno-utopian dream – the report does of course acknowledge that AI can be risky, but says that if the risk is managed properly then this new “frontier technology” can be used to improve social and economic outcomes. As part of mitigating those risks, it says, “certain uses of AI should be outright banned”, giving the example of “AI for facial recognition, social scoring and manipulation – due to the huge risks to safety and civil liberties inherent in these technologies.”

In a sentence? Rather than focusing narrowly on harms and AI safety, we should also be looking at how to use the technology to improve public services and boost inclusive growth across the economy.

[See also: Trapped in the AI echo chamber]
