
The government must move more quickly on AI regulation

We wouldn’t design new cars without road safety laws, so why should we expect less for artificial intelligence?

By Michael Birtwistle

To say this government has substantial ambitions for artificial intelligence would be an understatement. Ministers are committed to the UK becoming a technology “superpower” and want it to be the best place in the world to build, test and use AI.

These ambitions have great merit. Technological innovation is a driver of economic growth, AI could deliver real benefits for society, and the UK is well placed to harness its increasing potential.

But there are also real, documented risks of harm from these technologies – to individuals, communities and society. If we want to realise these ambitions in a way that works for people and society, we don’t just need to develop computing power or attract global AI talent: the UK needs to become a world leader in AI governance.

Last week the UK government published a much-needed white paper laying out how it plans to regulate AI. Regulation supports public confidence in AI, safeguards our fundamental rights and – as the government itself says – gives businesses the legal clarity and certainty they need. The government should be commended for acknowledging the value of regulation and for thoughtfully engaging with this difficult challenge, but in many places the ambition is not matched by the design, nor does it reckon with the staggering pace of AI integration.



In short, the government’s light-touch approach places responsibility for regulating AI in the hands of the UK’s existing suite of regulators. They will be asked to consider new principles for AI and given additional central support to co-ordinate action and understand AI’s impacts.

But crucially, we won’t see any new legal powers, rules or obligations – at least initially – and the government doesn’t anticipate introducing anything more than a minimal duty for regulators to have regard to the new AI principles. This approach stands in marked contrast to that of the EU, which is in the latter stages of passing comprehensive new AI legislation that treats AI through the lens of product safety, with strict legal requirements for “high-risk” use cases.

The government has argued that legislation introduced now would rapidly become obsolete, but there is more nuance to this than the argument allows. Legislation can be a great tool if used smartly – it doesn’t have to be complex and costly to comply with, or composed of inflexible detail. Simple rules and principles, if given force, can be enabling.

The rule that tells us what side of the road to drive on doesn’t slow car innovation down – it’s a prerequisite to people being able to use cars and trust that the roads are going to be safe. What matters is that everyone knows what side of the line they need to be on, and that they’re appropriately incentivised – by law – to stick to it.

That basic guardrail works not only for road users and pedestrians but also for the car industry, giving it the space and confidence to innovate and invest. We think the same is true for AI: regulation isn’t a barrier to AI innovation, it’s a prerequisite.

When we’re thinking about what legislation can achieve, there’s a huge amount of design space between the minimal duty to have regard to AI principles that the UK is considering and the rules-heavy product safety approach of the EU.

At the Ada Lovelace Institute, we’re researching the alternative options within that space. But there is reason to believe that the UK approach’s lack of statutory footing risks leaving harms unaddressed, because it creates no new incentives for regulators, AI developers or those deploying the technology to implement the otherwise laudable AI principles.

Regulations only really matter if they are enforced, and the government’s approach inherits the UK’s uneven patchwork of regulators. In areas such as recruitment and employment, which lack comprehensive regulatory oversight and where the risk of harm is particularly high, it is unclear which actors will be expected to enforce the AI principles – or how the principles will apply across central government departments and wider public services.

Where regulators do exist, their capacity to consider the implications of a landmark technology like AI within their existing resources varies considerably. It’s heartening to see indications of some central capacity to support them, but we expect that substantial investment will be needed for this support to be meaningful.

There are more serious unresolved questions around the governance of cutting-edge “general-purpose AI” (GPAI) such as GPT-4 (the latest language model underpinning the ChatGPT chatbot) and Bard, as well as other forms of generative AI. GPAI governance is a new and complex policy challenge, and the UK can hardly be blamed for not having a solution to hand – but the lack of urgency in finding one is out of step with the pace of change.

These systems are being developed with limited oversight and rolled out at incredible speed into swathes of everyday technologies, from office software to search engines – and potentially into the functioning of government itself.

Getting this right really matters. Public concerns around GP data sharing, contact tracing and the A-level exams algorithm have shown that making the most of data-driven technology depends on trustworthy governance of how it is used. A lot will hang on how the proposals are implemented. If the government can credibly demonstrate how it will address these concerns, it will create a real moment of opportunity for the UK to lead the way on AI – but it will need to move fast.

