The government’s white paper on artificial intelligence, published in March, called for light-touch regulation to encourage AI firms to base themselves in the UK, as Google’s DeepMind already does. In the months since, as parts of the scientific community issued apocalyptic warnings and public alarm over ChatGPT grew, the government began briefing that it would harden its approach to AI.
Has it?
In a speech yesterday (12 June) Sunak said that he wanted to develop “safe AI”. He pointed to the agreement he reached with President Joe Biden last week that the UK would host a summit in the autumn, at which countries would decide together what action could be taken to mitigate the risks of AI. This could result in an overarching regulatory body and a pooling of international resources to fund research into AI. The PM compared the conference to climate change summits. But, as we know, climate change summits often fail to achieve their stated aims.
Sunak said that the government had invested £100m in a new task force that would conduct “cutting-edge safety research” into AI. But in the original press release for the task force back in April, the emphasis was less on safety and more on building AI models to spur economic growth. As the government said at the time: “The investment will build the UK’s ‘sovereign’ national capabilities so our public services can benefit from the transformational impact of this type of AI.”
Rhetoric and policy are separate things. The government is still pursuing a strategy of light-touch regulation – relative to other jurisdictions such as the EU – to make Britain the best place for AI firms to set up. More importantly, it has yet to propose a fresh tranche of regulation. What has changed is that Sunak is now more careful to acknowledge the dangers that AI poses, and keener to take a leading role in international cooperation. That may result in regulatory changes in future. But we aren’t there yet.
This piece first appeared in the Morning Call newsletter.