Rishi Sunak touts UK AI safety research at London Tech Week

The Prime Minister says Anthropic, Google DeepMind and OpenAI will offer models for safety training.

By Ryan Morrison

Some of the largest AI labs, including Anthropic, OpenAI and Google DeepMind, will offer UK researchers early access to new foundation models. The move, revealed by Prime Minister Rishi Sunak at London Tech Week, is intended to aid safety and risk mitigation research.

The debate over how to manage the risks and regulation of large general-purpose AI models is widely contested. All of the major AI labs are pushing for some degree of regulation, but how far that regulation should extend, and how the burden should be distributed, is not yet clear.

Sunak told delegates during his keynote opening the 10th anniversary London Tech Week that cutting-edge safety research would be a key component of the UK's AI policy. This would see the UK partner with academia and the AI companies to ensure the technology is safely developed, deployed and used throughout the economy.

Some of the funding for this research will come from the £100m foundation AI taskforce announced earlier this year. The group will also lead on proposing standards and guardrails that would then be put forward for international discussion.

Sunak said the labs were “committed to give early or priority access to models for research and safety purposes to help us understand risks of these systems.”


To date, all of the major labs have conducted their own risk analysis research, including a recent proposal from Google DeepMind for a framework to measure severe risks from future advanced systems. Under the new arrangement, some of that work would be handed to British academic institutions.

Sunak said it was vital that there was international cooperation on the standards and regulations introduced to govern AI, declaring that "AI doesn't respect borders." He highlighted the international AI summit due to take place later this year in the UK as an example, comparing it to the COP climate change summits, which have in the past secured global agreements to limit temperature rises by cutting carbon-emitting activities.

He said we need to “lead at home, lead overseas and lead change in our public services,” explaining the need for more AI use in health and education. “This idea that every job having AI as a copilot, making everyone’s job a little easier and a little more productive, replicated across an entire economy has the potential to be transformative,” the prime minister said.


“We approach things with a principles first logic,” he said. “The government is focused on engaging with industry to make sure we understand how innovation is happening, creating a safe space to make it happen.” This could include liberal use of regulatory sandboxes and creating incentives to make the technology accessible to the entire population.

Highlighting the number of AI-related companies opening offices in the UK, Sunak said this puts the country in a good position to foster dialogue and drive the direction of standards. He explained that while there are real risks, not all risks should be dealt with in the same way.

"Government has to ensure everyone has access to education and skills at every point in their lives," he said, noting that AI is affecting jobs and changing the types of roles that are available. He said this includes "changing how we fund education, so it is something you want to come back to over your entire career."

Google DeepMind CEO Demis Hassabis told delegates that there are "risks" with any new transformative technology. He said: "Different categories of risk need to be mitigated in different ways. We need to understand and research those systems to have a better handle on the boundaries of those systems."

He believes this will then let governments “put the guardrails in place”, adding: “The right way to proceed is with exceptional care, be optimistic about opportunities and use the scientific method to study and analyse these systems as they get more powerful.”

Tech Monitor has approached OpenAI, Anthropic and Google DeepMind for comment on how the model access will work. Anthropic was the only one to respond, saying: “We are in support of Prime Minister Sunak’s work on research and safety and look forward to collaborating with this effort and others like it.”

This piece was originally published by Tech Monitor on June 12.

