26 February 2024, updated 29 February 2024, 1:23pm

Unregulated AI could cause the next Horizon scandal

Without robust regulation, it will be even easier to make automated decisions without people’s knowledge or consent.

By Francine Bennett

Many politicians are understandably hopeful about how the recent wave of artificial intelligence (AI) could boost productivity and improve public services. The government has set up a new unit for AI innovation in the public sector. Labour is promising to use AI for everything from fraud prevention to reducing truancy.

Data and AI are increasingly embedded in our economy, public services and communities. Algorithms can be, and are being, used to make life-altering decisions about our employment, finances and exam results. The adoption of easily accessible general-purpose AI tools like ChatGPT is accelerating the shift to a more automated, data-driven world.

But leaving important decisions to technology can go badly wrong, as the Post Office scandal showed. Hundreds of postmasters were prosecuted for theft and fraud on the evidence of Horizon, a flawed accounting system, with severe consequences for them and their families.

This scandal should underscore the dangers of integrating AI into our economy at pace and uncritically. Where decisions such as hire-and-fire processes and loan applications are delegated to automated systems, they become less transparent and harder to explain. Systematic bias, technical failings or individual circumstances that don’t fit into the system’s map of reality can result in unfair outcomes. And without meaningful routes of redress, people cannot challenge the decisions made about them.

If we delegate more decisions to AI without first putting in place the right safeguards, more lives could be affected by the ineffective governance of opaque and fallible technical systems. In the case of Horizon, the government is now taking welcome steps to address this miscarriage of justice. At the same time, however, it risks opening a legislative door to further unaccountable and unfair uses of technology.


On 6 February, the government confirmed it will not make any immediate commitment to new legislation on AI, instead making future legislative intervention dependent on industry behaviour and further consultation. This falls short of what is needed. We shouldn’t wait for companies to stop cooperating voluntarily, or for a Post Office-style scandal, to prompt the government and regulators to act.

Meanwhile, reforms to British data protection law in the Data Protection and Digital Information Bill, currently before the House of Lords, will weaken existing protections against automated decision-making. In the years since the Horizon scandal, the introduction of the General Data Protection Regulation (GDPR) has given people some legal protection against many types of automated decision, such as loan approvals or hiring decisions. While limited and imperfect, these safeguards introduced important opportunities for mitigating possible harms, as well as a paper trail to support future investigations. The threat of regulatory fines and legal action gives organisations an incentive to take complaints seriously and to act on them.

Independent legal analysis commissioned by the Ada Lovelace Institute has found that, under the UK’s current proposals for AI regulation and data protection, these incentives will be eroded. It will be even simpler to make automated decisions about people without their knowledge, and without seeking their consent. This could make it easier for organisations to dismiss the concerns of whistleblowers like Alan Bates, who spearheaded the campaign for justice over the Post Office scandal.

The Horizon failure was about more than just inadequately tested software; it was fundamentally about bad governance. It demonstrated something that should be obvious: technology does not exist in a vacuum. It co-exists with people, and is embedded within institutions and power structures. Legally binding regulation is an important tool for reshaping these structures. It can strengthen the hand of individuals and communities in the face of power that is all too often unaccountable. Getting AI governance right means making those developing and deploying technology genuinely accountable for their impact on people and society.

A survey of the UK public carried out last year by the Ada Lovelace Institute found significant concern that an over-reliance on technology will negatively affect people’s agency and autonomy. More than half (59 per cent) of respondents said that they would like clear procedures in place for appealing to a human against an AI decision – but the only way of guaranteeing this is through legally binding requirements.

We have been shown what can go wrong in a “computer says no” society, but without amendment the government’s data protection reforms risk creating an “AI says so” society. Instead, as we enter an age of wider AI use, we should be looking to strengthen rights and protections, ensuring important decisions are subject to meaningful human review, and that personalised explanations are available to affected people.

To achieve this, the government must look again at the Data Protection and Digital Information Bill. It must work with legal experts, civil society and affected people to pass data protection laws fit for the AI era.
