
Inside Europe’s fight for ethical AI

France wants to expand the scope of the EU's groundbreaking new AI Act. Do its plans go far enough?

By Oscar Williams

On 11 June, Blake Lemoine, a software engineer at Google, published transcripts of interviews he had conducted with an AI “chatbot”. The documents, the engineer claimed, showed the software had developed sentience. Google disagreed, however, and quickly suspended Lemoine for violating the company’s confidentiality policy.

His claim that the company’s Lamda chatbot had gained human-like intelligence is widely disputed by experts in the field. Nevertheless, Google’s decision to punish one of its employees for publicly raising concerns about an AI product has revived fears about the Californian tech firm and its competitors. The Silicon Valley giants, critics say, are failing to be sufficiently transparent about how they are developing a technology that promises to transform not only their own sector, but many others over the coming decades.

Across the Atlantic, policymakers in Brussels and other European capitals are increasingly concerned by the rise of US-controlled AI. They are developing new legislation, the AI Act, to protect consumers and societies from harmful artificial intelligence, boost innovation, and create a set of common standards for the adoption of the technology across the EU’s 27 member states. The act will apply not only to European businesses and governments, but any company that wants to sell “high-risk” AI services into the 450 million-person trading bloc. It is expected to pave the way for similar legislation in other jurisdictions too.

However, early drafts of the text appeared to absolve providers of general-purpose AI – such as Google’s Lamda service – of responsibility for their products. The question of whether, and how, such providers should be regulated is now one of the key points of contention in the act, and is expected to lead to a major lobbying battle. Critics argue that unless significant revisions are made, the legislation will fail to deliver on its aim of creating the conditions for “trustworthy” AI to flourish.

In mid-May, France unveiled proposals to dramatically expand the scope of the act. Under the plans, which have been seen by Spotlight, US tech giants developing general-purpose AI would face fresh scrutiny. However, critics of the proposals, including influential MEPs, argue they are too weak, while industry groups insist they go too far. The row reflects the fraught nature of developing regulation for one of the most transformative technologies of the 21st century.

In recent years, Amazon, Google and Microsoft have poured billions of pounds into general-purpose AI (GPAI). These systems can analyse large datasets, from imagery to text, for a variety of purposes. The tech giants use these tools for their own benefit; Google, for example, has used algorithms produced by its AI division, DeepMind, to find more efficient ways to run its energy-intensive data centres.

But these companies are also increasingly selling general-purpose AI services to other businesses and organisations, which can then use the models they have procured however they see fit. Microsoft licenses GPT-3, a groundbreaking large language model, to companies through its Azure cloud computing service. These businesses, referred to as downstream providers or deployers, can then apply GPT-3 to specific use cases, from developing a chatbot to summarising reports and crafting marketing copy. Similarly, public sector organisations might use Azure to access Microsoft’s decision-making models to assess a citizen’s eligibility for a particular service.
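For readers curious about what this looks like in practice, the snippet below is a minimal, hypothetical sketch of how a downstream deployer might send a prompt to a hosted general-purpose model and receive generated text back; the endpoint, credential and field names are placeholders for illustration, not the actual Azure or OpenAI interface.

```python
# Hypothetical sketch only: a "downstream deployer" calling a hosted
# general-purpose language model over the web. The endpoint, credential
# and JSON fields are illustrative placeholders, not any provider's real API.
import requests

ENDPOINT = "https://api.example-cloud.com/v1/completions"  # placeholder URL
API_KEY = "YOUR_API_KEY"  # placeholder credential

def summarise_report(report_text: str) -> str:
    """Send a prompt to the hosted model and return its generated summary."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "prompt": "Summarise the following report in three sentences:\n" + report_text,
            "max_tokens": 200,
        },
        timeout=30,
    )
    response.raise_for_status()
    # The deployer only ever sees the returned text, not the model's weights
    # or training data: the "under the hood" problem discussed later in this piece.
    return response.json()["text"]
```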


General-purpose AI is widely seen as the future of the field. But when the European Commission unveiled the first draft of the act last April, many were surprised to see it did not mention GPAI. Instead, the legislation suggested the burden of compliance would fall on deployers of specific AI services in “high-risk” contexts, such as employment, migration and law enforcement.

The obligations introduced by the legislation, such as ensuring models are trained on unbiased datasets, and the powers it gives regulators, such as the right to remove certain services from the market, did not appear to apply to general-purpose AI providers. This, legal experts warned, posed a challenge to downstream providers. How could they be expected to ensure that the models they had bought from other companies, and had little control over, complied with the new standards?

Furthermore, the majority of algorithms produced by US tech giants, such as those used in search engines and social networks, are set to fall outside the scope of the rules entirely because they are not classified as high risk. When the risk-based approach was presented by the European Commission last year, Google welcomed it, saying it struck “an appropriate balance” between protecting against AI-related harms and promoting innovation. Microsoft, which has invested more than $1bn in OpenAI, one of the leaders in large language model development, said it strongly supported the approach.

But, perhaps concerned by the prospect of being accidentally ensnared in the scope of the legislation, Google proposed a new class of suppliers, “deployers”, distinguishing between “providers” of general-purpose AI and the businesses and organisations putting it into practice. The company said it would be happy to be called upon to provide information to show deployers were complying with the regulation. But under Google’s proposal, its obligations would go no further.

The previous presidency of the Council of the European Union appeared to heed the industry’s advice. In November, the right-wing Slovenian government, then holder of the presidency, introduced a “compromise” that, if passed, would have confirmed that general-purpose AI providers would remain largely untouched by the regulation. The proposals “exculpated the providers of large upstream general-purpose AI”, says Newcastle University law professor Lilian Edwards. “[They] dumped all the effort and liability on small downstream deployers, who in all likelihood had neither the legal nor practical ability to see the details of the upstream model, or fix them.”

The backlash to the proposals prompted a major U-turn. The current Council presidency, held by the French government, published its alternative to the Slovenian text on 13 May. “It is essential,” states a draft obtained by Spotlight, “to clarify the role of actors who may contribute to the development of AI systems. In order to ensure a fair sharing of responsibilities along the AI value chain, such systems should be subject to proportionate and tailored requirements and obligations under this regulation before their placing on the union market or putting into service.”

The new plans would require two key commitments from general-purpose AI providers. First, they would need to ensure they were documenting their systems and complying with the standards that apply to other providers of more specific AI. These standards are yet to be drawn up, but will seek to ensure that AI providers do not introduce bias into automated systems. Second, they would be called upon to provide information to downstream providers in the way that Google suggested.

“The new French proposal is a big improvement on the previous Slovenian suggestion,” says Edwards, who wrote a report on the AI Act for the research organisation the Ada Lovelace Institute. “However, it is not perfect.” The data-sharing requirement “gets round the problem that deployers taking a GPAI service via an API [application programming interface] – e.g. to use to create a helpline chatbot – cannot currently see ‘under the hood’ so have difficulty knowing if the service is biased or erroneous and can’t easily fix the problems when they occur,” Edwards adds. “But there is a carve out for IP [intellectual property] rights or ‘confidential business information or trade secrets’, which could pretty much render this duty non-existent.”

The legal expert raises another concern: “There’s one big exception – article 14, which says the provider has to make sure that a downstream deployer (‘user’) can exercise oversight over the system if it goes wrong – e.g. pull the plug if it’s producing unfair results as opposed to accepting that ‘computer says no’. This would be an extremely tough duty for GPAI providers to meet, but it’s also one deployers can’t do much about on their own.”

But perhaps the greatest safeguard comes from article 65, which gives regulators the power to force providers to retrain high-risk AI or remove it from the market entirely. Brando Benifei MEP, one of the two rapporteurs of the bill, has welcomed the changes, but is concerned about the opt-outs.

“Compared to the Slovenian compromise proposal, the French one is a welcome improvement, as it shows the Council [of the European Union] is considering more carefully the impact and characteristics of general-purpose AI systems in the value chain, moving from a full exemption as per the initial draft,” Benifei tells the New Statesman. “However, the proposal is still not sufficient, as it would entail subjecting these systems only to a subset of requirements, with a self-assessment procedure, and an exemption for SMEs. We believe that general-purpose AI systems should be subject to all requirements foreseen by the AI Act.”

It is not guaranteed that the latest compromise will find its way into the final legislation. A previous proposal by the French presidency to narrow the legislation’s definition of AI received a fierce backlash, and several MEPs are confident it will be rejected.

The plans to expand the scope to cover GPAI have been better received. However, the tech lobby is still expected to push back. “It’s concerning for our members because it’s something that could be increasing the scope of the legislation quite dramatically,” says Cecilia Bonefeld-Dahl, director general of DigitalEurope, a trade association for the tech industry. “How do you control what GPAI is used for? It’s not so easy. The problem with the French presidency’s compromise is that most deployers will not be using it for high-risk purposes. Should they go through all these different requirements?”

The European Commission, Bonefeld-Dahl claims, has underestimated the cost of compliance. Rather than basing the estimates on the average IT salary, Bonefeld-Dahl continues, officials should have factored in the higher earnings of compliance officers. “Some of the estimates were ranging from €100,000-€300,000 for conformity for one AI software,” she says. “Even for large companies it’s not a small investment. If you have general-purpose AI thrown in, that’s very concerning.”

In the coming weeks, the Council of the European Union will convene government ministers from across the EU to agree the terms of their preferred version of the draft text. At the same time, MEPs are beginning to sift through thousands of amendments submitted by parliamentary committees, before voting through a final proposal later in the year. Work on the act began in 2019, at the start of Ursula von der Leyen’s presidency of the Commission. The legislation is due to enter the trilogue system of negotiations between the Commission, Council and Parliament in early 2023.

The three institutions of the EU will be tasked with resolving a number of issues critics have raised with the legislation in its current form, from the way it defines AI to the absence of protections for individuals. But it is the scope of the act as it relates to general-purpose AI that is likely to attract the most scrutiny, given that critics of the earlier drafts believe they fail to properly assign responsibilities and the tech lobby is expected to mount a forceful pushback. Whatever stance MEPs and officials take, it will have major consequences for the way in which AI is developed and deployed in the years ahead, and not just in Europe.


