6 June 2019

How Big Tech funds the debate on AI ethics

By Oscar Williams

Most people are unaware of the extent to which algorithms already make life-changing decisions on their behalf. Software can diagnose illnesses, shortlist you for a job interview or assess whether you should be granted parole. As machine learning and artificial intelligence become more sophisticated, automated decision-making will become ever more influential, and the rules that underpin these decisions may one day have as much effect on daily life as the laws of nation states. But while elected governments draw up legislation in public, the technology industry is spending millions of pounds quietly attempting to shape the ethical debate about the decisions machines make.

Spotlight and NS Tech have spent the past two months investigating the role that the industry plays in supporting British research into AI ethics. Data obtained under freedom of information laws reveals that Google has spent millions of pounds funding research at British universities over the last five years. Oxford University alone has received at least £17m from Google.

The search giant, whose parent company was worth more than $750bn at the time of going to print, has been fined €8.24bn over the last two years by the European Commission for its business practices. The French data protection authority fined the company a further €50m in January, and in the UK the Information Commissioner’s Office is currently investigating claims that the company has breached EU data laws.

While much of the funding provided to Oxford goes towards technical research, Google and DeepMind – a British AI company wholly owned by Google’s parent company, Alphabet – have also supported work at the Oxford Internet Institute (OII) exploring the ethics of AI, the civic responsibilities of tech firms and research into the auditing and transparency of automated decision-making (ADM).

A number of academics at the OII, most notably Professor Luciano Floridi, Associate Professor Sandra Wachter and Dr Brent Mittelstadt, are prolific public commentators on ethical AI and ADM. Floridi is an advisor to the Information Commissioner’s Office, a board member of the UK government’s Centre for Data Ethics and Innovation and a member of the European Commission’s High-Level Expert Group (HLEG) on AI. He also served on Google’s short-lived AI ethics board earlier this year.

In January 2017, the academic journal International Data Privacy Law published an article by Floridi, Wachter and Mittelstadt which argued that European data law does not give EU citizens the right to an explanation of decisions made about them by machines. Other experts expressed concern about what this might mean when software makes contentious decisions. According to a page on the OII’s website, the same researchers then received a grant from Google covering work on the “underlying ethical and legal principles, as well as latent policies shaping the auditing [of automated decision-making] debate”. An OII spokesperson said the funding enabled the delivery of an internal workshop at the company.

Later that year, DeepMind also committed to funding a programme of work at the OII on a similar subject. A page on DeepMind’s website states that the funding would support a “research project on explainable and accountable algorithms and automated decision-making in Europe”. The project started in January last year and is led by Wachter, but also supports some of Mittelstadt’s work.


When Spotlight asked the OII for evidence of papers that acknowledged funding from either Google or DeepMind, a spokesperson presented four articles, each of which was written or published in or after April this year.

However, two papers identified by Spotlight – authored by Wachter and Mittelstadt, published since January 2018 and addressing approaches to algorithmic accountability – do not acknowledge the funding received from DeepMind. Other academics consulted for this feature said it would be normal practice to disclose any corporate funding relevant to a paper’s topic and, where necessary, to include a disclaimer for the sake of transparency.

An OII spokesperson said that only a small proportion of its funding comes from business, that it is committed “to the highest standards of academic independence” and that all projects are carried out “in line with Oxford’s culture of academic freedom”, meaning “external funding does not compromise the value or integrity of the research output”. The institute also noted that Wachter and Mittelstadt have recently written papers which acknowledge DeepMind’s funding, are critical of the industry and call for more stringent regulation.

“In the interests of transparency,” the spokesperson said, “we publish the details of all our corporate supporters on our website and the University of Oxford publishes a consolidated breakdown of its annual income and expenditure.” However, unlike many other organisations contacted for this piece, the OII refused to reveal how much the individual grants it had received from Google were worth. Another spokesperson for the university said that “the publication of research results, for example in journal articles or conference proceedings, will always acknowledge the source of any funding that has supported the work”. DeepMind said it was passionate about supporting the research community and proud to have supported Oxford through several “unrestricted donations”.

Google has been funding research into the ethics of technology for more than a decade. In 2017, the Campaign for Accountability (CfA) claimed to have identified 329 research papers published from 2005 onwards about “public policy matters of interest to Google that were in some way funded by the company”. The company has also partnered with a number of European organisations to launch policy research units, including Readie, a London-based joint initiative with the innovation agency Nesta.

“Google uses its immense wealth and power to attempt to influence policymakers at every level,” said Daniel Stevens, a campaigner and executive director of CfA, at the time. Addressing the research identified by his organisation as having been funded by the company, he said: “At a minimum, regulators should be aware that the allegedly independent legal and academic work on which they rely has been brought to them by Google.”

However, it later emerged that CfA had in turn received funding from Oracle, which had been fighting Google in court. A recent call for papers on corporate funding published by the University of Amsterdam observed that the incident “perfectly captures the complexity of the situation this [industry funding] creates around science”.

A Google spokesperson said that “academic institutions must properly disclose our funding” and that grants are only offered “for discrete pieces of research, not to shape academics’ subsequent scholarship. The researchers and institutions to whom we award research grants will often publish research with which we disagree.”

The company is not alone in funding this sort of research. Facebook announced in January that it was set to donate $7.5m to the Technical University of Munich, to fund the launch of a new AI ethics research centre. Two months later, Amazon announced it would partner with the US National Science Foundation (NSF) “to commit up to $10m each in research grants over the next three years focused on fairness in AI”.

Academics are divided over the ethics of such funding. While some say businesses play an important role in supporting research, others question whether this work can ever be entirely independent. “It creates a level of dependency, especially in an age where there’s such pressure to get external funding,” says Lina Dencik, the co-director of the Data Justice Lab at Cardiff University and a member of the Funding Matters initiative. “In that context, it’s even more pressing that there are more discussions around who universities should and shouldn’t accept money from.” Oxford confirmed that it had not rejected any offers of funding from Google, Amazon, Facebook, Apple or Microsoft over the last five years.

“A university abdicates its central role when it accepts funding from a firm to study the moral, political and legal implications of practices that are core to the business model of that firm,” writes Yochai Benkler, a Harvard Law School professor, in a recent article for Nature. “So too do governments that delegate policy frameworks to industry-dominated panels.”

Last year Thomas Metzinger, a German philosophy professor, was named as one of a select group of ethicists appointed to the European Union’s AI expert group. Around half of the HLEG’s 52 members were industry representatives. Google, Facebook, Amazon, Microsoft and Apple were represented by an organisation called DigitalEurope, which has since denied that the group was tilted in the industry’s favour.

Metzinger and a machine learning expert were tasked with drawing up the “red lines” for the use of AI in Europe. “We did this for six months,” he says. “We talked to many other experts and extracted their proposals, formulated it and put it in a document of non-negotiable normative principles.” Two of the “red lines” Metzinger proposed were that AI should never be used to build either autonomous lethal weapons or social credit score systems akin to China’s. But after he submitted his proposals, some members of the group pushed for a softer stance.

“Suddenly there was this categorical sentiment by various industry representatives and others that this phrase ‘red line’ cannot appear in that document anymore,” he recalls. The final version of the text, which has now been shared with the European Commission, instead referred to these issues as “critical concerns”.

The Commission wants Europe to invest €20bn a year in AI in the coming years, and some experts have proposed the creation of a dedicated research centre similar to CERN. One way to ensure regulation keeps pace with the technology, says Metzinger, would be to establish an allied Ethics Centre of Excellence with a series of rotating fellowships, as well as a wider European network of 720 professorships in the applied ethics of AI.

“For the Centre, the idea would be to have the young ethicists right there [working with the developers] and for the network to educate new generations of students and to keep the wider public informed and to structure public debates,” says Metzinger. “Europe is the last grown-up in the room between China and [President] Trump. We have to guide the debate.”
