Mohamed Abdalla’s disillusionment with Google began when he interned at the company in the summer of 2017, after graduating from the University of Toronto. It was less than a year since Donald Trump’s election, and the search giant was still grappling with the fallout of the vote.
A Canadian citizen, Abdalla said that, were he an American, he would “definitely be more Democrat than Republican”. But he told me he “felt bad” for the people asking more Republican-leaning questions in Google’s monthly company-wide meetings. Those who appeared to be on the right “weren’t treated with respect”, he said.
Many of these questions were motivated by the concern that Google’s algorithm might be biased against conservative media outlets – a view that is still common among Republicans. “Conservatives believe that there is a bias against them and I would disagree with them,” Abdalla said. But while he personally believes that “having Fox News on the first page is wrong”, he also worried that his colleagues thought they had a right – or indeed a duty – to make political decisions. “It shouldn’t be up to Google to decide this. It shouldn’t be up to the algorithms to decide.”
Abdalla said he witnessed “clear censorship of the questions being asked, and who gets answered” in the company-wide meetings. Senior Google executives, he said, would pass on the most challenging questions to lawyers. “The fact that you’re having a lawyer answer this means that you’re trying to avoid blame, and it’s too important a question – even if I disagree with the person asking and their viewpoints – to give a non-answer.”
Abdalla liked his team. He didn’t think Google had a malicious plan, but he was concerned about how the company operated. “I’m a systems-oriented kind of guy. Even if I agree with the outcome – if the process is bad, I think that’s something bad, because then it can be used against us eventually if the leadership changes.”
In October last year, Abdalla, who is now studying for a PhD in machine learning at the University of Toronto, published a non-peer-reviewed version of a paper he had written with his brother, Moustafa, a graduate student at Harvard Medical School. The brothers argue that Big Tech’s efforts to influence academic policy research mirror Big Tobacco’s funding of health research in the late 20th century.
The final version of the paper, which is due to be presented at a conference on AI, ethics and society next week, describes how both sectors have used university research funding to revive their public image, influence scientific and policy research, and identify “receptive academics who can be leveraged”. The study, which cites the New Statesman’s reporting, also claims that 58 per cent of surveyed North American AI ethics faculty are “looking to Big Tech for money”.
[See also: How Big Tech funds the debate on AI ethics]
The comparison with the tobacco industry is “intended to serve two purposes”, the paper states. The first is “to provide a historical example with a rich literature to which we can compare current actions”. Secondly, and more bluntly, it aims to be provocative: “To leverage the negative gut reaction to Big Tobacco’s funding of academia to enable a more critical examination of Big Tech.”
The analogy has been well received by some researchers, but academics at another conference rejected the paper as “conspiratorial”.
“It’s clearly very divisive in terms of the feedback it gets,” said Abdalla.
However, since an initial version of the paper was published online last year, several of the concerns Abdalla outlined in theory have come to pass. In October, a leaked document revealed that Google planned to leverage “academic allies” to push back against the European Commission’s plans to increase competition in the digital advertising market. Tech firms such as Facebook and Google have spent millions of pounds funding European tech ethics research.
[See also: Why the EU’s new tech legislation could become the most lobbied in history]
Two months later a researcher at Google, Timnit Gebru, said the company fired her after she resisted efforts to quash her research into the discriminatory nature of AI language technology. In the wake of Gebru’s treatment, other Google employees have either resigned or been forced out. In December, leaked documents showed Google encouraging its researchers to “strike a positive tone” in their analysis of its technology.
“The paper originally started out as a theoretical analysis,” said Abdalla. “It turned from a theoretical paper into a practical, applied paper.”
But he says it will take more than a few revelations for the relationship between industry and academia to change.
“I think the issue is there are a lot of academics who care about this issue but… unless everyone acts, whoever acts first is most likely to get harmed, because you’re losing out on that money.”
One solution proposed by Abdalla is to create a centralised body that manages industry funding for elite universities. But he warned that the proposal would be resisted by the significant number of senior computer scientists who are closely associated with the industry. “Can it be done? Yes. Will it be done? I’m not an optimist.”
[See also: How UCL’s groundbreaking AI research became entangled in Facebook’s net]