
9 June 2021

How social media companies help authoritarian governments censor the internet

Increasingly, the price of access to a global digital market is assisting the policing of online speech.

By Laurie Clarke and Katharine Swindells

The president of Nigeria, Muhammadu Buhari, was so outraged when Twitter deleted his post last Wednesday (2 June) that he ordered the country’s mobile networks to block access to the social network and broadcasters to delete their accounts. While Buhari’s comment – in which he appeared to threaten to return Nigeria to the appalling violence of its 1967 civil war – certainly looks like a violation of Twitter’s policies, it’s an unusual political intervention by the company (when Donald Trump threatened people protesting the murder of George Floyd with “shooting”, the post was merely flagged). More common are the requests governments make for the removal of posts that criticise them – and social media companies are often willing to comply.

Companies can restrict certain types of content in specific jurisdictions while leaving it visible elsewhere. Their policies state that they reject “improper” requests, and that they only remove content that violates platform rules or local laws. But what is legal is not necessarily just. In the past few years, governments around the world have introduced a flurry of new laws governing online speech. In countries that restrict freedom of expression, social media platforms are being offered a choice between aiding repression or losing access to lucrative markets.

The New Statesman examined Facebook, Google and Twitter data, and found that countries with less internet freedom (according to US NGO Freedom House’s annual Freedom on the Net report) tend to make the most requests for content removal, especially for reasons such as “government criticism” and “national security”. These include Russia, India, Turkey, South Korea, Brazil, Vietnam and Thailand.

As India was engulfed by a second surge of Covid-19, for example, the government attempted to quash online criticism of its handling of the crisis. Last month, it ordered Twitter, Facebook and Instagram to block around 100 critical social media posts in the country – coinciding with a temporary ban of the Facebook hashtag “ResignModi”. It’s a pattern of online censorship that has become increasingly common in the embattled democracy. Supporters of the farmers’ protests, which brought millions out onto the streets of Delhi, were targeted in the same way earlier this year.

In India, most content removal requests are made under the Information Technology Act, which was passed in 2000 by a coalition led by current prime minister Narendra Modi’s Bharatiya Janata Party (BJP). Section 69A of this law authorises the government to block any digital information it judges necessary to protect India’s sovereignty and security.

The Indian government recently reinforced the law further after it failed to get a number of Twitter accounts supportive of the farmers' protests suspended. “What seemed like an initial small victory was actually followed by a very large crackdown,” says Rohin Garg, associate policy counsel at India’s Internet Freedom Foundation.


The new rules, introduced in March, target encryption, force platforms to verify users’ identities, impose strict time limits on the removal of content and the sharing of user information with law enforcement, and expand the government’s ability to remove news media content. They are “very thick manacles for free speech in a variety of ways”, says Garg.

The Indian government’s social media crackdowns are part of a broader shift towards authoritarianism. The Economist Intelligence Unit’s Democracy Index 2020 saw the country slide from 27th to 53rd place under Modi. “This has become exacerbated with the onset of this latest wave of the pandemic,” says Garg. He says that volunteers helping people during the outbreak have been painted by Modi’s government as agents of the opposition. “The chief minister of Uttar Pradesh, India's most populous state, has said that anyone saying we don't have enough oxygen or asking for help of this sort will be arrested and their property will be seized.”

Google’s transparency data highlights that, in recent months, it received multiple requests from Indian law enforcement for the removal of YouTube videos about Covid-19 that ranged from “conspiracy theories and religious hate speech” to “news reports and criticism of the authorities’ handling of the pandemic”.

The New Statesman's analysis found that between January 2019 and June 2020, only three countries ranking in the top ten for removal requests in the categories of “national security”, “government criticism” and “religious offence” have full internet freedom: France, Germany and the UK. Of these, according to the Democracy Index 2020, only Germany and the UK qualify as full democracies; France was reclassified as a “flawed democracy” in that year’s index.

Russia, which submitted the most requests, is well-known for internet crackdowns. It recently introduced new measures to choke off access to social media sites, and is building a sovereign internet to give the government more control over what its citizens can see online.

Unlike Google, Facebook and Twitter don’t break down successful content removal requests by category. But the data shows that the countries making the most “government criticism”, “national security” and “religious offence” removal requests to Google also tend to rank highly for overall content removal requests across the three platforms. Mexico, Brazil and South Korea, all of which have only partial internet freedom, also rank in the top ten countries for overall removal requests.
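For readers curious how such a cross-referencing exercise might be reproduced, below is a minimal sketch in Python. The file names, column names and data shapes are illustrative assumptions rather than a description of the New Statesman's actual pipeline; the platforms' transparency reports and Freedom House's Freedom on the Net scores would first need to be exported into the assumed CSV format.

```python
# A minimal, hypothetical sketch of the cross-referencing described above.
# File names, column names and data shapes are illustrative assumptions,
# not the New Statesman's actual methodology.
import pandas as pd

# requests.csv: one row per (country, platform, category, requests),
# compiled by hand from Google, Facebook and Twitter transparency reports.
requests = pd.read_csv("requests.csv")

# freedom.csv: Freedom House "Freedom on the Net" data, one row per country,
# with a "status" column of Free / Partly Free / Not Free.
freedom = pd.read_csv("freedom.csv")

# Total removal requests per country across all three platforms.
totals = requests.groupby("country", as_index=False)["requests"].sum()

# Join the totals with internet-freedom status and rank countries.
merged = totals.merge(freedom, on="country", how="left")
merged["rank"] = merged["requests"].rank(ascending=False)

# Countries in the top ten for removal requests, grouped by freedom status.
top_ten = merged.nsmallest(10, "rank")
print(top_ten.groupby("status")["country"].apply(list))
```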

Jillian C. York, the Electronic Frontier Foundation’s director for international freedom of expression, who has been researching social media content moderation for the past ten years, says that platforms have been capitulating to the requests of authoritarian governments “since as early as 2007, when YouTube first made the decision to remove content at the behest of Turkey and Thailand’s governments. The situation has only grown worse over time, with companies capitulating to ever-increasingly authoritarian governments, including Saudi Arabia’s at times.”

A number of nations have pressured social media companies to hire local executives in order to be more responsive to government demands. YouTube, Facebook, TikTok and Twitter have all agreed to do so in Turkey, despite pressure from digital rights activists. This means “the social media platforms will be compelled to comply with every single blocking and removal decision they receive,” says Yaman Akdeniz, professor of law at Istanbul Bilgi University. “Otherwise, they will be held liable for the content that they did not block or remove.” In India, government officials threatened Twitter executives with imprisonment if they failed to act on content removal demands.

Once employees are on the ground, companies have an extra impetus to comply with government requests in order to protect them. “The question, then, should be what is taken into account when deciding to open an office in a given country,” says York. “When Facebook and Twitter and Google opened offices in Dubai, they were certainly aware of the UAE’s record on human rights.”

It’s important to note that the data provided by platforms is not the full picture, but simply the most visible element. There are other, less obvious, ways that countries exert influence over online speech through their relationships with social media companies. “The people I’ve spoken to who’ve worked in these companies all point to the United States, Germany, France, and Israel as being major influencers of policy, in varying ways,” says York.

[See also: “The left was deplatformed long before the right”: Jillian C York on social media’s haphazard war on content]

This year has seen a vast enlargement of the category of misinformation (ie, legal speech) that social media platforms take action on. In parliamentary hearings with tech companies, it’s common to see British and US legislators demand social media companies remove more speech, both illegal and legal. US senators recently sent a letter to Facebook demanding the removal of 12 individual user accounts they dubbed the “disinformation dozen”.

Some countries, including France, Israel and Vietnam, have established back-door channels to pressure companies over content. Israel’s Cyber Unit uses an “alternative enforcement” mechanism to pressure platforms to voluntarily remove content outside of the law, something rights activists say is disproportionately used to silence Palestinians. Meanwhile, a Wall Street Journal investigation found that Facebook’s head of public policy in India has close ties to the BJP, and that this may have influenced the company’s content decisions in the country.

More democratic countries don’t necessarily file more reasonable content removal requests either. Google data shows that France recently requested the removal of YouTube videos that contained audio recordings of confidential parliamentary hearings and a Google+ post with a picture depicting two high-ranking government officials as dictators.

The most vexatious question in all of this is whether companies should be beholden to local laws, or appeal to some higher set of ideals. York points out that some of Google and Facebook’s actions would appear to be in conflict with the commitments they made upon joining the Global Network Initiative, an NGO with the goal of preventing internet censorship by authoritarian governments.

“International human rights law offers a good set of principles for companies to assess local obligations,” says Brenda Dvoskin, doctoral candidate at Harvard Law School. “Companies bear responsibility for what their services do in the world. If they are silencing activists or spreading incitement to violence, they are responsible for those actions.”

[See also: Big Tech’s favourite law is running out of time]

Social media companies argue that if their services are cut off, people in authoritarian countries will be worse off, with fewer means to communicate. Garg is somewhat sympathetic to this argument, saying that if companies refuse the Indian government’s requests, it may block them, as it did the video-sharing app TikTok last year. Instead, he says, what is needed is greater transparency and accountability for the platforms’ decisions.

But social media platforms’ acquiescence to government demands could affect how they’re viewed by users. According to Akdeniz, social media platforms’ willingness to work with the Turkish government has left some disillusioned. “Especially the dissidents and the opposition, who are very unhappy and no longer trust these companies,” he says. “No one seems to believe their ‘rest assured’ messages.”
