
Technology
2 December 2021

It is time to regulate Twitter and other social media platforms as publishers

The libertarian utopia that Jack Dorsey thought he had created is gone forever.

By Paul Mason

In one tweet Jack was gone. Twitter’s chief executive Jack Dorsey resigned abruptly on 29 November, after months of pressure from investors who objected to his part-time approach to leading the social media company he founded.

His replacement, 37-year-old Parag Agrawal, the company’s former chief technology officer, got straight to making changes. Overnight Twitter announced a new anti-doxxing policy, designed to remove unauthorised images of people. 

Doxxing – the practice of revealing personal information, such as an address, phone number or bank details, about a targeted individual – was already banned, as were abusive messages and threats. Now the unauthorised posting of people’s photographs, with or without abuse, can lead to a user’s account being suspended and the material removed.

The American right, already outraged over Twitter’s permanent ban on Donald Trump early this year for inciting violence, is furious about the new measure. Anti-fascists also complained that accounts dedicated to tracking far-right extremists were immediately suspended. 

But the move has a wider significance. Agrawal has made a more fundamental admission that could change the way all social media is regulated. If you read the new rules for posting content on Twitter, it is now beyond reasonable doubt that the company is a publisher with an editorial policy.


The company reserves the right to remove pictures of people. But if the pictures are of a “public figure”, or if they show an ordinary person and are “shared in the public interest” or “add value to public discourse”, they are exempt from removal. Likewise, if an image is already being used in the mainstream media, it could be exempt.

Who decides? Agrawal and his techbro employees in California. Based on what criteria? Twitter’s new policy says the misuse of photographs has “a disproportionate effect on women, activists, dissidents, and members of minority communities”. These are tough decisions to make, as every newspaper, magazine and TV news editor knows. But done right, they will make Twitter a more tolerable place for users, and presumably a more attractive space for advertisers.


Agrawal signalled his desire for a tougher editorial policy last year: “Our role is not to be bound by the First Amendment [which protects freedom of speech], but our role is to serve a healthy public conversation,” he told MIT Technology Review.

In addition to banning Trump, during the 2020 US election Twitter added an extra step to the process of retweeting someone else’s content, asking people to “read before retweeting”. It also routinely labels “alt” media sites – such as RT and Redfish – as “Russia state-affiliated media”. The libertarian utopia of a global free-speech platform that Dorsey designed in 2006 is gone forever, destroyed by trolls, bots, governments and, arguably, by human nature. I do not lament its passing.

When I first joined Twitter in 2009, having been inspired by the Iranian democracy uprising, it was a liberating experience. We were at the height of horizontalist activism, of swarming, of ad-hoc alliances seeking the truth.

During the Libyan uprising of 2011, for example, activists would post requests for information on the weapons used against them by Gaddafi’s military, and within minutes expert voices from around the world would post details of weapons, calibres, suppliers and ranges.

Twitter and Facebook were such powerful tools for social justice movements that back then it seemed impossible that they would ever become weapons in the hands of dictators and oligarchs. The first attempts of the powerful to suppress social media failed. Egypt’s Hosni Mubarak tried to switch off Facebook, but was still overthrown. Turkey’s Recep Tayyip Erdoğan, too, suppressed it repeatedly, but the activists found work-arounds.

It was only around 2013-14, after the Euromaidan protests in Ukraine and the suppression of the Muslim Brotherhood in Egypt, that elites around the world understood: the only way to take down a free information network is to fill it with noise. This noise proliferated in the form of troll farms, doxxing, hate speech, bots (automated fake accounts) and disinformation.

The algorithms of the social media giants did the rest. Report after report shows that Twitter, Facebook and YouTube deployed algorithms that rewarded outrage, hate and prejudice. Facebook has buried negative internal reports about its content. It still makes billions of dollars.

In the brief history of social media’s contamination – by far-right activist groups, disinformation networks and states – what were once dubbed “networks of outrage and hope” became the means of imposing despondency and despair.

When Facebook tested its own algorithm in India this year, by setting up a fake test user that simply followed recommendations generated by Facebook itself, the account was quickly swamped in “a near constant barrage of polarising nationalist content, misinformation, and violence and gore”. The project manager reported: “I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total.”

The solution has been staring us in the face for years. It is to regulate the social media platforms in the way we regulate newspapers, radio channels, movies and TV.

The tech giants’ argument was that they were “only platforms” for content generated by third parties. This was never true. Each of the platforms deploys decision-making algorithms that determine what content is prioritised, and it’s been repeatedly shown that the algorithms themselves are flawed.

Twitter, for example, admitted that its algorithm was automatically cropping photos of people in a racially biased way, and that it was amplifying right-wing politicians and media outlets more than left-wing ones. How and why it does so is complex – a result of the way right-wing people use Twitter, the language of right-wing politics and probably flaws in the algorithm itself, reflecting the assumptions (and negligence) of the people who designed it. 

The problem is further complicated by the fact that Twitter, like all social media companies, allows the algorithm to redesign itself using machine learning. Ultimately, it can take decisions no human being has mandated.

Regulators, with those in Europe taking the lead, have up to now focused on forcing the tech giants to publish details of the algorithms they use, and to conduct risk assessments of the content they are disseminating.

But it’s time to do more. In neither the UK nor the European Union is there a “First Amendment” right to disseminate hate speech and incitement. That is why newspapers self-regulate and the government regulates both broadcast and film content.

Hence, if the Daily Mail publishes an article about refugees that generates an avalanche of racist abuse, it will often close its comments section rather than waste time policing the abusive content. And if the BBC finds itself with video of an atrocity, or a crime, or a moment of death, its own rules usually require it not to publish.

We can debate where the line should be drawn. But in a civilised society it will always be drawn through a debate between journalists, politicians, consumers and lawyers. In the rules-free world of social media, it is being drawn arbitrarily, reluctantly, by unaccountable executives according to principles that remain opaque.

Britain’s forthcoming Online Safety Bill takes a light-touch approach, designating Ofcom as the social media regulator but merely requiring companies to introduce procedures and risk assessments to prevent harm. It maintains the fiction that there is a distinction between “platforms” serving “third party content” and media organisations that publish content they have created.

Once you understand what algorithms do, the fiction evaporates. The algorithms currently deployed create a toxic user experience, amplify hate and prejudice, and make decisions beyond human control. Any corporation using such algorithms, especially where there is no transparency and no public change log, should be designated as content producers.

The executives at Facebook and Google know this moment is coming, which is why they’re making so many concessions and apologies. Jack Dorsey, in his own way, was one of the last believers in the fiction of the neutral platform protected by America’s First Amendment. 

Now Dorsey is gone, Twitter needs to get real. It is surely one of the biggest purveyors of anti-Semitism, Islamophobia, homophobia and sheer toxicity in history. It could shed millions of fake accounts tomorrow. It could publish its algorithms. It could require all users to verify their real identities, even if some legitimately continue to post under pseudonyms.

It could, in short, mature into the kind of platform that helps maintain democracy, civility and truth. Instead, it is one weaponised to destroy them.

