
12 January 2021 (updated 21 Sep 2021, 6:13am)

It has always been easy for social media firms to pull the plug on extremism

Why have the tech giants waited until now to curb the promotion of ideas that lead to violence? 

By Sarah Manavis

On 6 January, as the world watched rioters break into the US Capitol building brandishing guns, knives, and zip ties, social media companies began to panic. The attack had been planned, organised, and promoted on their platforms – on Twitter, Facebook, and other niche, hard-right sites such as Parler, 8kun (formerly 8chan), and 4chan. Ahead of the joint session of Congress scheduled for that day to certify Biden’s election win, Trump supporters across the country shared messages that they would “storm the government buildings, kill cops, kill security guards, kill federal employees and agents, and demand a recount”. 

This insurrection was exacerbated, predictably, by one Twitter account in particular: President Trump’s, from which he called the rioters “patriots” and repeatedly urged them on. (This encouragement, of course, came after two months of tweeting that the election had been “stolen”.) The Capitol Hill riot was far from the first time violence had occurred in Trump’s name, nor the first time he had courted it on Twitter. However, it was the clearest, most undeniable example – strong enough to, as New York Times columnist Kevin Roose put it, “finally cause the platforms to hold hands and jump”. 

So after temporarily freezing his account on the night of 6 January, Twitter became the first social media site to do what users had for years been calling for: it permanently suspended Trump’s account. What happened in the days following the president’s suspension from his platform of choice could only be described as an avalanche of tech company backlash against Trump and his supporters. 

By 9 January, Trump had been banned from Facebook and Google’s various platforms, as well as a smattering of other social media sites. Twitter even went so far as to purge any accounts that were linked to the pro-Trump conspiracy theory QAnon (many hard-right accounts have since complained that this lost them tens of thousands of followers). Parler, a Twitter-esque site founded in 2018 that has become a haven for hard-right extremism, was informed on 9 January that its web host, Amazon, would be pulling its hosting. By 11 January, Parler was offline. 

[See also: What is Parler?]


It’s easy to celebrate what is, on the surface, objectively good news. Donald Trump’s social media presence – and the social media presences of his loudest supporters – have played a pivotal role in the acceleration of the white supremacist, alt-right ideology that has ripped across the world. It’s also easy to shout concerns about free expression and the strange world we live in, where privately owned tech companies get to play moral arbiter for what is shared in the public sphere. Easiest of all is to treat these two positions as a binary. But they are, in fact, not mutually exclusive. 

Trump’s widespread social media ban, of course, was sparked by the violence we saw on Capitol Hill. But it was made easier by two other things: that Biden’s win had been made official, and that, for the first time, a majority of groups across the political spectrum were condemning the attack and blaming Trump for inciting it. Twitter’s decision (the starting pistol) was rooted in seizing this rare, newly opened window: the tech giant’s management team knew that Trump’s outrage would only be “presidential” until 20 January, and that banning him now meant Twitter would receive the least backlash from the public (and from its investors).

The same cynical reasoning applies to the countless other platforms that hosted Trump’s opinions, knowing for years that they were a huge part of mainstream online radicalisation. And for all the good they’ve done in stopping this particular account from encouraging dangerous hard-right views, we have to think about the tens of millions (if not more) who have already drunk the Kool-Aid; who are now unable to think beyond the extremist, white supremacist views they have learnt on social media. 

We should therefore question Silicon Valley’s role in allowing the organisation and promotion of ideals that have led to real-life violence. Concerns over free speech are valid – why should Twitter boss Jack Dorsey get to make the ultimate call on who speaks and who doesn’t? But we should be more concerned about how pernicious content that falls outside free speech protections has become acceptable on social media – and on some sites, the norm. Hate speech, the incitement of violence, racial slurs, and death threats are commonplace in particular corners of every social media platform in existence today, as are misinformation and conspiracy theories – including those shared by people who believe what they say is true, and by those sharing it for sport, knowing it isn’t.

Why have tech companies been allowed to let this kind of speech run wild, to a point where these views may be irreversible, or will at least take decades to unlearn? It has always been easy for them to pull the plug on this movement.

When a woman was killed at a white supremacist rally in Charlottesville in 2017 – a rally organised on Twitter, 4chan, and Reddit – social media platforms did nothing. When a shooter in Christchurch, New Zealand killed 51 people in 2019, after being radicalised on YouTube (and while livestreaming his attack to Facebook), these platforms did nothing. So while it’s legitimate to feel troubled that a few hoodie-wearing billionaires get to decide who gets to say what, and when, online, it’s far more worrying that those men – and they are almost all men, sitting unaffected in California – have had this power all along, yet have turned a blind eye and let these violent ideologies thrive. 

On Monday, the FBI warned that armed “protests” (is it still a “protest” if it’s armed?) were being planned in all 50 states ahead of the inauguration. These will likely be smaller and less violent – thanks to greater scrutiny from law enforcement following the riot on Capitol Hill, and the fact that the president won’t be able to egg them on – but they will still, like their predecessor, be organised online. What will the platforms do to stop them?

The problem with online radicalisation, and big tech’s fundamental part in it, is that these companies are not motivated by morality. They are motivated by public opinion, by what will afford them the least scrutiny, and by what will, ultimately, keep them in the black. After what we saw last week, for the first time ever, it was in tech companies’ best interest to remove Trump and pro-Trump conspiracy theorists. And while it’s easy to celebrate these decisions when they’re on the right side of history one time, we must also ask: which side of history has big tech been on for the last ten years? 
