If you are a woman, black, Asian, Jewish, a member of any other ethnic minority, disabled, gay, trans, or any combination of these, your experience online – whether you are famous or not – will often follow a predictable path. You will post something benign, perhaps tangentially political, about an issue you've struggled with because you are not white, straight, cis, or male. Then, via comments, DMs, or tweets, you'll begin to receive messages from anonymous accounts harassing you about what you've written.
It will escalate, with messages flooding in full of sexual and racial slurs, perhaps even telling you to kill yourself. The number of messages you receive will often be correlated with how well-known you are, but you could inadvertently become the centre of an abusive campaign even with only a few hundred followers.
This will be a familiar experience to anyone who has spent a significant amount of time on social media, even before Covid-19. But during this year of lockdowns and online socialising, our awareness of social media's many problems has grown, prompting questions about how these platforms operate.
And one question underpins them all: how do we create accountability online when users can cloak their abuse in anonymity?
The answer for many feels simple: force people to be themselves. Require that every social media user has their real name and a photograph attached to everything they post. If someone is going to share hate speech or threaten another user with violence (or even death), they shouldn't be protected by an untraceable username – everyone else should know what has been said and who is behind it. Eliminating anonymity would instantly create accountability for what is posted online, and encourage people to only share views they'd be happy for everyone in their life (including their employer) to hear them say.
This solution may sound tempting – and it’s likely that it would, indeed, push down the amount of abuse online (although plenty of people are abusive with their full name in sight). It would also reduce protection for those who spread misinformation or deliberate disinformation – a particular plague on the internet right now.
But there are drawbacks to removing anonymity in full: namely, that it would expose vulnerable groups for whom an anonymous online presence has become a lifeline – people who use the protective shield of anonymity not to abuse, but to find the safety and support in online communities that they are denied in the physical world.
The benefits of online anonymity, particularly to young people, are well documented. LGBT+ kids in unaccepting homes and communities, workers in toxic employment environments, those suffering from domestic abuse – people in all kinds of harmful situations flock to digital spaces to find help and to meet others with similar experiences. They do so with their names hidden because otherwise the risk is too great: if their identities were revealed, their lives would become much harder, potentially risking their livelihoods or even putting them in physical danger. Anonymity may look like a luxury, but to take it away from the most vulnerable would mean cutting off their only hope of improving their lives.
So how do we solve this problem? The real issue is not online anonymity itself, but the lack of accountability for those who are genuinely abusive (part of what makes this debate murky is the prevalence of people claiming they have been abused by posts that are merely critical). Whether it involves racism, misogyny, or straightforward threats of violence, it is reasonable to insist that those posting such content be held responsible.
But anonymity doesn’t have to be sacrificed to achieve such accountability. In fact, most online abuse could be mitigated before the message is even seen by its target.
That would require tech companies to start taking content moderation seriously. This might feel like the most boring answer – it's complex, bureaucratic, and has been argued over for years. But the only way to preserve a safe online experience is to balance anonymity with a high level of responsibility, and that means a heavy hand from the platforms that host this content.
The early days of the internet were effectively a free-for-all, and a decade and a half of social media platforms such as Facebook and Twitter has normalised saying whatever you want, however dangerous or abusive. The principle of free speech has been stretched and mutated to the point where any curbs at all, even on content that would be considered illegal offline, are screamed down as a violation of that right. It would feel strange to see a tweet or a Facebook post moderated within minutes of being shared (dare I say, even before it was allowed to go up), because we have been conditioned to believe we are owed immediacy by these platforms. The monumental effort these companies would face in trying to change the current ecosystem, not to mention the backlash, is why this hasn't yet been attempted. But only the platforms can solve the problem, by holding people responsible for what is published on their sites.
Conversations about accountability online are years overdue, but questioning whether people have the right to be anonymous addresses a symptom, not the issue itself. Tech companies will be happy with this distraction – it is yet another opportunity to avoid confronting a controversial problem, one whose solution would cost them users and money. But this problem is, like most issues online, of their own making.
If platforms want to continue to claim they are concerned about online abuse, it is up to them to make the drastic changes needed to tackle it. And as the public, we have to turn our focus away from side issues such as anonymity, and keep the most powerful in our sights.