In the summer of 2012, PepsiCo ran what it thought would be a lighthearted, engaging campaign called “Dub The Dew” – a competition open to the public, inviting people to submit and then vote on names for one of the company’s soda brands, a new apple-flavoured line from Mountain Dew. While the company likely expected suggestions along the lines of “Green Blast”, “Fruit Dew” or “Apple Slammer”, the most popular entries were names like “Fapple”, “Grannies Squirt” and, infamously, “Hitler Did Nothing Wrong”. After it became clear that PepsiCo was the target of the notorious troll forum 4chan, the company shut down the Dub The Dew competition, which has since become the poster child for warnings against open online surveys.
This week, Twitter announced a new campaign called “Creating new policies together”. From now until 9 October 2018, Twitter users can vote and submit feedback on the social media platform’s new policy on dehumanising language (ie hate speech). The policy reads: “You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.” It goes on to set out definitions of what Twitter deems dehumanising language and an identifiable group, alongside explanatory text on why the company is allowing the public to weigh in on this sensitive topic.
“We want your feedback to ensure we consider global perspectives and how this policy may impact different communities and cultures,” the post on the public consultation reads, saying that Twitter is moving away from its previous method of working with experts and scholars, and will instead seek users’ final approval before making the proposed policy part of Twitter’s rules.
The feedback form below all of this text gives users a limited set of ways to give their opinion. They can vote on whether they find Twitter’s proposed dehumanisation policy clear or unclear, on a scale of 1 to 5, and the form then provides fill-in-the-blank response boxes below questions such as “How can the dehumanization policy be improved, if at all?” and “Are there examples of speech that contributes to a healthy conversation, but may violate this policy?”
While appearing like a democratic approach to solving one of Twitter’s most prevalent problems, this is, of course, a terrible, godawful idea. And – most importantly – Twitter should know it.
Allowing the public to vote on issues isn’t inherently wrong. Many issues that voters are polled on are harmless enough that, regardless of what the majority believes, the result won’t be life-endingly bad. Based on current polling, the British public would like to require limited time slots for visiting overrun tourist attractions, demand that pubs serve meals or have a beer garden, and relax rules on taking schoolchildren on holiday. These are hardly life-or-death decisions, could be an improvement, and are almost certainly not disastrous.
However, leaving the public to decide on things like whether or not you should be exposed to hate speech or violence is another matter. Based on current polling, this could lead us to permit sexual harassment, vote for anti-Semitic catchphrases as new soda brand names (see above), or even leave a union with our nearest geographical neighbours, with catastrophic repercussions. Twitter’s dehumanisation feedback allows the public to weigh in on an issue that should not be decided by random people online – it should be left in the hands of those who would be the targets of dehumanising speech, or of scholars and experts on the dangers of violent language.
Not only are the stakes too high for the public’s whims, but Twitter is providing this feedback form online, leaving it wide open for a special kind of bombardment. Although most of the answer options are pre-selected (letting the public choose answers to everything from scratch would be a neglectful, rookie error), there is still plenty of room for this survey to be hijacked in its open-response sections, in the classic style of online troll campaigns. As with Mountain Dew – and frankly, most mainstream online polls (more wholesomely, Boaty McBoatface) – it will inevitably be drowned in a co-ordinated response, organised by the likes of 8chan, 4chan, or the darker forums on Reddit.
These forums are infamous for setting up targeted attacks, and there is a method to their choice of targets: the trolls particularly focus on sites, people and publications that are both easy to flood with responses (eg online polls) and generally left-wing (ie running counter to their traditionally alt-right point of view). The forums also agree on particular language to use when filling in online forms, so that companies, not realising they are under attack, mistake it for consistent advice from the public that they should be listening to. Sometimes these attacks involve tens of thousands of co-ordinated users. Earlier this month, 8chan organised a campaign to get women its users saw as “whores” banned from Twitter, PayPal, and YouTube. A similar campaign was organised on 4chan in May, asking users to vote in the online poll for a Nasa science competition for any team that wasn’t a group of three black girls (Nasa ended up cancelling the poll, and the winners have still not been announced).
Even with the best intentions, surveying the public online leaves a company prone to attacks from troll-filled forums – and that’s before considering whether the intentions are not so altruistic.
Crucially, Twitter is shirking its responsibility to draw the line on what is and isn’t OK to post on its website. It is already criticised for its failure to police the hate speech rampantly posted on the platform every single day. Even Twitter CEO Jack Dorsey admitted back in August that Twitter has not changed its incentives in the 12 years it has existed, and that he was aware of the “echo chambers” it created, specifically noting that hate speech spreading on the platform is a problem. Yet this should not be a surprise: when you build a platform for people to write whatever they feel like in any given moment, bigoted opinions were never going to be excluded. And Twitter has been around long enough to know that online polls, surveys, and forms are susceptible to hijacking.
Consulting the public, in light of this, looks rather convenient. When hate speech proliferates on Twitter, its bosses can now throw up their hands and say: “Hey, we didn’t make the rules! In fact, you guys did.”
This approach is not new. In January, Facebook asked its users to tell the social media platform which publications they trusted and which they didn’t, as a way for Facebook to determine what should be considered “fake news”. “We could try to make that decision ourselves, but that’s not something we’re comfortable with,” Facebook CEO Mark Zuckerberg said about the decision to outsource the process to Facebook’s users. “We decided that having the community determine which sources are broadly trusted would be most objective.”
But deferring to public opinion has become an industry scapegoat for avoiding difficult decisions – ones that all of these platform founders should have known, early on, they would inevitably have to make.
One can hope this consultation brings good and lasting change at Twitter, allowing people from marginalised backgrounds to spell out the nuances of dehumanising language that might not be caught by the current policy. However, the open-answer sections of the form only allow 280 characters (the length of a tweet) per response. And when it comes to explaining the complex bigotry in the language that fuels hate speech about gender, sexuality, race and religion, roughly 50 words isn’t enough to demonstrate why something is wrong, let alone explain how on earth one begins to fix it.
While it’s ultimately up to Twitter which opinions it chooses to take on board and which it doesn’t, the vaguer the policy, the easier it is for Twitter to escape culpability. And as the current survey form stands, it will undoubtedly be open to responses from people with the worst intentions.