We’ve been having what feels like an endless – and often circular – conversation about fake news and online political misinformation since at least 2016.
It’s likely we’ve been facing these problems for far longer than that, but since the extensive reporting on Brexit and the 2016 US election, the effects of fake news, political advertising, micro-targeting and more on social media have dominated the conversation in countries across the world.
Even before it hit the top of the headlines, fakery had been a fixture of the internet for decades – the now-notorious message board 4chan has been pulling pranks for 15 years, while Photoshop challenges have been a staple of its (slightly) more benign UK counterpart b3ta for even longer.
For as long as people have been making things on the internet, people have been making things up. It’s among the internet’s proudest and most ignoble traditions, done for laughs, for malice, to fool others, for profit, or to change minds.
Which would lead you to think that anyone unaware of the risk of people pranking or gaming internet systems would not only have to have stayed off the internet for the last two years, but also never have turned on a television, read a newspaper, or spoken to any other human being whatsoever.
Enter Facebook, willing to prove that a team of highly qualified and lavishly paid engineers and managers, working on a problem described as one of the top priorities for their employer’s future – and its biggest PR risk – can show about the same level of awareness of the internet as an off-grid shut-in loner.
The mistake came as part of Facebook’s efforts to tackle political advertising on its network in the UK by introducing a requirement for all adverts with a political element to have a named and registered advertiser behind them, in theory ending the phenomenon of “dark ads” – adverts that offer no way of seeing who paid for them.
These “dark ads” are a real risk: could they be fake adverts from opponents, designed to discredit candidates? Are they funded by people who are following registration rules? Are they nasty adverts run by official campaigns that don’t want to be associated with them? There is no doubt they were – and are – a problem for democratic systems, and one that needs tackling.
Facebook’s solution would have advertisers register – first voluntarily, then compulsorily – as political advertisers, showing extra information on who was behind each ad to those who wanted it, and archiving the adverts for up to seven years.
So far, so good, but for one slight issue: Facebook did absolutely nothing to check that the advertisers who registered with its scheme were who they said they were. This proved not to be a small mistake: in the US, adverts were falsely registered as coming from US senators, Vice-President Mike Pence, and even Islamic State. In the UK, someone registered adverts as coming from the notorious (and also defunct) Cambridge Analytica.
The team tackling misinformation at – and restoring trust in – the world’s largest and most lucrative social network never stopped to consider that someone might put fake information on that network. Facebook has now delayed introducing the new system while it has a rethink.
The size and simplicity of this fuck-up are so great that it’s hard not to find it straightforwardly funny. If nothing else, it should be a great case study for any of us who feel we’ve screwed up: if these people at this place can fail on this scale, we can let ourselves off some of our own smaller failings.
But we can’t let our incredulity blind us to the implications of this mistake for Facebook and its outsized role in the online information ecosystem. This is the company that assures us it can tackle terrorist content and radicalisation, that it is a responsible arbiter during elections, that it can be trusted to curate the mix of news, page content and adverts, and far more.
That a solution to such a high-profile problem reached the public before such a basic flaw was discovered speaks ill of Facebook’s culture: either dissent is suppressed once a plan has been made, which is bad, or its staff are such a monoculture that no-one even spotted the problem, which is worse.
Facebook is an engineering company at heart, run by engineers, and tends to assume every problem has a technical fix. This mistake should show it just how misguided that assumption can be – and that change has to start with its own culture, or it will just keep getting it wrong.