In seeking to address harms to children, the Online Safety Bill proposes remedies that will create new harms. Internet services used every day by British consumers would be obligated to scan all public posts and private chats against government-specified criteria. It’s a serious interference with free speech and privacy rights.
If these services don’t want to do it, they could pack up and leave the UK. If they do comply, it would open the floodgates to scanning requests from governments seeking to undermine democratic debate around the world. Who then would be the loser?
Internet companies will be required by the government to police social media posts for criminal offences across 11 areas of law. These offences include assisting illegal immigration and public order offences, as well as terrorism and child sexual abuse material. It is not clear why public order and immigration offences are in a bill to protect children.
Untrained civilians working for private companies will determine illegality and criminality based on what they “reasonably consider” to be unlawful, without gathering evidence. They will be aided by context-blind, algorithmically driven systems that look for matches in images and video. The potentially high number of false flags (posts wrongly categorised as illegal) makes this a risky approach. The danger is that innocent posts could be swept away – potentially before they are even published – and legitimate public debate suppressed.
Private chat platforms, such as WhatsApp, Signal, Facebook Messenger and Telegram, will also fall under the bill’s mandate. Under a recent amendment to the bill, they will have to scan for government-specified forms of illegal content using Home Office-approved systems. It is a deeply intrusive form of surveillance that will compromise the end-to-end encryption that currently keeps our chats confidential and secure. The template is one that governments in non-democratic countries could copy for political surveillance.
Monitoring every public post on social media, 24 hours a day, with the aim of removing those that meet prohibited criteria, is an interference with free speech rights. Moreover, pervasive surveillance of private chats conducted without warrants is a violation of privacy rights. The likely chilling effect will make it more difficult for victims of crime and vulnerable people to seek help.
None of this negates the bill’s objective of addressing harmful online content. Indeed, there are very serious concerns that the bill seeks to address. For example, the coroner’s conclusions in the Molly Russell inquest raise grave concerns around self-harm content.
The government is right to want to address these issues, but under law, it must state clearly and precisely what content it wants to tackle and it must put in place procedures to rectify any errors if they occur. The bill should ensure that online platforms must justify their actions when they remove content and offer an effective appeals process.
The idea that technology can provide a silver bullet to solve complex societal issues does not stand up to examination. The back-of-a-fag-packet nature of this bill reflects an inept failure of due diligence in policymaking. Ideally, the government should go back to the drawing board; in its present form, large swathes of the bill will be unworkable, and we will be struggling with the consequences for years to come.
Read the other side of the debate on the Online Safety Bill here.
This opinion piece originally appeared in our Spotlight print edition on Cybersecurity on Friday 18 November 2022.