It has been five years since the UK’s first attempt to tackle “online harms” was launched with the Internet Safety Strategy green paper. But despite determined and focused efforts, practical solutions continue to elude lawmakers.
Nowhere has this been clearer than in the effort to tackle “legal but harmful” content on the internet – everything from hate speech to sexual content to posts promoting suicide and self-harm.
The phrase has been subject to growing criticism, even mockery, as attempts to define it have floundered. But the truth is that “legal but harmful” remains a concise and candid statement of the problem. It recalls former US Supreme Court justice Potter Stewart’s famous description of hardcore pornography: “I know it when I see it.”
The problem lies in understanding where people “see it”, and how often. A video describing how to take your own life, or blaming societal ills on a specific group of people, can be offensive or in poor taste, but on its own its impact may be limited and quickly forgotten.
Combined with dozens of other videos saying the same thing, however, and served to the same individual in short order, the effect is very different, as the recent inquest into the 2017 death of the teenager Molly Russell made clear.
As reportedly confirmed by YouTube’s chief product officer Neal Mohan in 2018, roughly 70 per cent of people’s time on YouTube is spent watching videos automatically queued up by an algorithm, rather than something they have searched for.
Tech companies that serve as the gateways to online content have optimised systems to fit their business models: more users spending more time with them equals more market share and more money. Those companies don’t create the content they share, and their algorithms are designed to give users more of what they want. They aren’t going to change that approach until they are under a legal obligation to do so.
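To make that incentive concrete, here is a deliberately simplified, hypothetical sketch of how an engagement-first feed might rank content. It is not any platform’s actual system; the names and scoring weights are invented purely for illustration.

```python
# Hypothetical illustration only: a toy "engagement-first" ranker.
# No real platform's code or data; the fields and weights are invented.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_seconds: float  # model's guess at how long this user will watch
    similarity_to_history: float    # 0-1, how close this is to what they already watched

def engagement_score(video: Video) -> float:
    # Rank purely by expected time-on-platform, amplified when the video
    # resembles what the user has already consumed. Nothing here asks
    # whether the content is harmful.
    return video.predicted_watch_seconds * (1 + video.similarity_to_history)

def build_feed(candidates: list[Video], slots: int = 10) -> list[Video]:
    # Fill the feed with whatever keeps the user watching longest.
    return sorted(candidates, key=engagement_score, reverse=True)[:slots]

if __name__ == "__main__":
    candidates = [
        Video("Cooking basics", 120, 0.1),
        Video("More of the same bleak content", 300, 0.9),
        Video("Local news round-up", 90, 0.2),
    ]
    for v in build_feed(candidates, slots=3):
        print(f"{engagement_score(v):7.1f}  {v.title}")
```

The point of the toy example is only that a metric such as expected watch time contains no notion of harm; a duty of care is what would oblige platforms to build one in.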
This is where the Online Safety Bill has, belatedly but effectively, got it right. It will impose a new “duty of care” on tech giants and require them to make promises to users that are more than just words on a webpage. The version that will be reintroduced to parliament this month will drop the unworkable effort to control “legal but harmful” content, but give regulators the power to keep tabs on those promises and require changes where necessary.
What has been called a watered-down approach is perhaps better described as having been smartened up. We don’t know what will work to limit online harm, nor do the tech companies that are indirectly causing it. It’s going to take experimentation, adjustment, measurement and flexibility to get to a better place.
The main recipient of the new law’s powers – the regulator Ofcom – is under no illusion that it has the answers. The man in charge of the effort, online safety policy director Jon Higham, said this month that the regulator will make small changes, wait to see their impact, and adjust accordingly.
But that kind of iterative, internet-style regulation will only be possible with clear legal authority, and that is what the Online Safety Bill will provide. It’s far from perfect, and we will need the House of Lords to clean it up, but it will at least allow us to start solving these problems. It’s time to make this bill law and get on with the job of making online life safer.
This opinion piece originally appeared in our Spotlight print edition on Cybersecurity on Friday 18 November 2022.