Online abusers and trolls could be jailed for five years

Criminal offences have been added to the Online Safety Bill to punish those who deliberately share threatening and harmful messages.

By Sarah Dawood

Sending threatening messages to someone online will become a crime punishable by up to five years in prison, the government has confirmed as it announces changes to the Online Safety Bill.

The bill is currently being revised by ministers after receiving recommendations from two select committees and from the Law Commission, the independent law reform body.

The Department for Digital, Culture, Media and Sport (DCMS) has confirmed that several of the recommendations will be added, including three new criminal offences for online behaviour.

These include: threatening content – such as threats to rape, kill and inflict violence – punishable by up to five years in prison; content intentionally sent to cause psychological or physical harm – such as violent or coercive messages sent by domestic abusers – punishable by up to two years in prison; and deliberately spreading false information to cause harm – such as a hoax bomb threat or Covid-19 disinformation – punishable by up to 51 weeks in prison.

The government is considering further criminal offences for cyber-flashing (sending unsolicited sexual images or “dick pics”), encouraging self-harm and intentionally sending flashing images to people with epilepsy.

Criminal behaviour will be determined based on malicious intent, helping to protect debate and free expression, says the criminal law commissioner Penney Lewis. The new offences also mean the bill now places more responsibility on individuals, rather than solely tech companies. “The criminal law should target those who specifically intend to cause harm, while allowing people to share contested and controversial ideas in good faith,” she says.

These sentences are tougher than those under previous legislation – the maximum for sending indecent, threatening or knowingly false communications was two years under the Malicious Communications Act 1988.


While they will apply to a range of media including WhatsApp messages, emails and social media posts, regulated media such as print and online journalism, TV, radio and film will be exempt.

“Priority” illegal harms

The bill also now includes an extended list of illegal content that websites must prioritise tackling, adding to the existing named offences of terrorism and child sexual abuse and exploitation. This is in response to campaigners and MPs who have previously said the bill was too vague and didn’t explicitly name enough online harms.

Revenge porn and sexual exploitation of adults, fraud and money laundering, the sale of illegal drugs or weapons, people trafficking and illegal immigration, hate crimes, the promotion or facilitation of suicide, harassment and stalking, and controlling prostitution have been added.

The regulator Ofcom’s powers have also been increased. Previously, websites were only required to take down illegal content once users had reported it to them; now they must be “proactive” or face fines of up to 10 per cent of annual global turnover.

Social media sites will need to prevent users being exposed to illegal content in the first place, through systems such as automated and human content moderation, banning illegal search terms, spotting suspicious users early and stopping banned users opening new accounts.

The Secretary of State for Digital, Culture, Media and Sport, Nadine Dorries, said in a statement: “We are listening to MPs, charities and campaigners who have wanted us to strengthen the legislation, and [these] changes mean we will be able to bring the full weight of the law against those who use the internet as a weapon to ruin people’s lives.”

Age verification checks will also now be required for all pornography websites to ensure that users are 18 years old or over.

More specific legislation

Damian Collins MP, chair of the joint committee on the draft online safety bill, says the additions provide clearer and more specific regulation, drawing on existing offences in law.

“These changes are a step towards moving to a world where what is illegal offline is regulated online,” he tells Spotlight. “They will give social media businesses more clarity on what’s expected of them, and users more certainty that they will be protected, especially children.”

Asking tech companies to prevent content spreading rather than just removing it retrospectively is a welcome change, he adds. “I was pleased that the government’s initial response does make very clear that the systems and processes of companies will be regulated,” he says. “It’s not just about taking something down but about identifying harmful content and mitigating it before it’s amplified.”

He says he would like to see more clarity from the government on criminal liability for tech bosses, age verification of adult content, and company categories within the bill. Currently, the bill splits companies into Categories 1 and 2, with the former – including giants such as Facebook and Twitter – subject to stricter regulation. “There is a question around whether regulation should be assessed on risk rather than size of company,” he says.

Child safety concerns

Child safety campaigners have previously said the draft bill is not strong enough to sufficiently protect young people online. Richard Pursey, CEO at tech company SafeToNet, says he would like to see it cover smart device manufacturers and telecommunications companies, which would help improve the safety of devices as well as websites. SafeToNet produces technology that detects and filters harmful content in real time – such as a child filming themselves naked or typing abusive messages – blocking the camera or keyboard before the content can be captured or sent.

“We developed on-device technology over three years ago that tackles the source of the problem,” says Pursey. “The bill could easily enforce a ‘safety by design’ approach on smart devices and by doing so prevent much of the abuse from happening in the first place.”

There are also calls to include more criminal prosecutions for individual company directors. Andy Burrows, head of child safety online policy at children’s charity the NSPCC, wants to see tech bosses liable if they fail to stop children being subject to harm, not just if they fail to report figures to Ofcom, which is what the bill currently states. He also wants to see a responsibility for collaboration between tech giants written into the bill.

“The proposed criminal sanctions offer bark but little bite, with tech bosses escaping personal liability for the harmful effects of their algorithms or failing to prevent grooming,” he says. “Having a named manager responsible for safety, with criminal sanctions for gross failure, will focus minds on child protection at the very top of companies. A clear duty for firms to work together to stop harm and abuse that spreads quickly across their sites and apps will also be crucial for effective regulation.”

While not yet in its final form, Collins says the Online Safety Bill is a “world-leading piece of legislation” that will set a precedent globally. “It’s clear that nowhere else is attempting anything as ambitious as this,” he says. “The decisions we make and getting the system right means we could create not just the safest online environment in the world, but also a model that other countries will copy.”

The new laws are expected to be introduced in the first half of 2022.

