
Will the Online Safety Act protect us or infringe our freedoms?

After six years of development, the long-awaited internet laws have finally passed – child safety and free speech campaigners react.

By Sarah Dawood

Six years after the idea first materialised, the Online Safety Act has finally passed into law. It’s long overdue – since its invention, the internet has had little-to-no regulation. And in the years since, it has grown increasingly all-encompassing, embedding itself in every facet of people’s lives.

Recent global events show the impact of an unregulated online world. Social media has been awash with falsified information about the Israel-Hamas conflict, making it increasingly difficult to decipher which videos and photos are real and which are fake. Meanwhile, online grooming and child sexual abuse cases are up a staggering 80 per cent in the past four years, and four in ten UK children aged eight to 17 have experienced bullying, either offline or online.

During its development, the Online Safety Bill came under intense scrutiny from both online safety and free-speech campaigners – one side arguing it did not keep people safe, the other that it failed to protect free expression and privacy.

After much flip-flopping, the final act has ended up with the same core tenets as the original draft – the independent regulator, Ofcom, can fine social media platforms up to £18m or 10 per cent of their global annual revenue (whichever is higher) if they do not remove illegal content. This includes child sexual abuse, terrorism, assisting suicide and threats to kill. The act also contains a list of “priority offences” – the types of illegal content that tech platforms must prioritise.

Tech companies can now also face criminal liability – social media bosses could go to jail for up to two years if they consistently fail to protect children from harm. This covers not only illegal content but also certain types of legal content, such as material promoting eating disorders, self-harm or cyberbullying. Tech companies will be required to write their own “terms of service” based on “codes of practice” created by Ofcom, and to enforce these consistently. The bill originally contained provisions to protect adults from “legal but harmful” content too, but these were scrapped.


Other laws have been strengthened in line with the act, especially around violence towards women and girls. An update to the “revenge pornography” law means that someone who threatens to share intimate images without consent could face two years in prison, as well as those who do share them. Cyberflashing and upskirting have also been criminalised, and three new offences have been created around the intentional sharing of harmful, false and threatening communications.

Free-speech campaigners are broadly unhappy with the final Online Safety Act. Barbora Bukovská, senior director for law and policy at the human rights organisation Article 19, says that while the final version has been “slightly improved” by the removal of “legal but harmful”, the act remains an “extremely complex and incoherent piece of legislation that will undermine freedom of expression and information, [and] the right to privacy”. It will be “ineffective” in making the internet safer, she adds.

As the law now requires social media platforms to enforce their own terms of service, she believes this will cause companies to be overzealous, and will give them too much censorship power. “This will incentivise platforms to censor many categories of lawful speech that they – or their advertisers – may consider harmful, inappropriate or controversial,” she says, adding that algorithm-based moderation technology is “not advanced enough” to navigate this complex landscape.

Instead of focusing on content moderation, the Online Safety Act should have addressed the “business model of Big Tech”, she adds, which is based on “advertising and monetising users’ attention”, and looked to increase competition rather than consolidate the market power of giants.

Concerns around the suppression of marginalised voices online are legitimate – social media companies have been called out for their moderation policies before. According to documents obtained by the investigative news website The Intercept, TikTok moderators were previously told to suppress videos from users who appeared too “ugly, poor or disabled”. Instagram users have also raised concerns about the censoring of posts in support of Palestine.

“In light of the [now former] Home Secretary’s response to pro-Palestine protests, it’s easy to see how tech companies could be pressurised into removing – at scale – content featuring Palestinian flags,” says Jim Killock, executive director of digital rights organisation Open Rights Group. “This carries tremendous risk for freedom of debate within a democracy.” He adds that preventing children from seeing a broad range of legitimate content could mean they are denied access to “large swathes” of the internet, including informative and supportive resources.

Safety campaigners, however, are more positive. Peter Wanless, the chief executive of the National Society for the Prevention of Cruelty to Children, says the new act will make children “fundamentally safer in their everyday lives”. Michael Tunks, head of policy and public affairs at the Internet Watch Foundation (IWF) charity, says he is “delighted” to see the bill finally become law, and that he hopes it will make tech companies think more carefully about how they design their services to prevent child sexual abuse in future.

He also says he is pleased to see stricter regulation around end-to-end encryption, which stops third parties – including the platforms themselves – from accessing private messages on apps. This has been another point of contention during the bill’s passage. The government has decided not to ban end-to-end encryption, but has said Ofcom can force messaging platforms to use “accredited technology” to scan for content such as child sexual abuse material if the regulator deems it “necessary and proportionate”. An amendment to the Investigatory Powers Act also means messaging apps now have to inform the Home Office about any privacy features they want to add, including encryption, and the government can block these if necessary.

Jessica Ni Mhainin, policy and campaigns manager at the free-speech organisation Index on Censorship, says that encryption is “essential” for protecting sensitive information, such as bank details, medical data and journalistic sources that hold power to account, and that the amendment “sets an extremely dangerous precedent”.

Even those broadly in support of the bill say there needs to be more nuance around invading privacy with the aim of keeping people safe. “End-to-end encryption can be vital to protect users’ privacy, particularly in sensitive discussions regarding abortion, sexual orientation or gender identity,” says Olivia DeRamus, founder of Communia, a social media start-up aimed at women and non-binary people.

She believes that the law alone is not enough, and that the tech industry needs to be more proactive in creating digital spaces that are healthy and safe to start with. “We need to divorce ourselves from the notion that harm is a normal part of being online,” she says. “Large social media platforms often use ‘free speech’ as an excuse not to address and even encourage harmful online conduct.” Communia recently conducted a survey of 2,058 UK social media users who are women or of marginalised genders, and found that more than a third (36 per cent) had been a victim of online abuse.

Technology is advancing at pace, and many campaigners feel legislation is struggling to keep up. Tunks thinks the Online Safety Act has “simply taken too long” since the bill’s inception in 2017, and that in the intervening years developments in artificial intelligence have fuelled a surge in deepfake imagery of child sexual abuse. A recent IWF report identified nearly 3,000 AI-generated images depicting child sexual abuse in September this year alone. Ofcom should work with child safety organisations to identify new threats and develop its codes of practice accordingly, he says.

There are still questions over the government’s approach to disinformation and misinformation – the intentional and unintentional spread of false information, respectively – both of which surged during the pandemic. The act specifies that Ofcom must establish a committee to advise the government, but otherwise sets no parameters for tackling the scourge of fake news online.

The fact-checking organisation Full Fact says there is “no credible plan” to tackle this, after the government scrapped plans to include health misinformation in a list of “priority” harms and to give Ofcom a “media literacy duty”, which Full Fact says could have helped build the public’s resilience to bad information. The organisation recommends that the government rethink these decisions, and suggests that promoting good information should be prioritised over restricting content, to protect free expression.

The Online Safety Act is now law and will come into force once Ofcom has published its codes of practice, likely in mid-2024. But it’s clear this is only the start of a long road of amendments that will be needed as technology advances and new platforms emerge. Given the pace of change, these reforms will need to be much quicker and more frequent than the historically sluggish parliamentary process.
