Take a short scroll through Twitter or Facebook and you’re bombarded with information. In just five minutes you can view, comment on and share the latest headlines, political opinions, cultural trends and celebrity mishaps. Social media has democratised information dissemination to such an extent that half of UK adults now get their news from these networks.
Cyber utopians would argue that the internet’s unbridled freedom has helped us build a more equal society. But in giving everyone a voice, social media has also increased the spread of vitriol, misinformation and illegal material. A third of people are thought to have been exposed to online abuse, while false information about the Covid-19 vaccine has become a major public health issue.
What is the Online Safety Bill?
There is an obvious need to rethink the regulation of online communication. In May this year the government published the Draft Online Safety Bill, which aims to safeguard young people and clamp down on online abuse while protecting freedom of speech. It is currently undergoing pre-legislative scrutiny by a joint committee, which will report its recommendations by 10 December.
It places a “duty of care” on social media platforms, search engines and other websites where users interact, requiring them to protect people from dangerous content. Companies that fail to do so face fines of up to £18m or 10 per cent of their annual turnover, and could have access to their sites blocked.
Companies will be grouped into Category One or Category Two. The first, which includes the social media giants, will be subject to harsher rules: tackling both illegal content, such as terrorist propaganda and child abuse, and “legal but harmful” content, such as misinformation and cyberbullying. Compliance will be overseen by the regulator, Ofcom.
A Department for Digital, Culture, Media and Sport spokesperson said it is “committed to introducing the bill as soon as possible” following the joint committee’s report. The Minister for Tech and the Digital Economy, Chris Philp, told Spotlight that the proposed online safety laws are “the most ambitious in the internet age… No other country has published a bill that will go so far to make ‘big tech’ accountable for the content on their platforms, and for the way they promote it.”
Does the bill go far enough?
The bill has drawn fire from both safety advocates and freedom of speech campaigners: the former say the legislation does not go far enough, the latter that it goes too far.
Many worry that the “duty of care” approach, coupled with the fact that some online harms remain legal, means that individual perpetrators face few consequences. The Law Commission recently recommended reforming criminal law in this area, proposing offences based on “likely psychological harm”, with perpetrators facing up to two years in prison. The Digital Secretary, Nadine Dorries, has said she intends to accept some of the suggestions, such as an offence covering the most serious threats.
Lindsay McGlone, a 23-year-old body positivity campaigner, regularly faces online abuse. It tends to take the form of derogatory comments about her appearance, but has also included pornographic images and a running commentary about her on the gossip forum Tattle Life. She says the comments have affected her mental health and that abusers should be punished. “I’ve always faced discrimination but online abuse can be the worst because there are so few repercussions. People forget or don’t seem to care that there’s an actual person behind the screen.”
Some campaigners believe that board directors and company owners whose platforms fall foul of the new laws should face more than fines. Richard Pursey, CEO of the safety tech company SafeToNet, notes that “the idea that board directors should be disqualified or even imprisoned if they do not follow their duty of care has been mooted. One of the biggest motivators of humankind is self-interest – policymakers should bear that in mind.”
He adds that the bill’s scope should also have covered manufacturers and telecoms companies, forcing them to build “safety by design” into devices as a first line of defence. SafeToNet incorporates this principle into its tech: it uses artificial intelligence (AI) to prevent children from sending sexually explicit or abusive content, blocking the camera or messaging function if the user is about to film themselves or write something inappropriate.
“If there was a ‘seatbelt’ on our smartphones, it would mean they were safe, irrespective of what app we were running,” he says. “Then this would restrict the user from posting nasty things anyway. The tech is there to fix this – there should be no reason why anybody is holding back deploying it.”
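To make the idea concrete, here is a minimal sketch, in Python, of what such on-device gating might look like in principle. It is an illustration only, not SafeToNet’s actual method: the harm_score function, its toy blocklist and the HARM_THRESHOLD cut-off are stand-in assumptions for a trained AI model.

```python
# Hypothetical sketch of "safety by design" message gating. The classifier
# below is a stand-in: a real system would run a trained model on the device.

HARM_THRESHOLD = 0.8  # assumed cut-off above which content is blocked


def harm_score(text: str) -> float:
    """Stand-in for an on-device AI model that scores content for abusive
    or sexually explicit language (0.0 = benign, 1.0 = harmful)."""
    blocklist = {"hate", "abuse"}  # toy heuristic, for illustration only
    words = text.lower().split()
    return 1.0 if any(word in blocklist for word in words) else 0.0


def send_message(text: str) -> bool:
    """Gate the messaging function: refuse to send if the content is scored
    as harmful, before it ever leaves the device."""
    if harm_score(text) >= HARM_THRESHOLD:
        print("Message blocked: content flagged as potentially harmful.")
        return False
    print("Message sent.")
    return True


if __name__ == "__main__":
    send_message("See you at the match tonight")  # sent
    send_message("You deserve all this abuse")    # blocked
```

The design point is that the check runs before the content leaves the device, which is what distinguishes “safety by design” from after-the-fact moderation.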
Legal but harmful
Safety campaigners such as Pursey argue that the “wishy-washy” definition of “legal but harmful” will make it easier for big tech’s well-resourced legal teams to avoid fines. Roughly one in five 10–15-year-olds in England and Wales experienced online bullying in 2019 and 2020.
“It’s an absolute tragedy that the bill doesn’t mention bullying,” says Pursey. “It needs a much clearer definition of what is and isn’t acceptable – such as, if you wouldn’t say it to someone’s face, you shouldn’t say it via a screen.”
Helen Margetts, a professor at the Oxford Internet Institute (OII) and director of the public policy programme at The Alan Turing Institute, agrees that some “legal” abuse is indeed very harmful, such as misogyny. “If it’s harmful, we need to think about why it isn’t illegal,” she says. “I’m a believer in the internet but I’m also a believer in democracy – and I think the kind of unbridled free speech that we have seen online is a threat to democracy.”
She adds that the bill should be more nuanced in how it assesses online safety. Researchers at Cambridge University found that people with lower levels of numerical literacy are more likely to believe Covid-19 misinformation – but the same will not necessarily apply to those who commit hate crimes. “We tend to treat these things like a lump,” she says. “They all have different targets, actors and remedies.”
Free speech as fundamental freedom
On the other side, free speech campaigners argue that the “legal but harmful” concept will lead to censorship, with platforms over-deleting content as a precaution.
Ruth Smeeth, a former Labour MP and now CEO of free speech organisation Index on Censorship, is a victim of regular misogynistic and racist abuse, and has received death threats. She worries that the bill gives too much power to tech companies, will make it harder to scrutinise those in power and will stop researchers from assessing cultural change – for example, analysing extremist language to prevent future terrorist acts.
“There is a difference between feeling offended and feeling threatened and vulnerable,” she says. “In this country, British parliament determines our laws – it should decide whether speech is illegal. It shouldn’t be outsourced to anybody else.”
Deleting abuse can also harm victims, she adds: individuals may be left unaware that they are in danger, and it can be harder for police to prosecute. She also worries that vulnerable people, such as rape victims and political dissidents, will be unable to speak openly because their posts could be flagged as inappropriate content. Evidence of war crimes in Syria, for example, has reportedly been removed from YouTube for violating the platform’s policies.
Her proposed solution is a “digital evidence locker”: an archive that could be accessed by the police, civil society and journalists, alongside designated online safe spaces for people to talk about their experiences. She also thinks greater transparency over social media’s algorithms and content moderation would help identify discrimination, as when TikTok was accused last year of censoring disabled people’s content to appear more “aspirational”.
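As a thought experiment, the sketch below shows how such a locker might work in principle: removed posts are archived with a timestamp and an integrity hash rather than destroyed, and only designated roles can read them. Everything named here, from the EvidenceLocker class to the AUTHORISED_ROLES set, is a hypothetical illustration, not part of Smeeth’s actual proposal.

```python
# Loose, illustrative sketch of a "digital evidence locker": instead of
# destroying removed posts, a platform archives them with an integrity
# digest, and only designated roles may read them.

import hashlib
from datetime import datetime, timezone

AUTHORISED_ROLES = {"police", "researcher", "journalist"}  # assumed roles


class EvidenceLocker:
    def __init__(self):
        self._records = []  # append-only store of removed content

    def archive(self, post_id: str, content: str, reason: str) -> str:
        """Archive a removed post with a timestamp and a SHA-256 digest,
        so its integrity can later be verified."""
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        self._records.append({
            "post_id": post_id,
            "content": content,
            "reason": reason,
            "sha256": digest,
            "archived_at": datetime.now(timezone.utc).isoformat(),
        })
        return digest

    def retrieve(self, post_id: str, role: str) -> list:
        """Return matching records, but only for authorised roles."""
        if role not in AUTHORISED_ROLES:
            raise PermissionError(f"role {role!r} may not access the locker")
        return [r for r in self._records if r["post_id"] == post_id]


locker = EvidenceLocker()
locker.archive("post-42", "threatening message text", reason="abuse")
print(locker.retrieve("post-42", role="police"))  # permitted
```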
There is a level of secrecy around social media companies’ data, both on reported cases of online safety breaches and on content moderation. Margetts of the OII adds that big tech should be mandated to share its data with government and researchers so that the impact of different harms can be fully understood.
The question of anonymity
Following the brutal murder of the MP Sir David Amess, many MPs and pro-safety campaigners are calling for an end to online anonymity. Body positivity campaigner McGlone says that social media platforms should require ID verification so individuals are fully traceable, and that platforms like Tattle Life should be shut down entirely.
But Smeeth says that removing anonymity could put vulnerable people, such as domestic abuse victims, at greater risk. “We need to make sure we are not isolating these people further,” she says. “If victims are not able to use anonymous accounts, their partner might realise who they are. There’s no nuance in this, especially when using algorithms.”
Such a drastic move would probably have global implications: the UK is a “policy exporter”, she adds, and less democratic countries would be likely to follow suit. Pro-privacy campaigners such as Smeeth worry that this could restrict the voices of groups such as LGBTQ+ people, abortion campaigners and political dissidents.
Removing anonymity also poses a higher cyber security risk, says Camilla de Coverly Veale, head of regulation at the Coalition for a Digital Economy (Coadec), a trade body for start-ups. Getting rid of the right to be anonymous online could make small businesses a “honeypot for hackers” and increase their financial strain, she says, as users will have to entrust them with personal details.
Confusion over categories
Tech advocacy groups are concerned that the bill’s categorisation is vague, and that some smaller businesses could face the harshest restrictions. De Coverly Veale says that tech companies seek clarity and are worried about being pushed into Category One by lobbying campaigns, where they could not cope with rigorous reporting requirements and the threat of huge fines. This could hamper innovation and competition, as companies avoid user-to-user functionality, add age gates to their products or pivot to less risky areas such as financial tech.
“It feels like the government is regulating as if the internet is five companies,” she says. “We worry that the perception of harm will become very politicised.”
But Pursey of SafeToNet says that children’s safety must come before business interests. “Paedophiles often build trust with children, encourage them to leave Instagram and move onto lesser-known platforms to isolate them,” he says. “I don’t see the logic in only going after the big guys – the law should also apply to small businesses.”
More public engagement
Beyond the bill itself, online safety principles need to be instilled through education, says Smeeth. Digital citizenship should be mandatory at school, and the government should establish and publicise a penalty framework for online abusers. McGlone says that policymakers should speak directly with victims and translate their experiences into campaigns that tell the true stories of online bullying.
Smeeth adds that the police should be properly resourced to deal with digital crimes. “When I got my first online death threat, the police didn’t have access to Facebook as it was viewed as a distraction,” she says. “I had to give them printed copies. This is ludicrous. Now they can access social media but they don’t have the resources to deal with this.”
The internet is constantly evolving, and the bill must evolve with it, she says. Once the legislation is implemented, it should be subject to parliamentary scrutiny and updated regularly.
The wonderful thing about the internet is its ability to give everyone a voice. But this brave new world of unrestrained viewing, sharing and commenting is inevitably causing trauma, be it psychological or physical. While freedom of speech is a cornerstone of any democracy, the online world should be treated the same way as the real one, with an ongoing conversation around culturally appropriate language – or we risk prioritising libertarianism over people’s health.