On Tuesday evening (19 April) the highly anticipated Online Safety Bill had its second reading in the House of Commons, following months of debate, delay and discussion.
The Online Safety Bill is an unprecedented attempt to regulate the internet – clamping down on a wide range of areas, from misinformation to illegal content. It was introduced to parliament in March 2022, following the government’s full response to its Online Harms White Paper consultation in December 2020.
Why does the bill matter?
Governments across the world have long been trying to keep up with the growth of the internet and its transformative impact on our daily lives.
The bill is designed to protect UK internet users by bringing certain internet services under the regulatory eye of Ofcom, and by putting the onus on internet companies, such as Facebook and Twitter, to keep users safe online. The bill covers a wide range of illegal content (such as child sexual abuse material) and material that is legal but harmful (such as content promoting self-harm, and misogyny).
Prior to Tuesday’s debate, the government launched the next phase of its Online Media Literacy Strategy, which aims to help vulnerable and “hard-to-reach” people who might be at risk of digital exclusion. The government says this programme will help people navigate the internet safely and spot false information online.
What is a “second reading”?
A second reading is the first opportunity for MPs to debate the main points of a bill on the floor of the House. It usually takes place no sooner than two weeks after the first reading, at which the bill is formally introduced to parliament without debate.
At the end of the debate, the Commons voted that the bill should go through to “committee stage”, where a dedicated committee of MPs will scrutinise the bill, line by line, with testimony from experts. This is where most of the major amendments take place.
What came out in the debate?
The Culture Secretary, Nadine Dorries, opened the debate, setting out the main points of the bill. During her speech she took a number of interventions from members across the House on issues including online abuse, journalistic rigour and misinformation, self-harm and the metaverse.
As per procedure, Lucy Powell, the shadow culture secretary, was the first to respond. Powell criticised the bill as “government overreach” for setting out an “exhaustive list of legal but harmful content”. She also argued that the bill should focus less on individual pieces of content, and more on the business models, systems and policies that allow harmful content to flourish.
Chris Philp, minister for tech and the digital economy, responded to Powell’s point, stating that “this bill does deal with systems and processes, not simply with content. There are risk assessment duties. There are safety duties. There are duties to prevent harm. All those speak to systems and processes, not simply content.”
Other MPs, including the culture select committee chair, Julian Knight, and former culture secretary, John Whittingdale, raised concerns about online grooming, Ofcom processes and uncertainty for tech companies.
Other issues that were brought to the floor included freedom of speech, protections for children and online radicalisation.
What happens next?
The bill has now been sent to a public bill committee, which will scrutinise the legislation in detail and is expected to report to the House by the end of June this year. This would probably allow the bill to have its second reading in the Lords before the summer recess.
There’s still much to discuss, and lots of public and industry interest in this bill. The government claims the legislation is the most ambitious of its kind globally.
Expect greater focus on the safety of young people online, the question of anonymity, and the specific responsibilities placed on tech companies, as the committee picks apart the finer details of the bill.
[See also: The Online Safety Bill has created a free speech culture war]