19 August 2013

We can’t crowdsource the right to free speech

The BBFC's plan to put content flags on online video could work – but crowd-sourcing censorship isn't the right way to do it.

By Mark Owen

The debate over protecting children from unsuitable web content has given rise to a novel proposal: content rated by its users, with the resulting votes determining its suitability. Plans are reported to be under way for a traffic-light age-rating system for user-generated videos, on which the British Board of Film Classification and its Dutch equivalent are working with service providers and government. How this will work in practice has not yet been announced, but dangers for freedom of expression lurk in relying too heavily on the wisdom of the crowd.

The timing suggests that the idea may be related to the Prime Minister’s proposal that households should be able to control their access to adult content online by switching on a simple filter. One criticism levelled at filtering is that, to be effective, it must be a relatively blunt instrument, blocking the inoffensive along with the inappropriate, with a potential impact on freedom of expression. Crowd-sourced age rating of content is at first sight both appealingly simple and potentially better, allowing greater discernment between content which really is adult and that which a machine might consider so. Red, amber and green ratings will reportedly be arrived at through a combination of the rating applied by the work’s contributor and how the audience reacts.

The web inevitably makes available some content which is unsuitable or inappropriate for children to access. Some of this will be illegal, but much more will not, or may be suitable, say, for over-13s or over-16s only. A traffic-light system may therefore struggle to distinguish between these cases and runs the risk of imposing the strictest warning on masses of content by default.

A greater concern, however, is how the new system will guard against becoming a tool for playing out prejudices of one kind or another. The system can only operate if it is the crowd’s decision which counts – the reason this is even being considered is that there is too much content for a regulator or platform to review. Relying on the crowd assumes that what emerges from the great mass of web users is a collective consciousness grounded in shared values, rather than a set of subjective reactions. This is a dangerous assumption. As a recent MIT study reported in Science suggests, the “wisdom” of the crowd may be a myth, its mentality more akin to that of a mob or herd.

A huge amount of content to which some viewers may be strongly, even violently, opposed can be found online. Such content may well not be illegal, or even be the sort of material that a body such as the BBFC would normally feel the need to apply adult age ratings to – religious teachings, for example. Once the crowd, or mob, has control, how will the system ensure it cannot be hijacked to serve the values of one interest group over another? Very few votes may be enough for a piece of video content to be tagged as unsuitable, as the sketch below illustrates.
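
To see how fragile a naive voting rule could be, consider a minimal sketch in Python. The BBFC and its partners have published no technical details, so the flag threshold, the override rule and every name below are purely illustrative assumptions, not a description of the planned system.

    # Illustrative only: the real scheme is unpublished. The flag threshold,
    # the override rule and all names here are assumptions for argument's sake.
    from dataclasses import dataclass

    @dataclass
    class Video:
        contributor_rating: str    # "green", "amber" or "red", self-declared
        unsuitable_votes: int = 0  # viewers who flagged the video as unsuitable

    def traffic_light(video: Video, flag_threshold: int = 5) -> str:
        """Naive aggregation: a handful of 'unsuitable' flags overrides
        the contributor's own rating and turns the light red."""
        if video.unsuitable_votes >= flag_threshold:
            return "red"
        return video.contributor_rating

    # A video of religious teaching, rated green by its uploader, is
    # brigaded by a small interest group: five coordinated votes suffice.
    sermon = Video(contributor_rating="green")
    sermon.unsuitable_votes += 5

    print(traffic_light(sermon))  # -> "red"

A more defensive rule might weight flags against total views, or demand a statistical lower bound on the proportion of complaints before acting. But each refinement merely changes who can game the system and how, which is precisely the difficulty with delegating the decision to the crowd.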


Even then, merely adding a red traffic light rating to a piece of content may not by itself do much harm. But what if the ratings are not a simple visual warning but information which determines whether that piece of content is made available or not?

In controlling what content is made available, European governments’ room for manoeuvre is limited. EU law enshrines protection for freedom of expression. Where Member States take measures which affect users’ access to and use of services and applications over electronic networks, they have to respect fundamental human rights and freedoms. Any restrictions need to satisfy tests of being appropriate, proportionate and necessary in a democratic society. Determining the suitability of content has, until now, been the preserve of carefully chosen, neutral regulators, applying a set of agreed principles. Would mandating a system of crowd-sourced suitability ratings from anonymous web users around the world satisfy these tests? Without being able to ensure that the system could not be hijacked, it may struggle to do so.

Encouraging ISPs to take voluntary steps may therefore help governments assuage the most vocal demands for action while avoiding a difficult debate over internet regulation. But any approved scheme will need safeguards against the traffic lights becoming the basis for automated blocking of content, applied by a household or ISP at the flick of a switch. Once an appealingly simple idea like this takes hold, it may not be readily dropped, and it may go on to have profound effects on what content is made available in the majority of households in this country.
