Science & Tech
13 July 2015

Twitter’s new porn-spotting robot moderators

The social networking site has introduced new artificial intelligence systems that can spot and delete sexual and violent images – and spare human moderators in the process. 

By Barbara Speed

Under the compelling headline “The labourers who keep dick pics and beheadings out of your Facebook feed”, journalist Adrian Chen delved last year into the little-known world of social media’s content moderators. These thousands of workers, most of them based in Asia, trawl through social networking sites to delete or flag offensive content. In the process, they are exposed to the very worst the internet has to offer – beheadings, violent pornography, images of abuse – all for wages as low as $300 a month.

But this month, Twitter has taken a first step towards automating this process, and thus sparing a huge unseen workforce its daily bombardment of horrors. Almost exactly a year ago, Twitter bought the start-up Madbits, which offers, in the words of its co-founders, a “visual intelligence technology that automatically understands, organises and extracts relevant information from raw media”.

At the time, tech websites speculated that Madbits’ technology would be used to develop facial recognition or tagging for Twitter photos. But in fact, the start-up’s first task was very different: it was instructed by Alex Roetter, Twitter’s head of engineering, to build a system that could find and filter out offensive images, defined by the company as “not safe for work”.

This month, Wired reported that these artificial intelligence (AI) moderators are now up and running. Roetter claims the new moderator-bots can filter out 99 per cent of offensive imagery. They also incorrectly flag about 7 per cent of acceptable images as offensive – but, the company reasons, better safe than sorry.
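
Twitter hasn’t said how it strikes that balance, but in systems of this kind it typically comes down to a score threshold. The Python sketch below is purely hypothetical – the score function and the 0.3 cut-off are illustrative assumptions, not anything Twitter has disclosed – but it shows why a cautious threshold catches more offensive images while also flagging more innocent ones.

    # Hypothetical sketch: a classifier assigns each image a probability
    # of being offensive, and a threshold decides what gets filtered.
    # Lowering the threshold misses fewer offensive images, but flags
    # more acceptable ones by mistake - "better safe than sorry".
    def moderate(images, score, threshold=0.3):
        return [img for img in images if score(img) >= threshold]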

Like other artificial intelligence systems, the moderator “learns” how to spot offensive imagery by analysing reams of pornography and gore, then applies the patterns it has absorbed to new material. Over time, the system continues to learn, getting even better at spotting NSFW images. Soon, systems like these could replace content moderation farms altogether.
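
Neither Twitter nor Madbits has published the details, but the general recipe – supervised learning on labelled examples, then scoring unseen images – can be sketched in a few lines of Python. Everything below is an illustrative assumption: random vectors stand in for real image features, and a simple scikit-learn classifier stands in for the deep neural networks systems like this actually use.

    # Toy sketch of a supervised NSFW filter - not Madbits' real code.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-in "images": feature vectors with labels (1 = offensive).
    X_train = rng.normal(size=(1000, 64))
    y_train = (X_train[:, 0] > 0.5).astype(int)  # toy labelling rule

    # "Learning": fit the classifier on the labelled examples.
    model = LogisticRegression().fit(X_train, y_train)

    # New, unseen images are scored; high scores get filtered out.
    X_new = rng.normal(size=(5, 64))
    for prob in model.predict_proba(X_new)[:, 1]:
        print("filter" if prob >= 0.3 else "allow", f"(score={prob:.2f})")

In practice, the “continues to learn” part would mean periodically retraining on freshly labelled material, so the model keeps pace with new kinds of imagery.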

In cases like this, it is worth remembering those whose jobs might be lost as the robots advance, especially in developing countries. But considering the psychological damage brought on by endless exposure to violent images, we can only hope that Twitter and sites like it can offer these workers less distressing moderation jobs (and higher salaries) instead.
