Science & Tech
25 March 2015

How far can you trust citizen journalism on the internet?

As the BBC reports that it can receive up to 10,000 pieces of user-generated content on a single day, Vicky Baker looks at the increasing need for verification and how propaganda and hoaxes have become more prevalent.

By Vicky Baker

Photo: Joel Saget/AFP/Getty Images

“Mosul church survived 1,800 years but couldn’t survive Isis – burned it as Christians expelled,” tweeted the head of Human Rights Watch, Kenneth Roth, sharing a now-deleted photo of a church in flames – purportedly in Iraq. The image had come from a seemingly reputable news source, and, with Islamic State’s assaults on Christians well known, it seemed legitimate – but there was more to it than a quick glance at a Twitter timeline would show.

Internet sleuths quickly debunked it by doing a simple Google “reverse image search” – a tool that enables users to see if the same image has been used anywhere on the web before. It turned out it had: in Egypt, a year earlier. By this time, it was too late for the Atlantic, the Independent and the Times of India, which had all run the story. Once the mistake came to light, many outlets retracted the item, although, eight months on, it still sits on the Saudi-owned Al Arabiya news site, where Roth first saw it, like a snapshot of news history. (Roth deleted his tweet on Monday, when alerted.)

Google’s reverse image search is one of the most basic verification techniques around and would have been a good first step. The Guardian does it for every image sent in to its user-sourced Guardian Witness platform (even for those images of readers’ bookshelves, as it gets round possible copyright infringements too). The average Twitter user, adopting a he-who-shares-first-wins approach, may be unlikely to bother with such a procedure, or even know it exists; yet the future will surely bring more integrated verification tools, which we will all rely on increasingly.
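
The check can even be automated. The consumer-facing Google tool has no official API, but, as a rough illustration, Google’s Cloud Vision service exposes a comparable “web detection” feature. A minimal Python sketch, assuming Cloud Vision credentials are configured and using a placeholder image URL:

```python
# A minimal sketch of automating a reverse image search. The consumer Google
# tool described in the article has no official API, so this assumes the
# Google Cloud Vision web-detection feature as a programmatic stand-in.
from google.cloud import vision

def prior_uses(image_url: str) -> list[str]:
    """Return URLs of pages where the same image already appears online."""
    client = vision.ImageAnnotatorClient()
    image = vision.Image(source=vision.ImageSource(image_uri=image_url))
    response = client.web_detection(image=image)
    # Pages embedding an exact match suggest the image has been recycled.
    return [page.url for page in response.web_detection.pages_with_matching_images]

# Placeholder URL: an older match would have exposed the "Mosul" photo as
# a year-old picture from Egypt.
for url in prior_uses("https://example.com/burning-church.jpg"):
    print(url)
```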

Citizen journalism has come full circle. It came on to the scene as a threat to journalists, a way to replace newsgatherers with something more “authentic”. Now we are reliant on professionals again, and the teams of specialists who are working behind the scenes at major news organisations and start-ups, using increasingly sophisticated techniques to sift out deception, propaganda and mistakes.

“There are hoaxers now, trying to trip you up,” Chris Hamilton, social media editor for BBC News, told Index on Censorship magazine, for an article in the latest quarterly issue – looking at how user-generated content (UGC) lost its innocence. “Also with the Sydney siege and Woolwich murder [of British soldier Lee Rigby by two Islamic extremists], we’ve seen events staged with online media coverage in mind.” In both cases, passersby or hostages were asked to film statements to put them online, showing how footage was no longer just a chance by-product, but part of a battle for control.

When it was announced earlier this month that Rebekah Brooks was joining Dublin-based Storyful, many considered the position a major step down. But in heading a fast-growing news agency that specialises in verifying UGC, she will be at the forefront of an expanding industry. Storyful, which was bought by News Corp in 2013, also announced this month that it is expanding into China, in partnership with Youku, the country’s biggest video-sharing portal.

“There’s been evolution in the industry,” says Fergus Bell, formerly international social media and UGC editor for Associated Press, and now working for Canada’s SAM, a company that creates platforms to ease social media curation and verification. “Before, social media desks didn’t have very much visibility in the newsroom. People didn’t understand what they were doing, but now UGC can make or break a story, or it is the story.”

SAM, which launched in July 2014, is already being used by the Associated Press, Reuters, the Canadian Press, the Press Association, the Wall Street Journal, the Financial Times and the Guardian. Although it doesn’t have built-in tools to verify content, it does track an increasingly complex process, allowing newsrooms to see who has checked what and who has contacted the originator. It also flags disturbing content, so that not every journalist has to see it. Bell, who joined the company at the start of the year and is a co-founder of the Online News Association’s UGC ethics initiative, is heavily focused on developing ethical standards in the industry.
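
SAM has not published its internals, so purely as illustration, a hypothetical sketch of the kind of audit trail described above might look like this (all names and fields invented):

```python
# A hypothetical sketch of the workflow the article describes: an audit trail
# of who checked what and who contacted the originator, plus a flag that
# spares journalists from graphic material. Not SAM's actual data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VerificationStep:
    editor: str    # who performed the check
    action: str    # e.g. "reverse image search", "contacted originator"
    note: str = ""
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class UGCItem:
    source_url: str
    originator: str
    graphic: bool = False  # flagged so not every journalist has to view it
    steps: list[VerificationStep] = field(default_factory=list)

    def log(self, editor: str, action: str, note: str = "") -> None:
        self.steps.append(VerificationStep(editor, action, note))

item = UGCItem("https://twitter.com/example/status/1", originator="@example")
item.log("a.editor", "reverse image search", "no earlier matches found")
item.log("a.editor", "contacted originator", "awaiting reply")
```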

Meanwhile, the traps are getting increasingly elaborate. In September 2014, there was a short burst of Twitter activity after hoaxers tried to spread news of an imaginary Isis attack on a chemical plant in Centerville, Louisiana. Multiple Twitter, Facebook and Wikipedia identities had been created over more than a month. Then, when the moment hit, news stories appeared – from pieces on an imaginary media outlet called Louisiana News to a fake CNN screenshot. There was even a coordinated plot, using bots tweeting in both Russian and English, to get a hashtag trending. But despite all that effort, the hoaxers weren’t quite clever enough; they failed, as Cory Doctorow has reported, and a supposed attack in Louisiana didn’t take long to disprove.

But it’s not just the professionals and experts who need to be on high alert; users also need to become more sceptical. Just because a tweet is geotagged (ie, carries a place and date stamp), it doesn’t mean the sender was on the spot or is a credible witness. Even simple sites such as pleasedontstalkme.com enable users to tweet from “anywhere in the world”. Google’s algorithms are also fallible. At the moment, when you type “Syria hero boy” into Google, the well-known hoax story of a boy saving a girl from gunfire appears first as an original news piece, before the debunk. (One of those original pieces, by the Telegraph, is a lesson in what big organisations shouldn’t be doing: a story based on the popularity of unverified content, littered with phrases such as “seems to”, “appears to” and “thought to”, and carrying no update or footnote to say the video was later indisputably discredited.)
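
The reason a geotag proves so little is that the coordinates on a tweet are simply supplied by the sender’s client, not measured by the platform – which is exactly what spoofing sites exploit. A minimal sketch using the tweepy library against Twitter’s v1.1 API (all credentials are placeholders):

```python
# A minimal sketch, using the tweepy client for Twitter's v1.1 API, of why a
# geotag proves little: lat/long are client-supplied parameters, not anything
# the platform has verified. All credentials here are placeholders.
import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
)
api = tweepy.API(auth)

# Tweet "from" central Mosul without leaving the sofa: the coordinates are
# whatever the sender chooses to send.
api.update_status(
    status="On the ground, watching events unfold.",
    lat=36.34,
    long=43.13,
)
```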

Google has recently announced plans to rank pages by trustworthiness, using a model that counts the incorrect statements in a story. It believes it can determine what is true and what is not by drawing on a huge vault of facts it has collated by trawling online content with bots. “Facts the web unanimously agrees on are considered a reasonable proxy for truth,” reported the New Scientist. This is an interesting next step, although it will clearly bring problems of its own.
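
As a toy illustration – emphatically not Google’s actual model – the idea might reduce to scoring a page by the share of its checkable claims that agree with a consensus fact store:

```python
# A toy sketch of the trust-scoring idea the article describes: score a page
# by the share of its extracted (subject, predicate, object) claims that
# agree with a fact store built from web-wide consensus. All facts below are
# invented examples.
Fact = tuple[str, str, str]

consensus: set[Fact] = {
    ("mosul", "is_in", "iraq"),
    ("st_george_church", "is_in", "egypt"),
}

def trust_score(page_facts: list[Fact]) -> float:
    """Fraction of a page's checkable claims that match the consensus."""
    # A claim is checkable if the store knows that subject-predicate pair.
    checkable = [f for f in page_facts
                 if any(c[:2] == f[:2] for c in consensus)]
    if not checkable:
        return 0.5  # nothing to confirm or contradict
    return sum(f in consensus for f in checkable) / len(checkable)

hoax_page = [("st_george_church", "is_in", "iraq")]
print(trust_score(hoax_page))  # 0.0 – contradicts the consensus fact
```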

And so the circle keeps on turning. UGC has given way to a pressing need for keen eyesight and human verification, but the next step could be handed over to the bots, be they Russian hoaxers or Google fact hunters.

Vicky Baker is the deputy editor of Index on Censorship magazine, which carries more on social media verification and the changing approach to user-generated content in its latest issue. The edition’s main theme is how refugee stories get told, with a tie-in event at London’s RichMix Cinema on 21 April. Follow the magazine @index_magazine
