In January 2018, a desktop application called FakeApp was released. While images have been manipulated online for years, FakeApp uses deep learning to let anyone create realistic face-swap videos – often for sexually explicit purposes. Headlines were made when celebrities’ faces were superimposed onto porn performers’ bodies, while many on social media laughed at safe-for-work edits of Nicolas Cage’s face onto other actors’ bodies.
This month, Reddit and Pornhub banned these videos – known as “deep fakes” – and when Motherboard first broke the story, writer Samantha Cole was clear about the potential consequences of the tech.
“It isn’t difficult to imagine an amateur programmer running their own algorithm to create a sex tape of someone they want to harass,” she wrote. But Hany Farid, a digital forensics and image analysis expert at Dartmouth College, warns there could be even greater consequences.
“What I’m particularly concerned about is the ability to create fake content of world leaders,” he tells me, giving the example of Donald Trump declaring nuclear war. “You can imagine that creating an international crisis very quickly.”
Despite the bans, people aren’t going to stop using this technology, which means every internet user has a responsibility to get smarter about what they share. Right now, deep fakes themselves aren’t too hard to identify. A combination of common sense (did noted feminist and multi-millionaire Emma Watson really film a video of herself in the shower for Reddit?) and looking for tell-tale signs (does the video flicker? Is it a very short clip? Does something seem off?) will suffice.
Yet as the technology improves, and as related tools (such as the artificial intelligence used by researchers at the University of Washington to create a fake speech by Barack Obama) become more accessible, everyone will need to be equipped to spot fake videos.
“Ten, 20 years from now, ideally we will have software online where you can upload a video and it can tell you if it’s real or not, but we’re not there, and we’re not even close,” says Farid, who warns that ordinary people shouldn’t attempt to do complicated forensic analysis of videos and pictures. “A low-tech solution can solve a lot of the problems: common sense.”
When Farid wants to verify a picture or a video, he has many tools – including his own knowledge – at his disposal. When he wanted to see if a viral video of an eagle snatching a baby was real, he took a single frame and analysed the shadows, finding them inconsistent with the position of the sun in the video. When he wants to see if a person in a video is CGI or real, he uses software that identifies subtle colour changes in people’s faces.
“When you’re on camera and your heart is beating, blood comes in and out of your face,” he explains. “Nobody notices this, you can’t see it – but the colour of your face changes ever so slightly, it goes from slightly redder to slightly greener.” So far, this technique also appears to work on face-swapped deep fake videos: digital forensics experts can use the colour changes to recover a person’s pulse, and a synthetic face tends to lack that coherent signal – which helps reveal whether the video is real.
“Now I’ve told you this, guess what’s going to happen?” says Farid. “Some asshole Redditor is going to put the goddamn pulse in.”
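To make the principle concrete, here is a rough sketch of how such a pulse signal might be extracted from a clip – not Farid’s actual software, just an illustration that assumes OpenCV and NumPy are available, that the face region has been located by hand, and that a simple frequency-peak search is good enough; the function name and parameters are hypothetical.

```python
# Illustrative sketch only: average the green channel over a face region
# in each frame, then look for a dominant frequency in the normal human
# heart-rate band (roughly 0.7-3 Hz, i.e. 42-180 beats per minute).
import cv2
import numpy as np

def estimate_pulse_bpm(video_path, face_box, max_frames=300):
    """Return a rough beats-per-minute estimate from skin-colour changes."""
    x, y, w, h = face_box
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    greens = []
    while len(greens) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]       # crop the (hand-picked) face region
        greens.append(roi[:, :, 1].mean())  # mean green intensity (OpenCV is BGR)
    cap.release()

    signal = np.asarray(greens) - np.mean(greens)  # remove the constant offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    band = (freqs >= 0.7) & (freqs <= 3.0)  # plausible human heart rates only
    if not band.any():
        return None
    dominant = freqs[band][np.argmax(spectrum[band])]
    return dominant * 60.0  # Hz -> beats per minute

# e.g. bpm = estimate_pulse_bpm("clip.mp4", face_box=(200, 80, 120, 150))
```

A genuine face tends to produce a clear peak in that band, while a face-swapped one often yields noise – though, as Farid notes, that is exactly the signal a determined faker will learn to forge.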
Farid calls this an “information war” and an “arms race”. As video-faking technology becomes both better and more accessible, photo forensics experts like him have to work harder. This is why he doesn’t advise going around trying to gauge the amount of green or red in someone’s face whenever you think you’ve spotted a fake video online.
“You can’t do the forensic analysis. We shouldn’t ask people to do that, that’s absurd,” he says. He warns of a further danger: that people will start dismissing genuine videos of politicians behaving badly as fakes. His advice is simple: think more.
“Before you hit the like, share, retweet etcetera, just think for a minute. Where did this come from? Do I know what the source is? Does this actually make sense?
“Forget about the forensics, just think.”
Kim LaCapria works for one of the internet’s oldest fact-checking web resources, Snopes. Deep fakes aren’t her biggest concern, as lower-tech fakes have dominated the internet for decades.
Most successful video fakery in recent years involved conditions that allowed for misrepresentation, she says. Examples include videos in a foreign language which aren’t subtitled but are instead captioned misleadingly on social media (such as this video of a Chinese man creating wax cabbages, which Facebook users claimed were being shipped to America and sold as real veg) and old videos repurposed to fit a current agenda.
In the past, low-tech editors have successfully spread lies about world leaders. In 2014, a video appeared to show Barack Obama saying “ordinary men and women are too small minded to govern their own affairs” and must “surrender their rights to an all-powerful sovereign”. It gained nearly one and a half million views on YouTube.
“This video goes back perhaps four years or more, and wasn’t exceptionally technologically advanced,” says LaCapria. “It involved the careful splicing of audio portions of a speech given by President Obama abroad. It spread successfully not just due to its smooth edits, but also because the speech was one not many Americans witnessed.”
Concern about the political ramifications of deep fakes may be overhyped when we already live in a world where a faked Trump tweet about “Dow Joans” can be shared by over 28,000 people, and a video of an academic talking about “highly intelligent people who gravitate to the alt-right” can be taken out of context to publicly shame him. Yet LaCapria hopes that deep fakes will now inspire people to be hypervigilant online.
“Readers should really be cautious to find the source of the video and its context before sharing it,” she advises. When it comes to speeches by world leaders, you can Google for a transcript or simply search to see when a statement was first made. For example, a participant on an Australian ABC TV programme claimed London Mayor Sadiq Khan said terror attacks are “part and parcel of living in a big city” after the 2017 Westminster attack. In fact, Khan had said a version of this in 2016 – months before the attack, so it could not have been a response to it.
“People should definitely be wary if they can’t themselves authenticate a translation, and as always, they should be aware if it’s a ‘too interesting to be true’ situation,” says LaCapria. “If a famous person is depicted doing, saying, or engaging in newsworthy activity, it’s likely that would be something entertainment and news websites would report if it were even remotely legitimate.”
Sometimes, things that seem too good to be true are true. This month, a video of Trump’s hair blowing in the wind exposed the president’s strange scalp. Snopes were able to verify it by comparing the popular video on social media with footage from other sources – such as Fox News, The Guardian, and Getty Images. This is arguably something everyone can and should do: when a video confirms your political biases, double-check it.
Deep fakes, then, are just the latest iteration of a problem as old as the internet. While it’s worrying that FakeApp has democratised the creation of fake videos, the underlying problem remains how we receive and scrutinise information online.
“We have to start afresh,” says Farid, arguing that people need to be taught how to handle online information properly. He has himself been asked to give public talks explaining this.
“Do I really have to tell people to use common sense?” he muses, with a hint of frustration. “Apparently, here we are – so OK, fine. I’ll do that.”