Does this faked Obama speech spell doom for video evidence?
In August 2017, academics demonstrated a new technology that creates realistic facial movements to match a spoken soundtrack – and many worried it would make the advance of ‘fake news’ unstoppable. After the initial hype had subsided, Martin Robbins, tech writer and product manager at Factmata, told us why he wasn’t so sure it meant the end was nigh for trustworthy video
2nd August 2017 (Taken from: #28)
When three University of Washington researchers presented their paper Synthesizing Obama: Learning Lip Sync from Audio at a Los Angeles conference hall on 2nd August 2017, the assembled delegates had been primed to expect fireworks. A video created by the trio to demonstrate the potential of their innovation had already caused a stir.
The Atlantic speculated that the researchers’ work would “make it impossible for you to believe what you see”, while tech website Boing Boing said the experiment might herald “the beginning of the end for video evidence.”
The video presented by Supasorn Suwajanakorn, Steven M Seitz and Ira Kemelmacher-Shlizerman showed two clips of Barack Obama speaking simultaneously, side by side. But only one was real – the speech in the video on the right had never taken place. The researchers had used a neural network to study footage of Obama speaking and learn how his lips move when he makes particular sounds. The system then learned to animate mouth shapes on the fake Obama corresponding to old audio recordings of him – and of an impersonator who sounds like him.
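The core idea – learning a mapping from audio features to mouth shapes, then smoothing the result over time – can be sketched very roughly. This is an illustrative toy only, not the researchers’ method: the paper trains a recurrent neural network on many hours of footage, while here a nearest-neighbour lookup over invented, randomly generated “training” frames stands in for that learned mapping, and the feature dimensions (13 audio coefficients, 18 lip-landmark values) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "training" data: per-frame audio features paired with the
# mouth-shape parameters observed in real footage at the same frame.
train_audio = rng.normal(size=(500, 13))   # e.g. 13 audio coefficients per frame
train_mouth = rng.normal(size=(500, 18))   # e.g. 18 lip-landmark coordinates

def synthesize_mouth(audio_frames, smooth=5):
    """Map each new audio frame to a mouth shape, then smooth over time."""
    shapes = []
    for frame in audio_frames:
        # Pick the training frame whose audio features are closest.
        idx = np.argmin(np.linalg.norm(train_audio - frame, axis=1))
        shapes.append(train_mouth[idx])
    shapes = np.array(shapes)
    # Temporal smoothing keeps the animated lips from jittering between frames.
    kernel = np.ones(smooth) / smooth
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, shapes
    )

new_audio = rng.normal(size=(100, 13))   # audio for a speech never given
mouth_track = synthesize_mouth(new_audio)
print(mouth_track.shape)  # one mouth shape per audio frame: (100, 18)
```

The smoothing step mirrors a real constraint the researchers faced: a per-frame mapping alone produces flickering lips, so the output must be made temporally coherent before it is composited onto the target face.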
“Seeing words coming out of the mouth of the fake Obama was a shock to the system,” says Martin Robbins, a product manager at Factmata, which is working to develop new technology to help tackle the problem of misinformation and ‘fake news’ online. “I thought, ‘Oh God, this changes everything,’” he says.
This video-editing technology has arrived at a sensitive moment. The phenomenon of ‘fake news’ – a term named word of the year by Collins Dictionary’s lexicographers in November 2017 due to its “ubiquitous presence” since Donald Trump’s arrival on the political scene – is perceived to have undermined public trust in journalists and news organisations. In September 2017, Facebook acknowledged that a Russian-funded misinformation campaign had spent US$100,000 on advertising on the social media site.
“The bigger problem with ‘fake news’ is that a hyper-partisan section of the population is determined to believe it”
“Social media platforms have almost accidentally become major news platforms and they’re now wrestling with the kind of problems that major news organisations have faced,” says Robbins.
For those who might benefit from the online dissemination of misleading media, the lip-synching technology will bolster a rapidly expanding toolkit. It could potentially be used in conjunction with speech-mimicking software – in early 2017 a Canadian startup, Lyrebird, released a beta version of a program that takes a one-minute voice sample from users and mimics their speech reasonably accurately. Meanwhile, ‘facial reenactment’ software Face2Face takes a user’s facial expressions and superimposes them on someone else’s face.
Yet reports of the death of video verification may be exaggerated, says Robbins. “Photo manipulation has become incredibly sophisticated over the past 15 years and while there’s been an escalation in fake content, we haven’t seen doctored photos of Obama shaking hands with Kim Jong-un fooling people,” he says. “There are always ways of spotting fakes, and many people on the internet are obsessed with flagging them up.”
The bigger problem with ‘fake news’, he says, is that a hyper-partisan section of the population is determined to believe it, even if it’s been flagged as fraudulent. “For them it’s like candy,” he says. “They want to shove it into their faces without examining the content and where it comes from.”
The source of ‘fake news’ is key to its efficacy, says Robbins. “I can’t see a person posting a fake video on a random Facebook forum starting World War 3,” he says. “But if it’s on a president’s Twitter account or it appears to come from a serious news outlet it’s a very different matter.”
Robbins has little doubt that state-backed hackers will already be thinking about how to use the glut of media-editing technologies currently in development to fool people. But he refuses to join in with some of the more alarmist analyses of the technology and his initial shock has subsided. “Upon reflection I’m not so sure it’s quite such a game-changer because you don’t need a fake video of Obama to make people believe that he said something. You can post an article on a conspiracy website and many people will quite happily believe it in the absence of any evidence whatsoever. Video editing technology is another weapon in the arsenal [of propagandists] and it potentially expands an existing problem, but I’m not convinced it will make a dramatic difference.”
“You could create a database of every piece of knowledge that has ever been gleaned but if a report is about something new, what can you fact-check that against?”
Factmata is one of several tech startups aiming to tackle the problem of online misinformation, including British charity Full Fact, which is developing automated fact-checking tools to monitor live subtitles of politicians’ remarks and instantly check them against reliable primary sources in its archive. But Robbins believes that a magic bullet – a tech breakthrough that singlehandedly stops all false information in its tracks – is unattainable.
“There’s the fundamental problem that by definition news consists of new information,” Robbins says. “You could create a database of every piece of knowledge that has ever been gleaned but if a report is about something new, what can you fact-check that against?”
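Robbins’ objection can be made concrete with a toy sketch. This is not Factmata’s or Full Fact’s actual system – the claims and the lookup below are invented for illustration – but it shows the structural limit he describes: a checker can only confirm or refute claims its reference database happens to cover, and genuinely new information falls through.

```python
# A toy knowledge base of previously verified claims (invented examples).
KNOWN_FACTS = {
    "obama was the 44th us president": True,
    "one of the missiles in iran's 2008 test photo failed to launch": True,
}

def fact_check(claim: str) -> str:
    """Return a verdict for a claim, or admit the database can't help."""
    key = claim.lower().strip()
    if key in KNOWN_FACTS:
        return "supported" if KNOWN_FACTS[key] else "refuted"
    # News is, by definition, new: nothing in the database to check against.
    return "unverifiable"

print(fact_check("Obama was the 44th US president"))    # supported
print(fact_check("A summit took place this morning"))   # unverifiable
```

Real systems soften this limit by checking claims against live primary sources rather than a static archive – which is Full Fact’s approach with politicians’ subtitled remarks – but the gap for brand-new events remains.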
Those seeking technological antidotes to ‘fake news’ must also tackle nuance and context, areas AI tends to struggle with. Robbins cites an article published by right-wing outlet Breitbart which claimed that migrants had committed over 400,000 crimes in Germany in 2015. The figure was correct, but it didn’t clarify that crossing the border as an asylum seeker, which many migrants are forced to do, is a crime in Germany and was included in the statistics.
Robbins points out that there’s nothing new about propagandists using doctored images or film. The Soviet Union added and removed political figures from photographs from the 1920s onwards, while in 2008 Iran digitally altered an image of a missile test to obscure the fact that one of the missiles failed to take off.
“We can never stop propaganda,” Robbins insists, “but there are many moderate people who want to be well-informed and hopefully we can help create a better space for public discourse for these people.”
We hope you enjoyed this sample feature from issue #28 of Delayed Gratification