Last month, a concerned anchorman dressed in a smart black suit jacket and navy striped tie solemnly told viewers from the studios at the BBC that a “serious incident between Russia and NATO” had occurred. He went on to tell the audience that the Royal Family had been moved, that thermonuclear war had erupted in Europe, and that an official emergency broadcast from what appeared to be the country’s Home Office advised those watching in the U.K. to seek immediate shelter.
The problem? The video wasn’t real. The anchorman was Mark Ryes, a voiceover artist and TV presenter who works from professional studios in Oxfordshire, and he initially recorded it for a psychometric test in a scientific study. It was never meant to be released to the public, and when the video was uploaded to YouTube and then spread on the Facebook-owned messaging service WhatsApp, enough people thought it was real that the BBC had to issue an official statement distancing itself from the fictional clip.
“Over the last couple of days we have had people contact various BBC bureaux, mainly in Africa and Asia, as they had seen the video on WhatsApp and wanted to check if it was a real BBC report,” a spokesperson told the Independent. “WhatsApp seems to be the main platform it was being shared on but we know it’s circulated on other social media too. We’re keen to make clear that this isn’t a BBC News report.”
Ryes told RealClearLife that he was worried about the potential misuse of the footage, but when he mentioned his concerns to his client, they brushed him off.
“When I suggested to the company who’d produced the video that they may want to password protect it online so that their clients could see it, but that the sensitive nature of the material wouldn’t get into the public realm, the company shot me down in flames,” Ryes told RealClearLife in an email.
“They wrote to me, ‘There is a significant difference between ‘fake news’ and a work of fiction, and we have only ever intended creating the latter and labelling it accordingly.’ This was a couple of years ago before fake news was even a ‘thing’—and they obviously didn’t think about the future where someone could re-package and re-post their original video without the appropriate labels,” Ryes recounted.
“It was frustrating to be at the centre of a so-called ‘fake news furore,’ but there was absolutely nothing I could do about it.”
But there’s a new digital realm of fake and fictional video that we’re entering. Thanks to software like FakeApp, built on so-called deepfake techniques, people with relatively basic digital literacy can swap Ryes’ face with Donald Trump’s, if they so choose. Add accompanying fake-audio software that can quickly learn and imitate Trump’s speaking voice, and you have the President of the United States telling the world a major war has just erupted. That scenario is very similar to the recently released BuzzFeed-produced video featuring director Jordan Peele as Barack Obama. The purpose? To warn about the manipulation of video online.
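FakeApp and similar tools build on deep convolutional autoencoders; implementations vary, but the core trick is a single shared encoder paired with a per-person decoder, so that a face encoded from person A can be decoded as person B. The toy sketch below illustrates that structure with plain numpy linear layers and random noise standing in for real face data (the array sizes, learning rate, and training loop are illustrative assumptions, not FakeApp’s actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale patches for two people (random noise here).
faces_a = rng.random((32, 64))
faces_b = rng.random((32, 64))

# One shared encoder, two person-specific decoders (linear, for illustration only).
dim, latent = 64, 16
enc = rng.normal(scale=0.1, size=(dim, latent))
dec_a = rng.normal(scale=0.1, size=(latent, dim))
dec_b = rng.normal(scale=0.1, size=(latent, dim))

def train_step(x, dec, lr=0.01):
    """One gradient step on the reconstruction error ||x @ enc @ dec - x||^2."""
    global enc
    z = x @ enc                              # encode into the shared latent space
    err = z @ dec - x                        # reconstruction error
    grad_dec = z.T @ err / len(x)            # gradient w.r.t. this person's decoder
    grad_enc = x.T @ (err @ dec.T) / len(x)  # gradient w.r.t. the shared encoder
    dec -= lr * grad_dec
    enc -= lr * grad_enc
    return float((err ** 2).mean())

# Alternate training: both decoders pull on the same shared encoder.
for _ in range(200):
    loss_a = train_step(faces_a, dec_a)
    loss_b = train_step(faces_b, dec_b)

# The "face swap": encode person A, then decode with person B's decoder.
swapped = faces_a @ enc @ dec_b
print(swapped.shape)  # (32, 64)
```

The design point is that the encoder, trained on both people, is pushed toward person-independent features (pose, lighting, expression), while each decoder learns one person’s appearance; swapping decoders at inference time is what produces the fake.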
“You [used to] have very in-depth knowledge of these tools that take years and years to get used to,” said Siraj Raval, who teaches developers on YouTube how to build on their artificial-intelligence backgrounds. He told RealClearLife that, previously, it would take an entire team to create the kind of advanced, manipulated video that can fool much of the public, but that won’t be the case moving forward.
“With this tech like deepfakes, if it gets good enough—which it will, it will get good enough—anybody will be able to do that,” Raval said. “It’s kind of like these superpowers who are given to people who put in the time, the effort, the years, the learning—slowly everybody is going to be able to do everything.”
This problem isn’t particularly new. Raval said the technology has been rapidly progressing for several years but has only recently become widely accessible and newsworthy. One early area of deployment has been pornography, where famous actresses have had their faces plastered onto the bodies of adult film performers without their permission. But the spread of misinformation, and our willingness to believe it, comes from a more primal place that stretches back centuries.
“It used to be the case that people perceived whatever was written in books to be true because books were the high elite form of learning,” said Dr. Gleb Tsipursky, a behavioral scientist whose expertise focuses on how humans relate to information in the outside world. “It was hard for some people to imagine that if something was in a book, way back when, that it wasn’t true,” he told RealClearLife. “So we’re kind of approaching that era in videos, where people right now believe that if they see something on video, it’s true.”
He continued, saying: “People are greatly flawed. Our minds are not adapted to the current environment in which we find ourselves. …Human beings are biological machines, we are driven by various impulses. In order to counteract our natural impulses, we need to develop mental habits—we learn to exercise, we learn to not interrupt when others are speaking—we don’t have any learned behaviors around engaging with social media. That’s why people are so easily manipulated.”
So what can be done about it? How can you arm yourself against being tricked? Tsipursky said that media literacy is vital, as is deciding to be vigilant and informed when viewing information online. “Be part of the solution,” he said.
“The best thing to do is make a pre-commitment to a certain set of truth-oriented behaviors,” said Tsipursky, who created an online pledge—signed by the likes of well-known cognitive psychologist Steven Pinker—to outline exactly what those behaviors are. “When we talk about being truthful, that’s a very flighty concept. It’s much more effective to ground out this concept in very clear and specific behaviors.”
“If you can recycle, you can fact-check,” Tsipursky said.
But to be able to do that, it’s important to be able to identify misinformation, in video form or otherwise, and Tsipursky has outlined some ways to identify when you’re seeing something that “goes against reality.”
“It can mean directly lying, lying by omission, or misrepresenting the truth to suit one’s own purposes. Sometimes misinformation is blatant and sometimes it’s harder to tell. For those tough calls we rely on credible fact-checking sites and the scientific consensus.”
Tsipursky points to Snopes, PolitiFact and FactCheck.org as a few of the fact-checking websites that can help verify information. He also outlined specific behaviors that can directly fight the spread of fake or manipulated information. These include verifying information before sharing, sharing the whole truth — even if you may disagree with some or all of it — and citing sources so that others can verify any information you share. It’s also important to distinguish between opinion and fact, and to honor truth by acknowledging when others share true information, even if you disagree.
But what happens when you’re the victim of a misinformation campaign? All of the responsibility sharing and fact-checking in the world can’t stop another person from taking your likeness and manipulating it. What can be done if you suddenly find your head attached to someone else’s body, doing things you didn’t do, saying things you didn’t say? Do existing laws do enough to protect you? Are there more that need to be crafted?
A recent public post by the civil liberties director of the Electronic Frontier Foundation, which in part advocates for free speech and privacy, argues that the appropriate legislation already exists.
“If a deepfake is used for criminal purposes, then criminal laws will apply,” David Greene writes. “For example, if a deepfake is used to pressure someone to pay money to have it suppressed or destroyed, extortion laws would apply. And for any situations in which deepfakes were used to harass, harassment laws apply. There is no need to make new, specific laws about deepfakes in either of these situations.”
Greene goes on to highlight various legal pathways that people can take—including copyright infringement, right of publicity and various tort laws like false-light invasion of privacy—to ensure they get justice.
But Dr. Johanna Blakley, the managing director at The Norman Lear Center at the University of Southern California’s School of Communication and Journalism, told RealClearLife in an email that “no policy is going to eliminate malicious misinformation from our global media ecosystem.”
“We would like to think that there is an obvious difference between a sharp satire and a malicious disinformation campaign, but I suspect that policymakers will continue to struggle with those distinctions as the technology improves and evolves,” Blakley said. “What I hope will come to the fore in public conversations about audio/video manipulation and fake news is the underlying problem: a lack of media literacy and critical thinking skills.”
Not everyone is concerned about the new technology spelling doom for the online ecosystem, though. While Siraj Raval stresses the importance of people owning their data and staying informed, he’s hopeful about what these developments in technology mean for society as a whole—including scientists, who’ll be able to visualize diseases or breakthrough research in brand-new ways.
“Technology has always been a double-edged sword. That’s been the case for a lot of the tech we have created. But it comes down to my faith in humanity. In myself, in us,” Raval said. “That we will build systems that are more trustful, that we can all agree upon the validity of.
“Now we’re at a point where we’re seeing stuff like deepfakes happen and we’re seeing how important data is, and how important truth is, and the way we’re going to get around that is to build better systems that allow people to validate all the data they see, and the way to get there is to create entirely new services we use that are community run.”