Will Advances in AI Deepen The Fake News Crisis, Or Help Solve It?

As synthetic media spreads, authentic images will invite skepticism.


In a media environment filled with fake news, technological advances have disturbing implications.

A Reddit user named Deepfakes released a software tool last fall that allows anyone to make synthetic videos in which a neural network substitutes one person's face for another's while keeping their expressions consistent.

The user also posted pornographic videos, known as “deepfakes,” that appear to feature various Hollywood actresses.

Around the same time, a research group at the University of Washington published a paper demonstrating a neural network that could create believable videos in which former President Barack Obama appeared to be saying words that were actually spoken by someone else.

Today's smartphones can digitally manipulate even ordinary snapshots, and recent top-grossing movies like Black Panther and Jurassic World are saturated with synthesized images that would have been dramatically harder to produce just a few years ago, The New Yorker writes.

What does this mean for the world of fake news? Will this technology make the crisis worse, or will it push us to get better at telling the real from the fake?

“Actually, from the very beginning, photography was never objective,” said Alexei A. Efros, a computer scientist who runs one of the world’s best image-synthesis labs, according to The New Yorker. “Whom you photograph, how you frame it—it’s all choices. So we’ve been fooling ourselves. Historically, it will turn out that there was this weird time when people just assumed that photography and videography were true. And now that very short little period is fading. Maybe it should’ve faded a long time ago.”
