How Facebook and Google (and Their Users) Can Fight Fake News Better

Tech companies aren't using the right weapons to fight mistruths and prevent them from spreading.

May 26, 2017 5:00 am
Fighting fake news on social media starts with its users (Getty Images)

The term “fake news” entered the public lexicon during last year’s election, and many critics blamed social media platforms and tech giants for enabling its spread. And while Facebook and Google have taken steps to minimize the impact, the companies aren’t using the right weapons.

False reports have been around for centuries in various forms, but the so-called Information Age is ironically facing an inundation of misinformation. The advance of technology has made it easy to fake legitimacy and to spread ideas without verification.

“This has been a long-standing tradition in journalism, but the thing that has changed is the education levels and the astuteness of the audience,” Michael Bugeja, director of Iowa State’s journalism school, told RealClearLife.

Bugeja says people are more likely to believe false information when they passively imbibe it in the form of push alerts on their phones and posts in their social media feeds.

It’s easy to blame the messenger for sharing false information, but it’s hard to ignore the evidence. Social media and search algorithms promote the stories that are shared and clicked most often. A 2015 study from MIT political scientist Adam Berinsky found that the more often consumers see a statement, the more likely they are to believe it, no matter how doubtful they may have been initially.
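To see why that feedback loop matters, consider a minimal sketch of engagement-based ranking (the weights, names, and data here are invented for illustration; this is not any platform’s actual code). Because the score rewards shares and clicks alone, nothing in the ranking distinguishes a hoax from an accurate report:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    shares: int
    clicks: int
    accurate: bool  # the ranking below never looks at this field

def engagement_score(story: Story) -> float:
    # Hypothetical weights: engagement-only ranking has no notion of truth,
    # so a widely shared hoax outranks an accurate but less viral report.
    return 1.0 * story.shares + 0.5 * story.clicks

def rank_feed(stories: list[Story]) -> list[Story]:
    # The most-engaged stories float to the top, which is how repeated
    # exposure (the effect Berinsky documents) gets amplified.
    return sorted(stories, key=engagement_score, reverse=True)

feed = rank_feed([
    Story("Outrageous hoax", shares=9000, clicks=40000, accurate=False),
    Story("Careful fact-check", shares=300, clicks=2000, accurate=True),
])
print([s.title for s in feed])  # ['Outrageous hoax', 'Careful fact-check']
```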

The chance that a person will believe something false is higher when it’s shared by someone with similar political views, according to Yale University psychologists.

In an attempt to stem the problem, Google and Facebook have changed their algorithms to feature authoritative sources more prominently while suppressing untrustworthy outlets. Both companies have also given users a way to report inaccurate search suggestions and news stories found on their platforms.

In particular, Facebook has started to alert users when they attempt to share a news story that’s been debunked. Even though they still have the option to share those flagged posts, Facebook users are now nudged toward pages that feature more accurate coverage of the same subject. Cambridge University researchers found that a strategy like this can act as a “psychological vaccine” against fake news. The authors of the study, published in the journal Global Challenges, reported that the tactic “helps build up resistance to misinformation, so the next time people come across it they are less susceptible.”

However, both Facebook and Google still need to take a proactive role in the hunt for offending websites by leveraging their powerful machine-learning algorithms to spot new outlets as they appear. This was one of the key recommendations from members of the academic, journalism, and technology communities who convened for a panel on combating fake news at Harvard in February. The panel also suggested getting more conservatives involved in the conversation about misinformation, to avoid the problem being dismissed as a partisan issue.

While admitting he may come across as a luddite (though he asserts he’s not), Bugeja warns that the effectiveness of fake news is an example of how society’s over-reliance on technology may lead to a shift in values.

“Let’s stop with the idea that technology is always going to bring us a better picture. It’s not true,” he explains.

“Technology brings us good futures and bad futures, and we need a literate audience to determine which path we are going to take.”

One example of this bad future Bugeja mentions is the prevalence of social botnets: large groups of social media accounts pushing similar messaging in an orchestrated fashion. With enough accounts, social botnets can dominate the online conversation. This is especially true on Twitter, since it features posts on a chronological timeline, which effectively gives those messages a wave to ride. CBS News reports that expert witnesses in the Russia investigation told the U.S. Senate Select Committee on Intelligence that social botnets still play an active role in spreading misinformation.
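As an illustration of what that orchestrated volume looks like in data, here’s a toy sketch of one common detection heuristic: flagging near-identical posts from many distinct accounts within a short window. The account names, threshold, and window below are hypothetical, and real detection systems are far more sophisticated:

```python
from collections import defaultdict

# Each post: (account_id, timestamp_in_seconds, text)
posts = [
    ("bot_001", 0, "Candidate X is finished! #news"),
    ("bot_002", 5, "Candidate X is finished! #news"),
    ("bot_003", 9, "Candidate X is finished! #news"),
    ("human_42", 300, "Anyone watching the debate tonight?"),
]

WINDOW = 60        # hypothetical: posts within 60 seconds of each other
MIN_ACCOUNTS = 3   # hypothetical: at least 3 distinct accounts

def flag_coordinated(posts):
    """Flag any message that enough distinct accounts post
    within a single short time window."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, items in by_text.items():
        items.sort()  # order each message's posts by timestamp
        for start_ts, _ in items:
            accounts = {a for t, a in items if start_ts <= t <= start_ts + WINDOW}
            if len(accounts) >= MIN_ACCOUNTS:
                flagged.append(text)
                break
    return flagged

print(flag_coordinated(posts))  # ['Candidate X is finished! #news']
```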

In his forthcoming book Interpersonal Divide in the Age of the Machine, Bugeja argues for media and technology literacy classes. He believes that if people understand how those platforms work, they’ll be better equipped to separate fact from fiction.

Consumers of media and news publications also play a role in fanning the flames of the raging garbage fire that fake news has become. It’s often sparked by fringe groups online that manipulate the 24-hour news cycle.

According to researchers Alice Marwick and Rebecca Lewis, it’s become a vicious cycle. In a recent report for the Data & Society Research Institute, they wrote that “online communities are increasingly turning to conspiracy-driven news sources, whose sensationalist claims are then covered by the mainstream media, which exposes more of the public to these ideas.”

The technique is called “attention hacking,” and it’s used to boost the visibility of fake news that supports an agenda. News outlets can’t resist covering the false stories, even if only to debunk them, and readers have a hard time not clicking, given their outrageous nature. Regardless, the voyeurism helps the message stick and can damage mainstream media’s credibility with part of the audience.

And these divisions are spilling over—even when people put down their phones and have in-person political arguments.

“We turn the physical realm into an echo chamber of what we have so easily created online,” MIT social scientist Sherry Turkle writes in her book Reclaiming Conversation. “It’s a cozy life, but we risk not learning anything new.”
