Expert Strategy to Force Google and Facebook to End Fake News

April 12, 2017 3:34 pm
(Getty/Nora Carol Photography)
Vivek Wadhwa (Pier Marco Tacca/Contributor/Getty Images)

A syndicated columnist at the Washington Post is calling out Google and Facebook for reaping massive profits from allowing the spread of so-called “fake news.”

But Vivek Wadhwa, who’s also a distinguished fellow at Carnegie Mellon University, isn’t just complaining. He points to specific changes the companies should make to rectify the problem.

“It wasn’t supposed to be this way. Social media was developed with the promise of spreading democracy, community and freedom, not ignorance, bigotry and hatred,” Wadhwa writes. “Instead, it has become a tool enabling technology companies to mine data to sell to marketers, politicians and special-interest groups, enabling them to spread disinformation.”

Although Wadhwa notes that the tech giants have acknowledged the problem of spreading misinformation, he slams Facebook chief executive Mark Zuckerberg for claiming it will take “years” to develop the artificial intelligence needed to review the billions upon billions of posts made on the platform every day.

It’s not a lack of resources or ability, Wadhwa says. It’s a lack of motivation.

“Whenever it comes to making money, the tech industry always seems to be able to find a way — and it doesn’t take years,” Wadhwa points out. He also argues that the owners of these companies need to steer them in a more positive direction, pulling from Ramesh Srinivasan’s book “Whose Global Village?” for three specific ways that social media and tech companies should do their due diligence and take responsibility:

    1. Ask social media companies to reveal the filters and choices that make up their algorithms and shape interactivity. Does a user see a post because of location, number of mutual friends, or similarity in posts? Google should likewise disclose what factors determine the results seen in a search, Srinivasan argues. Neither company would have to publish proprietary software code to do either of these things.
    2. Users should have the opportunity to choose among types of information. This should include news shared by people beyond a user’s social network, as well as feed filters that let users see information from different geographic locations and a range of political opinions.
    3. Social media companies should allow news credibility to be visualized, Srinivasan argues, and are capable of developing interfaces that let users see how any given topic is discussed across multiple places, cultures, and perspectives.

Vivek Wadhwa’s full column, which he calls one of his harshest ever, appears in the Washington Post.
