Facebook’s fake news problem solved in 36 hours by these students
Facebook has recently been caught in a storm over the questionable role the company played in promoting fake news in the lead-up to Donald Trump's victory in last week's U.S. election. The social networking giant has been accused of letting fake and misleading news articles, essentially propaganda and lies, spread across the site unchecked. Nearly half of adult Americans rely on Facebook as a source of news, a recent Pew Research Center study found.
Even President Barack Obama weighed in, calling the false information that circulated during the election cycle a “dust cloud of nonsense.” Facebook, for its part, has argued that fake news makes up such a small percentage of what is shared on the platform that it could not have swayed the result, and that separating real news from lies is a difficult technical problem.
Not too difficult, apparently, for these four college students, who built an algorithm in the form of a Chrome browser extension in just 36 hours during a Facebook-sponsored hackathon at Princeton University. They named their project “FiB: Stop living a lie.”
The students are Nabanita De, a second-year Master's in Computer Science student at UMass Amherst; Anant Goel, a freshman at Purdue University; Mark Craft, a sophomore at the University of Illinois at Urbana-Champaign (UIUC); and Catherine Craft, also a sophomore at UIUC.
Talking about how their news feed authenticity checker works, De told Business Insider:
“It classifies every post, be it pictures (Twitter snapshots), adult content pictures, fake links, malware links, fake news links as verified or non-verified using artificial intelligence.
“For links, we take into account the website’s reputation, also query it against malware and phishing websites database and also take the content, search it on Google/Bing, retrieve searches with high confidence and summarize that link and show to the user. For pictures like Twitter snapshots, we convert the image to text, use the usernames mentioned in the tweet, to get all tweets of the user and check if current tweet was ever posted by the user.”
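De's description sketches a multi-signal pipeline for links: check the URL against malware and phishing blocklists, weigh the site's reputation, and cross-check the story against a search engine before issuing a verdict. Below is a minimal sketch of how such a scoring pipeline might be wired together; the lookup functions, thresholds, and weights are all hypothetical stand-ins (the team has not published this exact logic) and are shown only to illustrate the flow.

```typescript
// Hypothetical sketch of a FiB-style link checker. The data sources
// (blocklist, reputation scores, search cross-check) are stand-ins;
// the real extension's logic and weights are not public.

interface LinkVerdict {
  url: string;
  verified: boolean;
  confidence: number; // 0..1
  reasons: string[];
}

// Stand-in for a malware/phishing blocklist lookup.
async function isBlocklisted(url: string): Promise<boolean> {
  const blocklist = new Set(["malware.example", "phish.example"]); // placeholder data
  return blocklist.has(new URL(url).hostname);
}

// Stand-in for a domain-reputation score between 0 (unknown/bad) and 1 (trusted).
async function domainReputation(url: string): Promise<number> {
  const reputations: Record<string, number> = { "apnews.com": 0.95, "example-blog.net": 0.3 };
  return reputations[new URL(url).hostname] ?? 0.5;
}

// Stand-in for "search the content on Google/Bing and keep high-confidence hits":
// returns how many independent, reputable sources report the same story.
async function corroboratingSources(headline: string): Promise<number> {
  void headline; // a real implementation would query a search API here
  return 0;      // placeholder
}

async function checkLink(url: string, headline: string): Promise<LinkVerdict> {
  if (await isBlocklisted(url)) {
    return { url, verified: false, confidence: 0.95, reasons: ["matched malware/phishing blocklist"] };
  }

  const reputation = await domainReputation(url);
  const corroboration = Math.min((await corroboratingSources(headline)) / 3, 1);

  // Hypothetical weighting: domain reputation and independent corroboration both count.
  const confidence = 0.5 * reputation + 0.5 * corroboration;
  const reasons = [`domain reputation ${reputation.toFixed(2)}`, `corroboration ${corroboration.toFixed(2)}`];

  return { url, verified: confidence >= 0.6, confidence, reasons };
}

// Example:
// checkLink("https://example-blog.net/pot-cures-cancer", "Pot cures cancer")
//   .then(v => console.log(v.verified ? "verified" : "not verified", v));
```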
The browser plug-in adds a small tag in the corner of each story saying whether or not it has been verified.
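The article doesn't detail how the tag is rendered, but in a Chrome extension this is typically done by a content script that walks the feed and appends a small badge to each post. The selector and badge styling below are purely illustrative assumptions, not FiB's actual code.

```typescript
// Illustrative content-script sketch: attach a "verified" / "not verified"
// badge to each post in the feed. The post selector and the verdict source
// are assumptions; FiB's actual markup handling is not documented here.

function addBadge(post: HTMLElement, verified: boolean): void {
  const badge = document.createElement("span");
  badge.textContent = verified ? "verified" : "not verified";
  badge.style.cssText =
    "position:absolute;top:4px;right:4px;padding:2px 6px;font-size:11px;" +
    `color:#fff;border-radius:3px;background:${verified ? "#2e7d32" : "#c62828"}`;
  post.style.position = "relative"; // anchor the absolutely positioned badge
  post.appendChild(badge);
}

// Hypothetical: '[data-post-id]' stands in for whatever selector matches feed posts.
document.querySelectorAll<HTMLElement>("[data-post-id]").forEach((post) => {
  const verified = Math.random() > 0.5; // placeholder; a real extension would call the link checker
  addBadge(post, verified);
});
```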
For example, it flagged a news story claiming that pot cures cancer as fake, tagging it “not verified.” A story about the makers of The Simpsons being bummed that the show had predicted the election result, on the other hand, checked out and was tagged “verified.”
The students have released their extension as an open-source project, so any developer with the requisite knowledge can install it and modify it.
A Chrome plug-in that marks fake news clearly isn't the complete solution. For Facebook to police itself, the social networking giant would have to remove fake content outright rather than add a small, easy-to-miss tag, and it cannot expect people to install a browser extension first.
Still, the students have shown that algorithms can be built to determine, with reasonable certainty, which news is true and which is not, and that this information can be put in front of readers just as they are about to click.
Several Facebook employees are reportedly so upset about the situation that they have formed an unofficial task force inside the company to work out how to fix it, according to BuzzFeed. Will FiB give them a helping hand? We will have to wait and watch.
Source: Business Insider