Facebook, Twitter, Microsoft and YouTube create database to quickly censor violent or offensive content uploaded by terrorists

Social media sites Facebook, Twitter and YouTube are going to work together to create a joint database of deleted posts, terrorist images and videos to fight the spread of extremism online.

The decision came after MPs accused the social media sites of “passing the buck” and of being unwilling to crack down on online extremism. The MPs also warned that the sites feared action would “damage their brands” and that they were becoming the “vehicle of choice” for extremists. The tech companies, which have been criticised for allowing their platforms to become a “recruiting platform” for the Islamic State of Iraq and the Levant, said pooling their resources would let them identify propaganda more effectively.

On Monday night, Facebook, Twitter, YouTube and Microsoft announced that they would create a database of “hashes.”

In a blog post announcing the collaboration, the companies said the “shared industry database” would consist of content fingerprints: each piece of removed content is assigned a “hash,” a unique digital fingerprint that helps the participating sites identify potentially extremist content when it is posted by other users or on other websites.

“Our companies will begin sharing hashes of the most extreme and egregious terrorist images and videos we have removed from our services – content most likely to violate all of our respective companies’ content policies,” the post said.

“Participating companies can add hashes of terrorist images or videos that are identified on one of our platforms to the database. Other participating companies can then use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.”

So, how does this work? Whenever a copy of a flagged image or video is uploaded, the database alerts the website’s staff, even if that content has never been posted on that particular site before. For instance, if a terrorist’s Twitter account posts an extremist picture, Twitter’s moderators would add its hash to the database, and staff at Facebook would then be notified if the same picture was posted on Facebook.
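The flow described above amounts to a shared lookup table keyed by content fingerprints. The sketch below is purely illustrative: the class and names are hypothetical, and it uses a SHA-256 cryptographic hash for simplicity, whereas production systems of this kind typically use perceptual hashes (such as Microsoft’s PhotoDNA), which also match visually similar, not just byte-identical, copies.

```python
import hashlib


class SharedHashDatabase:
    """Hypothetical sketch of a cross-platform store of fingerprints
    of content that some participating platform has removed."""

    def __init__(self):
        # hash -> name of the platform that contributed it
        self._hashes = {}

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # SHA-256 stands in for a perceptual hash; it only matches
        # byte-identical files.
        return hashlib.sha256(content).hexdigest()

    def add(self, content: bytes, platform: str) -> None:
        """A platform contributes the hash of content it has removed."""
        self._hashes[self.fingerprint(content)] = platform

    def check(self, content: bytes):
        """Return the contributing platform if this upload matches a known
        hash, else None. Per the companies' statement, a match only flags
        content for review -- removal remains each platform's own policy
        decision."""
        return self._hashes.get(self.fingerprint(content))


db = SharedHashDatabase()
# Twitter removes an image and shares its hash:
db.add(b"<extremist image bytes>", platform="Twitter")

# The same bytes uploaded elsewhere now match and can be reviewed:
assert db.check(b"<extremist image bytes>") == "Twitter"
# Unknown content produces no match:
assert db.check(b"<unrelated image bytes>") is None
```

Note that only hashes, not the content itself or any user data, are shared, which is consistent with the companies’ privacy assurances below.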

The firms have maintained that the privacy of existing users will not be affected.

“No personally identifiable information will be shared, and matching content will not be automatically removed,” the firms said.

“Each company will continue to apply its own policies and definitions of terrorist content when deciding whether to remove content when a match to a shared hash is found.

“Throughout this collaboration, we are committed to protecting our users’ privacy and their ability to express themselves freely and safely on our platforms.

“We also seek to engage with the wider community of interested stakeholders in a transparent, thoughtful and responsible way as we further our shared objective to prevent the spread of terrorist content online while respecting human rights.”

The method is already used by companies to fight against accounts sharing images of child sexual exploitation. It is believed to be particularly effective on less public networks such as Facebook, where there is less chance of a user reporting a post.

Last week, Twitter said it had suspended 360,000 accounts between February and August of this year for violating its rules on terrorism.

Source: The Sun
