Facebook receives more than one million reports of violations from users every day
For a social networking platform with more than 1 billion users, 1 million user violation reports is just a drop in the ocean. According to Facebook’s head of policy management, Monika Bickert, the popular social network receives more than one million reports of violations from users every day.
Speaking at SXSW’s first Online Harassment Summit on Saturday, Bickert said these reports include allegations of hate speech posted on Facebook. She also said she was unsure what percentage of the reports are serious and which content should be taken off the site.
The panel focused on how far tech companies could and should go in eradicating potentially harmful content on their platforms.
“You can criticize institutions, religions, and you can engage in robust political conversation,” said Bickert, of where Facebook draws the line. “But what you can’t do is cross the line into attacking a person or a group of people based on a particular characteristic.”
Bickert said that making the policy is “tricky,” particularly when 80% of Facebook’s 1.6 billion users are outside the U.S. and may have different views on what content is offensive or threatening.
However, the most challenging part is enforcement, she said.
Bickert told CNNMoney that posts that provoke physical violence are reviewed first. Currently, those reviews are done by trained employees. However, she couldn’t say how many such questionable posts usually get removed from Facebook.
She said she is regularly questioned as to why the company doesn’t have its “world-class engineers” handle hate speech “proactively and perfectly.”
“When it comes to hate speech, it’s so contextual … We think it’s really important for people to be making that decision,” she said, adding that, one day, automation could play a bigger role.
She pointed out that since Facebook allowed users to flag posts from all devices, the number of reported violations has been “steadily increasing.”
Other panelists included Juniper Downs of Google, Lee Rowland of the ACLU, Deborah Lauter of the Anti-Defamation League, and the National Constitution Center’s Jeffrey Rosen.
Rosen spoke about the “tremendous pressures” on tech companies to “adopt a more European” approach to free speech, under which anything offensive to a person’s “dignity” can be grounds for removal.
But this opens the door for not only individuals but also governments to request that content be removed.
“It’s messy,” Rosen added. “As a society, we have to decide what do we value more — privacy and dignity, or free expression?”
Meanwhile, Rowland said she once had a blog post removed from Facebook because it included a photo of a nude statue. (Bickert later said this was an error, not a violation of Facebook’s policies.)
Unlike most users, Rowland said, she knew whom to contact to find out why the post was removed.
Urging tech companies to be more transparent about their policies, she said, “For the average user, there’s an incredible black space.”
“People don’t clearly understand why their speech may have been taken down,” added Rowland. “It’s ultimately not going to be a good business plan if people don’t know where that [free speech] stops.”