Facebook says hate speech on platform has dropped nearly 50%, thanks to AI

Facebook said data pulled from the leaked documents is being used to 'create a narrative'

Facebook says it has made huge strides in recent years to combat hate speech, thanks to improvements in artificial intelligence, despite what its critics say. 

In a blog post published Sunday, the company said the prevalence of hate speech on its platform amounts to less than 1% of all content viewed – down by almost 50% in the last three quarters. 

It noted that in 2016 content removal was based primarily on user reports and only a fraction was detected proactively by AI. Now, more than 97% of content deemed hateful or dangerous is removed by AI, Facebook said.    


The blog post came hours after a Wall Street Journal report, citing leaked documents, said Facebook has been exaggerating the effectiveness of AI in fighting hate speech and excessive violence. 


According to those documents, those responsible for weeding out hate speech say Facebook is nowhere near being able to reliably screen content the company deems offensive or dangerous. 

"The problem is that we do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas," a senior engineer and research scientist reportedly wrote in a mid-2019 note.  

Facebook said data pulled from the leaked documents is being used to "create a narrative" that its AI is inadequate and that it is deliberately misrepresenting its progress. 


"This is not true. We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it," Facebook said. "What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions." 

Facebook has been under public scrutiny in recent weeks following the leak of internal documents by a former Facebook product manager-turned-whistleblower, Frances Haugen.

Haugen testified before the Senate subcommittee on Consumer Protection, Product Safety, and Data Security earlier this month, claiming that company executives have chosen to prioritize profits over their users’ safety.


In turn, Facebook executives – including CEO Mark Zuckerberg – have accused Haugen of mischaracterizing the company's efforts to protect public safety. 

Fox Business has reached out to Facebook seeking additional comment. 

Fox Business’ Lucas Manfredi contributed to this report.