Facebook bans manipulated 'deepfake' videos

Deepfakes have raised concerns about the spread of misinformation ahead of the 2020 US presidential election

Facebook announced Tuesday that it will ban manipulated media, known as "deepfakes," from its platform in an effort to combat the spread of misleading information.

Deepfake videos use artificial intelligence (AI) to manipulate video footage -- usually of a person talking -- so that the subject appears to say something they never actually said. The technology has raised concerns about the spread of misinformation ahead of the 2020 U.S. presidential election.

"Going forward, we will remove misleading manipulated media if it meets the following criteria: It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say," Facebook Vice President of Global Policy Management Monika Bickert wrote in a blog post.

"And: It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic," Bickert added.


The policy will not extend to parody or satire videos.

"When we become aware of video that potentially violates this policy, our content policy team will assess whether the content meets our criteria for removal, including whether it is parody or satire - which is often included as context in the post," Facebook spokesperson Andrea Vallone told FOX Business.

"In general, if the content is posted by a Page or domain that is a known satire publication, or a reasonable person would understand the content to be irony or humor with a social message, we won't take it down," she said.

Last year, Facebook refused to take down a heavily edited video of House Speaker Nancy Pelosi that made her appear to slur her words.

Bickert said all media published to Facebook, "whether a deepfake or not, will be removed from Facebook if they violate any of our other Community Standards including those governing nudity, graphic violence, voter suppression and hate speech." Videos that don't meet the criteria for removal are still eligible for review by the tech giant's independent fact-checkers.


Facebook will partner with academia, government and industry, as well as more than 50 global experts, to identify and take down manipulated videos.

The social media giant in September launched its Deepfake Detection Challenge -- a project with $10 million in grants and support from the Partnership on AI, Cornell Tech, the University of California, Berkeley, MIT, WITNESS, Microsoft, the BBC and AWS, among others -- to spur a global effort to identify deepfake videos.

Additionally, Facebook partnered with Reuters "to help newsrooms worldwide to identify deepfakes and manipulated media through a free online training course."
