In a series of posts, the social media company asked users to submit feedback in the coming days regarding their own experiences with synthetic media and instances in which it was damaging or caused harm. Based on user feedback, Twitter will finalize policy guidelines for how it polices synthetic media.
“The new policy will address this type of content, specifically when it could threaten someone’s physical safety or lead to offline harm,” Twitter said in a post.
Arguably the most prominent form of synthetic media, deepfake videos are altered or created using artificial intelligence or machine learning to mimic voices or mannerisms in a way that can be hard to distinguish from real-world clips. A deepfake video featuring Facebook CEO Mark Zuckerberg that circulated on Instagram earlier this year caused an uproar and tricked many viewers.
Twitter announced the initiative just weeks after members of the Senate Intelligence Committee asked social media companies, including Twitter and Facebook, to "develop industry standards for sharing, removing, archiving, and confronting the sharing of synthetic content as soon as possible."
A Twitter spokesperson said the company’s development of a policy on synthetic media is not limited to or solely inspired by the rise of deepfake videos, nor would it mean that all deepfake videos will eventually be removed from the platform. Company officials are determining how to respond to any altered content that could be considered deceptive or potentially harmful to the public.
Requests for public feedback will eventually take the form of a survey, the spokesperson added. There is currently no timetable for when Twitter's finalized policy on synthetic media will go live, or for what penalties could result from potential violations.
Earlier Monday, Facebook outlined its own plans for combating misleading content on its platforms ahead of the 2020 election cycle.