Facebook and Twitter have made a number of significant policy changes since the 2016 election in the name of election integrity and news accuracy.
About 18% of Americans get their news from social media, and 25% get their news from a news website or app, according to a Pew Research Center analysis based on surveys conducted between October 2019 and June 2020.
The social media giants have come under fire for specific censorship decisions based on these policies, which are meant to protect users from hate speech that could incite violence, the spread of fake news and other harms. Conservative politicians and some tech experts, however, say this censorship has gone too far.
Here are some of the significant policy changes the two websites have made since 2016:
--2016: Facebook introduces a tool to flag content as disputed in an effort to stop the spread of fake news
--2017: Facebook announces its teams of independent, third-party fact-checkers to monitor news posted on the site
--2017: Facebook replaces "Disputed Flags" feature with "Related Articles" to provide more context to disputed articles rather than label them
--2018: Facebook requires advertisers to disclose their identities and locations
--2018: Facebook introduces policy allowing users to see more information about Facebook Pages and their ads
--2018: Facebook announces that political ads will have to be labeled with the names of the companies or individuals that sponsored them
--2018: Facebook updates data policies to be more transparent about data it collects from users
--2019: Facebook changes terms of service to provide more insight into how it profits from ads
--2020: Facebook announces that it will start labeling posts from politicians, or "newsworthy content," if they violate its Community Standards
--2020: Facebook announces Oversight Board so users can appeal content removal decisions
--2020: Facebook updates how it prioritizes news on user timelines
--2020: Facebook requires labels on state-run media pages and posts
--2020: Facebook temporarily bans political ads two weeks prior to the 2020 presidential election
--2020: Facebook will add labels directing users to its Voting Information Center on posts from politicians on Election Day
--2020: Facebook announces new policy to add labels to posts disputing the legitimacy of voting methods
--2020: Facebook announces new policy to label posts from candidates and campaigns declaring victory before results are officially announced
--2016: Twitter announces new feature for users to "mute" certain words and people
--2017: Twitter changes verification process for users
--2019: Twitter announces new policy to add labels to disputed tweets from high-profile users, including government officials, to provide more context
--2019: Twitter bans all political ads on its platform
--2019: Twitter launches Twitter Privacy Center to increase transparency about user data
--2020: Twitter adds labels to identify state-affiliated media
--2020: Twitter announces new policy to add labels to manipulated or synthetic media
--2020: Twitter starts adding more context to trending tweets
--2020: Twitter announces policy barring 2020 election candidates from declaring victory before results are authoritatively called
--2020: Twitter announces policy saying it may remove "tweets meant to incite interference with the election process or with the implementation of election results"
--2020: Twitter adds new label for tweets containing "misleading information about civic integrity, COVID-19, and synthetic and manipulated media," including those from public officials
--2020: Twitter changes its Hacked Materials Policy after blocking a New York Post article
--2020: Twitter encourages users to quote-tweet posts rather than retweet them to promote discussion
--2020: Twitter updates its "Trends for you" section to inform users about trending topics in the U.S.
In addition to these changes, both websites say they have hired more people to combat misinformation and protect election integrity, limited coordinated user actions on their platforms, invested in their security systems and made efforts to highlight local journalism.
Facebook has created a Voting Information Center for Facebook and Instagram users and has helped 4.4 million users register to vote on its platform. The website has also removed more than 100 networks worldwide engaging in coordinated inauthentic behavior since 2017, including 30 networks so far in 2020.
Russian troll farms and other foreign operatives used Facebook and Twitter in 2016 to push political messages to American users – many of whom use social media as a way to get their news – in an attempt to influence the election.
Additionally, Facebook came under fire after the 2016 election when The New York Times, The London Observer and The Guardian uncovered documents showing how Cambridge Analytica, a British consulting firm, improperly harvested data from Facebook users to help political campaigns reach voters.
Facebook executives have testified before Congress more than 20 times since 2016.