How Big Tech's new policies to fight misinformation could backfire

Newly implemented policies from Twitter and Facebook have yet to be tested

Since foreign agents swamped social media platforms with misleading posts during the 2016 presidential contest between Hillary Clinton and Donald Trump, Big Tech firms have established a variety of new policies to prevent a recurrence.

Some of those efforts, however, have deepened skepticism of Facebook, Twitter and Google and their enormous reach as much as they have eased it. Still, their executives have promised users and lawmakers alike that their goal is to promote accurate information.

Twitter, for instance, told users this week that it would rely primarily on independent decision desks at nine national news outlets -- including Fox News -- for official results from the election.


Additionally, "when people attempt to retweet a tweet with a misleading information label, they’ll see a prompt pointing them to credible information before they are able to amplify it further on Twitter," the San Francisco-based company explained in a Monday tweet.

"If we see content inciting interference with the election, encouraging violent action or other physical harms, we may take additional measures, such as adding a warning or requiring the removal of Tweets," the company added.

In a blog post updated Monday, the company said it would also label or remove tweets that falsely claim a win for any candidate and add warnings on misleading tweets from U.S. political figures or U.S.-based accounts with more than 100,000 followers.

Twitter has also prompted users to Quote Tweet instead of Retweet, prevented "liked by" and "followed by" recommendations from strangers, and emphasized trending topics that provide additional context in the "For You" tab for U.S. users.

"Twitter has a critical role to play in protecting the integrity of the election conversation, and we encourage candidates, campaigns, news outlets and voters to use Twitter respectfully and to recognize our collective responsibility to the electorate to guarantee a safe, fair and legitimate democratic process this November," they concluded.

Just hours later, Facebook, Inc. -- Twitter's larger competitor -- announced that its platform would rely on results from six independent decision desks, including The Associated Press and Reuters.

In a blog post, the company announced a "range" of policies to keep users armed with facts.

Once polls close, Facebook will run a notification at the top of its pages and label candidates' posts, directing people to its Voting Information Center.

Any premature victory announcements will be labeled as well, with links to the official results, and content that questions the legitimacy of the election or claims that lawful methods of voting will lead to fraud will also be labeled.

All electoral and political ads will be temporarily paused until after the polls close, and posts calling for election interference or voter intimidation will be removed.

Facebook CEO Mark Zuckerberg speaks at Georgetown University in Washington. (AP Photo/Nick Wass, File)

Facebook also promoted its 2019 launch of Facebook Protect, which works to "safeguard" the accounts of elected officials, campaigns, federal and state political party committees and staff members.

"This has been helping to stop the hack-and-leak operations we saw in 2016," they explained. "With just under a month until Election Day, we’re encouraging more people to enroll in Facebook Protect."

Lawmakers on both sides of the aisle have been skeptical of the changes at Twitter and Facebook, however, putting CEOs Jack Dorsey and Mark Zuckerberg under the congressional microscope this year.

As Axios pointed out Tuesday, most of the newly instituted policies have yet to be truly tested, and Facebook's and Twitter's histories are marked by inconsistency and chaos.

The behemoths may have a longer reach than they can effectively manage -- even with additional hires made specifically for the presidential election.

Facebook currently has more than 2.6 billion users and Twitter has around 330 million, according to statistics from e-commerce company Oberlo.

Just last week, Facebook got into hot water with both Democrats and Republicans after "technical problems" and confusion hid ads that should have been displayed, and Twitter faced widespread criticism after locking the New York Post's account -- eventually reversing course.

While social media leaders have made strides in monitoring and removing misinformation, the National Security Agency and intelligence community members have warned of continued threats from hackers in China, Iran, Russia and Syria.

Less than a week ago, Facebook announced it had taken down a small network of fake accounts and pages tied to the Iranian government.


According to The Wall Street Journal, Facebook’s head of cybersecurity policy, Nathaniel Gleicher, told reporters that "overstating the importance of these campaigns plays into the hands of malicious actors," urging others to "not take the bait."