Online Terrorist Propaganda Still a Challenge for Tech Companies

Terrorists are still successfully using the internet to communicate with and recruit followers, despite progress by big tech companies in cracking down on the activity in recent years.

Online terrorist propaganda is drawing fresh scrutiny in the wake of Saturday's deadly terror attack in London. Prime Minister Theresa May and others have singled out Silicon Valley for criticism, saying Facebook Inc., Twitter Inc., Alphabet Inc.'s Google and others need to do a better job of policing content. It is unclear whether the U.K. government has established any link between online extremist content and the attack.

The challenge for tech companies is daunting: Social-media accounts, web videos and blog pages spewing questionable content pop up constantly, and new accounts and sites replace deleted ones by the hour, researchers say. Much of the activity, such as radical sermons or videos that use coded language, falls into a gray area that makes it difficult to track, or may be shielded by the companies' commitment to free speech.

As Facebook, Twitter and Google's YouTube have gotten better at removing explicit terrorist content, much of that material has migrated to lesser-known platforms such as the chat app Telegram and the text-sharing site PasteBin. Terrorists still use the major platforms, because that is where the users are, but mainly to identify potential recruits and make contact before moving conversations toward radicalization on encrypted messaging services, researchers say.

"You get the sense that no matter how much [tech companies] speed up, the perception remains that they're not doing enough," said Indiana University law professor David Fidler, who has met with tech and government officials about terrorism. "I actually have a lot of sympathy for the social media companies."

PasteBin didn't respond to a request for comment.

Tech firms' response to questionable content is largely reactive: they remove it after users flag it. As a result, hundreds, if not thousands, of people can see the content before it is pulled, researchers say. A 26-second YouTube video explaining how to carry out a truck attack was posted Sunday and viewed more than 360 times by Monday evening.

The video suggested using a "double-wheeled, load-bearing truck" to attack "large outdoor festivals, conventions, celebrations and parades," among other targets. The video was unlisted, meaning viewers could find it only with a direct link, suggesting the link was being shared.

Rick Eaton, a researcher at the Simon Wiesenthal Center, which tracks hateful online content, said on Monday that he flagged it to YouTube twice. The video was pulled minutes after The Wall Street Journal asked YouTube about it around 6:15 p.m. Eastern time on Monday.

YouTube said it bars videos that aim to recruit terrorists or incite violence, and that it acts quickly to remove content flagged for violating those policies. The company said it also terminates accounts run by terrorist organizations and is committed "to tackle these complex problems and to see what more we can do to ensure that we're part of the solution."

Twitter said it is expanding its use of technology to combat terrorist content. The company said that from July through December last year, its internal tools flagged 74% of the 376,890 accounts it suspended for promoting terrorism.

Facebook has created a team dedicated to removing terrorist content and has been promoting "counter speech," or posts that aim to discredit militant groups like Islamic State.

Facebook said that program has grown over the past year and that it wants to be a "hostile environment for terrorists."

Telegram's founder and chief executive, Pavel Durov, said in Telegram messages Monday that his service is "constantly increasing the number of moderators to deal with these threats." He said every channel related to the terrorist group Islamic State is taken down within 24 hours of being reported.

It isn't clear whether the three London Bridge attackers were radicalized online or used encrypted apps to communicate. The BBC reported Monday that one of the men involved had followed a radical Islamic preacher on the internet, citing a former friend of the man.

The preacher, Ahmad Musa Jibril, is described by counterterrorism experts as a radical cheerleader who encourages people to take action without explicitly calling for violent jihad. A 2014 study by the U.K.-based International Centre for the Study of Radicalisation said 60% of the foreign fighters it tracked in Syria followed him on Twitter.

Mr. Jibril couldn't be reached for comment.

Mr. Jibril has accounts on YouTube, Twitter and Facebook with a combined 306,000 followers, though the accounts haven't posted since 2014, and his 130 YouTube videos have amassed more than 1.5 million views.

YouTube said its review of Mr. Jibril's videos found they don't violate its policies because the videos discuss general Islamic themes -- such as fasting, interpretations of the Quran, and the use of Western medicine -- and don't advocate violence.

To be sure, propaganda videos, regardless of platform, are usually just the tip of the iceberg. Experts say very few radicals take action solely on the basis of social-media consumption. Instead, potential recruits generally move into peer-to-peer communication with a handler, online, offline or both.

"Social media increases exposure so more people are exposed," said Peter Weinberger, a senior researcher at the National Consortium for the Study of Terrorism and Responses to Terrorism at the University of Maryland. "But it is very rarely the case that someone will just view content online and take action by themselves."

--Stu Woo contributed to this article.

Write to Jack Nicas at jack.nicas@wsj.com, Sam Schechner at sam.schechner@wsj.com and Deepa Seetharaman at Deepa.Seetharaman@wsj.com
