The leaders of Britain, France and Italy are setting an ambitious goal for tech companies to tackle online posts that promote terrorism: Take them down within an hour or two.
Convening world and tech leaders Wednesday at the United Nations, British Prime Minister Theresa May said internet companies are making progress but need to go "further and faster" to keep violent extremist material from spreading online.
The average lifetime of Islamic State extremists' online propaganda shrank from six days to 36 hours in the first six months of this year, May said.
"That is still 36 hours too long," she said.
French President Emmanuel Macron and Italian Prime Minister Paolo Gentiloni joined May in leading what she called a first-of-its-kind session on the sidelines of the annual U.N. General Assembly meeting of global leaders.
Internet services are facing increasing pressure to rid themselves of messages that, authorities say, provide inspiration and instructions for militant attacks. Leaders of the Group of Seven wealthy democracies pressed tech companies this spring to move more swiftly, after May raised the issue in the wake of a suicide bombing that killed 22 people outside a pop music concert in Manchester, England.
With potential legal consequences looming — May and Macron have suggested their countries could impose legal liability and fines if tech companies don't do enough to deal with extremist material — online giants are eager to show they're taking the issue seriously.
This summer, Facebook, Microsoft, Twitter and Google-owned YouTube launched a joint counterterrorism initiative to collaborate on technology and work with experts. Menlo Park, California-based Facebook announced it had started using its artificial intelligence capabilities to find and remove extremist content, as it does to block child pornography. The company now has 150 engineers, content reviewers, language specialists, academics and former law enforcement figures focused on counterterrorism, the company's head of global policy and counterterrorism, Monika Bickert, told the U.N. gathering Wednesday.
San Francisco-based Twitter recently said it suspended 300,000 accounts for promoting terrorism in just the first six months of this year, the great majority flagged by its own internal tools before the accounts had posted anything. YouTube has more than doubled the number of violent extremist videos removed in recent months, Google Senior Vice President Kent Walker said Wednesday as he announced the Mountain View, California-based company would commit millions of dollars to research on combating extremist content online.
"Removing all of this content within a few hours, or even stopping it from getting there in the first place, poses an enormous technological and scientific challenge that we continue to undertake," he told the world leaders. "The haystacks here are unimaginably large, and the needles are both really small and constantly changing."
Another challenge: taking on extremist postings without impinging on free speech. Walker acknowledged "we still don't always get this right": YouTube's machine-learning systems recently removed activists' videos documenting Syria's civil war while sweeping for graphic or pro-terrorist material, for example. The company said it would restore any videos improperly taken down, and at least some have already been returned.
There are other issues at play, as well. "We all know there are economic interests there, there are privacy problems," Gentiloni said, but added: "We can't reduce our ambition because of the difficulties."
U.S. Acting Deputy Homeland Security Secretary Claire M. Grady, meanwhile, said internet giants needed to ramp up work on extending the counterterrorism effort to smaller platforms; the big firms say they're doing so.
Beyond wiping terrorist messages off the web, the leaders agreed they needed to help moderate voices counter those messages.
"We have to try to bring back these vulnerable people who are likely to be radicalized to the tenets of common sense," Macron said.