In the wake of another deadly van attack in London over the weekend and a domestic shooting in Alexandria, Virginia last week during a GOP baseball practice, Google (GOOGL) announced it will dedicate more resources toward identifying and removing extremist and hate group content from its platform.
The web giant said it will nearly double the number of independent experts dedicated to flagging problematic content and expand its collaboration with counter-extremist groups to help identify content that may be used to radicalize and recruit. Additionally, the company said it will train its employees to take down terror-related content faster.
Last week, Facebook (FB) announced similar efforts via a blog post on the site, saying it has 150 employees focused solely on counterterrorism. The company also said it will use artificial intelligence in combination with its ramped-up employee efforts in order to make its strategy more effective.
While many have been critical of tech companies for doing little to remove extremist content from their sites, Google and Facebook, along with other companies such as Microsoft (MSFT) and Twitter (TWTR), recently agreed to create an international forum to share and develop technology, support smaller businesses and speed up their joint efforts against online terrorism.
Google also said it will take a tougher stance on videos that don't clearly violate its policies, like those that contain inflammatory religious or supremacist content. That content may still appear, but with a warning.
The Associated Press contributed to this report.