One of the biggest challenges facing websites, news organizations, and social media is online abuse and harassment. Since the dawn of the internet, sites have battled abusive, toxic, and inappropriate comments posted in their online forums. With millions of comments posted, news organizations quickly became overwhelmed by the volume of data produced.
Websites and publishers have become so concerned by the volume of vitriolic comments, and by the fear of lawsuits, that many have removed the ability to post comments on their sites. Consider that 72% of internet users in America have witnessed online harassment, and nearly half have experienced it themselves. Social media sites have been under pressure to control hate speech and online abuse by their members. Technological advancements in artificial intelligence (AI) may finally provide the answer.
Google uses AI to battle abusive content. Image source: Pixabay.
An important piece of the puzzle
Jigsaw, a technology incubator of Google parent Alphabet (NASDAQ: GOOGL) (NASDAQ: GOOG), has developed an artificial neural network -- an AI system that replicates the structure and learning capacity of the human brain using algorithms and software models. Using this technology, it aims to identify and control abusive online comments. Google and Jigsaw are making the program available free of charge, and it's being added to Google's TensorFlow library and Cloud Machine Learning Platform. The product, dubbed Perspective, uses deep learning to sift through reams of data and detect harassment, insults, and abusive speech in online forums in real time.
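Perspective is exposed to developers through a web API. As a rough illustration of what a request looks like, the sketch below builds the JSON body that asks the service to score a comment for toxicity; the endpoint and attribute names follow the API's public alpha documentation, and the API key is a placeholder rather than a working credential.

```python
import json

# Placeholder credential -- a real key comes from a Google Cloud project.
API_KEY = "YOUR_API_KEY"
ENDPOINT = ("https://commentanalyzer.googleapis.com/v1alpha1/"
            "comments:analyze?key=" + API_KEY)

def build_request(comment_text):
    """Assemble the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_request("you're a jerk")
print(json.dumps(payload, indent=2))
# A real call would POST this body to ENDPOINT; the response carries
# a summary toxicity score between 0 and 1 for the comment.
```

The request is deliberately built as plain data here so the shape is visible; any HTTP client could then POST it to the endpoint.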
Do you kiss your mother with that mouth?
Google and Jigsaw used comment data from The New York Times Company (NYSE: NYT), Wikipedia, and several unnamed partners. They showed that data to panels of people and had them rate whether the comments were toxic. Those human responses became training data for the AI system, which rates a phrase's toxicity on a scale of 0 to 100 and lets you test the system yourself. Typing in the phrase "you are ignoring important information" rates 10%, "your mother wears combat boots" rates 55%, and "you're a jerk" rates 86% toxicity.
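The panel-rating scheme can be illustrated with a toy calculation: if each rater simply marks a comment toxic or not, the fraction of "toxic" votes scaled to 100 yields a label like the percentages above. The helper below is a hypothetical sketch of that aggregation step, not Jigsaw's actual labeling pipeline.

```python
def toxicity_score(ratings):
    """Fraction of panelists who flagged the comment as toxic,
    scaled to the 0-100 range Perspective reports.
    `ratings` is a list of booleans, one per human rater."""
    if not ratings:
        raise ValueError("need at least one rating")
    return round(100 * sum(ratings) / len(ratings))

# Ten raters, half of whom found the comment toxic:
print(toxicity_score([True] * 5 + [False] * 5))  # 50
```

In practice the model is trained on these labels and then predicts a score for comments no panel has ever seen, which is what makes real-time moderation feasible.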
The New York Times reports that it only has the resources to allow comments on 10% of its articles. The company provided its comment archives in hopes of expanding its comments section and to "increase the speed at which comments are reviewed." Its 14 moderators manually review every comment, about 11,000 daily on average, a task that had become overwhelming.
The New York Times joins Google to curb online abuse. Image source: Pixabay.
Twitter tries to clean up its act
Google is not the only company seeking to curb online bullying using AI. At IBM's (NYSE: IBM) InterConnect conference last month, Twitter, Inc.'s (NYSE: TWTR) vice president of data strategy Chris Moody announced that the popular social network had partnered with IBM's Watson, the AI-based cognitive computing system, to address online abuse.
This comes at a crucial time for Twitter, which has been under fire for not policing its users. The company announced last month that it was working to make the site a safer place by limiting abusive users' ability to create new accounts and updating how users can report abusive tweets.
That didn't end well
AI systems have been tested on social media before, though the results have been less than stellar. In early 2016, researchers at Microsoft (NASDAQ: MSFT) used Twitter as a testing ground for an AI-based chatbot, @tayandyou, aka TayTweets, built to learn the speech patterns of millennials by interacting with them on the site. Unfortunately, within 24 hours and 96,000 tweets, the experiment was suspended when the fledgling AI began spewing venomous vitriol.
Each new technological innovation brings benefits and challenges. The dawn of the internet age brought with it internet trolls, who sought to control the conversation and silence dissenting voices. Programs like Perspective and Watson seek to return voices to those vulnerable speakers, which benefits us all.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors. LinkedIn is owned by Microsoft. Danny Vena owns shares of Alphabet (A shares). Danny Vena has the following options: long January 2018 $640 calls on Alphabet (C shares) and short January 2018 $650 calls on Alphabet (C shares). The Motley Fool owns shares of and recommends Alphabet (A and C shares) and Twitter. The Motley Fool recommends The New York Times. The Motley Fool has a disclosure policy.