Can Google AI spot and stop hate speech online?

Google offshoot Jigsaw has released a new tool that uses machine learning to spot abuse and harassment online, named “Perspective.” The company has made it available for developers around the world to use.

Pascal Rossignol/ Reuters/ File
Letters spell the word "Alphabet" as they are seen on a computer screen with a Google search page in this photo illustration taken in Paris, France, in this August 11, 2015, file photo.

Many internet users say they want a web that is informative and troll-free. With the release of its latest hate-speech-detection tool, tech giant Alphabet – Google's parent company – may be bringing us one step closer to that goal, though others criticize such efforts as censorship.

Jigsaw, a Google offshoot now operated by Alphabet, released on Thursday a new piece of code that uses machine learning to spot abuse and harassment online. The API (application programming interface), named “Perspective,” is now available for developers around the world to use. The system scores comments based on their perceived “toxicity,” and its creators hope it could help transform online communities into spaces with more genuine engagement and information, and fewer insults. Critics, however, have called similar moves restrictions on freedom of speech.

Perspective is part of Jigsaw's larger effort, called Conversation AI, which it launched last September to study how computers could learn to recognize abusive language.

“We hope this is a moment where Conversation AI goes from being ‘this is interesting’ to a place where everyone can start engaging and leveraging these models to improve discussion,” Conversation AI product manager CJ Adams told WIRED.

The new code holds the potential to do so, the team said. Perspective was “trained” on millions of comments from The New York Times, Wikipedia editorial discussions, and other unnamed partners, each rated by panels of 10 people on how "toxic" they considered it to be, WIRED reports. While the program is still in its early days, a user can experiment with the tool on a demonstration website released Thursday.
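For developers, using Perspective amounts to sending a comment to Jigsaw's Comment Analyzer web API and reading back a toxicity probability. The sketch below is not from the article; the endpoint URL and field names follow Google's publicly documented request format at the time of writing and may have changed, so treat them as assumptions.

```python
# Sketch of the request/response shape for Perspective's Comment Analyzer
# API. Endpoint and field names are assumptions based on Google's public
# documentation; an API key is required for a real call.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text):
    """Build the JSON body asking for a TOXICITY score on one comment."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response):
    """Pull the 0-to-1 summary toxicity score out of a response dict."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# POSTing build_request(...) to ANALYZE_URL (with a key) returns a
# response shaped roughly like this hypothetical example:
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
print(extract_toxicity(sample_response))  # 0.92
```

A site could then compare that score against its own threshold to flag, hide, or queue a comment for human review, which is the moderation workflow the article describes.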

Jigsaw’s founder and president, Jared Cohen, emphasized that the tool is “a milestone, not a solution,” with the crucial ability to learn and improve over time. This feature, according to Mr. Adams, makes Perspective part of a new option for news and social media sites to rein in the comments, in addition to options such as "upvoting" or "downvoting" certain remarks, turning off comments altogether, or relying on human moderators.

"The default position right now is actually censorship," Cohen told WIRED. "We're hoping publishers will look at this and say 'we now have a better way to facilitate conversations, and we want you to come back.'"

Several publications have already implemented this toxicity measurement system, including The New York Times, The Guardian, and The Economist, according to Perspective’s website.

Jigsaw’s approach has also received support from some cybersecurity experts. Entrepreneur Kalev Leetaru, a senior fellow at the George Washington University Center for Cyber & Homeland Security, writes that Perspective's focus on fighting harmful words, rather than ideas, differs from past methods. In a piece for Forbes, he argues that its emphasis on the choice of language encourages digital citizens to make reasoned arguments supported by evidence, rather than resorting to a “profanity-laden diatribe” or name calling, which could foster more civil and constructive discussions.

“It is human nature to revert to emotional attacks over logical discourse and the goal of tools like Perspective are to shift us back towards logic, essentially using machines to make us better humans,” Mr. Leetaru writes.

The launch on Thursday adds to Google’s latest initiatives to tackle hate speech online. In December, the search engine updated its algorithm to prioritize high-quality information in its results, filtering out hate sites and anti-Semitic auto-fill queries.
