Many internet users say they want a web that is informative and troll-free. With the release of its latest hate-speech-detecting software, tech giant Alphabet – Google's parent company – may be bringing us one step closer to that goal, although others criticize such efforts as censorship.
Jigsaw, a Google offshoot now operated by Alphabet, released on Thursday a new piece of code that uses machine learning to spot abuse and harassment online. The API (application programming interface), named “Perspective,” is now available to developers around the world. Its creators hope the system, which scores comments based on their perceived “toxicity,” can help transform online communities into spaces with more genuine engagement and information, and fewer insults. Critics, however, have called similar moves restrictions on their freedom of speech.
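To make the idea concrete, here is a rough sketch of how a developer might interact with a toxicity-scoring API of this kind. The request and response shapes below follow the format Jigsaw documented for Perspective at launch (a JSON body with the comment text and a requested "TOXICITY" attribute, returning a 0-to-1 probability-style score), but the endpoint URL, field names, and score values shown should be treated as assumptions and checked against the current documentation:

```python
import json

# Assumed endpoint for Perspective's comment analysis at launch; a real call
# would also require an API key, which is omitted here.
ANALYZE_URL = ("https://commentanalyzer.googleapis.com/"
               "v1alpha1/comments:analyze")

def build_request(comment_text):
    """Build a JSON body asking the API to score one comment for toxicity."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response_body):
    """Pull the 0-to-1 summary toxicity score out of an API response."""
    return response_body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A response shaped like the one the demo site returns; the score value
# here is illustrative, not an actual API result.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}

body = build_request("You are a total idiot.")
print(json.dumps(body))
score = extract_toxicity(sample_response)
print(score)
```

A publisher could then act on the score however it liked – hiding, flagging, or queueing comments above some threshold for human review – which is what distinguishes the API, a measurement tool, from any particular moderation policy built on top of it.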
Perspective is part of Jigsaw's larger effort, called Conversation AI, which it launched last September to study how computers could learn to recognize abusive language.
“We hope this is a moment where Conversation AI goes from being ‘this is interesting’ to a place where everyone can start engaging and leveraging these models to improve discussion,” Conversation AI product manager CJ Adams told WIRED.
The new code holds the potential to do so, the team said. Perspective was “trained” on millions of comments taken from The New York Times, Wikipedia editorial discussions, and other unnamed partners, each rated by panels of 10 people on how "toxic" they considered it to be, WIRED reports. While the program is still in its early days, a user can experiment with the tool on a demonstration website released Thursday.
Jigsaw’s founder and president, Jared Cohen, emphasized that the tool is “a milestone, not a solution,” with the crucial ability to learn and improve over time. This feature, according to Mr. Adams, makes Perspective a new option for news and social media sites looking to rein in comments, alongside measures such as "upvoting" or "downvoting" certain remarks, turning off comments altogether, or relying on human moderators.
"The default position right now is actually censorship," Cohen told WIRED. "We're hoping publishers will look at this and say 'we now have a better way to facilitate conversations, and we want you to come back.'"
Several publications have already begun implementing this toxicity measurement system, including The New York Times, The Guardian, and The Economist, according to Perspective’s website.
Jigsaw’s approach has also received support from some cybersecurity experts. Entrepreneur Kalev Leetaru, a senior fellow at the George Washington University Center for Cyber & Homeland Security, writes that Perspective's focus on fighting harmful words, rather than ideas, sets it apart from past methods. In a piece for Forbes, he argues that this emphasis on the choice of language encourages digital citizens to make reasoned arguments supported by evidence, rather than resort to a “profanity-laden diatribe” or name-calling – a shift that could foster more civil and constructive discussion.
“It is human nature to revert to emotional attacks over logical discourse and the goal of tools like Perspective are to shift us back towards logic, essentially using machines to make us better humans,” Mr. Leetaru writes.
The launch on Thursday adds to Google’s latest initiatives to tackle hate speech online. In December, the search engine updated its algorithm to prioritize high-quality information in its results, filtering out hate sites and anti-Semitic auto-fill queries.