Someday, before you read a sentence like this, a new type of intelligent software will advise you if the content is civil and safe. It will warn you of malicious language, fake news, or other types of toxic behavior now commonly found in the digital universe. And it will do so based on the social norms of a community that values the widest common good.
That, at least, is what a majority of some 1,500 technology experts seem to predict, based on a survey by Pew Research Center and Elon University, released in a report last week. The survey looks at the future of online etiquette and whether public discourse will “become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust.” (The report might have added Russian influence in election campaigns.)
The survey also asks these experts to forecast whether it is possible to filter and label the current rage, hate, and misinformation of cyberspace – while also keeping a proper balance of security, privacy, and freedom of speech. The result was only slightly hopeful.
While 39 percent of respondents expect the online future will be “more shaped” by negative activities, 42 percent expect “no major change” and 19 percent expect the internet will be “less shaped” by harassment, trolling, and distrust.
Several solutions are necessary, many of which require the help of artificial intelligence and new social norms. One is to define the limits of the online anonymity that now allows abusive behavior. Another is to reduce the incentive for companies that profit from clicks to encourage incendiary activity online. Most of all, people must be given more choices, not fewer, to control their online experience.
The report provides one compelling prediction from Amy Webb of the Future Today Institute: “Right now, many technology-focused companies are working on ‘conversational computing,’ and the goal is to create a seamless interface between humans and machines.... In the coming decade, you will have more and more conversations with operating systems, and especially with chatbots, which are programmed to listen to, learn from and react to us. You will encounter bots first throughout social media, and during the next decade, they will become pervasive digital assistants helping you on many of the systems you use.”
Yes, the next generation of Siris, Alexas, and OK Googles will become tools to refine civil discourse and sift the chaff of hate and lies from the wheat of civic engagement and honest information. Traditional institutions, such as the news media, are already doing less of that sifting and sorting. And government can do only so much without crossing a line on basic freedoms.
The internet has given a platform to individuals to declare their voice, sometimes for ill. The coming digital tools must give individuals ways to find and create a civil and safe space. That sort of intelligence would be anything but artificial.