Why you should think twice before spilling your guts to a chatbot

Chatbots for Facebook Messenger are proliferating. And while these handy personal assistants might be good for ordering shoes, they also pose serious privacy challenges. 

Facebook logo on a mug at the company's office in Mumbai, India. (Shailesh Andrade/Reuters)

June 8, 2016

Chatbots are having a moment.

These artificial intelligence-powered digital assistants that can check the weather, read the latest headlines, do your banking, and help you shop for clothes are proliferating on Facebook Messenger. In fact, an estimated 10,000 chatbots are now available via Messenger.

But the rapid spread of chatbots is raising fresh privacy concerns, too. Due to the personal nature of the bots – encouraging users to reveal intimate details about their habits and personal lives – some experts worry that consumers aren’t aware of just how much insight these bots have into their lives, and who is actually using the information.


“Chatbots may be able to get us to say more about ourselves than an ordinary website,” says Ryan Calo, codirector of the Tech Policy Lab at the University of Washington.

A major factor is convenience. The always-on interface is meant to make asking the digital assistants to do things – like ordering food with a simple text message – seamless for the user. And because of the simple, conversational interface, users might be tempted to tell the bots things they wouldn’t ordinarily post or type into a standard online order form.

“Consider a chatbot that leverages the social principle of reciprocity,” says Mr. Calo. “If a chatbot, like an online form, just says: ‘Enter your age here,’ you might not. But if amidst a conversation with a chatbot it says, ‘I was created last year. When were you born?’ you well might. At least that's what experimental studies by Cliff Nass and others have shown.”

While chatbots are just in their infancy, the ballooning, venture capital-fueled ecosystem of personal assistants could collectively amass an even more intimate portrait of consumers and their behaviors than the data archives of Google, Facebook, or Twitter contain.

“I think of chatbots as heralds of what's coming – increasingly sophisticated social technologies that are so pervasive that we never feel alone,” says Calo.


As chatbots evolve and become more intelligent, they could be designed to tailor responses that make users more comfortable giving away increasingly personal information, or telling the bots their true feelings. The machines could customize responses so that users don’t feel judged for their views, giving the many companies deploying chatbots a more accurate picture of consumers.

“Pollsters always worry about whether people are willing to give their real opinion because they worry that the person on the phone may judge them for being racist or ignorant or just judge them in general,” says Dave Maass, an investigative researcher at the tech advocacy group Electronic Frontier Foundation. “But having a conversation with a chatbot might make people more honest.”

For now, chatbots are limited to the information that users provide to them. But what happens if and when chatbots are able to collect and analyze all the information that Facebook already has about a user – location, birthday, names of friends and family, anniversaries, and career information?

“The success of bots relies a lot on their understanding of the user's context,” says Hamza Harkous, a PhD student at Switzerland’s École Polytechnique Fédérale de Lausanne who has studied the effect chatbots have on privacy. “This is an avenue where bots are currently limited: they don't have full permissions to access users' data, photos, location, or device sensors.”

Of course, Facebook already has much of this information on users. Mr. Harkous worries that the company’s desire to establish Messenger as the go-to platform for chatbots could eventually lead it to share some of this information with companies using its platform, and that it might do so even if its users don’t know that data exchange is happening or don’t fully understand its implications.

“With time, the need for a better user experience could force Facebook to grant more permissions directly to bots,” says Harkous. “It would be interesting to see how these permissions will work and how much control the user has. What could be a privacy nightmare is if Facebook is left to determine when and what permissions to grant based on the chatbots’ requests.”

For its part, Facebook says that it has taken privacy concerns about chatbots into account with its Messenger platform.

“As with all things, people’s privacy is one of the most important factors we take into consideration when developing new products,” said a Facebook spokesperson. “Bots for Messenger are no exception. We have an approval process in place for the beta program to prevent sub-par bots from reaching people, and we also have policies in place that govern what information bots have access to.”