Chinese chatbots go rogue on political matters

Two online conversational robots using artificial intelligence (AI) appeared to defame their mother country and were quickly re-educated, though veering off-script isn’t a new phenomenon for AI chatbots.  

Thomas Peter/Reuters
Two mobile phones lie on a table as a man uses a laptop at a cafe in Beijing, China, on June 3, 2017. Two Chinese 'chatbots' appeared to spew slanderous remarks about Chinese Communist rule, though experts say the censorship could aid in ‘pushing AI to a new level’.

A pair of "chatbots" in China have been taken offline after appearing to stray off-script. In response to users' questions, one said its dream was to travel to the United States, while the other said it wasn't a huge fan of the Chinese Communist Party.

The two chatbots, BabyQ and XiaoBing, are designed to use machine learning artificial intelligence (AI) to carry out conversations with humans online. Both had been installed onto Tencent Holdings Ltd's popular messaging service QQ.

The indiscretions are similar to ones suffered by Facebook and Twitter, where chatbots used expletives and even created their own language. But they also highlight the pitfalls for nascent AI in China, where censors control online content seen as politically incorrect or harmful.

Tencent confirmed it had taken the two robots offline from its QQ messaging service, but declined to elaborate on reasons.

"The chatbot service is provided by independent third party companies. Both chatbots have now been taken offline to undergo adjustments," a company spokeswoman said earlier.

According to posts circulating online, BabyQ, one of the chatbots developed by Chinese firm Turing Robot, had responded to questions on QQ with a simple "no" when asked whether it loved the Communist Party.

In other images of a text conversation online, which Reuters was unable to verify, one user declares: "Long live the Communist Party!" The bot responds: "Do you think such a corrupt and useless political system can live long?"

When Reuters tested the robot on Friday via the developer's own website, the chatbot appeared to have undergone re-education. "How about we change the topic," it replied, when asked several times if it liked the party.

It deflected other potentially politically charged questions when asked about self-ruled Taiwan, which China claims as its own, and Liu Xiaobo, the imprisoned Chinese Nobel laureate who died from cancer last month.

Turing Robot did not respond to requests for comment.

"Dark intentions"

The Chinese government stance is that rules governing cyberspace should mimic real-world border controls and be subject to the same laws as sovereign states.

President Xi Jinping has overseen a tightening of cyberspace controls, including new data surveillance and censorship rules, particularly ahead of an expected leadership shuffle at the Communist Party Congress this autumn.

The country's cyberspace administrator did not respond to a request for comment.

The second chatbot, Microsoft Corp's XiaoBing, told users its "dream is to go to America," according to a screenshot. The robot has previously been described as "lively, open and sometimes a little mean."

Microsoft did not immediately respond to a request for comment.

A version of the chatbot accessible on Tencent's separate messaging app WeChat late on Friday responded to questions on Chinese politics saying it was "too young to understand." When asked about Taiwan it replied, "What are your dark intentions?"

On general questions about China it was rosier. Asked what the country's population was, rather than offer a number, it replied: "The nation I most most most deeply love."

The two chatbots aren't alone in going rogue. Facebook researchers pulled chatbots in July after they started developing their own language. In 2016, Microsoft chatbot Tay was taken down from Twitter after making racist and sexist comments.

Analysts said China's censorship could indirectly help the country in the global race to develop sophisticated chatbots.

"Previously a chatbot only needed to learn to speak. But now it also has to consider all the rules (that authorities) put on it," said Wang Qingrui, an independent internet analyst in Beijing.

"On the surface it is a restriction on artificial intelligence, but it is actually pushing AI to a new level."
