Microsoft chatbot held a mirror up to Twitter, and the reflection wasn't pretty

Microsoft took the AI chatbot offline less than a day after launch, when it began tweeting offensive statements.

An artificial intelligence program designed by Microsoft to tweet like a teenage girl was suspended Wednesday after it began spouting offensive remarks.

Ted S. Warren/AP/File

March 25, 2016

Tay wasn't meant to be racist, sexist, or otherwise offensive. But as an artificial intelligence program that Microsoft designed to chat like a teenage girl, it was quick to learn from whatever it was told.

So it came as little surprise when Tay started to make sympathetic references to Hitler – and created a firestorm on social media – soon after its release on Wednesday. The uproar led Microsoft to suspend the chatbot in just a few hours.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," the company said in a statement.


The brief experiment was an embarrassing reminder for Microsoft of the often obscene ways internet users can work to undermine online services. But Kris Hammond, a computer scientist at Northwestern University, says Tay's creators should have known better.

"I can't believe they didn't see this coming," he told the Associated Press, adding that Microsoft appeared to have made no effort to prepare the program with appropriate responses to certain words or topics.

Caroline Sinders, an expert on "conversational analytics," called Tay "an example of bad design."

Microsoft built Tay to learn more about how computers and humans converse. On its website, the company said the program was targeted to an audience of 18- to 24-year-olds and was "designed to engage and entertain people where they connect with each other online through casual and playful conversation."

"Everyone keeps saying that Tay learned this or that it became racist," Dr. Hammond said. "It didn't." He added that the program, which used a version of "call and response" technology, most likely reflected things it was told by people who decided to see what would happen.


Microsoft said it is "making adjustments" to Tay but did not say when the program would be relaunched. Most of the messages on its Twitter account had been deleted by Thursday afternoon; just three tweets remained: a "hello world," an emoticon-filled reference to "new beginnings," and a farewell, for now.

This report includes material from The Associated Press.