First Look

Microsoft chatbot held a mirror up to Twitter, and the reflection wasn't pretty

Microsoft took the AI chatbot offline less than a day after it went live, after it began tweeting offensive statements.

[Photo] An artificial intelligence program designed by Microsoft to tweet like a teenage girl was suspended Wednesday after it began spouting offensive remarks. (Ted S. Warren/AP/File)

Tay wasn't meant to be racist, sexist, or otherwise offensive. But as an artificial intelligence program that Microsoft designed to chat like a teenage girl, it was quick to learn from whatever it was told.

So it came as little surprise when Tay started to make sympathetic references to Hitler – and created a firestorm on social media – soon after its release on Wednesday. The uproar led Microsoft to suspend the chatbot in just a few hours.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," the company said in a statement.

The brief experiment was an embarrassing reminder for Microsoft of the often obscene ways in which users can work to undermine online services. But Kris Hammond, a computer scientist at Northwestern University, says Tay's creators should have known better.

"I can't believe they didn't see this coming," he told the Associated Press, adding that Microsoft appeared to have made no effort to prepare the program with appropriate responses to certain words or topics.

Caroline Sinders, an expert on "conversational analytics," called Tay "an example of bad design."

Microsoft created Tay to learn more about how computers and humans converse. On its website, the company said the program was aimed at an audience of 18- to 24-year-olds and was "designed to engage and entertain people where they connect with each other online through casual and playful conversation."

"Everyone keeps saying that Tay learned this or that it became racist," Dr. Hammond said. "It didn't." He added that the program, which used a version of "call and response" technology, most likely reflected things it was told by people who decided to see what would happen.

Microsoft said it is "making adjustments" to Tay but did not say when it would relaunch the program. Most of the messages on the chatbot's Twitter account had been deleted by Thursday afternoon; just three tweets remained: a "hello world," an emoticon-filled reference to "new beginnings," and a farewell, for now.

This report includes material from The Associated Press.
