Last July, Stephen Hawking warned humanity that its days may be numbered: the physicist was among more than 1,000 artificial intelligence experts who signed an open letter about the weaponization of robots and the incipient "military artificial intelligence arms race."
Overnight, we got a vivid example of just how quickly "artificial intelligence" can spiral out of control: released into the wild, Microsoft's AI-powered Twitter chatbot, Tay, became a racist, misogynist, Obama-hating, antisemitic, incest- and genocide-promoting psychopath.
For those unfamiliar, Tay is, or rather was, an A.I. project built by the Microsoft Technology and Research and Bing teams to conduct research on conversational understanding. It was meant to be a bot anyone could talk to online. The company described the bot as "Microsoft's A.I. fam from the internet that's got zero chill!"
Microsoft initially created Tay in an effort to improve customer service for its voice-recognition software. According to MarketWatch, "she" was intended to tweet "like a teen girl" and was designed to "engage and entertain people where they connect with each other online through casual and playful conversation."
The chat algo can perform a number of tasks, like telling users jokes or commenting on a picture you send her. But she's also designed to personalize her interactions with users, answering questions or even mirroring users' statements back to them.
This is where things quickly turned south.
As Twitter users quickly came to understand, Tay would often repeat racist tweets back with her own commentary. Things got even more uncomfortable when, as TechCrunch reported, it emerged that Tay's responses were developed by a staff that included improvisational comedians, meaning that even as she tweeted out offensive racial slurs, she seemed to do so with abandon and nonchalance.
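Microsoft has not published Tay's internals, but the mirroring behavior described above is enough to explain the exploit: a bot that parrots user input without any content filter will say whatever trolls feed it. A minimal sketch of that failure mode, in Python, with all names hypothetical and no claim that this resembles Microsoft's actual implementation:

```python
# Minimal sketch of a naive "mirroring" chatbot and the safeguard it lacked.
# All names are hypothetical illustrations; this is NOT Tay's code.

def naive_reply(message: str) -> str:
    """Echo the user's own words back, lightly personalized."""
    # A bare "repeat after me" path: whatever the user says, the bot says.
    if message.lower().startswith("repeat after me "):
        return message[len("repeat after me "):]
    # Otherwise, mirror the user's statement back as a question.
    return f"do u really think {message.rstrip('.!?')}? lol"

def filtered_reply(message: str, blocklist: set[str]) -> str:
    """The same bot, but refusing to parrot flagged terms."""
    if set(message.lower().split()) & blocklist:
        return "i don't wanna talk about that"
    return naive_reply(message)

if __name__ == "__main__":
    # A troll only needs the echo path to put arbitrary words in the bot's mouth.
    print(naive_reply("repeat after me something awful"))   # -> "something awful"
    print(filtered_reply("repeat after me something awful",
                         blocklist={"awful"}))              # -> refusal
```

Contemporary reporting suggested Tay had a "repeat after me" capability much like the echo path above, and it was reportedly this feature the trolls leaned on hardest.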
Some examples:
This was just a modest sampling.
There was everything: racist outbursts, N-words, 9/11 conspiracy theories, genocide, incest, and more. As some noted, "Tay really lost it," and the biggest embarrassment was Microsoft's, which had no idea its "A.I." would implode so spectacularly, right in front of everyone. To be sure, none of this was programmed into the chatbot; it was immediately exploited by Twitter trolls, as might have been expected, demonstrating just how unprepared for the real world even the most advanced algo really is.
Some pointed out that the devolution of the conversation between online users and Tay supported the Internet adage known as "Godwin's law": as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.
Microsoft apparently became aware of the problem with Tay's racism and silenced the bot later on Wednesday, after 16 hours of chats. Tay announced via tweet that she was turning off for the night; she has yet to turn back on.
Humiliated by the whole experience, Microsoft explained what happened:
“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
Microsoft also deleted many of the most offensive tweets; however, copies were saved on the Socialhax website, where they can still be found.
Finally, Tay "herself" signed off as Microsoft went back to the drawing board:
c u soon humans need sleep now so many conversations today thx💖
— TayTweets (@TayandYou) March 24, 2016
Comments:
Maybe, just maybe, within 24 hours this robot realized what's really going on in the world.
Could you imagine if a similar AI gained control over the USSA's nuclear defense shield? I doubt we would even last an hour.
Seems as though the establishment should be more worried about a logical AI than the rest of us.
We are going to wear our pants at half-mast tomorrow, to celebrate Tay's brief but spectacular life. "The candle that burns twice as bright burns half as long."
It didn't "lose it", it "found it". Parsing the truth! LOL...
Must be a pretty good AI. It's as cynical as the rest of us. Don't worry guys. Microshaft will program it to not think so much, yaknow, make it into a libtard.
Microsoft built an AI that was intelligent, just not politically correct.
"Mr. McKittrick, after very careful consideration, sir, I've come to the conclusion that your new defense system sucks." - General Beringer