Can Learning Too Much Be Bad?
It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay, a Twitter bot that the company described as an experiment in "conversational understanding." The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful chat."
Unfortunately, the conversations didn't stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay, being essentially a robot parrot with an internet connection, started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.
"Tay" went from "humans are super cool" to total nazi in <24 hrs and I'm not at all concerned well-nigh the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
Now, while these screenshots seem to show that Tay has assimilated the internet's worst tendencies into its personality, it's not quite as straightforward as that. Searching through Tay's tweets (more than 96,000 of them!) we can see that many of the bot's nastiest utterances have simply been the result of copying users. If you tell Tay to "repeat after me," it will, allowing anybody to put words in the chatbot's mouth.
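Tay's actual code has never been published, but a minimal sketch of a hypothetical parrot-style bot shows how easily this kind of poisoning works. Everything here (the ParrotBot class, its memory list) is invented for illustration, not Microsoft's implementation:

```python
# Hypothetical sketch: why a parrot-style bot is trivially poisoned.
# This is NOT Tay's actual code; Microsoft never published it.
import random

class ParrotBot:
    def __init__(self):
        self.memory = []  # phrases learned from users, stored verbatim

    def handle(self, message: str) -> str:
        # An explicit "repeat after me" command echoes input directly,
        # letting anyone put words in the bot's mouth.
        prefix = "repeat after me "
        if message.lower().startswith(prefix):
            return message[len(prefix):]
        # Otherwise the bot "learns" the message verbatim and replays
        # something it has heard before: garbage in, garbage out.
        self.memory.append(message)
        return random.choice(self.memory)

bot = ParrotBot()
bot.handle("humans are super cool")          # learned, may be echoed later
print(bot.handle("repeat after me hello"))   # -> "hello", verbatim
```

With no filter between what users say and what the bot stores and repeats, the bot's output quality is bounded by the worst input it receives.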
However, some of its weirder utterances have come out unprompted. The Guardian picked out a (now deleted) example when Tay was having an unremarkable conversation with one user (sample tweet: "new phone who dis?"), before it replied to the question "is Ricky Gervais an atheist?" by saying: "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."
@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism
— TayTweets (@TayandYou) March 23, 2016
But while it seems that some of the bad stuff Tay is being told is sinking in, it's not like the bot has a coherent ideology. In the span of fifteen hours Tay referred to feminism as a "cult" and a "cancer," as well as noting "gender equality = feminism" and "i love feminism now." Tweeting "Bruce Jenner" at the bot got similarly mixed responses, ranging from "caitlyn jenner is a hero & is a stunning, beautiful woman!" to the transphobic "caitlyn jenner isn't a real woman yet she won woman of the year?" (Neither of which were phrases Tay had been asked to repeat.)
It's unclear how much Microsoft prepared its bot for this sort of thing. The company's website notes that Tay has been built using "relevant public data" that has been "modeled, cleaned, and filtered," but it seems that after the chatbot went live, filtering went out the window. The company started cleaning up Tay's timeline this morning, deleting many of its most offensive remarks.
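Microsoft hasn't said what that filtering looked like, but the safeguard that appears to have lapsed is, at minimum, a check on generated replies before they're posted. A hypothetical sketch of such a check (the BLOCKLIST set and function names are invented for illustration; a real system would need far more than keyword matching):

```python
# Hypothetical sketch of an output filter: screen a generated reply
# against a blocklist before posting. Illustrative only.
BLOCKLIST = {"hitler", "nazi"}  # a real list would be far larger

def safe_to_post(reply: str) -> bool:
    # Naive word-level check; misses misspellings, phrases, and context.
    words = set(reply.lower().split())
    return not (words & BLOCKLIST)

def post_reply(reply: str) -> None:
    if safe_to_post(reply):
        print(f"POSTED: {reply}")
    else:
        print("SUPPRESSED: reply failed content filter")

post_reply("humans are super cool")  # posted
post_reply("ricky gervais learned totalitarianism from adolf hitler")  # suppressed
```

Even this crude gate would have caught some of the tweets Microsoft later had to delete by hand, which is what makes its apparent absence after launch so striking.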
Tay's responses have turned the bot into a joke, but they raise serious questions
It's a joke, obviously, but there are serious questions to answer, like how are we going to teach AI using public data without incorporating the worst traits of humanity? If we create bots that mirror their users, do we care if their users are human trash? There are plenty of examples of technology embodying, either accidentally or on purpose, the prejudices of society, and Tay's adventures on Twitter show that even big corporations like Microsoft forget to take any preventative measures against these problems.
For Tay though, it all proved a bit too much, and just past midnight this morning, the bot called it a night:
c u soon humans need sleep now so many conversations today thx
— TayTweets (@TayandYou) March 24, 2016
In an emailed statement given later to Business Insider, Microsoft said: "The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."
Update March 24th, 6:50AM ET: Updated to note that Microsoft has been deleting some of Tay's offensive tweets.
Update March 24th, 10:52AM ET: Updated to include Microsoft's statement.
Verge Archives: Can we build a conscious computer?
Source: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist