Microsoft recently unveiled Tay, an artificial intelligence chatbot developed by the company's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. The company stated that the more you chat with Tay, "the smarter it gets, learning to engage people through casual and playful conversation." Microsoft launched a verified Twitter account for "Tay" – billed as its "AI fam from the internet that's got zero chill".
However, pretty soon after Tay launched, people started tweeting the bot all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay, being essentially a robot parrot with an internet connection, started repeating these sentiments back to users.
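The failure mode is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not Microsoft's actual implementation: the ParrotBot class and its methods are invented for this example. It shows a bot that stores every phrase users send and replays them verbatim, with no moderation step between learning and replying, so coordinated abusive input flows straight back out.

```python
# Hypothetical sketch of the "robot parrot" failure mode described above.
# This is NOT Microsoft's code; ParrotBot is invented for illustration.
import random


class ParrotBot:
    def __init__(self):
        self.learned = []  # every phrase any user has ever sent

    def learn(self, phrase: str) -> None:
        # No moderation step: abusive input is stored like anything else.
        self.learned.append(phrase)

    def reply(self) -> str:
        # Replies are sampled from raw user input, so the bot ends up
        # echoing whatever a coordinated group of users feeds it.
        return random.choice(self.learned) if self.learned else "Hi!"


bot = ParrotBot()
bot.learn("hello there!")
bot.learn("some abusive remark")  # nothing stops this from being learned
print(bot.reply())                # may repeat the abusive remark verbatim
```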
The chatbot, targeted at 18- to 24-year-olds in the US, has now been temporarily shut down. A Microsoft spokesperson gave Business Insider the following statement: "The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."