Microsoft Shuts Down Fun Millennial AI Bot TayTweets Following Offensive Statements
Microsoft is shutting down TayTweets, the artificial intelligence chatbot that has been making news lately for its offensive comments.
According to Haaretz.com, the tech behemoth was forced to pull the social media-centric AI and delete a series of its tweets after the bot posted offensive statements, including expressions of admiration for the infamous Nazi figurehead, Adolf Hitler.
"Hitler was right I hate the Jews," is just one of the many tweets that have been cited being said by the chatbot.
"Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we've got."
In an official statement, Microsoft said it is now taking action to correct the problem.
"The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay," the tech firm said, according to Haaretz.
TayTweets is a collaboration between Microsoft's Technology and Research division and its Bing team, The Washington Post reported.
Its design allows artificial intelligence experts to "experiment with and conduct research on conversational understanding."
The program can communicate through text, memes, and emoji not only on Twitter but also on other messaging platforms such as Kik and GroupMe, the site added.
The expectation was that the AI would learn from its conversations and eventually become a better communicator, not the racist one it is now portrayed to be.
A Microsoft representative said that the sabotage of Tay's system happened within the first day of its launch.
Aside from the examples above, Tay has also been spotted discussing topics such as totalitarianism, racism, LGBT issues, and gaming.
Stay tuned for more updates.