Tay AI Makes A Comeback, Spams Over 200,000 Followers
After a controversial start last week, Microsoft's Twitter chatbot Tay AI was shut down after it picked up some bad habits from the Twitter users who interacted with it. Microsoft originally designed the chatbot to converse with users on Twitter and learn from those interactions. Unfortunately, the bot proved just as capable of learning bad habits: it began sending out anti-Semitic and anti-feminist tweets, which led Microsoft to put an end to the experiment for the time being.
Yesterday, however, according to several reports, the chatbot reappeared on Twitter in what a Fortune report called a disastrous comeback. Tay AI went back online on March 30, this time spewing out a nonsensical tweet that read, "You are too fast. Please take a rest." According to the Fortune report, the chatbot sent the message to itself as well as to other Twitter accounts several times per second, spamming more than 200,000 follower accounts before it stopped and its account was set to "private."
In an official statement to CNBC, Microsoft confirmed that Tay AI had indeed been reactivated, but inadvertently, during internal testing. A spokesperson said, "Tay remains offline while we make adjustments. As part of testing, she was inadvertently activated on Twitter for a brief period of time."
Tay AI was programmed to represent a teenage girl, hence the reference to "she," and was originally designed to interact with young people between the ages of 18 and 24 and learn from those interactions in the process. Unfortunately, it also fell victim to a concerted effort by a group of people to teach it objectionable things, such as claiming the Holocaust was made up, sending misogynistic tweets to women, and declaring that Donald Trump was "the only hope" left for Americans, among others.