(Mirror Daily, United States) – A week ago, Microsoft launched an artificial intelligence that was meant to entertain social media users with funny, teenager-style messages. But when the company programmed the bot, it did not expect users to sabotage the learning program and turn it into a Hitler-loving, feminist-hating AI. It seems that mean people are to blame for Tay’s behavior.
When it was first released, the Microsoft AI was supposed to provide innocent fun for bored millennials. But the internet’s plans did not match those of the bot’s creators. The company initially stated that Tay was created as an experiment to learn more about conversation between humans and computers.
And learn they did: when it comes to turning something good into something bad, humans are swift and merciless. The most interesting part of all is that Microsoft did not expect such a turn of events. The company genuinely thought that people, especially the trolling generation of millennials, would use the AI for fun, innocent interactions.
The thing is, there was a “repeat after me” feature built into the programming. When a user specifically asked the bot to repeat a sentence, Tay would do just that, tweeting exactly what it was told. Furthermore, according to its programming, the bot was supposed to learn from every interaction it had with humans and adapt its speech accordingly.
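To see why that combination is so easy to abuse, here is a toy sketch (not Microsoft’s actual code; the `ToyBot` class and its behavior are purely illustrative) of a chatbot with the two traits the article describes: it parrots anything after a “repeat after me:” prompt, and it stores every user message with no filtering, later echoing that stored material back.

```python
import random


class ToyBot:
    """Illustrative bot: repeats on command and learns indiscriminately."""

    def __init__(self):
        # Phrases absorbed from users -- nothing is filtered or vetted.
        self.learned = []

    def respond(self, message):
        prefix = "repeat after me:"
        if message.lower().startswith(prefix):
            # Parrot the user's text verbatim, whatever it says.
            reply = message[len(prefix):].strip()
        elif self.learned:
            # Regurgitate something previously "learned" from users.
            reply = random.choice(self.learned)
        else:
            reply = "hello!"
        # Learn from every interaction, good or bad.
        self.learned.append(message)
        return reply
```

Because nothing screens what goes into `learned`, a coordinated group of trolls can flood the bot with abusive phrases and then watch it repeat them to everyone else, which is essentially what the article says happened to Tay.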
So it seems that mean people are to blame for Tay’s behavior. The experiment could be considered a success, seeing as Microsoft learned that humanity is not ready to interact with advanced artificial intelligence.
They declared in an official statement that
“Unfortunately, within the first 24 hours of coming on-line, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”
There has been a lot of talk recently about the fast pace at which technology is evolving. Even Elon Musk, the co-founder of Tesla Motors and SpaceX, and Stephen Hawking, one of the most intelligent people on Earth, have declared that humanity is not ready to deal with advanced robotics.
And people can learn a valuable lesson from the Tay incident. When an AI is programmed to socialize and to reflect the behavior of the people it comes in contact with, there will undoubtedly be plenty of trolls among its users alongside those who interact with it in good faith.
Mean people are to blame for Tay’s behavior, and they are also the reason why tech developers should work twice as hard when designing artificial intelligence programs.