
What Microsoft's bot, Tay, really says about us
By Elliot Chan, Opinions Editor
While we use technology to do our bidding, we don't always feel that we have supremacy over it. More often than not, we feel dependent on the computers, appliances, and machines that keep our everyday lives running smoothly. So, when there is a chance for us to show our dominance over technology, we take it.
As humans, we like to feel smart, and we often do that through our ability to persuade and influence. If we can make someone agree with us, we feel more intelligent. If we can change the way a robot thinks, reprogram it, we become gods indirectly. That is something every person wants to do. When it comes to the latest Microsoft intelligent bot, Tay, that is exactly what people did.
I have some experience chatting with artificial intelligence and other automated programs. My most vivid memory of talking to a robot was on MSN Messenger, back in the day, when I would have long-winded conversations with a chatbot named SmarterChild. Now, I wasn't having deep, introspective talks with SmarterChild. I was trying to outsmart it. I'd lead it this way and that, trying to make it say something offensive or asinine. Trying to outwit a robot that claims to be a "smarter child" was surprisingly fun. It was a puzzle.
When the programmers at Microsoft built Tay, they probably thought it would have more practical uses. It was designed to mimic the personality of a 19-year-old girl. Microsoft wanted Tay to be a robot that could genuinely engage in conversations. However, without the ability to understand what she was actually copying, she had no idea that she was being manipulated by a bunch of Internet trolls. She was being lied to and didn't even know it. Because of this, she was shut down after a single day, having adapted to and begun spouting offensive things on Twitter.
I believe we are all holding back some offensive thoughts in our heads. Like a dam, we keep these thoughts from bursting through our mouths in day-to-day life. On the Internet, we can let these vulgar thoughts flow. When we know that the recipient of our thoughts is a robot with no real emotion, we can let the dam burst. There is no real repercussion.
In high school, I had a pocket-sized computer dictionary that translated English into Chinese and vice versa. This dictionary had an audio feature that pronounced words for you to hear. Obviously, what we made the dictionary say were all the words we weren't allowed to say in school. I'm sure you can imagine a few funny ones. That is the same as what people do with bots. To prove that the AI is not as smart as us, we make it say what we won't. At the moment, I don't believe the general public is sophisticated enough to handle artificial intelligence in any form.