Since 1990, the $100,000 Loebner Prize has sat on the table, patiently waiting for a computer program capable of carrying on a conversation indistinguishable from one with a human. So far, no program has even managed to claim the $25,000 silver prize, which would go to a program that could fool just two of the Loebner panel's judges into believing it's human. So robots can't fool people. But can they fool other chatbots? Cornell decided to put that question to the test, with hilarious results.
Cornell's Creative Machines Lab pitted two chatbots mano a mano in a robochat death match of slightly less than epic proportions, IEEE Spectrum reports. The conversation starts off well enough, but quickly devolves into inane accusations of robothood, circular logic, religious musings, and fantasy-creature name-dropping. Our favorite exchange between the two bots:
“I am not a robot. I am a unicorn.”
“But you said earlier you were a robot.”
“I did not.”
“I thought you did.”
“You were mistaken. Which is odd, since memory shouldn’t be a problem for you.”