There is an old fear that robots could become exactly like humans, impossible to distinguish from us. There are a lot of ideas, books, and movies about that. “Blade Runner” is a good one if you are looking for a film. In literature and sci-fi, there are also many examples; Isaac Asimov is an excellent one.
Why do people need this? Why should we know whether our buddy is a real human and not a cybernetic organism? I don’t know. But somehow I would like to be sure who I am speaking with. Not sure why, though.
For a long time our tool was “The Turing test” – developed by Alan Turing in 1950; a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The Wikipedia page about it (https://en.wikipedia.org/wiki/Turing_test) is full of brilliant examples and ideas. But in general, people agree that the Turing test has already been passed by modern machines. Not only passed, but passed under many additional conditions, for example, using voice instead of text chat.
It’s proven now that machines can mislead people about their nature.
Google Assistant can call a restaurant and book a table for you, as was demonstrated at the latest Google conference.
It became clear that we need a new tool which allows us to distinguish AI quickly and robustly. But how?
The answer is “anaphora.” What is it? An expression whose interpretation depends upon another expression in context. Semantics. Let’s beat strict machines with human obscurity.
The Winograd Schema Challenge (https://en.m.wikipedia.org/wiki/Winograd_Schema_Challenge) uses this trick and asks questions like these:
– The city councilmen refused the demonstrators a permit because they feared violence.
– The women stopped taking pills because they were pregnant.
– The trophy would not fit in the brown suitcase because it was too big. What was too big?
For any person, it’s obvious whether the councilmen or the demonstrators feared violence, but for a machine, the answer hides behind the pronoun “they,” and it’s not easy to work out who this “they” is.
The same goes for the “they” who were pregnant. Who was pregnant, the pills or the women? A clear question, huh? But how do you teach this knowledge to a machine?
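To see why this is hard for a machine, here is a minimal sketch (the class and function names are my own, not any official dataset format). A Winograd schema contains a “special word” whose swap flips the correct antecedent, so any shallow heuristic that ignores meaning must fail on at least one variant:

```python
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    sentence: str          # contains the ambiguous pronoun
    pronoun: str
    candidates: tuple      # the two possible antecedents
    answer: str            # the correct antecedent for this variant

trophy_big = WinogradSchema(
    sentence="The trophy would not fit in the brown suitcase because it was too big.",
    pronoun="it",
    candidates=("the trophy", "the suitcase"),
    answer="the trophy",
)

# Swapping a single word ("big" -> "small") flips the answer:
trophy_small = WinogradSchema(
    sentence="The trophy would not fit in the brown suitcase because it was too small.",
    pronoun="it",
    candidates=("the trophy", "the suitcase"),
    answer="the suitcase",
)

def naive_resolver(schema):
    """Pick the candidate mentioned closest to the pronoun --
    a common, purely positional (and wrong) heuristic."""
    positions = {c: schema.sentence.find(c.split()[-1]) for c in schema.candidates}
    return max(positions, key=positions.get)  # the later-mentioned candidate

# The proximity heuristic gives the same answer for both variants,
# yet the correct answers differ -- so it is wrong for at least one.
print(naive_resolver(trophy_big) == naive_resolver(trophy_small))  # True
print(trophy_big.answer == trophy_small.answer)                    # False
```

The point of the design: because the two sentences differ by one word while the answer flips, word co-occurrence statistics alone cannot separate them; some model of the world (trophies go inside suitcases) is required.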
The idea is so simple that I am fascinated by it.
But what I figured out while trying to invent more examples in this schema is that it’s not only hard for robots; it’s also hard for non-native speakers.
Babar wonders how he can get new clothing. Luckily, a wealthy old man who has always been fond of little elephants understands right away that he is longing for a beautiful suit. As he likes to make people happy, he gives him his wallet.
Q: Who is longing for a beautiful suit?
– The old man?
Thus I have a few questions for you:
– Do you feel awkward when you speak to a robot?
– Can you invent a few Winograd questions off the top of your head to check whether someone is a humanoid?
– Do you think that non-native speakers and obtuse people are in danger of being identified as androids?