“Intelligence is all about perception” – Turing Tests
Posted: 22 September 2017 | By Hazel Tang
The 27th Loebner Prize was held last Saturday at Bletchley Park. Over the years, critics have suggested that this, the oldest Turing Test contest, may not have fueled the progress of artificial intelligence (AI) or truly captured Alan Turing’s definition of intelligence.
“I think AI is all about perception”, said Dr. Joanne Pransky, the world’s first robotic psychiatrist and a Loebner Prize 2016 judge. “At one point we may think that the machine is more intelligent than a human, but the way I speak to you in a face-to-face interview is very different from how I would email you, and all these are different forms of intelligence”.
AI is still very much dependent upon human perception
With natural language in the mix, designing robots has become a whole new business. Common sense, emotions and social etiquette – things which humans take for granted – are all huge hurdles for scientists and developers creating conversational AI.
The way intelligence could evolve
Although we have found ways to communicate with machines – for example, using emoticons to express how we feel, which makes our emotions more legible to algorithms – we usually see human intelligence, and human forms of interaction, as fundamentally different from AI.
Whilst we may have a device in our hands today, in 10 to 20 years’ time these devices may be implanted in our brains so that we can be as intelligent as a machine
It is inevitable that the way humans converse with AI will change as we spend more time with it. One may forgo the need to say “please” or “thank you” when commanding a robot, as we accept that AI won’t have an emotional response to our words.
One unexpected consequence of this change may be that distinctions such as gender, age or race also become negligible. “I don’t think we will think of gender or age or race as important eventually”, Dr. Pransky told Access AI. “The question will be: is this person biological or non-biological?”
She believes that whilst we may have a device in our hands today, in 10 to 20 years’ time these devices may be implanted in our brains so that we can be as intelligent as a machine. Also, people suffering from long term illnesses or permanent conditions could ask for an upgrade to eradicate any flaw in their systems.
Human & Machine intelligence combined
“We shouldn’t think of it as AI versus humanity, or vice versa. It is not about robots taking over the world but rather merging with our world”, Dr. Pransky said. “The best form of intelligence is a human with a computer”, she added.
Intelligence should not be segregated at human or robot level
As a result, intelligence should not be segregated at the human or robot level; rather, it should be viewed as general intelligence or specific kinds of intelligence. Businesses should start thinking about the type of intelligence that best suits their needs and what kind of AI will work well with human counterparts.
At the same time, there is a need to bear in mind that robots should be allowed to make mistakes. As Alan Turing himself put it, “if a machine is expected to be infallible, it cannot also be intelligent.”
“We are not at the point where robots are able to learn on their own”, Dr. Pransky said. “There is no way a robot psychiatrist would take the robot aside and begin counselling it as if it were a human. Most of the time, it is about improving robot-human interaction.”
What does the Loebner Prize really mean?
Dr. Pransky felt that the Loebner Prize should be seen as a re-enactment, as close as possible, of Alan Turing’s test, and as a promotion of the field of AI, as the great computer scientist once envisioned. One should also not walk away with the idea that a robot or chatbot is “fooling” or “tricking” humans during the contest.
“A trick is intentional: I am going to fool you. I don’t think an AI can have that intent”, Dr. Pransky said. “Everyone is just doing, running and operating programmes and systems as per normal [during the contest]. The greatest lesson to learn is that AI is still very much dependent upon human perception, and the heart of the Loebner Prize should not be the contest or the result but its significance.”