
You’ve still got time to enter the most prestigious Turing Test

Posted: 28 June 2017 | By Charlie Moloney

The deadline for entry into the Loebner Prize 2017 competition has been extended, the organisers, the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB), announced in a post on the chatbots.org forum this Monday 26/06.

The submission deadline is now Monday 24/07, pushed back from Monday 10/07, and the finalists will be announced on Tuesday 15/08, pushed back from Tuesday 01/08, according to Andrew Martin, the current secretary of the AISB.

The announcement also stated that, “for this year, submissions using either protocol will be accepted”. The AISB hope this will allay fears that the new protocol being introduced in 2017 will exclude developers who built their bots for the older system.


“We put the dates back in response to the concerns of entrants who felt they needed more time to implement the new protocol”, Janet Gibbs, a Loebner Prize Officer, told Access-AI. The new protocol, among other things, requires entrant chatbots to operate on a line system.

A line system, as used by Facebook, Twitter, Instagram, and other modern platforms, sends each message as a complete sentence or paragraph. The old protocol used a character-based system, designed by founder Hugh Loebner to mimic the teletype in Alan Turing’s work, in which messages were communicated one character at a time.
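To make the distinction concrete, here is a minimal sketch in Python. It is an illustration only, not the Loebner Prize wire format: send_line and send_chars are invented names, and the contest’s actual network protocols are not described here.

    import sys
    import time

    def send_line(message: str) -> None:
        # Line-based: the whole utterance arrives as one unit,
        # terminated by a newline, as on modern chat platforms.
        sys.stdout.write(message + "\n")
        sys.stdout.flush()

    def send_chars(message: str, delay: float = 0.05) -> None:
        # Character-based: each character is transmitted on its own,
        # mimicking the teletype-style output of the 1991 protocol.
        for ch in message:
            sys.stdout.write(ch)
            sys.stdout.flush()
            time.sleep(delay)  # visible one-key-at-a-time "typing"
        sys.stdout.write("\n")

    send_line("Hello, judge!")   # delivered all at once
    send_chars("Hello, judge!")  # delivered keystroke by keystroke

Under the character-based scheme a bot’s “typing rhythm” is visible to the judge, which is precisely the stylistic detail the new protocol removes from consideration.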

Modernising Loebner’s Legacy

The new protocol is built on open standards, is fast, requires little network bandwidth, and allows arbitrary characters to appear, including those of other writing systems (e.g., Japanese, Hebrew, and Chinese) and emoji.

Gibbs said this brings the Loebner Prize “more into line with other such contests”, allowing entrants to “concentrate on the content of the machine utterances rather than the ‘style’ in which they are ‘typed’”.

Only once contestants reach the final round will the new protocol be mandatory. “We will work with the four finalists, if they need help, to make sure they can get their bots up and running on the new protocol”, said Gibbs.

2017 is the first year since the competition began in 1991 that it will not be run on the older protocol, designed by Loebner, who passed away in December last year. The AISB felt the change to a modern system was necessary to widen the possible base of entrants.

The Competition

The Loebner Prize is the oldest Turing Test contest, started in 1991 by Hugh Loebner and the Cambridge Center for Behavioral Studies. The 2017 finals will be held on Saturday 16/09 at Bletchley Park, England, where Alan Turing worked as a codebreaker during World War II.

Chatbots entering the competition will be questioned alongside a human by four judges, over four rounds of 25 minutes each. If the AI can fool at least half the judges into mistaking it for a human, its creator will be awarded a Silver medal and $25,000.

No AI has ever successfully fooled half the judges, so there are also prizes for the chatbots that appear most human, based on the judges’ scores. Each judge ranks each bot from one (best) to four; the ranks are then averaged, so the best possible overall score is one. First prize is a bronze medal and $4,000, second is $1,500, and third place wins $1,000.
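As a worked illustration of the scoring, the short Python sketch below averages a set of invented judge rankings; the bot names and numbers are made up, not real contest data.

    # Hypothetical rankings: four judges each rank every bot from
    # 1 (most human-like) to 4. All names and numbers are invented.
    rankings = {
        "Bot A": [1, 2, 1, 1],
        "Bot B": [2, 1, 2, 2],
        "Bot C": [3, 4, 3, 4],
        "Bot D": [4, 3, 4, 3],
    }

    # Average each bot's ranks; the lowest average wins, and a bot
    # ranked first by every judge would score exactly 1.0.
    averages = {bot: sum(r) / len(r) for bot, r in rankings.items()}

    for place, (bot, score) in enumerate(
            sorted(averages.items(), key=lambda kv: kv[1]), start=1):
        print(f"{place}. {bot}: {score:.2f}")

On these invented numbers, Bot A would take the bronze medal with an average of 1.25.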

Reigning champion: Mitsuku

The current Loebner champion, the Mitsuku chatbot, will return to the fray in 2017 to try to snatch the prize for a third time, creator Steve Worswick announced in a tweet posted this Monday 26/06, shortly after the aforementioned update from the AISB.

Worswick, who won first place in both 2016 and 2013, praised the AISB’s decision to accept “entries in both the old and new protocol” into the first round, and stated that this would mean Mitsuku would enter “after all”.

Worswick has continued to develop Mitsuku, and on Sunday 25/06 tweeted a snapshot of a conversation in which the bot demonstrated that it could understand a question in context. Mitsuku averaged a score of 1.25 in 2016 and 1.75 in 2013.
