

Some claim that a new kind of test of machine intelligence is needed, and one community has advanced the Winograd Schema Challenge to address this gap. In 2014, widespread reports in the popular media that a chatbot named Eugene Goostman had passed the Turing test became further grist for those who argue that the diversionary tactics of chatbots like Goostman, and of others such as the entrants in the Loebner Prize competition, are enabled by the open-ended dialog of the Turing test.

We argue to the contrary that implicit in the Turing test is the cooperative challenge of using language to build a practical working understanding, which requires a human interrogator to monitor and direct the conversation. We give examples showing that, because ambiguity in language is ubiquitous, open-ended conversation is not a flaw but rather the core challenge of the Turing test.
