We are passing the Turing test right on schedule

In 1950, the brilliant computing pioneer Alan Turing made the following prediction in his paper "Computing Machinery and Intelligence":

I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

We are now more than 50 years past Turing’s deadline, but I think it is fair to say, at least, that the storage capacity prediction has been met. Turing used bits (or “binary digits”) as units of storage, so 10⁹ bits is roughly 100MB. That must have sounded like an enormous storage capacity in 1950, but today every smartphone has far more storage than that. In fact, it seems that Turing somewhat underestimated storage capacities. He reported the brain to have a storage capacity of less than 10¹⁵ bits, or roughly 100TB, whereas today’s estimates put the brain’s storage capacity at 10 times this amount. To be fair to Turing, storage bits were tremendously precious in 1950, so programmers used them with care. Today we waste bits without a second thought. Many programs that require gigabytes of memory could make do with 100MB if they were carefully engineered. And even today, we know too little about the engineering of our brains to appreciate their memory usage.
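The arithmetic behind the 100MB and 100TB figures is a simple unit conversion (a quick sketch in Python; the blog post rounds 125 down to the nearest power of ten):

```python
# Convert Turing's figures from bits to conventional storage units.
BITS_PER_BYTE = 8

machine_bits = 10**9                  # Turing's machine storage estimate, in bits
machine_mb = machine_bits / BITS_PER_BYTE / 10**6
print(machine_mb)                     # 125.0 MB, i.e. roughly 100 MB

brain_bits = 10**15                   # Turing's upper bound for the brain, in bits
brain_tb = brain_bits / BITS_PER_BYTE / 10**12
print(brain_tb)                       # 125.0 TB, i.e. roughly 100 TB
```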

As for the last part of the prediction, that people will come to speak of machines thinking: listen to people talk, and you will hear them routinely refer to software as “thinking”. Your mobile phone thinks you should turn left at the next corner. Netflix thinks I will like this movie. And so forth. Philosophers will still object that machines cannot think, but who listens to them?

We call our phones “smart”, don’t we?

What about fooling people into thinking that the machine is human? I think that Alan Turing, as an observer of our time, would have no doubt that this prediction has come to pass.

In 2014, a computer program managed to pass for a 13-year-old boy and fooled 33% of the judges, though that result could be dismissed as an anecdote. More recently, Ashok Goel, a professor of computer science, used IBM Watson’s technology to create a teaching assistant called Jill Watson. The assistant apparently fooled his students. Quoting from the New York Times:

One day in January, Eric Wilson dashed off a message to the teaching assistants for an online course at the Georgia Institute of Technology. “I really feel like I missed the mark in giving the correct amount of feedback,” he wrote, pleading to revise an assignment. Thirteen minutes later, the TA responded. “Unfortunately, there is not a way to edit submitted feedback,” wrote Jill Watson, one of nine assistants for the 300-plus students. Last week, Mr. Wilson found out he had been seeking guidance from a computer. “She was the person—well, the teaching assistant—who would remind us of due dates and post questions in the middle of the week to spark conversations,” said student Jennifer Gavin. “It seemed very much like a normal conversation with a human being,” Ms. Gavin said. Shreyas Vidyarthi, another student, ascribed human attributes to the TA—imagining her as a friendly Caucasian 20-something on her way to a Ph.D. Students were told of their guinea-pig status last month. “I was flabbergasted,” said Mr. Vidyarthi.

So Turing was right. We are about 15 years after his 50-year mark, but a 30% error margin when predicting the future is surely acceptable.

Let us reflect on how Turing concluded his 1950 article…

We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English.

Computers definitively surpassed humans at chess in 1997, when IBM’s Deep Blue defeated the world champion Garry Kasparov. Our brains are now obsolete as far as the game of chess is concerned. Computers can speak English quite well. They understand us, to a point.

All in all, Turing was nearly prescient in how he imagined the beginning of the twenty-first century.

Daniel Lemire, "We are passing the Turing test right on schedule," in Daniel Lemire's blog, May 8, 2016.

Published by Daniel Lemire, a computer science professor at the University of Quebec (TELUQ).

6 thoughts on “We are passing the Turing test right on schedule”

  1. You’re comparing apples and oranges though. In the Turing test you are knowingly attempting to interrogate the computer to determine if it is a conscious thinking being or not. That’s quite different to engaging in a conversation with a specific context and on a very limited topic.

    Furthermore, the “13-year-old boy” incident from 2014 was a very highly constrained version of the test, with the rules heavily stacked in the chatbot’s favor. Modern chatbots can fulfill useful roles as interfaces to systems, but they are a dead end in terms of achieving general-purpose AI.

    1. Turing knew what he was talking about. Language understanding is grounded not intellectually, but experientially. Think about it. No, really think about that.

  2. Besides Turing, Asimov was also a pretty sharp person. Daniel, do you believe enough researchers are putting effort into the ethics side of AI?

    Second question: Even if they are, surely not everyone is. Will ethical AIs win out over unethical ones?

    1. Thanks for the nice perspective. At some point I’ll read John Nash’s game theory work to see how it may apply to the question.

      Also, if people can show that ethical behavior wins out over unethical behavior (in their everyday lives), then we at least know that this is also possible for artificial intelligence.

      1. I have some loose philosophical arguments regarding how ethical collaboration may win, if you’d like to take the discussion offline sometime.
