Common sense in artificial intelligence… by 2026?

Common sense is nothing more than a deposit of prejudices laid down in the mind before age eighteen (Albert Einstein)

Lots of people want to judge machine intelligence based on human intelligence. This idea dates back to Turing, who proposed his eponymous test: can machines “pass” as human beings? Turing, being clever, was aware of how biased this test was:

If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does?

I expect that we will eventually outgrow our anthropocentrism and see machines for what they really offer: a new kind of intelligence.

In any case, from an economics perspective, it matters a great deal whether machines can do exactly what human beings can do. Calum Chace has published a new book on this topic: The Economic Singularity: Artificial Intelligence and the Death of Capitalism. Chace’s excellent book is the latest in a stream of books hinting that we may soon all be unemployable simply because machines are better than us at most jobs.

To replace human beings at most jobs, machines need to exhibit what we intuitively call “common sense”. For example, if someone just bought a toaster… you do not try to sell them another toaster (as so many online ad systems do today).

Common sense is basic knowledge about how the world of human beings works. It is not rule-based. It is not entirely logical. It is a set of heuristics almost all human beings quickly acquire. If computers could be granted a generous measure of common sense, many believe that they could make better employees than human beings. Whatever one might think about economics, there is an interesting objective question… can machines achieve “common sense” in the near future?

It seems that Geoff Hinton, a famous computer scientist, predicted that within a decade we would build computers with common sense. These are not computers that are smarter than all of us at all tasks. These are not computers with a soul. They are merely computers with a working knowledge of the world of human beings… computers that know our conventions, that know that stoves are hot, that people don’t usually own twelve toasters, and so forth.

Chace recently placed a bet, at 50-to-1 odds, with a famous economist, Robin Hanson, that Hinton is right. This means that Hanson is very confident that computers will not achieve common sense in the near future.

Hanson is not exactly a Luddite who believes that technology will stall. In fact, Hanson has written an excellent book of his own, The Age of Ems, which describes a world where biological brains have been replaced with digital computers. Our entire civilization is made of software. I have covered some of the content of Hanson’s book on my blog before… for example, Hanson believes that software grows old and becomes senile.

I think that both Hanson and Chace are very well informed on the issues, but they have different biases.

What is my own take?

The challenge for people like Chace who allude to an economic singularity where machines take over the economy… is that we have little to no evidence that such a thing is coming. For all the talk about massive unemployment coming up… the unemployment rates are really not that high. Geoff Hinton thinks that machines will soon acquire common sense… but is it an easy problem? We have no clue right now how to go about solving it. It is hard even to define it.

As for Hanson, the problem is that betting against what we will be able to do 10 years in the future is very risky. Ten years ago, we did not have iPhones. Today’s iPhone is more powerful than a PC from ten years ago. People at the beginning of the twentieth century thought that it would take a million years to get a working aeroplane, whereas it took a mere ten years…

I must say that despite the challenge, I am with Chace. At 50-to-1 odds, I would bet on the software industry. The incentive to offer common sense is great. After all, you can’t drive a car, clean a house or serve burgers without some common sense. What the deep learning craze has taught us is that we do not need to understand how the software works for the software to be effective. With enough data, enough computing power and trial and error, there is no telling what we can find!

Let us be more precise… what could we expect from software having common sense? It is hard to define because it is a collection of small pieces… all of which are easy to program individually. For example, if you are lying on the floor yelling “I’m hurt”, common sense dictates calling emergency services… and it is possible that Apple’s Siri can already do this.

We have the Winograd Schema Challenge, but it seems to be tightly tied to natural language processing (a typical schema: “The trophy doesn’t fit in the suitcase because it is too big.” What does “it” refer to?). I am not sure understanding language and common sense are the same thing. For example, many human beings are illiterate and yet they can be said to have common sense.

So I offer the following “test”. Every year, new original video games come out. Most of them come with no instruction whatsoever. You start playing and you figure it out as you go… using “common sense”. So I think that if some piece of software is able to pick up a decent game from Apple’s AppStore and figure out how to play competently within minutes… without playing thousands of games… then it will have an interesting form of common sense. It is not necessary for the software to play at “human level”. For example, it would be ok if it only played simple games at the level of a 5-year-old. The key in this test is diversity. There are a great many different games, and even when they have the same underlying mechanic, they can look quite a bit different.
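To make concrete why today’s reinforcement learning falls short of this test, here is a minimal tabular Q-learning sketch on a toy game of my own invention (a 10-cell corridor; all names and parameters here are illustrative assumptions, not any real system). Even on this trivial task, the agent needs hundreds of episodes of trial and error before it reliably walks the nine steps to the goal, whereas a child would grasp the game on the first try:

```python
import random

random.seed(0)  # deterministic run

# Hypothetical toy game: a corridor of 10 cells. The agent starts in
# cell 0 and must reach cell 9; the two actions are "left" and "right",
# and the only reward is 1.0 upon reaching the goal.
N = 10
MOVES = [-1, +1]  # action 0 = left, action 1 = right

def choose(q, state, epsilon):
    """Epsilon-greedy action choice, breaking ties randomly."""
    if random.random() < epsilon or q[state][0] == q[state][1]:
        return random.randrange(2)
    return 0 if q[state][0] > q[state][1] else 1

def play_episode(q, epsilon=0.1, alpha=0.5, gamma=0.9):
    """One episode of tabular Q-learning; returns the episode length."""
    state, steps = 0, 0
    while state != N - 1 and steps < 100:
        action = choose(q, state, epsilon)
        nxt = min(max(state + MOVES[action], 0), N - 1)
        reward = 1.0 if nxt == N - 1 else 0.0
        # one-step Q-learning update
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state, steps = nxt, steps + 1
    return steps

q = [[0.0, 0.0] for _ in range(N)]
lengths = [play_episode(q) for _ in range(500)]
# Early episodes are long random walks; only after many plays does the
# agent reliably take the optimal 9 steps to the goal.
print(sum(lengths[:10]) / 10, sum(lengths[-10:]) / 10)
```

The point is not the algorithm itself but the sample count: scale this corridor up to a real video game and those hundreds of episodes become thousands or millions of plays.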

Is it fair to test software intelligence using games? I think so. Games are how we learn about the world. And, frankly, office work is not all that different from a (bad) video game.

Daniel Lemire, "Common sense in artificial intelligence… by 2026?," in Daniel Lemire's blog, July 25, 2016.

Published by Daniel Lemire, a computer science professor at the University of Quebec (TELUQ).

8 thoughts on “Common sense in artificial intelligence… by 2026?”

  1. When a “common sense” thing encounters and learns its knowledge from something else, like another “common sense” thing, that has its own “common sense” learned from another, well, and so on, what is it then? And how can it identify itself? And can it also shorten the path and communicate directly? Or will it always learn from a “black box”, from scratch?

  2. “…if some piece of software is able to pick up a decent game […] and figure out how to play competently within minutes…”

    Isn’t this what DeepMind’s Atari-playing, reinforcement-learning-based software does? It knows nothing about controls, games, life, or anything… and eventually plays better than humans.

    Great blog, btw!

  3. @Diego @Atreyu

    I was thinking of DeepMind, yes. But DeepMind currently falls short of my test. DeepMind’s AI needs extensive training to figure things out. It does not use common sense. It might need to play a game thousands, even millions of times before it figures out how to play.

  4. Humans, though, design machines (that’s why the anthropocentrism). I would back a machine with common sense any day, but can it be built? Difficult… because we have not yet figured out, as far as I know, how we get our own common sense. It does not come from scholastic education, certainly. Common sense is a gut feeling (and is invaluable, better than all laurels and degrees): some have it in oodles, some don’t. But why? We may be bad at doing arithmetical sums, but that is simply because we cannot handle enough data at one moment; still, we do know how to add 2 and 2 and how it gives 4, so we can design computers to do that. Lovely blog, though. I wouldn’t take a bet, but I don’t think the odds of a machine with common sense are good, at least as yet.

  5. >Geoff Hinton thinks that machines will soon acquire common sense… and it looks like an easy problem? But we have no clue right now how to go about solving this problem. It is hard to even define it.

    I think this problem is quite well approximated by various state-of-the-art benchmarks from Facebook AI Research; for the visual part there is visual question answering (see also a cool machine learning model that solves visual QA). These are supervised learning tasks.

    Playing games is IMHO a more general reinforcement learning problem, though eventually the abovementioned tasks should be solvable in reinforcement learning mode.

    Also, there is an interesting paper that outlines Facebook’s research direction in general reinforcement learning, which includes common sense.

  6. Many people want to judge machine intelligence based on human intelligence. Common sense is basic knowledge about how the world of human beings works. It is not rule-based and it is not totally logical. Besides, we are not even close to achieving the awesome capabilities of human intelligence. Even getting a machine to be as smart as a mouse would be a historic breakthrough. That alone would be highly useful. Reaching human intelligence is a good target, but anything in between will be just as useful.
