In 2016, we saw a wide range of breakthroughs in artificial intelligence, and in deep learning in particular. Google, Facebook, and Baidu announced several advances based on deep learning. Google's AlphaGo also defeated one of the world's best Go players.
Deep learning is one specific class of machine-learning algorithms. It has a long history, with roots in the early days of computer science. However, not all of machine learning is deep learning.
It is only January 2017, and the breakthroughs in artificial intelligence keep coming. These days, we hear that the best human poker players are being defeated.
In particular, a system from Carnegie Mellon University called Libratus appears to be enjoying considerable success.
Details are scarce regarding Libratus. What caught my attention was the mention that it uses “counterfactual regret minimization”, a conventional machine-learning technique that is not a form of deep learning.
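To give a flavor of what this family of techniques looks like, here is a toy sketch of regret matching, the building block underlying counterfactual regret minimization. This is not Libratus's actual algorithm (which is far more sophisticated); it is a minimal self-play example on rock-paper-scissors, where the averaged strategies converge toward the game's equilibrium.

```python
import random

random.seed(0)

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    # payoff for playing action a against action b: +1 win, -1 loss, 0 tie
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def strategy(regrets):
    # regret matching: play each action in proportion to its positive cumulative regret
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=50000):
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]  # cumulative regrets for both players
    strategy_sum = [0.0] * ACTIONS                # to average player 0's strategy over time
    for _ in range(iterations):
        strats = [strategy(regrets[0]), strategy(regrets[1])]
        for i in range(ACTIONS):
            strategy_sum[i] += strats[0][i]
        moves = [random.choices(range(ACTIONS), weights=s)[0] for s in strats]
        # regret of not having played each alternative action instead
        for alt in range(ACTIONS):
            regrets[0][alt] += payoff(alt, moves[1]) - payoff(moves[0], moves[1])
            regrets[1][alt] += payoff(alt, moves[0]) - payoff(moves[1], moves[0])
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg_strategy = train()
print([round(p, 2) for p in avg_strategy])  # each probability ends up near 1/3
```

Note that there is no neural network anywhere in this loop: the learning signal is regret, not a gradient.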
Given all the hype surrounding deep learning, I find it almost surprising… are there really still artificial-intelligence researchers working on techniques other than deep learning? (I’m being half serious.)
Last year, I participated in a panel on algorithms and their impact on Canadian culture. The organizers retroactively renamed the panel “Are we promoting Canadian content through deep-learning algorithms?” Yet the panel did not address deep learning per se. The successes of deep learning have been so remarkable that “deep learning” has become synonymous with “algorithm”.
I was recently on a graduate scholarship committee… and it seems that every smart young computer scientist is planning to work on deep learning. Maybe I exaggerate a little, but barely. I have seen proposals to apply deep learning to everything, from recognizing cancer cells all the way to tutoring kids.
A similar process is under way in business. If you are a start-up in artificial intelligence, and you are not focused on deep learning, you have to explain yourself.
Of course, machine learning is a vast field with many classes of techniques. However, one almost gets the impression that all of the major problems are going to be solved using deep learning. In fact, some proponents of deep learning have almost made this claim explicit… they often grant that other techniques may work well on small problems… but they stress that for large problems, deep learning is bound to win out.
We will keep on seeing very hard problems being defeated using various techniques, often unrelated to deep learning. If I am right, this means that these young computer scientists and start-up founders who flock to deep learning should be cautious. They may end up in an overcrowded field, missing out on important breakthroughs happening elsewhere.
It is hard to predict the future. Maybe deep learning is, indeed, the silver bullet and we will soon “solve intelligence” using deep learning… all problems will fall using variations on deep learning… Or it could be that researchers will soon hit diminishing returns. They will need to work harder and harder for ever smaller gains.
There are significant limitations to deep learning. For example, when I reviewed scholarship applications… many of the young computer scientists aiming to solve hard problems with deep learning did not have correspondingly massive data sources. Having an almost infinite supply of data is a luxury few can afford.
I believe that one unstated assumption is that there must be a universal algorithm. There must exist some technique that makes software intelligent in a general way. That is, we can “solve intelligence”… we can build software in a generic way to solve all other problems.
I am skeptical of the notion of general intelligence. Kevin Kelly, in his latest book, suggests that there is no such thing. All intelligence is specialized. Our intelligence “feels” general to us, but that’s an illusion. We think we are good at solving most problems but we are not.
For example, the human brain is showing its limits with advanced mathematics. Given a lot of training and dedication, some of us can write formal proofs without error. However, we are highly inefficient at it. I predict that it is a matter of a decade or two before the unassisted human brain is recognized as obsolete for research in advanced mathematics. The old guy doing mathematics on a blackboard? That’s going to look quaint.
So our brain is no silver bullet with respect to intelligence, and thus I don’t believe that any one machine-learning technique can be a silver bullet either.
I should qualify my belief. There might be one overarching “algorithm”. Matt Ridley points out in his latest book that everything evolves by a process akin to natural selection. Nature acquires an ever-growing bag of tricks that are constantly refined. In effect, there is an overarching process of trial and error. This is truly general, but with a major trade-off: it is expensive. Our biology evolved, but it took the Earth's entire ecosystem millions of years to produce Homo sapiens. Our technology evolves, but it takes all of the power of human civilization to keep it going. It is also not foolproof. There are regular extinction events.
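The trial-and-error process described above can be sketched as a toy (1+1) evolutionary algorithm; this is my own illustration, not anything from Ridley's book. It is fully general (the objective function is a black box), but notice the cost: it needs thousands of evaluations to optimize even a 64-bit string.

```python
import random

random.seed(42)

def fitness(bits):
    # toy black-box objective: the "environment" simply counts the 1s
    return sum(bits)

def evolve(length=64, generations=5000):
    # (1+1) evolutionary algorithm: flip each bit with probability 1/length,
    # keep the mutated child only if it is no worse than the parent
    parent = [random.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = [b ^ (random.random() < 1.0 / length) for b in parent]
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

best = evolve()
print(fitness(best))  # close to the optimum of 64, after thousands of trials
```

A gradient-based method would solve this particular problem almost instantly, which is exactly the trade-off: generality is bought with brute-force expense.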
Credit: Thanks to Martin Brooks for inspiring this blog post.
Further reading: DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker. Simon Funk’s Welcome to AI. Performance Trends in AI by Sarah.