The exponential cost of progress

When we look at the feature size of computer chips over time, we see that it takes roughly 5 years to cut the transistor size in half. However, this progress is costly. Moore's second law (also known as Rock's law) says that the cost of building a modern chip fabrication plant doubles roughly every four years.

In software, with voice recognition or machine translation, we see the same effect. If it takes 10 years to get the error rate down to 25%, it might take 10 more years to get it down to 10%, then 10 more years to get it down to 5%, and so on. This means that if you have, today, a robot that makes 4 times as many mistakes as a human being, you might need 20 years before it reaches human-level capability. The error rate is akin to the transistor size in chips: it goes down regularly… but at a cost. The first machine translation systems were built by small teams on tiny budgets. Today, larger and larger efforts are needed to achieve each next step.
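The arithmetic behind the 20-year figure can be sketched in a few lines of Python. This is a toy model; the assumption that the error rate halves every decade is mine, chosen to match the 4-times-worse-than-human example:

```python
import math

def years_to_reach(current_error, target_error, halving_period=10):
    """Years needed if the error rate is cut in half every `halving_period` years."""
    # Each halving takes one full period, so count the halvings needed.
    return math.log2(current_error / target_error) * halving_period

# A robot making 4 times as many mistakes as a human needs two halvings:
print(years_to_reach(4.0, 1.0))  # -> 20.0
```

The same formula says that a system 8 times worse than a human would need 30 years, which is why each extra factor of two costs another decade.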

Though Apple recently announced that Siri can do voice recognition with an error rate of 5%, it will probably take as much money as was invested in voice recognition over the last few decades to reach the level of a competent adult. Of course, we could always catch a lucky break in this particular application, but I do not think I am wrong about the general idea: it takes 80% of the effort to solve the last 20% of the problems. It might take thousands of engineers working every day for a decade or two. We will need about twice as many machine learning experts in 2020, and four times as many in 2030.

Eroom’s law says that the cost of developing a new medical drug roughly doubles every nine years. Intuitively, it can be understood as follows. Decades ago, you could create a new vaccine by slightly heating bacteria. We also made significant progress just by getting people to wash their hands. Not long ago, we drastically reduced lung cancer deaths simply by getting people to stop smoking. These are the low-hanging fruits… dirt cheap and effective. But increasingly, we are left facing age-related diseases: cancer, heart conditions, stroke, Parkinson’s, Alzheimer’s… Though we have better technology than ever, these problems are orders of magnitude more challenging. So curing them is not going to cost hundreds of millions of dollars… it is more likely to cost hundreds of billions. If you think I exaggerate, consider that it already costs several billion dollars to get a new drug approved in the US. In twenty years, by Eroom’s law, we should expect the cost of any one new drug to reach 50 billion dollars. With that much money, you could buy almost 10% of Google. Again, we might catch a lucky break and cure Parkinson’s with some vaccine next summer… but the general law should hold: the more diseases we cure, the more expensive it becomes to cure the remaining ones. Just to keep up, the medical research community would have to grow 4-fold in the next 20 years.
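The Eroom extrapolation works out as follows. This is a sketch; the ten-billion-dollar starting figure is my assumption, standing in for the "several billion dollars" quoted above:

```python
def eroom_cost(cost_today, years_ahead, doubling_period=9):
    """Extrapolated drug-development cost if the cost doubles every `doubling_period` years."""
    return cost_today * 2 ** (years_ahead / doubling_period)

# Starting from roughly $10 billion per approved drug (assumed figure):
cost = eroom_cost(10e9, 20)
print(round(cost / 1e9))  # -> 47, in billions of dollars: close to the 50 quoted above
```

Twenty years is a bit more than two doubling periods, hence a multiplier of about 4.7.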

We cannot tell whether we will get human-level voice recognition in 2040, or 5 nm chips in 2020, or a cancer cure in 2060… we just know that if we do, it will take a great deal of money and hard work.

Any specific technology ends up hitting a wall after a time. For example, planes are not flying any faster today than they were in 1980. That is because increased speed is no longer important enough to justify its cost: most people would rather pay for Internet access on the plane than arrive at their destination sooner. Progress does not make all costs go away, and sometimes the problem itself is disrupted: demand shifts before the technology reaches its limits.

So it would seem like we are doomed to face imminent stagnation. How can we possibly afford the future?

There are, however, reasons to be more optimistic:

  • If I were to assess, at today’s prices, the cost of Intel’s 2020 fabrication plant or of the latest cancer therapy, it would look prohibitive by our standards. However, our collective wealth also grows considerably. Thus, it is a mistake to price future goods and services against our current wealth. It is not just that, say, Americans get wealthier per capita over time… it is also that there are more of them… and that developing countries are catching up. Back in 1990, I suspect that most microprocessors were sold in Europe, Japan and North America. Today, we have huge markets worldwide.
  • Even if you have the money to pay for the research, it does not follow that you will have the expertise. Yet entire sectors of the economy are shedding jobs, freeing people up for new challenges. So it is not just that food and cars are getting less expensive… it is that Intel itself is building factories that are almost entirely automated.

    Right now, very few people design new chips, new voice recognition software or new medical therapies… but there are more of them than ever. We can easily multiply the number of people working directly on advancing the state of the art.

    Some will object that we cannot keep doubling the number of scientists without bound, and that we may have reached a maximum already. Indeed, if scientists make up 1% of your population and that fraction doubles every decade, then within 7 decades everyone is a scientist.

    Yet the vast majority of bright young people are underemployed today. It is quite common to see young people unable to pursue a career in medical research. The average age at which researchers get their first major research grant in the US is in the mid-forties. So we are very far from having used up all our bright people.

    And we are multiplying those numbers right now. As China increasingly automates its factories, what will it do with all its smart kids? You can bet that a lot of them are going to work on hard problems, in large numbers.

  • We benefit from innovations that allow us to make better use of our resources. Open source software was a significant breakthrough in this respect… companies no longer need to reinvent the wheel each time they build new software. More generally, establishing a new firm is cheaper than ever. Similarly, we use increasingly sophisticated financial strategies to fund new medical therapies without getting stuck in corporate nightmares.
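The objection above, that we cannot keep doubling the number of scientists without bound, is easy to verify with a toy loop, using the 1%-of-the-population starting point from the text:

```python
fraction = 0.01  # 1% of the population are scientists today (figure from the text)
decades = 0
while fraction < 1.0:
    fraction *= 2  # the fraction doubles every decade
    decades += 1

print(decades)  # -> 7: by the seventh decade the doubling must stop
```

So the ceiling is real, but as argued above, we are nowhere near it yet.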

But what about the far future? Certainly, we will hit a wall when the cost of a new drug reaches 10 trillion dollars. But remember that any extrapolation, even just a few decades ahead, is doomed to become irrelevant. For example, we may no longer have computer chips, or drugs, in 2060… the same way nobody actually wants to fly in supersonic planes today.

Further reading: The Singularity Isn’t Near (2011) by Paul Allen, and Don’t Underestimate the Singularity by Ray Kurzweil.

4 thoughts on “The exponential cost of progress”

  1. @joe

    “Fixing or curing everything” puts the bar too high in almost any field. Perfection is often hard to reach.

    Rather, we tend to reach a point where faults are manageable. For example, we all pay with credit cards, and yet there is a lot of fraud. We do not eliminate fraud, we manage it.

    Regarding our own bodies, it is likely that we can reach the same kind of point… a point where we still get sick despite great prevention, but where there are cost-effective therapies for almost all these instances.

    My own guess regarding medicine is that most of the progress will take the form of prevention. That is how we have made most of our progress so far, with better hygiene, vaccines and so on. It is also a lot cheaper to prevent disease than to cure it, in the long run.

    Regarding computing targets… there is no reason to believe that there is a ceiling on, say, voice recognition technology that is lower than human-level performance. I am quite convinced that, given enough effort, in 20 years we will have human-level voice recognition… and so forth.

  2. I like how you said it takes 80% of the effort to solve the last 20% of the problems. But I think it takes 80% of the effort only if we keep doing things the way we have been doing them. When we hit a wall, we develop technologies that help us go around the wall. For example, IBM’s Watson helped identify many new potential proteins that drugs can target to cure diseases, just when it was becoming increasingly difficult to identify new targets.

    Also, quantum computing (if you believe D-Wave’s computing is quantum) is advancing and increasingly solving problems that classical computing finds difficult. It could also solve problems in machine learning and translation. This and/or Watson could identify new materials that would help us make 5 nm or smaller chips more easily.

    Also, I think heart disease/stroke is still a low-hanging fruit. Smoking causes heart disease just as much as it is responsible for lung diseases. And of course trans fat. Both are responsible for some kinds of stroke too. People simply need to modify their lifestyle and most cases of heart disease will disappear.

    As for Parkinson’s/Alzheimer’s, I trust something like Watson (if not Watson itself) can solve those too in five to ten years’ time. It is simply a matter of ‘finding’ the chemical or biochemical cycle responsible for Alzheimer’s.
