Going beyond our limitations

The nerds online are (slightly) panicking: it looks like Moore’s law is coming to an end. Moore’s law is the observation that microprocessors roughly double in power every two years. The actual statement of the law, decades ago, had to do with the number of transistors… and there are endless debates about what the law should be exactly… but let us set semantics aside to examine the core issue.

When computers were first invented, one might have thought that to make a computer more powerful, one would make it bigger. If you open up your latest laptop, you might notice that the CPU is actually quite small. The latest Intel CPUs (Broadwell) have a die size of 82 mm²: roughly 1 cm by 1 cm for well over 1 billion transistors. Each transistor is only a few tens of nanometers wide. A nanometer is an astonishingly small unit of measure. Our white blood cells are about a dozen micrometers wide… this means that you could cram maybe a million transistors in any one cell.

Why are chips getting smaller? If you think about the fact that the speed of light is a fundamental limit, and you want information to go from any one part of the chip to any other part of the chip in one clock cycle, then the smaller the chip, the shorter the clock cycle can be. Hence, denser chips can run at a faster clock rate. They can also use less power.
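
To get a feel for the numbers, here is a back-of-the-envelope sketch in Python. The clock rates are illustrative, and real on-chip signals travel slower than light in a vacuum, so these are generous upper bounds.

    # Upper bound on chip width if a signal must cross the chip in one clock cycle,
    # assuming the signal travels at the speed of light (real signals are slower).
    SPEED_OF_LIGHT = 3.0e8  # meters per second

    def max_chip_width_mm(clock_hz):
        """Distance light travels in one clock period, in millimeters."""
        period_s = 1.0 / clock_hz
        return SPEED_OF_LIGHT * period_s * 1000  # meters to millimeters

    for ghz in (1, 4, 100):
        print(f"{ghz:>4} GHz -> at most {max_chip_width_mm(ghz * 1e9):.0f} mm across")
    # 1 GHz -> 300 mm, 4 GHz -> 75 mm, 100 GHz -> 3 mm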

We can build chips on a 7-nanometer scale in laboratories. That is pretty good. The Pentium 4 in 2000 was built on a 180-nanometer scale. That is about 25 times better. But the Pentium 4 was in production back in 2000 whereas the 7-nanometer chips are in laboratories. And 25 times better represents only 4 or 5 doublings… in 15 years. That is quite a bit short of the 7 or so doublings Moore’s law would predict.
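
The arithmetic of the previous paragraph, spelled out as a quick sketch (counting doublings of the linear feature size only):

    import math

    # Linear feature size went from 180 nm (Pentium 4, 2000) to 7 nm (lab chips, 2015).
    improvement = 180 / 7                # about 25x
    doublings = math.log2(improvement)   # about 4.7 doublings in 15 years
    expected = 15 / 2                    # Moore's law: one doubling every 2 years
    print(f"{improvement:.1f}x better, {doublings:.1f} doublings; Moore's law expects about {expected:.1f}")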

So scaling down transistors is becoming difficult using our current technology.

This is to be expected. In fact, the size of transistors cannot go down forever. The atoms we are using are about 0.2 nanometers wide. So a 7-nanometer transistor is only about 35 atoms wide. Intuition should tell you that we probably will never make transistors 1-nanometer wide. I do not know where the exact limit lies, but we are getting close.

Yet we need to go smaller. We should not dismiss the importance of this challenge. We want future engineers to build robots no larger than a white blood cell that can go into our bodies and repair damage. We want paper-thin watches that have the power of our current desktops.

In the short term, however, unless you are a processor nerd, there is no reason to panic. For one thing, the processors near you keep getting more and more transistors. Remember that the Pentium 4 had about 50 million transistors. The GPU in your PC back in 2000 probably had a similar transistor count. My current tablet has 3 billion transistors; that is 60 times better. Nerds will point out that my tablet is nowhere near 60 times as powerful as a Pentium 4 PC, but, again, no reason to panic.

As a reference point, you have about 20 billion neurons in your neocortex. Apple should have no problem matching this number in terms of transistors in a few years. No particular breakthrough is needed, no expensive R&D. (Of course, having 20 billion transistors won’t make your phone as smart as you, but that is another story.)

Processor    Year    Billions of transistors
Apple A6     2012    0.5
Apple A7     2013    1
Apple A8     2014    2
Apple A8X    2014    3
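
Taking the last row of the table and assuming the two-year doubling keeps holding (a big assumption), a quick projection:

    import math

    # How long until a mobile chip has as many transistors as the neocortex has neurons?
    current = 3e9    # Apple A8X (2014), from the table above
    target = 20e9    # neurons in the human neocortex (rough figure)
    doublings = math.log2(target / current)
    years = 2 * doublings
    print(f"about {doublings:.1f} doublings, roughly {years:.0f} years")  # ~2.7 doublings, ~5 years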

Another reason not to panic is that chips can get quite a bit wider. By that I mean that chips can have many more cores, running more or less independently, and each core can run wider instructions (affecting more bits). The only problem we face in this direction is that heat and power usage go up too… but chip makers are pretty good at powering down inactive circuits to keep consumption in check.
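
To illustrate what "wider" buys us, here is a minimal Python sketch that spreads an embarrassingly parallel task over several cores. The workload and the core count are made up for illustration; the point is simply that independent chunks need almost no coordination.

    from concurrent.futures import ProcessPoolExecutor

    def count_primes(bounds):
        """Count primes in [lo, hi) by trial division -- deliberately naive, CPU-bound work."""
        lo, hi = bounds
        count = 0
        for n in range(max(lo, 2), hi):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == "__main__":
        chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(4)]  # split the range over 4 cores
        with ProcessPoolExecutor(max_workers=4) as pool:
            total = sum(pool.map(count_primes, chunks))
        print(total)  # primes below 100,000: 9592

Syncing four independent workers is trivial; as one of the comments below points out, coordinating thousands of them is where it gets hard.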

We are also moving from two dimensions to three. Until a few years ago, most chips were flat. By thickening our chips, we multiply the computing power per unit area without having to lower the clock speed. One still needs to dissipate the heat somehow, but there is plenty of room for innovation without having to defeat the laws of physics.

And finally, we live in an increasingly networked world where computations can happen anywhere. Your mobile phone does not need to become ever more powerful as long as it can offload computations to the cloud. Remember my dream of having white-cell-size robots inside my body? These robots do not need to be fully autonomous: they can rely on each other and on computers located outside the body.

Still, how do we go smaller and faster?

I still think that the equivalent of Moore’s law will continue for many decades… however, we will have to proceed quite differently. If you think back to the introduction of trains at the start of the industrial revolution, we quickly saw faster and faster trains… until we hit limits. But transportation kept getting better and more sophisticated. Today, I can have pretty much anything delivered to my door, cheaply, within a day. I can order something from China and get it the same week. Soon, we will have robots doing the delivery. Of course, driving in traffic is hardly any faster than it was decades ago, but we have better tools to avoid it.

So, since we cannot scale down our CPU circuits much further, we will have to come up with molecular computers. In this manner, we could get the equivalent of a 1-nanometer transistor. In fact, we already do some molecular computing: George Church’s team at Harvard showed how to cram 700 TB in one gram of DNA. To put it in context: if we reduce the size of Intel’s latest processors by a factor of 20, we would have something the size of an amoeba. That is only a factor of 20 in linear scale, about 4 doublings! That does not sound insurmountable if we replace transistors with something else (maybe nucleotides). And at that point, you can literally put chips into a nanobot small enough to fit in your arteries.
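
The shrink-by-20 arithmetic, spelled out (using the approximate widths from the table below):

    import math

    chip_2015_um = 10_000   # CPU chip, about 1 cm across
    amoeba_um = 500         # width of an amoeba

    shrink = chip_2015_um / amoeba_um   # a 20x linear shrink
    doublings = math.log2(shrink)       # about 4.3, matching the "about 4 doublings" above
    print(f"{shrink:.0f}x shrink, about {doublings:.1f} doublings")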

Object                Physical width (approximate)
hydrogen atom         0.0001 micrometers
silicon atom          0.0002 micrometers
nucleotide (DNA)      0.0006 micrometers
transistor (2020s)    0.005 micrometers
transistor (2015)     0.02 micrometers
transistor (2000)     0.2 micrometers
red blood cell        8 micrometers
white blood cell      12 micrometers
neuron                100 micrometers
amoeba                500 micrometers
artery                1 000 micrometers
CPU chip (2015)       10 000 micrometers
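
A few ratios from the table, to make the earlier claims concrete. This is a rough sketch; the widths are order-of-magnitude figures, not precise measurements.

    widths_nm = {                    # approximate widths in nanometers, from the table above
        "silicon atom": 0.2,
        "transistor (2020s)": 5,
        "transistor (2015)": 20,
        "white blood cell": 12_000,
    }

    # A 2015-era transistor is roughly 100 silicon atoms across; a 5 nm one only about 25.
    print(round(widths_nm["transistor (2015)"] / widths_nm["silicon atom"]))   # 100
    print(round(widths_nm["transistor (2020s)"] / widths_nm["silicon atom"]))  # 25

    # How many 2015-era transistors fit across a white blood cell? About 600 per side,
    # or roughly 360,000 over its cross-section -- in the ballpark of the "million
    # transistors in any one cell" estimate near the top of the post.
    per_side = widths_nm["white blood cell"] / widths_nm["transistor (2015)"]
    print(round(per_side), round(per_side ** 2))  # 600 360000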

5 thoughts on “Going beyond our limitations”

  1. Let’s not forget that hardware miniaturization is only one of the ways to achieve greater performance. Sooner or later, more attention will have to be devoted to reducing software bloat and improving software optimization.

    Many programmers who started their careers with hardware offering tens or hundreds of KB of RAM and a few MHz of CPU clock speed are watching in horror as installation files for many popular applications reach multiple gigabytes. Looking at my Windows 7 system, I see installed applications ranging anywhere from 290 KB to 8.2 GB, with a median around 4.5 MB, which is better than I expected, but it does not mean that there is no problem.

  2. @Paul

    In many ways, the gains from software have matched the gains from hardware over time…

    It is true that business apps typically have terrible performance, but that’s because nobody cares… if you look at problems where performance matters… we often have software that is orders of magnitude faster than older software.

    This being said, you are entirely correct that there is a lot of room for optimization at the software level, and not just constant factors.

    Thankfully, tools (e.g., compilers, libraries) are improving all the time. One hopes that the nanobots that will live in our arteries in 2050 won’t be programmed in 2015-era Java using Eclipse.

  3. To me, one of the most promising non-conventional approaches to making classical computers faster is rapid single flux quantum (RSFQ) processors (https://en.wikipedia.org/wiki/Rapid_single_flux_quantum).

    RSFQ technology uses hardly any power (because the components are superconducting) and can easily run at clock frequencies on the order of 100 GHz. Of course, the downside is that RSFQs need cryogenic operating temperatures. However, this is not so much of an issue in large computing centers where the infrastructure is already a big investment. A bit further into the future, I could even imagine consumer-level RSFQ processors with some kind of miniature cooling unit to keep the operating temperature low enough.

    The operating principles of RSFQ processors have been proven, but there are still some engineering problems that need to be solved. However, it seems that the industry is not very interested in pursuing RSFQs, for whatever reason. By googling, I can find several roadmaps and technology assessments (one by the NSA) which all say that this technology should be doable within reasonable timescales.

  4. This is an interesting perspective. As the number of cores increases, the cores will become more and more independent. Clearly, if you have 4 cores, they can be synced rather easily. Syncing thousands of cores would be far more problematic. Therefore, programming will become increasingly parallel. We already see this trend with GPUs and distributed systems.

  5. @Anonymous

    Is the solution to use RSFQ processors? Maybe.

    But before we get too excited…

    A frequency of 100 GHz implies one pulse every 10 picoseconds. In 10 picoseconds, light travels 3 mm. Building chips 3 mm wide still implies cramming circuits in a very small space. That is fine for simple circuits, but for a generic processor core, that is too small given our current technology.
