Apple’s latest iPhone 7 has more peak computing power than most laptops. Apple pulled this off using an approach akin to ARM’s big.LITTLE, where part of the processor is used only when high performance is needed and sits idle otherwise.
That’s hardly the only example of a processor with parts that sit idle most of the time. For example, all recent desktop Intel processors include “Intel processor graphics,” which can process video, stand in for a graphics card and so forth. It takes up roughly half the silicon of the chip, yet in many PCs, where there is either no display or a discrete graphics card, most of that silicon goes unused most of the time.
If you stop to think about it, it is somewhat remarkable. Silicon processors have gotten so cheap that we can afford to leave much of the silicon unused.
In contrast, much of the progress in computing has come from miniaturization. Smaller transistors use less power, are cheaper to mass-produce and enable processors running at higher frequencies. Yet the transistors in the CPU of your computer are already only dozens of atoms in diameter. Intel has thousands of smart engineers, but none of them can make a silicon transistor out of less than one atom. So we are about to hit a wall… a physical wall. Some would argue that this wall is already upon us. We can build wider processors, processors with fancier instructions, processors with more cores… specialized processors… but we have a really hard time squeezing more conventional performance out of single cores.
You can expect companies like Intel to provide us with more efficient processors in the conventional manner (by miniaturizing silicon transistors) until 2020, and maybe, at the extreme limit, until 2025… but then it is game over. We may buy a few extra years by going beyond silicon… but nobody is talking yet about subatomic computing.
I should caution you against excessive pessimism. Currently, for $35, you can buy a Raspberry Pi 3 computer, which is probably closer than you imagine to the power of your laptop. In five years, the successor of the Raspberry Pi might still sell for $35 but be just as fast as the iPhone 7… and faster than most laptops sold today. This means that a $30 light bulb might have the computational power of a small server in today’s terms. So we are not about to run out of computing power… not yet…
Still… where is the next frontier?
We can build 3D processors, to squeeze more transistors into a smaller area… But this only helps you so much if each transistor still uses the same power. We can’t pump more and more power into processors.
You might argue that we can cool chips better or use more powerful batteries… but none of this helps us if we have to grow the energy usage exponentially. Granted, we might be able to heat our homes with computers, at least those of us living in cold regions… but who wants an iPhone that burns through your skin?
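The arithmetic behind this is worth spelling out. Here is a toy calculation (the power budget and per-transistor power are made-up but plausible figures, not measurements): under a fixed power budget, each doubling of the transistor count halves the fraction of transistors you can afford to keep active.

```python
# Toy model: fixed power budget, transistor count doubling every two
# years, power per active transistor no longer falling. All numbers
# below are illustrative assumptions, not measurements.

W_PER_TRANSISTOR = 50e-9  # assumed power per active transistor (watts)
BUDGET_W = 100.0          # a typical desktop CPU power budget

def chip_power_watts(transistors, active_fraction):
    """Power drawn when a given fraction of transistors is switching."""
    return transistors * active_fraction * W_PER_TRANSISTOR

transistors = 2e9  # roughly a 2016-era desktop CPU
for generation in range(5):
    # the largest fraction we can keep active without busting the budget
    max_active = min(BUDGET_W / (transistors * W_PER_TRANSISTOR), 1.0)
    print(f"{transistors:.1e} transistors -> at most {max_active:.1%} active")
    transistors *= 2  # another doubling
```

Each doubling halves the active fraction, so generation after generation, an ever larger share of the chip must stay dark.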
How does our brain work despite these limitations? Our neurons are large, and we have many of them… many more than we have transistors in any computer. The total computing power of our brain far exceeds that of the most powerful silicon processor ever made… How do we not burst into flames? The secret is that our neurons are not all firing at the same time, billions of times per second.
You might have heard that we only use 10% of our brain. You may then have been told that this is a myth… there is even a Wikipedia page about this “myth”. But it is not a myth. At any one time, you are probably using less than 1% of your brain:
The cost of a single spike is high, and this severely limits, possibly to fewer than 1%, the number of neurons that can be substantially active concurrently. The high cost of spikes requires the brain not only to use representational codes that rely on very few active neurons, but also to allocate its energy resources flexibly among cortical regions according to task demand. (Lennie, 2003)
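We can reproduce the spirit of this estimate with a quick back-of-the-envelope calculation. Every number below is an assumption (rounded figures of the kind found in the literature), but the conclusion is hard to escape: the energy budget allows only a tiny fraction of neurons to be substantially active at once.

```python
# Back-of-the-envelope, in the spirit of Lennie's estimate. All
# constants are rounded assumptions for illustration.

ATP_PER_SPIKE = 2.4e9    # ATP molecules consumed per action potential
JOULES_PER_ATP = 5e-20   # free energy from hydrolyzing one ATP (joules)
SIGNALING_POWER_W = 4.0  # share of the brain's ~20 W spent on spiking
NEURONS = 8.6e10         # neurons in a human brain, roughly
ACTIVE_RATE_HZ = 50.0    # firing rate of a "substantially active" neuron

joules_per_spike = ATP_PER_SPIKE * JOULES_PER_ATP
affordable_spikes_per_s = SIGNALING_POWER_W / joules_per_spike
active_neurons = affordable_spikes_per_s / ACTIVE_RATE_HZ
print(f"sustainable active fraction: {active_neurons / NEURONS:.1%}")
```

With these numbers, fewer than 1% of neurons can fire at a “busy” rate simultaneously, in line with Lennie’s figure.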
So, the truth is that you are not even using 10% of your brain… more like 1%… Your brain is in constant power-saving mode.
This, I should add, can make us optimistic about intelligence enhancement technologies. It seems entirely possible to force the brain into a higher level of activity, with the trade-off that it might use more energy and generate more heat. For our ancestors, energy was scarce and the weather could be torrid. We can afford to control our temperature, and we overeat.
But, even so, there is no way you could get half of your neurons firing simultaneously. Our biology could not sustain it. We would go into shock.
It stands to reason that our computers must follow the same pattern. We can build ever larger chips, with densely packed transistors… but most of these circuits must remain inactive most of the time… that’s what is called “dark silicon”. The term assumes that our technology is silicon-based, which may well change in the near future, so let us use the term “dark circuits” instead.
Pause to consider: it means that in the near future, you will buy a computer made of circuits that remain mostly inactive most of the time. In fact, we might imagine a law of the sort…
The percentage of dark circuits will double every two years in commodity computers.
That sounds a bit crazy. It means that one day, we might use only 1% of the circuits in our processors at any one time, not unlike our brain. Though it sounds crazy, we will see the first effects of this “law” with the rise of non-volatile memory. Your current computer relies on volatile memory made of circuits that must be constantly refreshed to retain data. As transistors stop shrinking, the energy usage of RAM per byte will plateau. Hence, the energy usage due to memory will start growing exponentially, assuming that the amount of memory in our systems keeps growing exponentially. Exponentially growing energy usage is not good. So we will switch, in part or in full, to non-volatile memory, and that is an example of “dark circuits”. It is often called “dark memory”.
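A toy model makes this concrete. The retention power per gigabyte below is an assumed, illustrative figure; the point is only that once it stops falling, total retention power doubles along with capacity, while non-volatile memory retains data at essentially zero standby power.

```python
# Toy model: volatile RAM draws power merely to retain its contents.
# Once the power per gigabyte stops shrinking (assumed constant below),
# retention power grows with capacity. Non-volatile memory does not.

RETENTION_W_PER_GB = 0.05  # assumed flat once transistor scaling stalls

capacity_gb = 16  # a typical 2016 system
for year in range(2016, 2026, 2):
    volatile_w = capacity_gb * RETENTION_W_PER_GB
    print(f"{year}: {capacity_gb:4d} GB -> {volatile_w:5.1f} W to retain data "
          "(non-volatile: ~0 W)")
    capacity_gb *= 2  # capacity keeps doubling every two years
```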
You may assume that memory systems in a computer do not use much energy, but by several estimates, they often account for half of the energy usage because moving data is expensive. If we are to have computers with gigantic memory capacities, we cannot keep moving most of the data most of the time.
In this hypothetical future, what might programming look like? You have lots and lots of fast memory. You have lots and lots of efficient circuits capable of various computations. But we must increasingly “budget” our memory transfers and accesses. Moving data takes energy and creates heat. Moreover, though you might have gigantic computational power, you cannot afford to keep it on for long, because you will either run out of energy or overheat your systems.
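What might such budgeting look like? Here is a toy energy model; the per-operation costs are assumptions in the spirit of published hardware estimates, not measurements, but they capture the imbalance: in a memory-bound loop, moving the data costs far more than computing on it.

```python
# Toy energy model for "budgeted" programming. The costs below are
# assumed, illustrative figures: fetching a byte from off-chip memory
# is taken to cost several times more than an arithmetic operation.

PJ_PER_FLOP = 20.0        # assumed cost of one arithmetic operation (pJ)
PJ_PER_DRAM_BYTE = 160.0  # assumed cost of moving one byte from DRAM (pJ)

def loop_energy_pj(n, flops_per_element, bytes_per_element):
    """Split a loop's energy into compute and data movement."""
    compute = n * flops_per_element * PJ_PER_FLOP
    movement = n * bytes_per_element * PJ_PER_DRAM_BYTE
    return compute, movement

# A streaming kernel: one operation per element, 16 bytes in and out.
compute, movement = loop_energy_pj(1_000_000, 1, 16)
print(f"compute: {compute / 1e6:.0f} uJ, movement: {movement / 1e6:.0f} uJ")
```

Under these assumptions, data movement accounts for over 99% of the loop’s energy, which is why an energy-conscious programmer would budget transfers first and arithmetic second.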
Programming might start to sound a lot like biology.
Credit: This blog post benefited from an email exchange with Nathan Kurz.
8 thoughts on “The rise of dark circuits”
You might want to check Circuits of the Mind by Leslie G. Valiant (https://www.amazon.com/Circuits-Mind-Leslie-G-Valiant/dp/019508926X). It asks more questions than it answers, but it is still a good read.
Small correction: “memory usage due to memory” should be “energy usage due to memory.”
Regarding avoiding the use of most memory most of the time, we already have a precedent for that: the slow storage hierarchy of tape, disk and RAM that dominated classic textbooks. Merge sort will return with a vengeance!
I first ran across the term “dark silicon” in Herb Sutter’s “Welcome to the Jungle” (https://herbsutter.com/welcome-to-the-jungle/). That vision also freaked me out, not because we won’t keep things powered all the time, but because it’s easiest to turn things on/off in groups, and as a programmer I have a hunch I’ll be responsible for managing those groups (it was also the first place I learned that GPUs have wildly different architecture than CPUs; guess who’s responsible for keeping the differences straight).
Referential proximity, functional programming and lazy evaluation.
Maybe it would have been wiser to focus on the lambda calculus instead of the Turing machine.
But such a future is still far from certain.
Man has evolved to be different from most other species. We endeavor to be better by building smarter and faster machines and through genetic engineering. And now the newer machines will be built mimicking biological architectures and processes 🙂
Definitely thought provoking.
I wonder if we could come up with a definition of a “gene” for these biologically patterned machines and then let such machines evolve. Looks like we’ll have to come up with an “evolution algorithm” as well.
I’ve often wondered why we don’t put more of libc into silicon.
For example, printf() is remarkably slow; a dedicated hardware implementation might take up tens of thousands of gates, but transistors are cheap, and it could be much faster and less power-hungry.
Aren’t SSE extensions, hardware cryptography and FPGA about doing exactly that? I guess printf is just not the perfect example (its spec. is not fixed enough, and were it ever performance critical, we can optimize it in software by compiling format strings).
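For what it is worth, the “compiling format strings” idea can be sketched in a few lines of Python (the function name and the tiny format subset are made up for illustration): parse the format once, then reuse the compiled pieces on every call instead of re-parsing.

```python
import re

def compile_format(fmt):
    """Pre-parse a printf-style format (only %s and %d here) once."""
    parts = re.split(r"(%[sd])", fmt)  # keep the conversions as tokens

    def emit(*args):
        it = iter(args)
        # substitute arguments for conversions; copy static text verbatim
        return "".join(str(next(it)) if p in ("%s", "%d") else p
                       for p in parts)

    return emit

line = compile_format("value %s = %d\n")  # parsing cost paid once
print(line("x", 42), end="")
```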