Schools teach us theory so that we can be more productive workers. You learn grammar so that you can be a better writer. You learn about computer science, so that you may be a better programmer. You learn about electromagnetism so you can do electronics. You learn about thermodynamics so you can design engines.
It is tempting, therefore, to believe that theory precedes practice. We invented grammar first, and then the written language. We came up with thermodynamics and then invented engines. We discovered electromagnetism and then we could build electric circuits. We studied computer science and then we started programming.
But this is backward. Watt was certainly a knowledgeable technician when he designed the engine. Flowers was a skilled electrical engineer when he computerized the post office in the 1930s. However, clean principles only emerge after the fact.
There is a big difference between putting theory in practice, and trying something out while lacking the theory. My claim is that most inventions are of the latter sort, even if academics much prefer to imagine that the former plays a central role.
I believe that Torvalds put it best:
Don’t ever make the mistake [of thinking] that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That’s giving your intelligence much too much credit. (Linus Torvalds)
As you invent something new, the existing theory is lacking. The principles do not quite apply. It is only after the fact that someone smart can finally derive clear principles.
The real world is a complicated place. Our minds are limited to relatively simple models. We evolved as tinkerers. Sure, we are smart tinkerers in that we can trade ideas and designs, but our fundamental R&D strategy remains tinkering.
That is why we invented the written language long before scholars wrote grammars. That is why Watt invented the engine long before scientists conceived thermodynamics. We built electric circuits before scientists founded electromagnetism. We hacked computers together and then founded computer science. We toyed with uranium (and got sick) long before we could build an atomic bomb.
There is an important practical implication to my claim: if you want to invent something new, you need to be willing to go where our theory no longer applies. Sadly, this means you will get it wrong at times through no fault of your own.
Further reading: Problem solvers and theory builders by John D. Cook, quoting from Mathematics without Apologies by Michael Harris.
Update: Yoshua Bengio (U. Montréal) wrote on Facebook:
In my field of research (machine learning, and especially deep learning & neural nets), [theory lags practice] is a truth I have experienced first-hand.
13 thoughts on “Theory lags practice”
In Representing and Intervening, Ian Hacking argues persuasively that theory and practice take turns leading. He gives numerous historical examples. Many of the examples are from physics, but I believe a careful analysis will show this turn-taking in computer science. For example, I believe Turing discovered the idea of a universal computer before any actual universal computer was built.
No idea is 100% original. If theory and practice take turns leading, then it follows that every new theory can be traced back to practical precedents, and every new practice can be traced back to theoretical precedents. Hacking is very persuasive. I highly recommend his book.
Turing’s concept of a universal Turing machine (a Turing machine that can compute anything that any other Turing machine can compute) is certainly a big step forward in precision and depth from the vague idea that human computers can be automated.
I think it’s worth $25, easily. (Disclaimer: Hacking was my PhD advisor.)
I have always viewed modern electronic computers as the automation of human computers, and these predate Turing.
Thanks Peter. Hacking’s ebook is $25. I find this a bit much. Of course, I can get it through the university (in hard copy), but then there is a time tax that comes in and the requirement to use paper. I will see if I can unearth an article that he has written on this topic.
My counterpoint is that we were already “programming” human computers with instructions that were Turing complete before Turing was born.
I was always told that early on, when Turing was thinking about computers, he was thinking about human computers. But even if he was thinking about electronic computers, he had to know about human computers, either directly or indirectly.
We can also trace back efforts, dating to the difference engine, to build ever more complete and automated mechanical computers. People were acutely aware, before Turing, that you could build mechanical computers that could do some of what human computers could do, but not quite everything. There was clearly a quest to build more complete mechanical computers. Going back to Babbage, there had been attempts to build more complete engines.
Granted, Turing’s contribution (founded on the work by Gödel and Hilbert) is remarkable in that it laid out nice principles, resolving the matters once and for all.
My view is that this is not very different from Watt and thermodynamics. We first had the engine and then thermodynamics. We first had computers and then we got computer science.
Granted, I could be entirely wrong, and maybe theory drives practice as often as the reverse. I will investigate… but the Turing example does not convince me.
“We toyed with uranium (and got sick) long before we could build an atomic bomb.”
I don’t find this example fitting. Actually, we first discovered quantum mechanics and relativity (theory) and then used them to concoct atomic bombs (practice).
Watt did not invent the steam engine. The idea had been known for hundreds of years. He built the first working and usable engine. Basic theory was created prior to this period. Many people had created more or less successful engines before him.
Today, an approach based on pure trial and error is almost impossible. Even in the days of Edison and Tesla, it was criticized as ineffective.
In my opinion, success comes from an appropriate feedback loop between theory and practice.
A better (although closely related) example of theory leading practice would be Church’s lambda calculus. The theory directly led to McCarthy’s implementation of LISP.
We spent decades toying with radioactive material without having a clue about what was going on. Theory followed much later.
Like some other commenters, I generally find the analogy of a seesaw, going back and forth between theory and practice, more compelling than the story of practice consistently leading with theory coming along behind. Of course, how compelling something is to me isn’t necessarily worth much.
One specific theory-to-practice pattern is when formal models allow some kind of extrapolation to an idea that no one had stumbled on yet. One small example that I like is Dan Grossman’s essay “The Transactional Memory / Garbage Collection Analogy”.
The discussion of theory vs. practice made me think of the excellent book “Antifragile: Things That Gain From Disorder” by Nassim Nicholas Taleb. In it, he mainly argues that theory follows practice.
In my review of the book, I summarized it like this:
There is also a section on universities and technological development. Do universities cause technical progress? Not according to Taleb. There is a tension between education, which loves order, and innovation, which loves disorder. A lot of technical innovations come from luck, tinkering, and trial and error. Often, theory comes after; but when a discovery or innovation is described afterwards, it seems more planned and ordered than it really was.
Regarding the Linus quote, I also like this (from David Kelly):
“Enlightened trial and error outperforms the planning of flawless intellects.”