Biology and computing are more alike than you think…

Biological systems are seemingly of infinite complexity. We still don’t know what a heart attack is, not really. Despite billions spent, we still don’t know what Alzheimer’s is. We don’t know what aging is, not really. We don’t know how genes work; we can’t reprogram you at will.

We study biology from the top down. We see a tree growing and we try to figure out how it does it.

Computing is built from the bottom up. We design chips. We assemble them. We write the software. And the whole thing comes together.

If your software and hardware are simple enough, then one person can fully understand a computing system. However, we have long run past this stage.

A single processor can include billions of transistors, organized in circuits that implement clever algorithms. Then we have layers upon layers of software abstractions.

It is hard to compare the complexity of a living being with that of a computer, but the human genome can easily fit on a cheap USB key. In fact, there are companies that will sequence and ship your own genome on a USB key for less money than you’d think.
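To see why the claim is plausible, here is a back-of-the-envelope calculation (my own arithmetic, not from the post; the 3.2 billion figure is the commonly cited approximate size of the human genome). With four possible letters (A, C, G, T), each base needs only 2 bits:

```python
# Back-of-the-envelope: size of an uncompressed human genome.
base_pairs = 3_200_000_000   # roughly 3.2 billion base pairs
bits_per_base = 2            # 4 symbols (A, C, G, T) -> 2 bits each

total_bytes = base_pairs * bits_per_base // 8
print(total_bytes / 1e9, "GB")  # about 0.8 GB
```

At well under a gigabyte uncompressed, the genome indeed fits comfortably on even a small USB key.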

You might think that the information contained in our genome is highly compressed data, but the evidence points in the other direction. Most of our DNA is seemingly unnecessary.

Of course, we know a lot more about how an iPhone works than about how a human being works. It has to be so.

Nevertheless, the lines are blurring.

Recently, a piece of software, AlphaGo, defeated the best human player in the game of Go. The software and the underlying systems were all designed by people… but the resulting machine plays a game of Go at a much higher level than any of the designers of AlphaGo could. This means that, in a very real sense, the designers of AlphaGo don’t fully understand their creation. It escapes them. It will do things that they could not predict.

Increasingly, computing will become less about pure engineering, pure science and straight programming… and more about taming complexity.

Meanwhile, with the growth of synthetic biology and the application of computing to medicine, the work of biologists will become closer to that of the software engineers of today. Instead of letting the body heal itself with help, we will increasingly be reprogramming it.

At some point in this century, the difference between biology and computing will fade almost entirely.

10 thoughts on “Biology and computing are more alike than you think…”

  1. Taming complexity or not understanding it.

Beyond a certain level, complexity cannot be absorbed by a human mind, and does it really matter?

If you had to gamble your life on a game of Go played by the next AlphaGo vs. the next Go world champion, would you risk it?

And what is the impact on the human player’s decision if you tell him that the complexity of AlphaGo’s play vastly exceeds that of its designers? He could interpret that as less certainty or as more
    certainty. It works both ways in our understanding of life.

Secondly, chess will likely be fully mapped one day by working backward from ending positions to opening ones.
    But even if you possessed the entire game tree of chess on a hard drive, you could not use it to understand the game of chess, its subtleties and intricacies.
    Even with a small number of pieces, say 4, there are no patterns discernible to a human mind that would reproduce the win in competition, because the solution is too vast and concrete.
    So imagine with 16 pieces on the board…

You can compare it, in a way, with the amount of information we get in our daily lives, which does not help us but instead drowns us in inefficiency.

Using analysed data that we do not have a grasp on is often completely useless for making a decision.

Thirdly, even with simple, sensible tools like Excel and PowerPoint, we are not sure that human organisations work better than they did with the old tools of the managers of the 1950s: paper and pencil. Since the end of WWII, the global ROI of the world’s main firms has been inexorably dropping, and no one can reverse the trend.

Many young chess players have weak minds and ideas because they rely too much on computers during their training. Every grandmaster knows that the bulk of training analysis should be done at home on one’s own, and only then checked on a computer. Otherwise you do not build the strength necessary to be competitive.

So, ok, AlphaGo is cool; it could certainly relieve human minds of a vast number of tedious tasks, but will they grow better for it?

I mean, just how strong can we be in our lives and in our strategic decisions if we have not experienced even one ounce of suffering or hard work first?

  2. I agree with your analogy between biology and computing (I’ve been calling animals “robots” for the last few years, and I’m in awe of the complexity of their software).

    A few corrections, though:

    – junk DNA is not “unnecessary genes”; it’s DNA that’s outside of genes (only about 1.5% of the DNA is in genes – that is, code that ultimately generates proteins)

    – just as it was the case with “vestigial organs”, of which there were about a hundred and now we know that all of them are in fact useful, it appears that the “junk DNA” is also anything but; see http://www.wondersandmarvels.com/2016/03/our-genetic-dark-matter.html

    1. junk DNA is not “unnecessary genes”

      Point taken. Thanks.

      “junk DNA” is also anything but (…)

The larger point is this… how much information is actually encoded in our DNA? If some of the junk DNA has an effect… that is fine.

The point is that, in terms of bits of information, you do not even need all of an individual’s genome to describe their genetic background. And even the full genome fits on a USB key.

      So biology is not so far above computing…

  3. “At some point in this century, the difference between biology and computing will fade almost entirely.”

    That will happen when we can make backups of people. Imagine a surgeon using “Ctrl + z”. It would be awesome if we could get that in this century. Pursuing the low hanging fruit and planning only for the next 4 or 5 years (in the best cases) will more likely lead to economic collapse, IMHO.

And then we have the gap between research and normal use. Self-driving cars are already “real”, but when will sales of self-driving cars surpass those of traditional ones? AFAIK, that might never happen.

And then we have the gap between research and normal use. Self-driving cars are already “real”, but when will sales of self-driving cars surpass those of traditional ones? AFAIK, that might never happen.

      I am sure many people thought 20 years ago that the US would never have a black president.

  4. “At some point in this century, the difference between biology and computing will fade almost entirely.”

    I’m really curious what gives you confidence in this statement? A couple examples to give context to my question:

    According to Wikipedia the spherical shape of Earth was first conclusively established in the 3rd century BC. Surely at that time there were people who at least strongly speculated about circumnavigating the globe. Yet it wasn’t until the 16th century that it happened. If you asked during antiquity how long it would take for that feat to be accomplished, do you think anyone could have reasonably guessed nearly 2,000 years?

    A more modern example. I wonder how many years you would have to go back before you would find a majority of technologically literate people predicting that flying cars would be a practical reality before self-driving cars. I bet not that many.

    My general point is the banal one that accurately predicting technological developments is hard. I believe that something like the bio/computer convergence you mention is likely to happen at some point, but I wouldn’t want to bet real money on whether it’s going to be closer to 50 years or 1,000.

    Any concrete reason for your timeline?

    1. My general point is the banal one that accurately predicting technological developments is hard.

      If you read this blog, you’ll know that I make no claim to be an exception in this regard. The “predictions” that I make (I have got a whole page of them http://lemire.me/blog/predictions/) are not meant to actually “predict” the future. My brain has no special ability.

      The purpose is to actually make us think.

      I expect most of my predictions are not going to come true. So what? The process of making them is what matters.

When I read other people’s predictions, I learn things. I think “wow… could this really happen… let me see…”
