Over ten years ago, XML was all the rage in information technology. XML was what the cool kids used to store, exchange and process data. By 2005, all the major computer science conferences featured papers on XML technology. Today, XML might safely be considered a legacy technology…

In any case, back in 2005, I decided to offer a course on XML that I still offer today. I got criticized a lot for this choice of topic by other professors. Some felt that the subject was too technical. Others felt that it was too easy.

Though the course is technical at times, I think it is fair to say that very few students have found it an “easy” course. And here I come to my first realization:

1. Technical depth is hard.

Give me any technical topic… how to build a compiler, how to process XML, how to design an application in JavaScript… and I can make a very hard course out of it. In fact, that is a common comment from my students: “I thought the course would be easy… I was wrong.”

And it is not just hard for the students… it is hard for the teacher too. Every year, some student comes up with an example that challenges my understanding. It is a bit like playing Chess… you can play for many years, and still learn new tricks.

A pleasant realization is that despite how hard the course ended up being, most students rate it very favourably. That is my second realization…

2. Many students enjoy technical topics.

You would not think that this is true given how few technical courses you find on campus. And I must say that I am slightly biased against technical courses myself… they sound boring… But I think that the reason students end up finding them interesting is that they get to solve problems that they feel are relevant. I have found that the ability to work hard on a problem depends very much on how relevant it seems to the student.

I also find practical topics more satisfying as a teacher because I have an easier time coming up with fun and useful examples. I do not struggle to make the course feel relevant to the students.

I was heavily criticized by academics when I first launched the course for mostly sticking with the core XML technologies (XSLT 1.0, XPath 1.0, DOM, DTD). This turned out to be a wise choice. In fact, to the extent that my course has evolved over ten years, it has gone deeper into the core topics rather than covering more ground. Many of the topics that academics felt were important ten years ago have never picked up steam.
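
To give a flavour of what these core technologies look like in practice, here is a minimal sketch (my own illustration, not course material) using Java’s built-in DOM and XPath 1.0 APIs; the document and the query are invented for the example:

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;

    public class CoreXmlDemo {
        public static void main(String[] args) throws Exception {
            // A made-up document, just to exercise the core APIs (DOM + XPath 1.0).
            String xml = "<books><book year=\"2005\"><title>Learning XML</title></book></books>";

            // DOM: parse the document into a tree.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

            // XPath 1.0: select the title of the book published in 2005.
            String title = XPathFactory.newInstance().newXPath()
                    .evaluate("/books/book[@year='2005']/title", doc);
            System.out.println(title); // prints: Learning XML
        }
    }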

3. Academics view the future as ever more complex whereas practice often prunes unnecessary complexities.

Most technical subjects follow a Pareto law: 80% of the applications require the use of only 20% of the specifications. Thus, when teaching a technical topic, you can safely focus on the 20% that makes up the core of the subject. And that is a good thing because it allows you to dig deeply.

If you go around and check resumes, you will find plenty of people who list XML as a skill. Typically, this means that they are familiar with most of the basics. However, unless they have taken the time to specialize in the topic, their understanding is probably quite shallow, as any interview may reveal.

My favorite example is CSS. CSS is used on most web sites to format the HTML. However, 99% of the users of CSS treat it as a voodoo technology: use trial and error until the CSS does what you want. With complicated applications, this becomes problematic. Taking the time to really understand how CSS works can make a big difference.

Another example is performance… again, many people try to improve processing speed through trial and error… this works well in simple cases, but once the problem becomes large, it fails to scale up. That is why companies pay the big bucks to engineers with a deep understanding of the technology.

In fact, I suspect that professional status depends a lot more on how deep your understanding is than on how broad it is. Anyone can pick up ten books and skim them… but really understanding what is going on is much more difficult. For one thing, it is often not quite spelled out in books… real understanding often requires real practice. So challenging students to go deep is probably the best way to help them.

So I think that the best thing you can do for students is to encourage them to go deep in the topic.

4. With technical topics, depth is better than breadth.

One objection to this strategy is that companies like Google openly favour “generalists” (1, 2), but I do not think it contradicts my view: I hope to encourage my students to learn the basics really well rather than to get bogged down with many specific technologies. But even if my view does contradict Google’s recruiting standards, there is still a practical aspect: you can collect lots of expertise in many things, but chances are that most of this expertise will be obsolete in a few years.

In my case, I was lucky: XML remains an important piece of technology in 2015. But that is not entirely a matter of luck: by the time I decided to make a course out of it, XML was already deeply integrated in databases, web applications and so on. Moreover, it benefited from its similarity to HTML. So I felt confident it would still be around in ten years. Now, in 2015, I can confidently say that XML is here for the long haul: all your ebooks are in XML, all your Office documents are in XML…

However, how we view XML has changed a lot. Back in 2005, XML was a standard data interchange format. There was also a huge industry around it. Much of that industry has collapsed. In many ways, support for XML is stagnating. We still have pesky configuration files in XML, but that is no longer automatically considered a good thing. We prefer to exchange data using JSON, a much simpler format.

When I started out, some students blamed me for not covering specific XML technologies… like particular XML libraries. Professors wanted me to cover esoteric web services. Most of what I was asked to cover years ago has become obsolete.

More critically, I was forced to revisit the material offered to the students many times. But that keeps the course fun for me: I like learning about new technologies… so when JSON came about, I enjoyed having to learn about it. I probably went deeper into the topic than most.

5. If you are a technology enthusiast, keeping a technical course up-to-date can be fun.

Another point of contention with technical courses is that they are not “the real thing”. College is supposed to teach you the grand ideas… and everything else is just straightforward application. So if you know about data structures and Turing machines, learning to write a spreadsheet in XSLT is just monkey work.

But I have found that students quite easily cope with more theory once they have practical experience. For example, it is quite easy to discuss Turing-completeness once you have covered XSLT, XPath, CSS… and then you can have fun pointing out that most of these do end up being Turing-complete (albeit sometimes in a contrived way).

6. Going from a deep technical knowledge to theory is relatively easy for many students.

Though I am probably biased, I find that it is a lot harder to take students from a theoretical understanding to a practical one… than to take someone with practical skills and teach them the theory. My instinct is that most people can more easily acquire in-depth practical knowledge through practice (since the content is relevant) and can then build on this knowledge to acquire the theory.

To put it another way, it is probably easier to first teach someone how to build an engine and then teach thermodynamics, than to do it in reverse. It helps that it is the natural order: we first built engines and then we came up with thermodynamics.

To put it differently, a good example, well understood, is worth a hundred theorems. And that is really the core lesson I have learned. Teaching a technical topic is mostly about presenting elaborate and relevant examples from which students can infer more general ideas.

So my next course is going to be a deeply technical course about advanced programming techniques. I am not going to shy away from getting students to study technical programming techniques. Yes, the course will pay lip service to the big ideas computer science is supposed to be teaching… but the meat of the course will be technical examples and practice.

Cegłowski, a web designer, wrote a beautiful essay called “Web Design: The First 100 Years”. His essay starts with a review of the aerospace industry…

  • Back in 1965, it looked like aerospace was the future. Each successive plane went faster than the previous one. At that rate, by the early twenty-first century, we would be traveling across the solar system.
  • Instead, we hit physical limitations. Faster-than-sound planes cost a lot to run and they were noisy. The market went instead with slower, shorter, safer and cheaper flights. In the last few decades, the real cost of air fare went down by half despite ever-rising fuel costs and taxes. There are also about twice as many flights as there were in the 1970s. We do not fly fast, but there are over a million people in the air at any one time.

This outcome was a bit surprising to many of us. As a kid, I would have expected the space industry to be quite large by 2015 and I would have expected really fast planes.

As a trade-off, we did get something that was largely unexpected: functional telepathy (a.k.a. the Internet). The Internet has connected us to each other in ways that no science fiction author from the 1960s could imagine. In fact, I cannot stand old-school science fiction because it typically depicts a world where we spend a great deal of time flying left and right, but where there is no distributed multimedia computer network. Even old Star Trek episodes are annoying: the engineer cannot be bothered to snap a picture of a defect and send it to the captain… he has to get the captain to come and see…

In any case, Cegłowski’s thesis is that information technology is following a similar path. We got faster and faster computers… but nobody cares about that anymore.

I think Cegłowski is right in many ways. I do much of my serious work on a laptop that runs at 2.2 GHz. I could easily get a laptop that runs 50% hotter. I use an iPad for much of my informal computer needs, and that is considerably less powerful than even my underpowered laptop. My web server, where this blog is hosted, runs on an old AMD CPU. In many cases, CPU cycles have gotten so plentiful that we have more than enough.

This mirrors air travel: planes are so fast today that the time required to get to the airport and pass the security checks is a significant fraction of your travel time. It takes me 7 hours to get to Europe, but I need to be at the airport an hour and a half before departure. Because it can easily take me 30 minutes to get to the airport… only a bit more than two thirds of the travel time is spent in the plane. So I would not pay a lot more to fly twice as fast.

If you travel routinely from New York City to Tokyo, you are probably missing supersonic flights. Though you make up for it with in-flight Wifi, don’t you? Similarly, there are still people who need raw power. Some hardcore gamers… people doing numerical analysis… but most people do not see computing speed as a critical feature… not any more than we view airplane speed as critical… it is a “nice to have” feature… easily overshadowed by more important considerations.

People want cheap, power-conscious, maintenance-free computing.

And the path forward seems clear. In fact, we are already there. We carry on ourselves low-powered computers. Smart jewelry (like watches) has a bright future. PCs are still around, sadly, but not for much longer…

We still have lots of accumulating computational power, but it is located in the cloud. And the cloud is not one super powerful processor… instead, the cloud is made of millions of shared processors, none of which is impressive on its own.

Cegłowski then falls into a conservative stance: “The Internet of 2060 is going to look recognizably the same as the Internet today. Unless we screw it up.” To be fair, he makes a living by selling a bookmark service for people who will not trust Google or Microsoft with their information. His whole business model depends on people remaining conservative. He believes that life has gotten worse for most people in the last 30 years. He writes that “we’re running into physical and economic barriers that aren’t worth crossing”.

Cegłowski would keep the technology as it is… “Why do we need to obsess on artificial intelligence, when we’re wasting so much natural intelligence?”

Technology is fine today. Let us work hard to keep it as it is.

I could not disagree more. We urgently need to improve our technology. The web as it stands today won’t be good enough in 30 years. Sadly, this means that services like the bookmarking service offered by Cegłowski will look antiquated. We have no choice. We need to move forward.

  • Though Cegłowski rightfully complains that we are wasting a lot of natural human potential right now… he fails to see that this is very much a technology problem. Schooling is a technology. And our schooling technology is stuck in the 1920s… And no, we are probably not going to fix it with animated HTML sites featuring multiple-choice questions. I routinely meet people who graduated from college but still cannot write a decent report or a simple software application. A large fraction of kids today do not come close to completing high school… and among those who do, most lack the skills that will be needed to get decent first-world jobs.

    Some star teachers manage to get kids to succeed despite the odds, but this approach currently does not scale: we do not know how to reliably produce great teachers, and to keep them great.

    I am not claiming that AI will fix schools… I am merely pointing out that there is a massive gap between what we need to do and what we do today.

    It is fairly easy for a computer to track the basic tests that a student fails. For example, we could easily keep track of all the words a kid can and cannot spell correctly (assuming we still care about spelling in 2015); a trivial sketch of such tracking follows this list. Such tracking should be par for the course… but it is not. We could rather easily compare instruction as it happens, and optimize it, the same way we optimize airplane routes… but, outside some narrow academic projects, none of that is happening.

    Smart kids are bored all day long… while weak kids are stressed out. It is such a waste!

  • In 2060, a quarter or more of the population will be over 65 in many countries. Many of these people will be overweight, sick, frail and in decline. I am not particularly anxious about us spending 90% of our GDP on health care, but do we really want to have half our population burdened with elderly care?

    We need technology so that older people can remain maximally healthy and productive till the very end. This means better medical technology, but also better computing. We need exoskeletons. We need real-time health monitoring so that cancers can be caught and stopped early.

    Already, computers can do better than radiologists in many cases, yet we still rely on these expensive human beings. We could easily collect vital signs from all of us and use machine learning to identify problems before they happen, but, instead, we rely on random doctor appointments.

    Where are we today? Well, in Montreal, we do not even have electronic medical records. Though some hospitals have electronic systems, sharing information is still done on paper. We have “brain games” that pretend to keep your mind sharp as you get older (I suspect that they are a waste of time), but nothing to support failing memories. Routinely, older people suffering from dementia get lost, and we have no inexpensive way to locate them. We have not even begun to investigate how wearable computing can keep us healthy and productive.
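
Coming back to the point about tracking spelling: the following is a deliberately trivial sketch, in Java, of the kind of bookkeeping I have in mind. The class and its data are invented for illustration; the point is only that this is a few lines of code, not a research problem.

    import java.util.HashMap;
    import java.util.Map;

    // A toy tracker: record, per word, how often a student spells it
    // correctly or incorrectly, and flag the words needing practice.
    public class SpellingTracker {
        private final Map<String, int[]> counts = new HashMap<>(); // word -> {correct, wrong}

        public void record(String word, boolean correct) {
            int[] c = counts.computeIfAbsent(word, w -> new int[2]);
            c[correct ? 0 : 1]++;
        }

        public boolean needsPractice(String word) {
            int[] c = counts.getOrDefault(word, new int[2]);
            return c[1] > c[0]; // more mistakes than successes
        }

        public static void main(String[] args) {
            SpellingTracker tracker = new SpellingTracker();
            tracker.record("necessary", false);
            tracker.record("necessary", false);
            tracker.record("because", true);
            System.out.println(tracker.needsPractice("necessary")); // true
            System.out.println(tracker.needsPractice("because"));   // false
        }
    }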

Fifteen years ago, people dreamed of software agents that would act on our behalf on the web… track the information we need, find the new medication that can help us, automagically connect us to clients… Instead, we got Twitter, Wikipedia, Uber and Amazon. Close, but no cigar.

Cegłowski writes, with a mocking tone:

If you think your job is to FIX THE WORLD WITH SOFTWARE, then the web is just the very beginning. There’s a lot of work left to do. Really you’re going to need sensors in every house, and it will help if everyone looks through special goggles, and if every refrigerator can talk to the Internet and confess its contents.

I do not think that software, by itself, will fix the world… but the reason software is put forward is that it is cheap. New planes are noisy. Software to optimize how we use planes costs much less. Biomedical technology to reverse aging is expensive and risky. Software to keep elderly workers productive is going to be much cheaper. Training great teachers is hard and expensive… building software to help people acquire demanding skills is going to be much cheaper over time. Simply put: software is cheaper than either human beings or hardware once it is made.

Eventually, we are going to need pills to make learning faster… we are going to need better treatments so that 90-year-old engineers can be as sharp as younger programmers… we are going to need planes that fly on half the fuel they do now… we are going to need batteries with ten times the capacity they have right now…

But our job, as software people, is to maximize what we can do with the hardware we have… and there is still a lot we can do beyond the current web. We are not even within a factor of ten of what is possible. Sure, software in the future won’t magically run 1000x faster than it does today, so what?

The nerds online are (slightly) panicking: it looks like Moore’s law is coming to an end. Moore’s law is the observation that microprocessors roughly double in power every two years. The actual statement of the law, decades ago, had to do with the number of transistors… and there are endless debates about what the law should be exactly… but let us set semantics aside to examine the core issue.

When computers were first invented, one might have thought that to make a computer more powerful, one would make it bigger. If you open up your latest laptop, you might notice that the CPU is actually quite small. The latest Intel CPUs (Broadwell) have a die size of 82 mm². Basically, it is 1 cm by 1 cm for well over 1 billion transistors. Each transistor is only a few nanometers wide. A nanometer is an astonishingly small unit of measure. Our white cells are micrometers wide… this means that you could cram maybe a million transistors in any one cell.

Why are chips getting smaller? If you think about the fact that the speed of light is a fundamental limit, and you want information to go from any one part of the chip to any other part of the chip in one clock cycle, then the smaller the chip, the shorter the clock cycle can be. Hence, denser chips can run at a faster clock rate. They can also use less power.
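
As a rough back-of-the-envelope check of this argument (my own numbers, nothing precise about actual chip design):

    public class LightPerCycle {
        public static void main(String[] args) {
            double speedOfLight = 3.0e8; // metres per second, in vacuum
            double clockRate = 3.0e9;    // a typical 3 GHz clock
            // Light covers about 0.1 m (10 cm) per clock cycle at 3 GHz.
            // On-chip signals travel noticeably slower than light, so a chip
            // roughly 1 cm across is a comfortable size if a signal must
            // cross it within a single cycle.
            System.out.printf("%.3f m per cycle%n", speedOfLight / clockRate);
        }
    }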

We can build chips on a 7-nanometer scale in laboratories. That is pretty good. The Pentium 4 in 2000 was built on a 180-nanometer scale. That is 25 times better. But the Pentium 4 was in production back in 2000 whereas the 7-nanometer chips are in laboratories. And 25 times better represents only 4 or 5 doublings… in 15 years. That is quite a bit short of the 7 doublings Moore’s law would predict.
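
You can verify the count with simple arithmetic on the numbers quoted above, counting doublings of the linear feature size:

    public class MooreCheck {
        public static void main(String[] args) {
            double pentium4Scale = 180.0; // nanometres, in production in 2000
            double labScale = 7.0;        // nanometres, in laboratories in 2015
            double improvement = pentium4Scale / labScale;                // ~25.7x
            double actualDoublings = Math.log(improvement) / Math.log(2); // ~4.7
            double predictedDoublings = 15.0 / 2.0; // one doubling every two years
            System.out.printf("actual: %.1f doublings, predicted: %.1f%n",
                    actualDoublings, predictedDoublings);
        }
    }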

So scaling down transistors is becoming difficult using our current technology.

This is to be expected. In fact, the size of transistors cannot go down forever. The atoms we are using are 0.2 nanometers wide. So a 7-nanometer transistor is only about 35 atoms wide. Intuition should tell you that we probably will never make transistors 1 nanometer wide. I do not know where the exact limit lies, but we are getting close.

Yet we need to go smaller. We should not dismiss the importance of this challenge. We want future engineers to build robots no larger than a white cell that can go into our bodies and repair damage. We want paper-thin watches that have the power of our current desktops.

In the short term, however, unless you are a processor nerd, there is no reason to panic. For one thing, the processors near you keep getting more and more transistors. Remember that the Pentium 4 had about 50 million transistors. Your GPU from 2000 probably had a similar transistor count. My current tablet has 3 billion transistors: 30 times better. Nerds will point out that my tablet is nowhere near 30 times as powerful as a Pentium 4 PC, but, again, no reason to panic.

As a reference point, you have about 20 billion neurons in your neocortex. Apple should have no problem matching this number in terms of transistors in a few years. No particular breakthrough is needed, no expensive R&D. (Of course, having 20 billion transistors won’t make your phone as smart as you, but that is another story.)

Processor    Year   Billions of transistors
Apple A6     2012   0.5
Apple A7     2013   1
Apple A8     2014   2
Apple A8X    2014   3

Another reason not to panic is that chips can get quite a bit wider. By that I mean that chips can have many more cores, running more or less independently, and each core can run wider instructions (affecting more bits). The only problem we face in this direction is that heat and power usage go up too… but chip makers are pretty good at scaling down inactive circuits and preserving power.

We are also moving from two dimensions to three. Most chips a few years ago were flat. By thickening our chips, we multiply the computing power per unit area without having to lower the clock speed. One still needs to dissipate heat somehow, but there is plenty of room for innovation without having to defeat the laws of physics.

And finally, we live in an increasingly networked world where computations can happen anywhere. Your mobile phone does not need to become ever more powerful as long as it can offload the computations to the cloud. Remember my dream of having white-cell-size robots inside my body? These robots do not need to be fully autonomous: they can rely on each other and on computers located outside the body.

Still, how do we go smaller and faster?

I still think that the equivalent of Moore’s law will continue for many decades… however, we will have to proceed quite differently. If you think back to the introduction of trains at the start of the industrial revolution, we quickly saw faster and faster trains… until we hit limits. But transportation kept on getting better and more sophisticated. Today, I can have pretty much anything delivered to my door, cheaply, within a day. I can order something from China and get it the same week. Soon, we will have robots doing the delivery. Of course, driving in traffic is hardly any faster than it was decades ago, but we have better tools to avoid it.

So, since we cannot scale down our CPU circuits much further, we will have to come up with molecular computers. In this manner, we could get the equivalent of a 1-nanometer transistor. In fact, we already do some molecular computing: George Church’s team at Harvard showed how to cram 700 TB of data in one gram of DNA. To put it in context: if we reduced the size of Intel’s latest processors by a factor of 20, we would have something the size of an amoeba. That is only about 4 doublings of the (linear) density! That does not sound insurmountable if we replace transistors by something else (maybe nucleotides). And at that point, you can literally put chips into a nanobot small enough to fit in your arteries.

Object                Physical width (approximate)
hydrogen atom         0.0001 micrometers
silicon atom          0.0002 micrometers
nucleotides (DNA)     0.0006 micrometers
transistor (2020s)    0.005 micrometers
transistor (2015)     0.02 micrometers
transistor (2000)     0.2 micrometers
red blood cell        8 micrometers
white blood cell      12 micrometers
neuron                100 micrometers
amoeba                500 micrometers
arteries              1 000 micrometers
CPU chip (2015)       10 000 micrometers

We do not have nanobots yet to repair our arteries and neurons. This will come, but we might have to wait until 2050 or later. We do have stem cells, the next best thing, and they are commonly used to fight cancer and improve your skin… but it will be a long time before they can be used generally to improve your health.

Still. We are in 2015 and we have cool technology and expertise. What can we use to improve our health?

  • The current nutrition fads favour low-carb high-protein diets (Paleo, Atkins). These almost surely shorten your life. Yes, eating lots of protein will help you stay lean and might even improve your short-term health. Yet, if you look at what centenarians eat, a pattern emerges: they eat very little meat. It looks like eating lots of protein, especially from iron-rich sources like red meat, accelerates aging.

    Proteins in high doses are harmful in many ways. For example, they seem to lower amino acids (like cysteine) that are involved in your body’s anti-oxidant defences. They also tend to come with lots of iron. Too much iron is really bad for you. Proteins from calcium-rich sources such as milk or yogourt are probably less damaging in the long run because they provide calcium, which reduces iron absorption. Proteins from legumes and beans are also better because they are not good sources of iron.

    Though I do not believe that it is necessarily helpful to avoid meat (most centenarians are not vegetarians), I eat only moderate amounts of meat. Meat tends to be inflammatory, rich in iron, rich in protein and so on. It is not bad in small quantities, but I think North Americans eat way too much meat.

    The common recommendation that you should avoid saturated fats and load up on unsaturated fats is probably way too simplistic. Nuts are rich in saturated fats, but good for you. I eat nuts every day and I believe it is helpful.

    You need a healthy dose of Omega-3 which you can get from sardines, salmon, kale and cod. Canned salmon with bones is a good source of calcium, so it is a good way to get proteins without loading up on iron.

    Tea, cocoa and coffee, especially when they do not interfere with sleep, appear to be good for you. I try to drink at least 5 cups of these each day, much of it caffeine-free.

    You want to avoid insulin spikes and keep your microbiome healthy. So avoid added sugar and fiber-poor starchy foods (like potatoes). Most fruits and vegetables are fine, so you can load up on them. I especially like broccoli, kale, red onions and tomatoes. To be on the safe side, I eat fruit in moderation. Rice and pasta are probably fine in moderation, but if you eat them with fiber (hint: vegetables), you will lower your insulin spikes. Re-heated rice, after it has been refrigerated, is probably healthier since it takes longer for you to digest it.

    I try to eat sugar-free yogourt every day. I make my own. I believe that it has greatly reduced my allergies. For much of my life, starting from when I was a kid, I had been a chronic allergy sufferer. I believe that I fixed my problem with yogourt.

    I see no evidence that organic or GMO-free food is better for you.

    Many people are into fasting. It appears to be good for you because it promotes autophagy. Basically, it forces your body to clean the house to grab nutrients. I do not fast because it is socially awkward and unproven in human beings. People who fast aggressively look worse to me. Moreover, as far as I can tell, few of the centenarians are into fasting. I suspect that the benefits of fasting can be had by limiting your protein intake. Moreover, you can get some benefits, I suspect, by overnight fasting: all you need is to strictly limit your caloric and protein intake after supper and before breakfast.

  • Sleep appears to be very important. It would seem that sleep deprivation weakens your immune system. The net result of poor sleep might be cancer or Alzheimer’s. Lack of sleep also lowers your IQ and makes you vulnerable to depression.
  • Keeping your ideal weight (a BMI of 21 or 22) appears to be important. I spent much of my life being 10 to 20 pounds heavier than I should be. I recently fixed this problem by… eating smaller meals. Easier said than done, I know… I also used technology to help a bit: the Wii balance board computes and plots my weight automatically. In my case, having daily feedback regarding my weight helps me keep the fat off.
  • Moderate exercise seems to be about the very best thing you can do for your health. When researchers compared twins, one of whom was sedentary, they found that the active twin was much healthier. There is a lot of debate as to what type of exercise is best… some people prefer lifting weights, others prefer running…

    For men, it appears that lifting weights is a good idea because it naturally increases your testosterone levels. It does not follow, however, that having huge muscles is necessarily your best objective. There seems to be no evidence that body builders are healthier than golfers.

    I try to spend my days standing up. Ideally, I only sit two hours a day or less. I suspect that in 20 years, we will look back at office workers sitting all day the same way as we look at people who smoke today: don’t they know this is killing them?

    One especially important aspect of exercise is that it preserves your balance. Dangerous accidents happen when you lose your balance. It is amazing how fast your balance goes to hell if you are sedentary.

    Is exercise important if you never have to do strenuous tasks and maintain a good weight? Yes. Exercise improves cognition. So if you are an intellectual, you need to be working out. Exercise also appears to significantly reduce your risk of several age-related diseases such as osteoporosis. It seems to keep your sex drive alive.

  • Near where I live, there are a few stores that sell supplements. As a geek, I find much appeal in the idea that you could “self-medicate” by assembling a bunch of pills. Futurists like Kurzweil like that very much.

    Sadly, taking supplements is not only likely to be a waste of money, it is also likely to be shortening your life. Remember the craze around taking anti-oxidants? Yes, your body suffers from oxidation, especially when you are “old” (after 25). A common sign of chronic oxidation is white hair. Your hair is being “oxidized” and turns white as a result. But notice how taking anti-oxidants does not reverse white hair? In fact, supplementing with anti-oxidants (in general) is almost surely harmful. However, eating food naturally containing anti-oxidants (e.g., broccoli) is probably good for you.

    Mineral supplements, like calcium, seem like a good idea. You would think that your body would use the minerals it needs and leave the rest. Sadly, it does not appear to work in this manner. It looks like many mineral supplements increase your risk of cancer, even if the presence of these minerals in your regular food is not cancer-causing.

    Some very specific supplements might be helpful however. I personally take a small amount of vitamin D every day. Taken in the morning, vitamin D significantly improves my sleep. There does not seem to be any evidence that, in moderation, vitamin D is harmful. (Megadoses are definitively harmful.)

    When working out, I take some creatine. Creatine is known to improve muscle mass and it “might” help your brain. As far as I can tell, it is entirely safe otherwise. I only take small quantities, and not every day.

    I also take small doses of aspirin daily. These have the potential to be harmful in many ways, and may cause haemorrhages… but aspirin lowers your risk of certain cancers and protects your heart. I figure that I would rather die from a haemorrhage than from cancer or a heart attack.

  • Some exposure to the sun appears useful, but it causes skin damage that we do not yet know how to reverse. It is probably wise to wear sunglasses and a hat to protect your eyes, but you do want to go outside regularly. Nature walks appear to be especially beneficial.

There are a few things that are coming that should help us a great deal:

  • Health monitoring is crude today, outside critical care. I can measure my blood pressure and my heart rate from time to time… but that is unlikely to be very helpful.

    Thankfully, bracelets that can keep track of your sleep and heart rate are already available. Clearly, that is only the beginning. We are going to see more and more devices that track our health simply because the financial incentives are there and the technology is ready.

    The possibilities are endless. Devices could monitor skin condition, muscle tone, inflammation, hair color, scent, hormone levels… Though some of these measures are intrusive, we can find ways to make them part of our daily lives without making a mess.

    The holy grail would be devices that can diagnose cancer or heart conditions in the very early stages and monitor progression continuously.

    Given enough data, smart software could provide useful actionable advice. For example, you could be advised as to what you should not eat in the next few days.

  • Soon we will all get our genome and microbiome sequenced. With a detailed analysis, you will be able to predict how you might react to a medication, or how likely you are to suffer from some diseases. It won’t be perfect, but it might increase your odds.
  • Current medical research is typically based on dozens of patients at a time. Hundreds at most. A few studies follow thousands of people, but they are rare. In the future, we will be able to monitor and follow thousands if not millions of people with the same condition along with complete genome and microbiome information. And we will not just get weekly blood tests or such silliness. I mean that we will be able to follow people hour-by-hour, minute-by-minute.

    (Privacy is an issue, but better health trumps privacy any day. Ask people who suffer from a deadly disease.)

Human beings are only going to get collectively healthier if we embrace science and technology.

Further reading: Stop the clock by Mangan.

Warning: This blog post is an opinion piece, not medical advice. Please consult a medical professional before making any change to your life. I am not a medical professional. If you follow any of my advice, you may die. Do not trust random people from the Internet with your health!

Old software tends to fail. If you upgrade to the latest version of Windows, your old applications may fail to run. This is typically caused by a lack of updates and is commonly called bit rot. That is, if you stop maintaining software, it loses its usefulness because it is no longer in sync with current environments. There are many underlying causes of bit rot: e.g., a company stops supporting its software and lets it decay.

On the contrary, Robin Hanson, a famous economist, believes that software becomes increasingly inflexible as we update it. That is, the more software engineers work on a piece of software, the worse it becomes, until we have no choice but to throw it away. To put it another way, you can only modify a given piece of software a small number of times before it crumbles.

Let me state Hanson’s conjecture more formally.

Hanson’s law of computing: Any software system, including advanced intelligences, is bound to decline over time. It becomes less flexible and more fragile.

The matter could be of consequence in the far future… For example, would an artificial intelligence “grow old”? If you could somehow make human beings immortal, would their minds grow old?

We could justify this law by analogy with human beings. As we grow older, we become less mentally flexible and our fluid intelligence diminishes. The reduced flexibility could be explained in terms of economics alone: there is less benefit in acquiring new skills when you can already make a living with what you know. So we expect, on economic grounds alone, new fields to be populated by the young. But, in human beings, we also know that the brain undergoes physical damage. The connectome degrades. Important hormones run low. The brain becomes inflamed and possibly infected. We lose neurons. All of this damage makes our brain more fragile over time. Indeed, if you make it to 90 years old, you have a one-in-three chance of suffering from dementia. None of these physical problems are likely to affect an artificial intelligence. And there is strong evidence that all this physical damage to our brains could be stopped or even reversed in the next twenty years if medical progress continues at high speed.

Hanson proposes that the updates themselves damage any software system. So, to live a long time, an artificial intelligence might need to limit how much it learns.

I am arguing back that the open source framework running the Internet, and serving as a foundation for companies like Google and Apple, is a counterexample. Apache, the most important web server software today, is an old piece of technology whose name is a play on words (“a patched server”) indicating that it has been massively patched. The Linux kernel itself runs much of the Internet, and has served as the basis for the Android kernel. It has been heavily updated… Linus Torvalds wrote the original Linux kernel as a tool to run Unix on 386 PCs… Modern-day Linux is thousands of times more flexible.

So we have evolved from writing everything from scratch (in the seventies) to massively reusing and updating pre-existing software. And yet, the software industry is the most flexible, fastest-growing industry on the planet. In my mind, the reason software is eating the world is precisely that we can build on existing software and thus improve what we can do at an exponential rate. If every start-up had to build its own database engine, its own web server… it would still cost millions of dollars to do anything. And that is exactly what would happen if old software grew inflexible: to apply Apache or MySQL to the needs of your start-up, you would need to rewrite them first… a costly endeavour.

The examples do not stop with open source software. Oracle is very old, but still trusted by corporations worldwide. Is it “inflexible”? It is far more flexible than it ever was… Evidently, Oracle was not built from the ground up to run on thousands of servers in a cloud environment. So some companies are replacing Oracle with more recent alternatives. But they are not doing so because Oracle has gotten worse, or because Oracle engineers cannot keep up.

When I program in Java, I use an API that dates back to 1998 if not earlier. It has been repeatedly updated and it has become more flexible as a result… Newer programming languages are often interesting, but they are typically less flexible at first than older languages. Everything else being equal, older languages perform better and are faster. They improve over time.

Hanson does not provide a mechanism to back up his bit-rot conjecture. However, it would seem, intuitively, that more complex software becomes more difficult to modify. Applying any one change is more likely to create trouble in a more complex project. But, just like writers of non-fiction manage to write large volumes without ending up with an incoherent mass, software programmers have learned to cope with very large and very complex endeavours. For example, the Linux kernel has over 20 million lines of code contributed by over 14,000 programmers. Millions of new lines of code are added every year. These millions of lines of code far exceed the memory capacity of any one programmer.

How is this possible?

  • One ingredient is modularity. There are pieces of code responsible for some actions and not others. For example, if you cannot get sound out of your mobile phone, the cause likely does not lie in any one of millions of lines of code, but can be quickly narrowed down to, say, the sound driver, which may only have a few thousand lines of code.

    We have strong evidence that the brain works in a similar way. There is neuroplasticity, but even so, given tasks are assigned to given neurons. So a stroke (that destroys neurons) could make you blind or prevent you from walking, but maybe not both things at once. And someone who forgets how to read, due to a loss of neurons, might not be otherwise impaired.

  • Another important element is abstraction, which is a sophisticated form of modularity. For example, the software that plays a song on your computer is distinct from the software that interfaces with the sound chip. There are high-level and low-level functions. The human mind works this way as well. When you play football, you can think about the strategy without getting bogged down in ball-throwing technique. (A minimal sketch of both ideas follows this list.)
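
As a minimal illustration of both points, here is a sketch in Java; every name in it is invented for the example, nothing comes from an actual kernel or driver. The high-level player depends only on an abstract interface, so the low-level sound code can be swapped without touching the rest:

    // Modularity and abstraction in miniature: the high-level code depends only
    // on the SoundDriver interface, so a buggy or obsolete driver can be swapped
    // out without touching the player. All names here are made up.
    interface SoundDriver {
        void play(byte[] samples);
    }

    class CheapOnboardDriver implements SoundDriver {
        public void play(byte[] samples) {
            System.out.println("onboard chip: playing " + samples.length + " samples");
        }
    }

    class UsbHeadsetDriver implements SoundDriver {
        public void play(byte[] samples) {
            System.out.println("USB headset: playing " + samples.length + " samples");
        }
    }

    class MusicPlayer {
        private final SoundDriver driver;
        MusicPlayer(SoundDriver driver) { this.driver = driver; }

        // High-level logic: decode a song, hand the samples to whatever driver we have.
        void playSong(String title) {
            byte[] samples = new byte[]{1, 2, 3}; // pretend these were decoded from the song
            System.out.println("decoding " + title);
            driver.play(samples);
        }
    }

    public class ModularityDemo {
        public static void main(String[] args) {
            new MusicPlayer(new CheapOnboardDriver()).playSong("anthem");
            new MusicPlayer(new UsbHeadsetDriver()).playSong("anthem"); // same player, new module
        }
    }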

Software engineers have learned many other techniques to make sure that software gets better, not worse with updates. We have extensive test frameworks, great IDEs, version control, and so on.

However, there are concepts related to Hanson’s notion of bit rot.

  • Programmers, especially young programmers, often prefer to start from scratch. Why learn to use a testing framework? Write your own! Why learn to use a web server? Write your own! Why do programmers feel that way? In part because it is much more fun to write code than to read code, even though both are equally hard.

    That taste for fresh code is not an indication that starting from scratch is a good habit. Quite the opposite!

    Good programmers produce as little new code as they can. They do not write their own database engines, they do not write their own web servers…

    I believe our brains work the same way. As much as possible, we try to reuse routines. For example, I probably use many of the same neurons whether I write in French or English.

  • Software evolves through competition and selection. For example, there are probably hundreds of software libraries to help you with any one task. New ones get written all the time, trying to outcompete the older ones by building on new ideas.

    The brain does that all the time. For example, I had taught myself a way to determine whether a number is divisible by 7. There was a part of my brain that could run through such computations. While teaching my son, I learned of a much better way to do it. Today I can barely remember how I used to do it. I have switched to the new mode. Similarly, the Linux kernel routinely swaps out old drivers for new ones.

  • A related issue is that of “technical debt”. When programmers complain of crippling growing pains with software… that is often what they are alluding to. In effect, it is a scenario where the programmers have quickly adapted to new circumstances, but without solid testing, documentation and design. The software is known to be flawed and difficult, but it is not updated because it “works”. Brains experience this same effect. For example, if you take a class and learn just enough to pass the tests… you have accumulated technical debt: if you ever need your knowledge for anything else, you will have to go back and relearn the material. You have made the assumption that you would not need to build on this new expertise. But technical debt is as likely to affect young software and young brains as old ones.

    Corporations without a strong software culture often suffer from “technical debt”. The software is built to spec… it does what it must do, and not much else. That is like “knowing just enough to pass the test”.

    With people, we detect technical debt by experience: if the young accounting graduate cannot cope with the real world, he probably studied too narrowly, just enough to pass the tests. With software, we use the same criterion: good software is software that has been used repeatedly in different contexts. In some sense, therefore, technical debt is flushed out by experience.

  • What about having to search through an ever expanding memory bank? That assumes that people, as they grow older, pursue exhaustive searches. But that is not how intelligence has to work, and I do not think that is how human beings work. When faced with a new case, we do not mentally review all related cases. Instead, we maintain a set of useful heuristics. And, over time, we let go of rarely used data and heuristics. For example, I once learned to play the flute, nearly forty years ago. Some of these memories are with me, but it is very unlikely that they are slowing me down in non-flute-related activities. Again, here we can exploit modularity… one can forget how to play the flute without forgetting everything else.

    Search algorithms do not get slower in proportion to the size of the data bank: if they did, Google’s search engine would slow to a crawl. We have built lots of expertise on how to search efficiently (a short sketch follows this list).

  • Abstraction leaks: to make our software, we use high-level functions that call other functions, which call still more functions… down to processor instructions. Over time we use higher and higher levels of abstraction. A single mistake or undefined behaviour at any one level, and we produce an erroneous or unexpected result.

    That might be a rather fundamental limitation of software systems. That is, any sufficiently advanced system might produce erroneous and unexpected results. This probably puts a limit on how much abstraction one can manage without much effort, given the same “brain”.
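
Here is the promised sketch for the earlier point about search: with a sorted index, the cost of a lookup grows roughly with the logarithm of the data, not with its size. The numbers are illustrative only.

    import java.util.Arrays;

    public class SearchScaling {
        public static void main(String[] args) {
            // A sorted "memory bank" of a million entries.
            int n = 1_000_000;
            long[] memory = new long[n];
            for (int i = 0; i < n; i++) memory[i] = 2L * i; // even numbers, already sorted

            // Binary search inspects about log2(n) entries, not all of them.
            int position = Arrays.binarySearch(memory, 123_456L);
            long probes = Math.round(Math.log(n) / Math.log(2));
            System.out.println("found at position " + position + " in about " + probes + " probes");
            // Growing the memory bank a thousandfold only adds about 10 extra probes.
        }
    }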

In any case, for Hanson’s conjecture to hold, one should be able to measure “software age”. We should be able to measure the damage done by the programmers as they work on the software. There would be some kind of limit to the number of modifications we can make to a piece of software. There would be a limit to what an artificial intelligence could learn… And we would need to observe that software being aggressively developed (e.g., the Linux kernel) grows old faster than software that is infrequently modified. But I believe the opposite is true: software that has been aggressively developed over many years is more likely to be robust and flexible.

Of course, the range of problems we can solve with software is infinite. So people like me keep on producing more and more software. Most of it will hardly be used, but the very best projects end up receiving more “love” (more updates) and they grow more useful, more robust and more flexible as a result.

I see no reason why an artificial intelligence could not, for all practical purposes, be immortal. It could keep on learning and expanding nearly forever. Of course, unless the environment changes, it would hit diminishing returns… still, I expect older artificial intelligences to be better at most things than younger ones.
