Aging is a software bug

In my review of the movie Tomorrowland, I alluded to the fact that in the alternate world, people could stay young by drinking orange juice every morning. I conjectured that this was probably made possible by nanotechnology.

I do not expect to ever drink some orange juice that keeps me young… but wouldn’t that be nice?

Someone accused me of being a follower of Ray Kurzweil. I had vaguely heard of Kurzweil as someone who predicts that, soon, computers will exceed the computational power of the human mind. That seems reasonable enough to me.

In any case, I went back and read some of Kurzweil’s work. It turns out that he does predict the arrival of nanotechnology-based rejuvenation. He even put a date on it: in 20 years, or around 2035. After that point, eternal youth could become a reality.

One should point out that we cannot really defeat death. The best we could do is defeat aging: death as the result of an accident or a new disease is really hard to prevent, even in theory.

I do not believe that, currently, there is much you can do to extend your lifespan besides the usual if you are already healthy: keep your weight in check, exercise, have a loving family, don’t get mad at your collaborators… So, if you are healthy, do not bother looking for some magical orange juice. It is also not going to be easy to drastically extend our lifespan. If there were some simple herb that could rejuvenate us, the word would have gotten out by now. Moreover, if you could just flip a gene and live forever, we would have documented cases by now: there are billions of us… it is likely that one of us would have gotten the lucky mutation.

But Kurzweil believes that human technology is much stronger than mere luck.

By 2035, Kurzweil will have far exceeded the male life expectancy (he will be close to 90). He has a plan to get there by closely monitoring his health and taking a crazy amount of supplements. That does not sound insane: if someone puts his mind to it, I am not sure why he cannot drastically increase his probability of living into his late 80s.

Back to his predictions. It is evident that the rate of progress grows every year. If you must have a metric, look at the number of research articles published every year: it has consistently grown by more than 2% a year for as far back as I can look. Through small but consistent growth of about 3.5% a year, we went (in the US alone) from 110k medical research papers a year in 1996 to 200k in 2013. (This does not account for the fact that China hardly published anything in 1996 while it published 50k medical papers in 2013.) We get such progress because our tools get better and relatively cheaper every year while our collective expertise grows. Kurzweil says that to predict the amount of progress we shall make in the next 20 years, we must not simply project the progress of the past onto the future, but rather multiply it. The farther you look, the wider the gap grows between a linear and an exponential prediction.
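
As a quick sanity check (my own back-of-the-envelope arithmetic, not a figure from Kurzweil), compounding 3.5% a year over the 17 years from 1996 to 2013 does roughly double the output:

    110,000 × 1.035^17 ≈ 110,000 × 1.80 ≈ 197,000

which is close to the reported 200k papers a year.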

Progress is necessarily unequal. Though we publish more medical papers every year than we ever did, some topics remain poorly researched whereas others are progressing much faster. But with an aging population worldwide, it is a safe bet that aging research is growing faster than medical research at large.

If you are not following biotechnology closely, it is easy to think that there is not much progress. But did you know that, since 2012, we have had a relatively safe tool called CRISPR to edit human genes? In fact, Chinese researchers are editing the genome of human embryos right now, hoping to prevent diseases. It is a safe bet that someone will soon attempt to raise “super human beings”: we already know of genetic mutations that provide their recipients with superhuman muscles and unbreakable bones, and an army of such people sounds like a tempting proposition. At Harvard, they are using this technology, as we speak, to turn an elephant into a mammoth by editing its genes. Did you know that we have commercially available bio 3D printers that can print skin or blood vessels? There is currently a competition to build an artificial liver good enough to keep a large animal alive for 90 days without any support: the competition ends in 2018. If the prize is won, and it probably will be, the next step is to create the other organs. At this rate, it is not hard to believe that, within ten years, we shall be technically able to replace any organ in one’s body, without any need for a human donor or for long-term dangerous medications.

Still, I have looked at Kurzweil’s predictions a bit more closely, and they seem a bit overoptimistic. I would say that you should probably add 10 years to all his dates. So, if Kurzweil were to predict that we would defeat aging in 30 or 40 years (in 2045 or 2055), then I would say that this is credible. If we go back 40 years, medicine was far, far less advanced. The rate of death from major diseases was often twice what it is today. If we project into the future several times the progress of the last 40 years, it is hard to imagine what we cannot do.

There is a problem with these predictions, of course. At the moment, we do not even know what aging is. Not really. We know that lobsters and naked mole rats do not age (they die, but not because of an aging process similar to ours). The jellyfish and the hydra are “immortal”. Some trees, like the bristlecone pine, do not appear to age either, or rather they age in reverse (getting stronger and more fertile with time), but are physically limited over time by their size (not unlike lobsters). We know that among creatures of the same size, say a mouse and a bat, there can be vast differences in longevity. Different species of sea urchins have reported lifespans ranging from 4 to more than 100 years. Among individuals of the same species, big (or taller) individuals live shorter lives. We also know that if you inject the blood plasma of a young mouse into an old mouse (a technique called parabiosis), it rejuvenates the old mouse. But I do not think scientists can explain any of it (not to my satisfaction).

There are various theories about what aging is. Some say it is programmed: we are programmed to age. It does make sense from an evolutionary perspective that we are “programmed” to die after a time. And certainly, women are programmed to reproduce before the age of 40, but not after. This is probably well motivated: e.g., people who die make room for new people with possibly better genetic code…

Another theory is that evolution did its best to maximize our lifespan, and we have the very best we can get… short of becoming androids. But there is no reason to believe that evolution would seek to maximize our lifespan. The cycle of birth, reproduction and death works well for evolution. Evolution does not care for the individual, only for the species.

I have also revisited Aubrey de Grey, who has an ambitious plan to defeat aging using advanced regenerative medicine. He believes that aging is the result of “accumulated damage”. To him, our bodies are like rusting cars. Evidently, our bodies break down as they age… but why would a mouse accumulate damage 30 times faster than a human being? Why would it accumulate damage 8 times faster than a bat? And how come a whale or a turtle can live much longer than us: aren’t they damaged? Some of aging is definitely accumulated damage… your teeth become shorter as you grow older, you accumulate latent viruses and fat cells, you lose neurons in the neocortex (and, in human beings, they are not replenished)… but male baldness is not a random outcome due to damage. Working out damages your body, yet it also improves your health, even in old age. This means that your body does not activate all of its self-repair mechanisms by default.

So aging is not solely a matter of “damage” (as in an old car). That is not to say that de Grey is wrong… I believe that he is right, and I have given some of my own money to his foundation. But I am not sure the analogy between a car and the human body is the best one.

My own theory, after reading avidly on the topic for several days, is that we are like a piece of software designed in 1970 and still running three decades later… we are hitting various “year 2000 bugs”. Evolution did not try to maximize our life expectancy (as it may have done with turtles and some whales). If anything, evolution is glad we do not often live beyond 90.

It seems to me that the easiest way to live longer would be to hack our own software (our genome, our biome) as well as repairing various sources of damage (e.g., using stem cells). Sadly, as I have pointed out, it cannot be a simple matter of turning on one gene or another. Software programmers know all too well how hard it can be to make what might appear to be a minor fix… Some bugs can be fixed by changing a few lines, but some require rewriting entire code segments. To turn back the clock, we will need some fancy engineering.

What is this clock? There is some hope that the clock in question could be our telomeres. Telomeres are apparently frivolous stretches of DNA that grow shorter with every cell division. So it would seem that simply making the telomeres longer could make us somewhat younger again. Thankfully, we know how to do just that using telomerase. De Jesus et al. showed that telomerase gene therapy in old mice delays aging and increases longevity without increasing cancer. So, maybe, if we could replenish our telomeres without killing ourselves, we could fool our bodies into thinking they are younger. It seems that people who live to a very old age without cancer or Alzheimer’s are more likely to have a rare mutation that activates telomerase production. But there is no guarantee that it would work. For one thing, some cells do produce telomerase (e.g., white blood cells) and their telomeres still grow short in some people. For another, we know that some cells rarely multiply and are thus unlikely to be limited by telomeres (e.g., your neurons). Moreover, eating well and exercising can lengthen your telomeres in some cells, though it evidently does not make you younger. There are other possible biological clocks, such as DNA methylation. We really do not know enough about what makes a cell old!

It is not just your cells that get old. The tissues themselves (like your skin) fail to repair themselves properly with age. You can see wrinkles in people over 40. We also accumulate lots of broken proteins that go on to contribute to Alzheimer’s, Parkinson’s and plenty of other diseases.

Still, I believe that telomere elongation in some specific cells (with or without telomerase), coupled with advanced stem cell therapy and/or a few well-dosed hormones and proteins, could maybe rejuvenate you. For example, a simple, freely available hormone, oxytocin, can rejuvenate muscles. But it is also possible, even likely, that it is much more complicated, even assuming that I got everything right.

So it could easily take 500 years to defeat aging. The point is, we will defeat aging eventually… After all, we can already extend the lifespan of monkeys by nearly halving their rate of death.

One can dream. I imagine that, in the future, you will live normally as a human being until you are about 40 (when reproduction normally stops), at which point you will start taking pills, or injections, or nanobots, to make your body believe that you are still 30. There will be some wear and tear, as indicated by de Grey, but it won’t be a big problem. Using 3D printers, we shall be able to print new body parts out of our stem cells. Or, better yet, we will get in situ regeneration. You would need to replenish the neurons in your neocortex every few decades.

When that happens, be ready to work for 60 years or more! (Assuming that people still need to work in the future…)

Unfortunately, there seems to be less research on this important question than you may think. For example, there is an FDA-approved compound (rapamycin) that is believed to extend the lifespan of mammals through some gene hacking. Since we are already giving it to some human beings, you would think that we would have tested it on all mammals by now. Sadly, no. There is, however, a project to use it on dogs, but I do not know whether it got funded.

(Do not go out on the black market to buy rapamycin. It has nasty side effects and it would, at best, delay aging… not reverse or stop it. Plus, it appears to be a telomerase inhibitor so it could actually make your telomeres shorter… and give you cancer and diabetes…)

Unfortunately, we will not defeat aging in the next ten years, I would think (short of a surprising, one-in-a-million breakthrough). For one thing, “defeating aging” is not yet a socially acceptable goal. It is a taboo. We do not know what aging is. We have no proven means right now to extend the human lifespan. You can probably help your chances of making it to 80, but there is nothing you can do to get beyond 125.

To have any realistic chance of defeating aging in ten years, we would need to have done it, right now, in a few people or in a mouse. To have a realistic chance of doing it in 20 years, we would need to have an excellent plan right now. Maybe someone has such a plan right now… it is hard to tell… given the profit involved, they might not freely share their plan… de Grey has such a plan, but he will only commit to a 25-year schedule on the condition that he gets a billion dollars… yet he does not have a billion dollars.

But there are reasons to be hopeful. Google, of all places, created a company with the express goal of extending our healthspan by 20 to 100 years: Calico. It is not just a silly thing: they have recruited the best scientists that money could buy. Calico has hundreds of millions of dollars, with commitments exceeding a billion dollars. Calico is hardly alone: Unity Biotechnology is another well-funded technology company that seeks to “cure aging” (it is backed by Amazon’s CEO). If you want to be optimistic, you could imagine that Calico or some other laboratory could have, right now, a viable plan to put a dent in aging. If so, they could be testing it in human beings in 20 years, and it could be ready for the rest of us in 30 years.

So, let me come up with a prediction: we will defeat aging in 40 years. By that I mean that age-related diseases will be mostly under control, if not outright eliminated. So maybe you still get Alzheimer’s or a heart condition, but medical therapies are so good that you can go on living a happy and productive life. Compared to Kurzweil, I am pessimistic (he predicts 20 years or less).

Speaking for myself, I expect to be dead in 40 years… if nothing changes. I do not think any man in my family has made it to his eighties… So if I do not die of something else first, I can expect to die from an age-related disease in my seventies or sooner: cancer, Alzheimer’s, a heart attack… Even if I were to survive that long, I fear I would be severely diminished…

There is, however, a small probability that I could have a very different life. For one thing, maybe Kurzweil is right and we will defeat aging in 20 years. I stand a good chance of making it there. That sounds much too optimistic, however.

But there is another intriguing possibility popularized by Kurzweil and de Grey. Suppose that we defeat aging layer by layer. Imagine that, in five years, we find a way to rejuvenate old people by 3 years… and then, ten years later, by 5 years… and then ten years later by 7 years… if this were to happen, I am much more likely to make it to my eighties. And by then, it is much more likely that they will be able to reverse my aging. This concept is sometimes called the longevity escape velocity: you do not need to live long enough to see a cure for aging, you just need to live long enough to see partial progress.

Partial progress is much easier than defeating aging entirely. When we understand age-related gene expression better, we might be able to tune your “epigenetics” so that it is more youthful, simply with a few injections. After all, if parabiosis works, it seems pretty clear that with the right doses of drugs, we could simulate the same effect without requiring a young person’s blood. Such an approach could buy all of us a few extra years. Then, if we improve stem cell technology significantly, we might be able to undo other age-related damage. And then maybe we could find a way to elongate our telomeres… after all, our body knows how to do it, it simply chooses not to. Successive waves of progress could add up in such a manner.

I have a really hard time imagining that we will still grow old 500 years from now. I do not have a lot of faith in biologists, but there are many of them and they have better and better tools.

But here is something interesting: we never imagine a future where people do not grow old. In Star Trek, James T. Kirk grows old. Even the fierce Vulcans grow old. In Star Wars, people grow old. The only science-fiction author who, in my mind, fairly represented what could happen as we defeat aging through technology is Peter F. Hamilton in his Commonwealth Saga (starting with Pandora’s Star).

We still grant public employees pension plans based on limited longevities. There is a very serious risk that we are grossly underestimating the life expectancy of 20-year-old employees. As far as I can tell, this is never discussed.

I believe that it is because defeating aging by technology is a taboo. Not even science-fiction writers want to consider it. In a sense, it is not surprising that only a few outliers like de Grey and Kurzweil talk about it. Sure, they are probably wrong in many important ways… but they are not wrong in the way that matters: aging can and will be defeated. I expect it is simply a bug in our software: we can reengineer our bodies so that they do not age. You may have to walk around with nanobots in you, but you will not age as long as you are careful.

Let me conclude by quoting Richard Feynman (one of the greatest scientists of the twentieth century):

It is one of the most remarkable things that in all of the biological sciences there is no clue as to the necessity of death. If you say we want to make perpetual motion, we have discovered enough laws as we studied physics to see that it is either absolutely impossible or else the laws are wrong. But there is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before the biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.

Further reading:

Are you a techno-optimist? (A review of Tomorrowland)

Walt Disney released Tomorrowland. I brought my little family to see it and we had a blast.

(Warning: mild spoilers ahead.)

The movie has one message: let us be techno-optimists. Instead of being driven by fear, let us embrace new challenges. Let us go to Mars or beyond. Let us cure cancer. Let us live with large robots.

The movie has some brilliant elements:

  • Early in the movie, we see a young boy who has invented a jet pack. When asked about the purpose of his invention, he replied: “to inspire people”.

    This is a great and important answer. Almost all new inventions bring very little value on their own. That is true of even radical inventions. For example, if we could cure cancer, we would only extend our life expectancy by a few years (less than five if I recall correctly).

    Techno-optimism is the belief that pushing technology forward is good in itself, if only to inspire others.

    Yes, it may take decades or more before hospitals can print me a new lung or a new heart… so maybe I will needlessly die of a lung or heart disease in ten years… but I am still excited about 3D printing and stem cell research.

  • The cause of much of the misery in the world of the movie can be traced back to pessimism. Once you have convinced people to stop advancing (symbolized in the movie by the closure of a NASA center), the path becomes difficult.

    I have a lot of respect for conservatives like Nassim Taleb who advocate caution in all things. What if we are killing the planet? Should we not revert to how our ancestors lived, just in case? What if genetically modified food is killing us? But the techno-optimist in me prefers to take the gamble. And, as illustrated in the movie, that is not necessarily any riskier.

    Before we had as much technology as we do today, people died horrible deaths. Earth was ravaged. That still happens today, of course… we are causing cancers and polluting too much… but we are, as a species, far better off than we ever were. There are more of us (a good thing) and we are healthier, and smarter.

    The techno-optimist thinks that we should push ahead faster when the problems get more difficult. We should invest more in research and development when the problems are bigger, not less.

    And, yes, maybe by tampering with stem cells, we will create a Zombie virus that will wipe out humanity. But maybe these same stem cells will be able to rejuvenate our failing organs.

  • The movie shows a few marvellous inventions that can be used to differentiate the techno-optimists from the rest of the crowd.

    We have human-like “robots” that are genuinely indistinguishable from human beings, except for the fact that they do not grow or age. We also have a cure for aging. Indeed, we learn that the Tomorrowland scientists have cured aging, and all it takes is an orange juice a day… presumably the orange juice is laced with nanotechnology that repairs the body and prevents aging.

    Most people around me are unwilling to consider these as possible inventions, even in the long term. Yet I believe that both are quite possible. I do not know yet why we would ever build human-like intelligence… but I certainly believe it will be quite possible some day.

    I do not believe that I will live forever. But I have always believed that preventing and reversing aging is a simple matter of technology. If I ever make it to an old age, will we have the technology to give me back my youth? It seems overly optimistic to think so, given that we cannot seem to make any progress against Alzheimer’s, and that we are probably not even close to curing cancer… But I nevertheless believe that it is simply a matter of technology. And technology is accelerating all the time… so nobody can know what is possible in the medium term…

    Being a techno-optimist, I believe that we will soon significantly extend longevity. I do believe that in 20 or 30 years, they will be able to replace hearts and lungs with affordable replacement parts that are just as good (if not better) than the original. Two of my neighbours have artificial knees… and they mow their grass just as well as I do. (Admittedly, they do not do jumping jacks, but neither do I.)

    With all the money that people stand to make with it, I cannot imagine that in 30 years, we won’t be able to rejuvenate skin and muscles so that aging actresses can genuinely look as if they were just 30 or 40.

    Rejuvenating the brain, at least in some critical ways, should be commonplace in ten years. But what I really want to see is how we will extend the brain with electronics.

Of course, techno-optimism is a dogma. It is entirely possible that the net result of technology will be to shorten my life and that of my children, and make us more miserable. But I have faith that we can find solutions through technology.

An interesting opposing dogma is what I call “biological determinism”. These people believe that we are fundamentally limited by biology. Thus, for example, we should not perturb the Earth with our technology for fear of causing irreparable harm. These people believe that the future looks bleak for people who “aren’t smart enough”…

I believe that we have been extending biology, and will continue to do so. It is true that people who aren’t smart do not seem to have room in Tomorrowland… but, to me, the obvious solution is to make people smarter. We can use genetics, brain augmentations… As for pollution, I think we can develop technologies that pollute less, as well as better techniques to clean up toxins.

Of course, maybe techno-optimists are wrong. However, they can at least hope to be wrong in interesting ways.

Putting the evil academic publishers in perspective

Academic publishing is a bit of a perverted business. Let us recap what should be well known: professors write papers for free while publishers take the papers and resell them to universities for a large profit.

I do hope to live one day in a world where everyone can have free access to all the research in the world. There is irony in the fact that the Internet gives us free access to junk and infomercials, but asks us to pay for high-quality government-sponsored sources. Sadly, that is what we have right now.

A common narrative is that universities are victims of this arrangement. They have to pay exorbitant prices to publishers, money that they would rather spend on their students.

There is just a small problem with this narrative: it does not fit the facts on the ground.

It is maybe worth pointing out that many colleges are themselves academic publishers (e.g., Oxford University Press). These college-based publishers are not shy about charging the full amount for their goods. Whenever I see a book priced upward of $40 on Amazon, it is almost always from an academic publisher. So, at a minimum, colleges are complicit in the business of overcharging for academic work.

But how much do academic publishers charge? Academic publishing is a small component of higher education. Harvard University (alone!) had a budget of over $4 billion in 2013. Meanwhile, one of the largest publishers, Elsevier, had revenues of only $3 billion. There is only a handful of large publishers, and thousands of large colleges… Even if Elsevier folded and gave away for free all its subscriptions, students would not see lower tuition fees.

Nobody likes a tax though, right?

Well. What about Microsoft and Oracle licences? Most colleges rely on Microsoft software to operate when they could just as easily use free software to achieve much the same goals. And, let us be honest, most colleges could replace their expensive Oracle software with a free alternative (PostgreSQL) with no lasting consequences. Yet few colleges have decided to do away with the “Microsoft tax”.

Why?

Because doing away with proprietary software and replacing it all with free software would not significantly affect budgets. And, at the margin, it may leave the impression that the school is too cheap to afford real software. Image is important.

The same is true with academic publishing. Library subscriptions are a small price to pay. Offering great library access, especially if it is a tad expensive for an individual, looks great.

Can you imagine a world where all the academic books and research papers were freely available? In such a world, university libraries would face an uphill battle to show their relevance.

Universities do not want to do away with their libraries and library budgets. Not really. If you are a curious fellow and want to read deeply on a subject… the current system pushes you to go to college, if only so you have good library access.

Many researchers are also very fond of publishers and librarians. They make researchers look good. I have yet to see one reputable academic calling for a library-free college. Most academics do not really want academic publishing to falter…

It may be that Elsevier is an evil company run by a Satanist cult. But keep in mind that Microsoft has been called the evil empire. Speaking for myself, I do not really worry about either Elsevier or Microsoft being evil.

Old people are not very sharp, are they?

Depression, obesity, stress, sleep deprivation and age negatively affect your brain. However, as I have previously argued, the commonly reported decline in intellectual productivity with age is not as simple as once thought.

Of course, we know that our brains incur some damage over time, so some decline of some of our abilities appears likely. However, it is probably not as simple as “we lose brain cells over time”. For example, perception problems, such as reduced hearing, can lead to the appearance of memory problems, or a lower IQ (Rabbitt, 1991). And we can compensate in many ways for a moderate decline: we can rely on cognitive jigs, we can improve our problem-solving strategies, we can use computers, and so on. The idea that our intelligence resides solely in our brain is more than a bit silly. In effect, if the hardware gets slightly slower, we can compensate with better software, and with new peripherals.

However, my belief is that a good share of the age-related cognitive decline is psychological, or caused by cognitive disuse. This sort of decline is not so easily compensated.

For example, we know that retirement significantly degrades your cognitive functions. That is, shortly (but not immediately) after retirement, you are no longer quite as sharp as you were:

Our results highlight a significant negative effect of retirement on cognitive functioning (…) all these results (…) suggest that retirement plays a significant role in explaining cognitive decline at older age. (Bonsang et al., 2012)

Following retirement, your social network shrinks. You are less likely to engage in cognitively difficult tasks (e.g., no more driving during rush hour). Simply put, you no longer need to be as bright as you used to be. And guess what happens? You lose some of your edge.

So maybe you should not worry that much about saving for your retirement?

Of course, it stands to reason that if retirement can have a large effect, so can other similar lifestyle changes. When I was younger, I was constantly tested and pushed intellectually. I now have a much more comfortable job: I could choose to let my brain rot a little more. In fact, I could even increase my professional status by doing more management and less of the highly challenging hands-on research and teaching work I enjoy.

As we grow older, we often do not need to learn quite as fast, and we can rely more easily on established patterns… thus, we can let some of our cognitive abilities decline through disuse. Doing Sudokus can maybe help a little, but I would not expect a strong overall effect.

But beyond disuse, there is also a placebo effect: if you are old and you believe that old people aren’t as sharp, you won’t be sharp. We know that this effect is real and strong. We can test it experimentally in a stereotype threat context. For example, if you invite young women to a mathematics test and you explain to them that you want to study why women do poorly in mathematics, they will do more poorly. It is that simple. It is not just women and mathematics… the same effect works for blacks and IQ tests… and, yes, it works on old people too.

In fact, the effect is so strong that removing the stereotype threat can be enough to eliminate age-related differences in specific experiments:

(…) these results demonstrate a direct link between stereotype activation and false-memory susceptibility, and they suggest that (…) age-related differences in false memories can be eliminated. (Thomas and Dubois, 2011)

If you run an experiment and you invite older people over, even the slightest hint that you are attempting to measure a decline in their cognitive functions could ensure that you will indeed measure a strong decline.

But the effect should be present outside a college laboratory as well. Old people convinced that they have rotten brains should not be expected to be sharp… “The aging process is, in part, a social construct.” (Levy, 2009). This is not just a vague theory; the effect that I describe has been put to the test repeatedly:

Those with more negative age stereotypes demonstrated significantly worse memory performance over 38 years than those with less negative age stereotypes, after adjusting for relevant covariates. (Levy et al., 2011)

Ramscar and Baayen stress that we are probably confounding many factors and unnecessarily stressing seniors about their cognitive functions:

What we do know is the changes in performance seen on tests (…) are not evidence of cognitive or physiological decline in ageing brains. Instead, they are evidence of continued learning and increased knowledge. This point is critical when it comes to older people’s beliefs about their cognitive abilities. People who believe their abilities can improve with work have been shown to learn far better than those who believe abilities are fixed. It is sobering to think of the damage that the pervasive myth of cognitive decline must be inflicting. (Ramscar and Baayen, 2014)

I think that this suggests that, to remain as smart as possible as long as possible… you should remain genuinely active professionally for as long as possible. Moving to more prestigious but less demanding jobs is maybe unwise… You probably also want to moderate your beliefs about age-related cognitive decline. Entertaining the idea that you are getting dumber might just be a self-fulfilling prophecy.

Further reading: Ramscar, M., Hendrix, P., Love, B., & Baayen, H. (2013). Learning is Not Decline: The mental lexicon as a window into cognition across the lifespan. The Mental Lexicon 8:3, 450-481

Do better written papers get more citations?

Everything else being equal, you would expect short and simple papers to get a wider readership. Long sentences and complicated terms should discourage readers from reading further.

So you would think that researchers and academics would outcompete each other, producing ever more accessible papers… to maximize the impact of their work.

Sadly, the incentives do not work in this manner:

  • The most important step for many researchers is to get the paper published in a “prestigious” venue. They could not care less if only ten researchers ever manage to decipher half their manuscript… as long as it gets published somewhere prestigious.

    You would think that the referees would recommend well written manuscripts… and everything else being equal, they will…

    Except that pompous language exists for a reason: it is meant to impress the reader.

    If you take a result and show that, ultimately, you can make it trivial… the referee might say “it is nice, but the problem was clearly not very hard”…

    So, at least in Computer Science, research papers often end up filled with complicated details. Very few of them are distilled to the essential parts.

    Authors respond to incentives: it is more important to impress the referee than to write well.

  • The second most important step for researchers is to get cited. You would think that well written work would get more citations… And there must be an effect: if people cannot quickly decipher what your work is about, they are less likely to cite you.

    However, people generally do not read the work they cite. They may scan the abstract, the conclusion… but rarely all of it.

    So papers containing a wide range of results, or more impressive-sounding claims, are probably more likely to be cited.

    The way out of this trap is to measure influence instead of citations. That is, you can reliably identify the references that are essential to follow-up work (see Zhu et al., 2015). Sadly, it requires a bit more work than merely counting citations.

To measure the relationship between writing quality and citations, Weinberger et al. (2015) reviewed the abstracts (and not the whole papers) of several research articles. Though they do not express it in this manner, we could say that the quality of the writing has little to do with impact: differing from the average paper by more than one standard deviation on a desirable feature may coincide with a variation in the number of citations of about 5%. Their paper also fails to address the fact that citation counts have high statistical dispersion: most papers get few citations while a few get many. So any statistical analysis must be done with extra care: a few individual articles can account for much of the average. You need to take their results with a grain of salt. It is worse than it sounds because your goal as a researcher is not to increase the citations one of your papers receives from 5 to 6 (a 20% gain!)… whether it is 5 or 6, it is still inconsequential… your goal is to have about 100 citations or more for your paper… and whether you hit 80, 100, or 120 citations is irrelevant.

Nevertheless, their work shows that good writing can often coincide with fewer citations… Indeed, they found that long abstracts made of long sentences containing many adverbs and complicated or superlative words tend to coincide with more citations. They also found that authors who stress the novelty of their results tend to be among the most cited.

Thus, at least according to Weinberger et al. (2015), improving your writing can have a small negative effect. This should come as no surprise to those who have long observed that academic writing is unnecessarily dense. Authors write this way because it gets the job done.

Weinberger et al. explain their result as follows…

Despite the fact that anybody in their right mind would prefer to read short, simple, and well-written prose with few abstruse terms, when building an argument and writing a paper, the limiting step is the ability to find the right article. For this, scientists rely heavily on search techniques, especially search engines, where longer and more specific abstracts are favored. Longer, more detailed, prolix prose is simply more available for search. This likely explains our results, and suggests the new landscape of linguistic fitness in 21st century science.

Search engines encourage us to write poorly? Do search engines favour results with long sentences and superlative words? I think not. In any case, to make this demonstration, the authors should repeat their survey with older papers, prior to the emergence of powerful academic search engines.

A much more likely phenomenon, in my opinion, is that when looking to quickly cite a reference, one seeks impressive-sounding papers.

I used a similar trick in high school. I wanted to stand apart and impress my teachers, so I would intentionally use a very rich vocabulary. I think it worked.

So, what should you do? If your goal is to be widely read, you should still write short sentences using simple words. If your goal is to impress strangers who will probably never read you, use long and impressive sentences.

I think that Weinberger et al. made their preference clear: ironically maybe, their paper is short, to the point and well written.

Basic email skills

If there is one skill that is needed in a modern office, it is email. By email, I do not refer to the specific Internet protocol. I refer to the general process of exchanging electronic text online.

We have had thousands of years to learn how to talk to each other. We know how to read each other. Our brains have evolved to cope with these intricate exchanges.

Email is much more difficult. Email is an emerging art that few master.

Here are some basic concepts:

  • Keep your emotions in check. Email does not care how you feel. It takes great care to decipher and communicate emotions accurately online. If you get angry or otherwise preoccupied with the emails themselves, you are doing it wrong. If you want to express your feelings, take a theatre class or create a YouTube channel. Email is not your therapist.
  • Every email you send is public. I would never share without permission a private email, but many others would. In many corporations, all emails are archived and can be reviewed by management. Messages you send through Facebook or Twitter are evidently public.

    When I started blogging, people asked me how I could share with the world my thoughts without care. These same people often did not think twice about sending an ugly email.

    Behave as if everyone is reading your emails and you will be more effective.

  • Debating is usually a waste of your time. There are debating clubs, but none that operate by email. If you want to try to convince people of the error of their ways, write bona fide articles.
  • No mass email. It might be OK to send an email to your tribe (fewer than twelve people), but any email to a larger group is flat-out wrong.

    If you are the recipient of an email sent to a lot of people, please do not hit “reply to all” without due consideration.

  • Short and infrequent. Bombarding someone with emails every day is not going to get you on their Christmas list. Long emails will not be read.
  • Keep your inbox clean. You are simply not supposed to have hundreds of messages in your inbox.
  • Not every email requires an answer. Though there might be social expectations that emails require responses… only send a response if it is genuinely useful. Also, if someone is not responding to your email, they are not “ignoring” you.
  • Not every email requires an immediate response. I have cultivated a habit of delaying my responses. If I have to respond to someone who does not know me well, I will send a quick warning that my response will not be immediate. Sometimes, after waiting a few days, no response is needed anymore. Other times, by waiting a bit, I have given myself time to produce a more thoughtful and engaging reply. It can take me weeks or months to answer some emails: I am not particularly worried about it.

To be creative, work alone


In his excellent book How to Fly a Horse, Ashton makes a case for working alone. He quotes Apple’s co-founder and technical genius Steve Wozniak:

Work alone. You’re going to be best able to design revolutionary products and features if you’re working on your own. Not on a committee. Not on a team.

This should be qualified: Wozniak did not work alone. What he means is that he designed his work alone.

Researchers such as Brooks have long advocated that design is a lonely task.

The idea that building up ideas is best done alone or in tiny teams flies in the face of many of our managerial practices:

  • companies like Google and Facebook leave little private space to their employees;
  • most research funding bodies specifically encourage collaboration… with entire funding programs devoted to it.

Yet we know that researchers in smaller laboratories are more productive (Carayol and Matt, 2006). There is also no evidence that collaboration with outside groups improves productivity (Abramo et al., 2009).

I believe that despite all of the evidence, our intuition about when good and innovative work happens is all wrong. If you cannot go for one hour in a quiet room to think, you are just responding. Your brain is not deeply engaged.

Yet we think too often that it is in this multitasking mode, where we have many things on our minds, solving twelve problems at a time, that we are being creative.

Time alone to think is often viewed as egotistical. I would argue that it is even seen as slightly shameful. Who are you to request time alone in a private office?

I think we have evolved a negative view of loneliness for good reasons. For our ancestors, being alone was being dead.

Our brains are marvellous computers that have evolved for sophisticated relationships. We have complex interactions with a few people every day. Our livelihood depends on these interactions. I think you would be right to model human beings as nodes in a computing cluster. We are fundamentally geared to relate frequently and deeply with our tribe.

For sure, there are a few people who cut off all lines of communication… but far fewer than you may think. Look at tenured professors… look at how few, at least in the sciences, write their papers alone. Further, look at how few write papers alone on work that has little to do with what other contemporary researchers do… Researchers are amazingly gregarious.

Most of my own work was done in small teams. I almost never work alone per se. I find it much more enjoyable to work with others. But my collaboration patterns are usually iterative:

  • Joe provides piece A;
  • I take time alone to study A, and after a time I provide B;
  • Joe takes B and validates it… maybe providing me with a revised version C…

Each iteration can take hours or days… But notice how the core of the work, the important pieces, is done by individuals working alone.

I have grown convinced over time that the reason we need to design alone is that it is difficult otherwise to reach a state of flow. On a good day, I might enter a state of flow for an hour or two. That is when I do my most important work. The rest of the day is spent answering emails, grading papers, reviewing articles, getting back to students, and so on.

To enter the state of flow, I need to ignore everything but the problem at hand. In a robust state of flow, I will forget to eat. It is difficult to enter this state with people around unless they are careful not to disturb you.

I like to compare a state of flow to a computer that shuts down all non-essential background processes. The entire CPU cache (i.e., your short-term memory) becomes dedicated to one problem and one problem only.

In a normal state, my brain has to think about many things… though I am not aware of it, I constantly check the time, review my agenda for the day, and so on. In a state of flow, all these background tasks are terminated.

To do your best work, you need to focus. To get this result, you cannot be, at the same time, in constant interaction with others. So, effectively, you have to be alone… at least when you are doing your important work.

Was life better in the 1970s?

People from my generation often complain that their parents were better off. They are often quick to dismiss the Internet and smartphones as irrelevant to their well-being.

Were they better off?

  1. Though it has recently peaked, the number of cars per person is higher than it was in the seventies. Current cars are much safer than they were.
  2. In percentage terms, home ownership was no higher in the seventies, and it was lower in the sixties, than it is today. Pre-1945, hardly anyone in a city owned his home.
  3. Average retirement age was higher in the 1970s than it is today. That is despite the fact that we have since made forced retirement illegal, and despite the fact that there are far fewer physically demanding jobs. Of course, pre-1945 few people retired and life expectancy was often lower than 65 years.
  4. Air quality has gotten better. Gasoline no longer contains lead.
  5. Far more people attend college, far more people have a college degree than in the 1970s.
  6. Though they have fluctuated quite a bit, unemployment rates in the 1970s and 1980s were no lower than they are today.
  7. Violent crime has greatly diminished. Car accidents have become less frequent and less fatal.

I could go on… On almost every measure that I can imagine, people are better off.

I have not even gotten started with the developing world… In the 1970s, all of China was starving. Today, young people in China proudly carry smartphones.

I also disagree strongly with the idea that the Internet and modern computer technology are irrelevant. As a kid, I had access to a handful of science books. Today, kids the same age have an almost embarrassing wealth of choices. As a kid, I watched whatever was on television… Today, I watch Dr. Who together with my boys, at a time of our choosing.

Would I go back to live in the 1970s? I would not. Why would you?

There is one thing that people are sure to bring up: inequality. Though the poor have gotten richer, the rich have supposedly gotten richer even faster. I say “supposedly” because people spend little time thinking about what wealth is. It is typically left undefined. We use various proxies, but it is hard to grasp what this means in reality. For example, are your health and your education part of your wealth? Are your skills a form of wealth?

If you are gay, a woman or a minority, you were more likely to be discriminated against in the 1970s. How do you factor this into your measure of wealth?

I would argue that most of our wealth is intangible. It is not cars and buildings that make us wealthy… You could destroy every building in the US… and though it would be tragic and create much misery, within twenty years, the country would have recovered much of the lost wealth. We know this from experience: Germany was entirely destroyed following the second world war. Its loss was almost incomprehensible. The country was broken in two. Yet within 15 years, West Germans recovered fully.

In any case, it is undeniable that while technology makes us richer, it also allows one individual to have a much greater impact. Without electronic recording, Céline Dion would not be known in every corner of the world. This cuts both ways… if you live in a remote location, you are made richer by your access to Céline Dion, but Céline Dion also benefits from this greater reach. The same logic applies to CEOs.

So it is the case that a great professional singer can earn a lot more than an average one today… whereas the gap was much smaller in the middle ages. Should you be annoyed? You should not since both singers are now better off… (trust me, you do not want to go back to the middle ages)

If you set aside the neo-Marxist terminology (e.g., inequality), what is hidden is an age-old vice: envy.

We hate it when we meet people who are better off than we are. We get distressed when we hear that they might be getting even better off. Some people would give up all of the gains we have made since the 1970s if they could be certain that nobody is richer than they are.

I know that some people would do it because they did. The last century was a massive experiment where half the world adopted socialism: the idea that everyone has to be equal, whatever the cost. It culminated in the Berlin Wall: some people sought to escape socialism, and so the defenders of socialism had them shot. Entire families were killed just because they tried to escape.

If you base your entire society on envy, it will be morally crippled.

I submit to you that the same holds at the individual level… if envy is what is driving you… you are morally bankrupt.

Here is a test: if you meet someone you went to school with… and this person has a much nicer car and much nicer house than you do… how do you feel? If you feel bad… is the problem with you or with capitalism?

Further reading: 26 charts and maps that show the world is getting much, much better.

Further thoughts: If you know that many people feel bad when you show off your big house or expensive sports car… and clearly, many people do… then why do it? It is fair enough to buy expensive shoes to pick up girls, but is it wise to buy the most expensive luxury car just because you can? Why are you posting a picture of your BMW on your Facebook page?

Evil abbreviations in programming languages

Programming language designers often abbreviate common function names. The benefits are sometimes dubious in an era where most programmers commonly use meaningful variable names like MyConnection or current_set as opposed to single letters like m or t.

Here are a few evil examples:

  • The C language provides the memcpy function to copy data. The clearer alternative memcopy would require only one extra keystroke.
  • To query for the length of an object like an array in Python and Go, we use the len function as in len(array). The expression length(array) would be clearer to most and require only three additional characters.
  • Though most languages use the instruction if or the equivalent to indicate a conditional clause, when it comes to “else if”, designers get creative. Ruby uses elsif, helpfully saving us from typing the character e. Python uses elif, saving us two keystrokes. PHP alone seems to get it right by using elseif.
  • In Python, we abbreviate string to str. Most languages seem to abbreviate Boolean as bool.

I am not opposed to a judicious use of abbreviations. However, by going too far, we create a jargon. We make the code harder to read for those unfamiliar with the language without providing any benefit to anyone. Let us not forget that source code is often the ultimate documentation of our ideas.

Credit: John Cook’s comment on G+ inspired this blog post.

Update: Commenters point out that memcpy had to be shortened due to technical limitations restricting the length of function names to six characters in the old days. Fair enough. However, common C functions that use more than six characters also look like alphabet soup: fprintf, strcspn, strncpy, etc.

Accelerating intersections with SIMD instructions

Most people have a mental model of computation based on the Turing machine. The computer does one operation at a time. For example, maybe it adds two numbers and outputs the result.

In truth, most modern processor cores are superscalar. They execute several instructions per CPU cycle (e.g., 4 instructions). That is above and beyond the fact that many processors have several cores.

Programmers should care about superscalarity because it impacts performance significantly. For example, consider an array of integers. You can compute the gaps between the integers, y[i+1]=x[i+1]-x[i], faster than you can recover the original values from the gaps, x[i+1]=y[i+1]+x[i]. That is because the processor can compute several gaps at once whereas it needs to recover the values in sequence (e.g., x[i] before x[i+1]).
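
To make the contrast concrete, here is a minimal sketch of the two loops in C++ (my own illustration; the function names are hypothetical):

    #include <cstddef>

    // Illustrative sketch (hypothetical function names).
    // Computing gaps: each y[i] reads only the untouched input x,
    // so the processor can work on several iterations at once.
    void compute_gaps(const int *x, int *y, std::size_t n) {
        for (std::size_t i = 1; i < n; i++) {
            y[i] = x[i] - x[i - 1];
        }
    }

    // Recovering the values: each x[i] needs the freshly computed x[i - 1],
    // a serial dependency chain that limits superscalar execution.
    void recover_values(int *x, const int *y, std::size_t n) {
        for (std::size_t i = 1; i < n; i++) {
            x[i] = x[i - 1] + y[i];
        }
    }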

Superscalar execution is truly a wonderful piece of technology. It is amazing that our processors can reorder and regroup instructions without causing any bugs. And though you should be aware of it, it is mostly transparent: there is no need to rewrite your code to benefit from it.

There is another great modern feature that programmers need to be aware of: most modern processors support SIMD instructions. Instead of, say, adding two numbers, they can add two vectors of integers together. Recent Intel processors can add eight 32-bit integers using one instruction (vpaddd).

It is even better than it sounds: SIMD instructions are superscalar too… so your processor could possibly add, say, sixteen 32-bit integers in one CPU cycle by executing two instructions at once. And it might squeeze in a couple of other instructions in the same CPU cycle!
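
For instance, here is what adding eight 32-bit integers at once looks like with AVX2 intrinsics (a sketch of my own, assuming a recent Intel processor and a compiler flag such as -mavx2):

    #include <immintrin.h>

    // Illustrative sketch: adds eight pairs of 32-bit integers with a single
    // SIMD instruction (vpaddd). Requires AVX2.
    void add_eight(const int *a, const int *b, int *out) {
        __m256i va = _mm256_loadu_si256((const __m256i *)a);
        __m256i vb = _mm256_loadu_si256((const __m256i *)b);
        _mm256_storeu_si256((__m256i *)out, _mm256_add_epi32(va, vb));
    }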

Vectorization is handy to process images, graphics, arrays of data, and so on. However, unlike superscalar execution, vectorization does not come for free. The processor will not vectorize the computation for you. Thankfully, compilers and interpreters do their best to leverage SIMD instructions.

However, we are not yet at the point where compilers will rewrite your algorithms for you. If your algorithm does not take vectorization into account, it may not be possible for the compiler to help you in this regard.

An important problem when working with databases or search engines is the computation of the intersection between sorted arrays. For example, given {1, 2, 10, 32} and {2, 3, 32}, you want {2, 32}.

If you assume that you are interested in arrays having about the same length, there are clever SIMD algorithms to compute the intersection. Ilya Katsov describes an elegant approach for 32-bit integers. If your integers fit in 16 bits, Schlegel et al. have similar algorithms using special string comparison functions available on Intel processors.

These algorithms are efficient, as long as the two input arrays have similar length… But life is not so easy. In many typical applications, you frequently need to compute the intersection between arrays having vastly different lengths. Maybe one array contains a hundred integers and the other one thousand. In such cases, you should fall back on a standard intersection algorithm based on a binary search (a technique sometimes called “galloping”).
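
For reference, here is a minimal sketch of such a galloping intersection in C++ (my own simplified version, not the code from our paper):

    #include <cstddef>

    // Illustrative sketch, not the code from the paper.
    // Finds the first index in a[pos..n) with a[index] >= key: gallop forward
    // by doubling the step size, then binary search within the last interval.
    static std::size_t gallop(const int *a, std::size_t pos, std::size_t n, int key) {
        std::size_t step = 1;
        while (pos + step < n && a[pos + step] < key) step *= 2;
        std::size_t lo = pos;
        std::size_t hi = (pos + step < n) ? (pos + step) : n;
        while (lo < hi) {
            std::size_t mid = lo + (hi - lo) / 2;
            if (a[mid] < key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    // Intersects a short sorted array with a much longer one; returns the
    // number of common elements written to out.
    std::size_t intersect_galloping(const int *small_arr, std::size_t small_n,
                                    const int *large_arr, std::size_t large_n,
                                    int *out) {
        std::size_t count = 0, pos = 0;
        for (std::size_t i = 0; i < small_n; i++) {
            pos = gallop(large_arr, pos, large_n, small_arr[i]);
            if (pos == large_n) break; // no candidates left in the long array
            if (large_arr[pos] == small_arr[i]) out[count++] = small_arr[i];
        }
        return count;
    }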

Or should you fall back? In a recent paper, SIMD Compression and the Intersection of Sorted Integers, we demonstrate the power of a very simple idea to design better intersection algorithms. Suppose that you are given the number 5 and you want to know whether it appears in the list {1,2,4,6,7,8,15,16}. You can try to do it by binary search, or do a sequential scan… or better yet, you can do it with a simple vectorized algorithm:

  • First represent your single number as a vector made entirely of this value: 5 becomes {5,5,5,5,5,5,5,5}. Intel processors can do this operation very quickly with one instruction.
  • Compare the two vectors {5,5,5,5,5,5,5,5} and {1,2,4,6,7,8,15,16} using one instruction. That is, you can check eight equalities at once cheaply. In this instance, I would get {false,false,false,false,false,false,false,false}. It remains to check whether the resulting vector contains a true value, which can be done using yet another instruction. (See the sketch after this list.)
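
In code, these two steps could look as follows with AVX2 intrinsics (again a sketch of my own, assuming the list is stored as a block of eight 32-bit integers):

    #include <immintrin.h>

    // Illustrative sketch: returns nonzero if key appears among the eight
    // 32-bit integers stored at block. Requires AVX2.
    int contains_eight(int key, const int *block) {
        __m256i vkey  = _mm256_set1_epi32(key);             // {key, key, ..., key}
        __m256i vdata = _mm256_loadu_si256((const __m256i *)block);
        __m256i eq    = _mm256_cmpeq_epi32(vkey, vdata);    // eight equality tests at once
        return _mm256_movemask_epi8(eq) != 0;               // does any lane match?
    }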

With this simple idea, we can accelerate a range of intersection algorithms with SIMD instructions. In our paper, we show that, on practical and realistic problems, you can double the speed of the state-of-the-art.

To learn more, you can grab our paper and check out our C++ code.

Reference:

  • Daniel Lemire, Nathan Kurz, Leonid Boytsov, SIMD Compression and the Intersection of Sorted Integers, Software: Practice and Experience, 2015. (arXiv:1401.6399)
  • Further reading: Efficient Intersections of compressed posting lists thanks to SIMD instructions by Leonid Boytsov.