Science and Technology links (February 16th, 2018)

  1. In all countries, in all years–without exception–girls did better than boys in academic performance (PISA) tests.
  2. Vinod Khosla said:

    There are, perhaps, a few hundred sensors in the typical car and none in the body. A single ad shown to you on Facebook has way more computing power applied to it than a $10,000 medical decision you have to make.

  3. The gender of an unknown user can be identified with an accuracy of over 95% from the way the user types.
  4. Citation counts work better than a random baseline (by a margin of 10%) in distinguishing important seminal research papers.
  5. By consuming vitamin B3 (or niacin), you can increase your body’s production of Nicotinamide Adenine Dinucleotide (NAD+ for short). It turns out that NAD+ supplementation normalizes key Alzheimer’s features (in mice). If I suffered from Alzheimer’s, I would not hesitate to take niacin supplements.
  6. The U.S. has never produced so much oil.
  7. According to Nature, birds living in large groups are smarter.
  8. A few years ago, we were told that the Pacific nation of Tuvalu would soon disappear due to climate change. In fact, for now, it is growing in size.

Science and Technology links (February 9th, 2018)

  1. We shed 50 million skin cells every day.
  2. A mutant crayfish reproduces by cloning. To my knowledge, this might be the largest animal to reproduce by cloning.

    Before about 25 years ago, the species simply did not exist (…) it has spread across much of Europe and gained a toehold on other continents. In Madagascar, where it arrived about 2007, it now numbers in the millions (…)

    I note two interesting aspects to this story. The first one is that it shows that, contrary to a common belief, new species are created even today. The second one is that it brings us back to an interesting puzzle. Cloning is a lot more efficient than sex for procreation. So why do most large animals use sex? See The Red Queen: Sex and the Evolution of Human Nature.

  3. Some evidence that moderate (but not heavy) alcohol consumption might be good for your brain. I still would not recommend you start drinking if you aren’t drinking right now.
  4. While the average is 106 boys born to every 100 girls, for vegetarian mothers the ratio is just 85 boys to 100 girls. In other words, being a vegetarian makes it much more likely that you will give birth to girls.
  5. Researchers can simulate a worm’s brain with a few artificial neurons.
  6. Elon Musk’s SpaceX company launched the most powerful rocket in the world:

    The Falcon Heavy is the world’s 4th highest capacity rocket ever to be built (…) Falcon Heavy was designed from the outset to carry humans into space, including the Moon and Mars (…) The Falcon Heavy was developed with private capital with Musk stating that the cost was more than $500 million. No government financing was provided for its development.

    The Verge has a nice documentary on YouTube.

  7. Mitochondria are the “power stations” of our cells. As we age, we tend to accumulate malfunctioning mitochondria, which might lead to various medical conditions. Researchers have found that a drug targeting mitochondria could improve cognition in old mice.
  8. Graphics processors are in high demand. Some of the best ones are made by NVIDIA. Year-over-year, NVIDIA’s full-year revenue increased 41% to finish at $9.71 billion in 2017.
  9. Using lasers, we found whole new Mayan settlements:

    The data reveals that the area was three or four times more densely populated than originally thought. “I mean, we’re talking about millions of people, conservatively,” says Garrison. “Probably more than 10 million people.”

  10. According to a recent research article, vitamin D-3 has the potential to significantly reverse the damage that high blood pressure, diabetes, atherosclerosis, and other diseases inflict on the cardiovascular system.
  11. A vaccine entirely cleared mice of cancer.

Don’t underestimate the nerds

I’m a little nerdy. According to my wife, I even look like a nerd. I am not very big. I have a long resume posted online, and I’ll proudly post my follower count, but if you meet me in person, I am unlikely to come across as “impressive”. I don’t talk using “big words”. I have been told that I lack “vision”. Given a choice between spending time with powerful people getting their attention, and reading a science article… I will always go for the latter.

I’m not at all modest, but I am humble. I get most things wrong, and I will gladly advertise my failures.

I’m lucky in that I have a few like-minded colleagues. I have a colleague, let us call her “Hass”. She gave us a talk about power laws. (The mathematical kind.) Who spends their lunchtime talking about power laws and probabilistic distributions?

We do.

However, if you have been deep down in the bowels of academia… You will find another animal. You have “political professors” whose main game is to achieve high status in the most visible manner. Academia rewards this kind of behavior. If you can convince others that you are important, well regarded and that you do great work for humanity, you will receive lavish support. It makes sense given what business schools are into: delivering prestige.

If you visit a campus, you might be surprised at how often computing labs are empty, no professor to be found. Because of who I am, I would never ask for space unless I really needed it. But, see, that’s not how political animals think… to them, having space is a matter of status.

Nerds are, at best, part-time political animals. It would seem that nerds are weak. Are they?

My view is that nerds are almost a different species. Or, at least, a subspecies. They do signal strength, but not by having a luxurious car, a big house, a big office, a big title.

I remember meeting with the CEO of a company that was doing well. The CEO kept signaling to me. He talked endlessly about his prestigious new car. He was sharply dressed in what was obviously a very expensive suit. He kept telling me about how many millions they were making. Yet we were in my small office, in a state university. He kept on signaling… and you know how I felt in the end? Maybe he expected me to feel inferior to him. Yet I lost interest in anything he had to tell me. He wanted me to review some technology for them, but I discouraged him.

Big titles, displays of money… those do not impress me. If you signal strength through money alone, I’m more likely to pity you.

If Linus Torvalds were to meet Bill Gates, do you think that Linus would be under Bill in the nerdom hierarchy? I doubt it. I have no idea how much money Linus has, and the fact that nobody cares should be a clue.

What did my colleague Hass do? She came and presented a kick-ass nerdy presentation. The kind of stuff you cannot make up if you don’t know what you are talking about. She displayed strength, strength that I recognize. I think everyone in the room saw it. Yet she did not wear expensive clothes and she did not advertise big titles.

My wife recently taught me how to recognize signaling between cats. You could live all your life with cats and never realize how they broadcast signals and strength.

It is a mistake to think that the introverted nerds are weak. This is a very common mistake. I once bought a well-rated book on introverts, written by an extrovert. The whole book was about how introverts should face their fears. The author clearly thought that we were weak, in need of help somehow.

You are making a mistake if you think that my colleague Hass is weak. She could kick your nerd ass anytime.

Science and Technology links (February 2nd, 2018)

  1. Most mammals, including human beings, age according to a Gompertz curve. It is a fancy way of saying that your risk of death goes up exponentially with age. Naked mole rats are mammals that do not age, in the following sense:

    unlike all other mammals studied to date, and regardless of sex or breeding-status, the age-specific hazard of mortality did not increase with age, even at ages 25-fold past their time to reproductive maturity

  2. It seems that the brain of male infants differs from that of female infants:

    We show that brain volumes undergo age-related changes during the first month of life, with the corresponding patterns of regional asymmetry and sexual dimorphism. Specifically, males have larger total brain volume and volumes differ by sex in regionally specific brain regions, after correcting for total brain volume.

  3. The American National Institutes of Health are launching a major research program in genome editing ($190 million over six years).
  4. It appears that many of us are deficient in magnesium, and that’s an important driver for cardiovascular diseases. Most of us will die of a cardiovascular disease (given current medical knowledge).
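The Gompertz curve mentioned in the first item can be made concrete with a few lines of code; the parameters below are illustrative assumptions, not fitted values:

```javascript
// Gompertz law: the hazard (instantaneous risk of death) grows
// exponentially with age: h(t) = a * exp(b * t).
function gompertzHazard(t, a, b) {
  return a * Math.exp(b * t);
}

// If the mortality rate doubles every 8 years (roughly the human case),
// then b = ln(2) / 8. The baseline hazard a = 1e-4 is made up.
const b = Math.log(2) / 8;
const h30 = gompertzHazard(30, 1e-4, b);
const h38 = gompertzHazard(38, 1e-4, b);
// h38 / h30 is 2: eight more years of age doubles the risk of death.
```

A non-aging animal, in this framing, is one whose hazard stays flat as t grows.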

Picking distinct numbers at random: benchmarking a brilliant algorithm (JavaScript edition)

Suppose you want to choose m distinct integers at random in the interval [0, n). How would you do it quickly?

I have a blog post on this topic dating back to 2013. This week I came across Adrian Colyer’s article where he presents a very elegant algorithm to solve this problem, attributed to Floyd by Bentley. The algorithm was presented in an article entitled “A sample of brilliance” in 1987.

Adrian benchmarks the brilliant algorithm and finds it to be very fast. I decided to revisit Adrian’s work. Like Adrian, I used JavaScript.

The simplest piece of code to solve this problem is a single loop…

// randInt(k) is assumed to return a uniform random integer in [0, k)
let s = new Set();
while (s.size < m) {
  s.add(randInt(n));
}

The algorithm is “non-deterministic” in the sense that you will generally loop more than m times to select m distinct integers.

The brilliant algorithm is slightly more complicated, but it always loops exactly m times:

let s = new Set();
for (let j = n - m; j < n; j++) {
  const t = randInt(j + 1); // uniform in [0, j], inclusive of j
  s.add(s.has(t) ? j : t);
}

It may seem mysterious, but it is actually an intuitive algorithm, as Adrian explains in his original article.
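If you want to convince yourself that the loop always returns exactly m distinct values, a quick self-contained check is easy to write (randInt here is the assumed helper, uniform in [0, k)):

```javascript
// Minimal helper assumed throughout this post
function randInt(k) {
  return Math.floor(Math.random() * k);
}

// Floyd's sampling loop: every iteration adds exactly one new element,
// since j itself can never already be in the set.
function floydSample(m, n) {
  const s = new Set();
  for (let j = n - m; j < n; j++) {
    const t = randInt(j + 1); // uniform in [0, j], inclusive of j
    s.add(s.has(t) ? j : t);
  }
  return s;
}

const sample = floydSample(100, 1000);
// sample contains exactly 100 distinct integers, all in [0, 1000)
```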

It seems like the second algorithm is much better and should be faster. But how much better is it?

Before I present my results, let me port my 2013 algorithm over to JavaScript. Firstly, we introduce a function that can generate the answer using a bitset instead of a generic JavaScript Set.

function sampleBitmap(m, n) {
  // FastBitSet comes from the FastBitSet.js library;
  // checkedAdd returns 1 when the value was not already in the set
  const s = new FastBitSet();
  let cardinality = 0;
  while (cardinality < m) {
    cardinality += s.checkedAdd(randInt(n));
  }
  return s;
}

Bitsets can be much faster than generic sets; see my post JavaScript and fast data structures.

Secondly, consider the fact that when you need to generate more than m = n/2 integers in the range [0, n), you can, instead, generate n – m integers, and then negate the result:

function negate(s, n) {
  const news = new FastBitSet();
  let i = 0;
  // add every integer in [0, n) that is not in s
  s.forEach(j => {
    while (i < j) {
      news.add(i);
      i++;
    }
    i = j + 1;
  });
  while (i < n) {
    news.add(i);
    i++;
  }
  return news;
}

My complete algorithm is as follows:

function fastsampleS(m, n) {
  if (m > n / 2) {
    // sample the complement instead, then negate
    const negatedanswer = fastsampleS(n - m, n);
    return negate(negatedanswer, n);
  }
  if (m * 1024 > n) {
    // dense case: use the bitset-based sampler
    return sampleBitmap(m, n);
  }
  return sampleS(m, n);
}
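For reference, sampleS above denotes the naive Set-based loop from the beginning of the post, wrapped as a function; randInt is the same assumed helper returning a uniform integer in [0, k):

```javascript
// Assumed helper: uniform random integer in [0, k)
function randInt(k) {
  return Math.floor(Math.random() * k);
}

// The naive algorithm from the start of the post, as a function
function sampleS(m, n) {
  const s = new Set();
  while (s.size < m) {
    s.add(randInt(n)); // may loop more than m times due to duplicates
  }
  return s;
}
```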

So we have three algorithms, a naive algorithm, a brilliant algorithm, and my own (fast) version. How do they compare?

| m | n | naive | brilliant | my algo |
|---:|---:|---:|---:|---:|
| 10,000 | 1,000,000 | 1,200 ops/sec | 1,000 ops/sec | 4,000 ops/sec |
| 100,000 | 1,000,000 | 96 ops/sec | 80 ops/sec | 700 ops/sec |
| 500,000 | 1,000,000 | 14 ops/sec | 14 ops/sec | 120 ops/sec |
| 750,000 | 1,000,000 | 6 ops/sec | 8 ops/sec | 80 ops/sec |
| 1,000,000 | 1,000,000 | 0.4 ops/sec | 5 ops/sec | 200 ops/sec |

So the brilliant algorithm does not fare better than the naive algorithm (in my tests), except when you need to select more than half of the values in the interval. However, in that case, you should probably optimize the problem by selecting the values you do not want to pick.

My fast bitset-based algorithm is about an order of magnitude faster. It relies on the FastBitSet.js library.

My complete source code is available.

More of Vinge’s “predictions” for 2025…

In my previous post, I reviewed some of the predictions made in the famous science-fiction book Rainbows End. The book was written in 2006 by Vernor Vinge and set in 2025.

The book alludes to a massive book digitization effort under way. When the book was written, Google had initiated its book digitization effort. It is impossible to know exactly how far along Google is in its project, but in 2013 it reported having digitized about a quarter of all books ever published. Google plans to have digitized most books ever published by 2020. This makes Vernor Vinge a pessimist: it seems all but certain that by 2025, most books will be available electronically. Sadly, most books won’t be available for free, but that has more to do with copyright law than technology.

It is easy to anticipate social outcry at some advances. Vinge imagined that the digitization of books would be fiercely resisted… but nothing of the sort happened. Google faced lawsuits… but no mob in the streets. That’s not to say that advances are never met with angry mobs. Genetic engineering is resisted fiercely, especially in Europe. Again, though, what gets people down in the streets is hard to predict.

What is interesting to me is that this massive book digitization effort has not had a big impact. Even if we had free access to all of the world’s literature, I doubt most people would notice. Mostly, people do not care very much about old books. Wikipedia is a much bigger deal.

And this makes sense in a fast-evolving civilization… It is not that we do not care about, say, history… it is just that most of us do not have time to dig into old history books… what we are looking for are a few experts who have this time and can report back to us with relevant information.

Will artificial intelligence eventually be able to dig into all these old books and report back to us in a coherent manner? Maybe. For now, much hope has been invested in digital humanities… and I do not think it taught us much about Shakespeare.

Rainbows End depicts delivery drones. Autonomous flying drones able to make deliveries must have sounded really far-fetched in 2006. The main difficulty today is regulatory: we have the technology to build autonomous drones that take a package and deliver it hundreds of meters away to a drop zone. Battery power is an issue, but drones can play relay games. Amazon’s plan for delivery drones is right on target to give us this technology by 2025. Barring short-sighted government regulation, it seems all but certain that some areas of the USA will have routine drone deliveries by 2025.

Though Vernor Vinge is often depicted as an extremely optimistic technologist, many of his fantastic visions from the turn of the century are coming right on schedule. We are also reminded of how long ten years can be in technology… Autonomous drones went from science fiction and advanced research projects to being available for sale online in ten years. Not everything follows this path… some “easy problems” turn out to be much harder than we anticipated. But surprisingly often, hard problems are solved faster than expected.

JavaScript and fast data structures: some initial experiments

Two of my favorite data structures are the bitset and the heap. The latter is typically used to implement a priority queue.

Both of these data structures come by default in Java. In JavaScript, there is a multitude of implementations, but few, if any, are focused on offering the best performance. That’s annoying because these data structures are routinely used to implement other fast algorithms. So I did what all programmers do, I started coding!

I first implemented a fast heap in JavaScript called FastPriorityQueue.js. As a programmer, I found that JavaScript was well suited to the task. My implementation feels clean.

How does it compare with Java’s PriorityQueue? To get some idea, I wrote a silly Java benchmark. The result? My JavaScript version can execute my target function over 27,000 times per second on Google’s V8 engine whereas Java can barely do it 13,000 times. So my JavaScript smokes Java in this case. Why? I am not exactly sure, but I believe that Java’s PriorityQueue implementation is at fault. I am sure that a heap implementation in Java optimized for the benchmark would fare much better. But I should point out that my JavaScript implementation uses far fewer lines of code. So bravo for JavaScript!
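The heap behind such a priority queue is simple enough to sketch in a few lines. The following is a generic binary min-heap illustration, not the actual FastPriorityQueue.js code:

```javascript
// A binary min-heap stored in a flat array: the parent of index i is
// (i - 1) >> 1, its children are 2i + 1 and 2i + 2.
class MinHeap {
  constructor() { this.a = []; }
  add(x) {
    const a = this.a;
    let i = a.length;
    a.push(x);
    // sift up: swap with the parent while the new element is smaller
    while (i > 0) {
      const p = (i - 1) >> 1;
      if (a[p] <= a[i]) break;
      [a[p], a[i]] = [a[i], a[p]];
      i = p;
    }
  }
  poll() {
    const a = this.a;
    const top = a[0];
    const last = a.pop();
    if (a.length > 0) {
      a[0] = last;
      // sift down: swap with the smaller child while out of order
      let i = 0;
      for (;;) {
        const l = 2 * i + 1, r = l + 1;
        let smallest = i;
        if (l < a.length && a[l] < a[smallest]) smallest = l;
        if (r < a.length && a[r] < a[smallest]) smallest = r;
        if (smallest === i) break;
        [a[i], a[smallest]] = [a[smallest], a[i]];
        i = smallest;
      }
    }
    return top;
  }
}
```

Both add and poll run in logarithmic time, which is what makes the heap the standard backing structure for a priority queue.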

I also wrote a fast bitset implementation in JavaScript. This was more difficult. JavaScript does not have any support for 64-bit integers as far as I can tell, though it supports arrays of 32-bit integers (Uint32Array). I made do with what JavaScript had, and I published the FastBitSet.js library. How does it compare against Java? One benchmark of interest is the number of times you can compute the union between two bitsets (generating a new bitset in the process). In Java, I can do it nearly 3 million times a second. The JavaScript library appears limited to 1.1 million times per second. That’s not bad at all… especially if you consider that JavaScript is an ill-suited language for implementing a bitset (i.e., no 64-bit integers). When I tried to optimize the JavaScript version, to see if I could get it closer to the Java version, I hit a wall. At least with Google’s V8 engine, creating new arrays of integers (Uint32Array) is surprisingly expensive and seems to have nothing to do with just allocating memory and doing basic initialization. You might think that there would be some way to quickly copy a Uint32Array, but it seems to be much slower than I would expect.

To illustrate my point, if I replace my bitset union code…

answer.words = new Uint32Array(answer.count);
for (var k = 0; k < answer.count; ++k) {
   answer.words[k] = t[k] | o[k];
}

by just the allocation…

answer.words = new Uint32Array(answer.count);

… the speed goes from 1.1 million times per second to 1.5 million times per second. This means that I have no chance to win against Java. Roughly speaking, JavaScript seems to allocate arrays about an order of magnitude slower than it should. That’s not all bad news. With further tests, I have convinced myself that if we can just reuse arrays, and avoid creating them, then we can reduce the gap between JavaScript and Java: Java is only twice as fast when working in-place (without creating new bitsets). I expected such a factor of two because JavaScript works with 32-bit integers whereas Java works with 64-bit integers.
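To illustrate the point about reuse, an in-place union can be sketched as follows; this assumes both word arrays have already been sized to the same length (the unionInPlace name is mine, not part of FastBitSet.js):

```javascript
// OR the words of one bitset into another, reusing the existing array
// instead of allocating a new Uint32Array for the result.
function unionInPlace(words, otherWords) {
  for (let k = 0; k < words.length; k++) {
    words[k] |= otherWords[k];
  }
  return words;
}

const a = new Uint32Array([0b0101, 0]);
const o = new Uint32Array([0b0011, 1]);
unionInPlace(a, o); // a is now [0b0111, 1]
```

Since no allocation happens, the loop runs at essentially memory speed, which is why the in-place benchmark closes most of the gap with Java.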

What my experiments have suggested so far is that JavaScript’s single-threaded performance is quite close to Java’s. If Google’s V8 could gain support for 64-bit integers and faster array creation/copy, it would be smooth sailing.

Update: I ended up concluding that typed arrays (Uint32Array) should not be used. I switched to standard arrays for better all around performance.

Links to the JavaScript libraries:

  • FastPriorityQueue.js
  • FastBitSet.js

Foolish enough to leave important tasks to a mere human brain?

To the ancient Greeks, the male reproductive organ was mysterious. They had this organ that can expand suddenly, then provide the seed of life itself. Today, much of biology remains uncatalogued and mysterious, but the male reproductive organ is now fairly boring. We know that it can be cut (by angry wives) and sewn back in place, apparently with no loss of function. As for providing the seed of life, artificial insemination is routine both in animals (e.g., cows) and human beings. In fact, by techniques such as cloning, we can create animals, and probably even human beings, with no male ever involved.

If we go back barely more than a century, flight was mysterious. Birds looked slightly magical. Then a couple of bicycle repairmen, who had dropped out of high school, built the first airplane. Today, I would not think twice about boarding a plane with hundreds of other people and flying over the ocean in a few hours… something no bird could ever do.

This is a recurring phenomenon: we view something as magical, and then it becomes a boring mechanism that students learn in textbooks. I call it the biological-supremacy myth: we tend to overestimate the complexity of anything biology can do… until we find a way to do it ourselves.

Though there is still much we do not know about even the simplest functions of our body, the grand mystery remains our brain. And just like before, people fall prey to the biological-supremacy myth. Our brains are viewed as mythical organs that are orders of magnitude more complex than anything human beings could create in this century or the next.

We spend a great deal of time studying the brain, benchmarking the brain, in almost obsessive ways. Our kids spend two decades being tested, retested, trained, retrained… often for the sole purpose of determining the value of the brain. Can’t learn calculus very well? Your brain must not be very smart. Can’t learn the names of the state capitals? Your brain must be slightly rotten.

In the last few years, troubles have arisen for those who benchmark the brain. I can go to Google and ask, in spoken English, for the names of the state capitals, and it will give them to me, faster than any human being could. If I ask Google “what is the derivative of sin x”, not only does it know the answer, it can also point to complete derivation of the result. To make matters worse, the same tricks work anytime, anywhere, not just when I am at the library or at my desk. It works everywhere I have a smartphone, which is everywhere I might need calculus, for all practical purposes.

What is fascinating is that as we take down the brain from its pedestal, step by step, people remain eager to dismiss everything human-made as massively inferior:

  • “Sure, my phone can translate this Danish writing on the wall for me, but it got the second sentence completely wrong. Where’s your fantastic AI now?”
  • “Sure, I can go to any computer and ask Google, in spoken English, where Moldova is, and it will tell me better than a human being could… But when I ask it when my favorite rock band was playing again, it cannot figure out what my favorite rock band was. Ah! It is a joke!”

A general objection regarding the brain is that there is so much we do not know. As far as I can tell, we do not know how the brain transforms sounds into words, and words into ideas. We know which regions of the brain are activated, but we do not fully understand how even individual neurons work.

People assume that to surpass nature, we need to fully understand it and then fully reproduce it. The Wright brothers would have been quite incapable of modeling bird flight, let alone reproducing it. And a Boeing looks like no bird I know… and that’s a good thing… I would hate to travel on top of a giant mechanical bird.

Any programmer will tell you that it can be orders of magnitude easier to reprogram something from scratch, rather than start from spaghetti code that was somehow made to work. We sometimes have a hard time matching nature, not because nature was so incredibly brilliant… but rather because, as an engineer, nature is a terrible hack: no documentation whatsoever, and an “if it works, it is good enough” attitude.

This same objection, “there is so much we do not know”, is used everywhere by pessimists. Academics are especially prone to fall back on this objection, because they like to understand… But, of course, all the time, we develop algorithms and medical therapies that work, without understanding everything about the problem. That’s the beautiful thing about the world we live in: we can act upon it in an efficient manner without understanding all of it.

Our puny brains may never understand themselves, but that does not make our brains wonderful and mysterious… it is more likely the case that our brains are a hack that works well enough, but that is far from the best way to achieve intelligence.

Another mistake people make is to assume that evolution is an optimization process that optimizes for what we care about as human beings. For centuries, people thought that if we were meant to fly, we would have wings. Evolution did not give us wings, not as a sign that we couldn’t fly… but simply because there was no evolutionary path leading to monkeys with wings.

Similarly, there is no reason to believe that evolution optimized human intelligence. It seems that other human species had larger brains. Our ancestors had larger brains. Several monkeys have photographic memory, much better strength-to-mass ratios and better reflexes. The human body is nothing special. We are not the strongest, fastest and smartest species to ever roam the Earth. It is likely that we came to dominate the animal kingdom because, as a species, we have a good mix of skills, and as long as we stay in a group, we can take down any other animal because we are the best social coordinators among mammals.

Yes, it is true that evolution benefited from a lot of time… But that’s like asking a programmer to tweak a piece of code randomly until it works. If you give it enough time, the result will work. It might even look good from the outside. But, inside, you have a big pile of spaghetti code. It is patiently tuned code, but still far from optimality from our point of view.

The Wright brothers were initially mocked. This reassured the skeptics that believed that mechanical flight was a heresy. But, soon after, airplanes flew in the first world war.

In 20 years, we will have machines that surpass the human brain in every way that matters to us. It will look nothing like a human brain… probably more like a Google data warehouse at first… And then we will be stuck with the realization that, from our reproductive organs all the way to our brains, we are nothing special.

Many people refuse to believe that we will ever build machines that are better than us in every way. And they are right to be scared, because once you invent a machine that is smarter than you are, you have no choice: you have to put it in charge.

Human beings know that they are somehow irrelevant in the grand scheme of things. I write this blog post using a brain that consumes maybe 20 to 30 Watts, with the bulk of my neurons actually invested in controlling my body, not thinking abstractly. In a few decades, it will be trivial to outsmart me. And then I will be back to being an old, boring monkey… no longer a representative of the smartest species on Earth.

Of course, just because we do not need the male organ to procreate does not mean that people stop having sex. The birds did not stop flying when we invented the airplane. Television did not mean the end of radio. The Internet does not mean the end of paper. Hopefully, my species will make use of its brains for many decades, many centuries… but soon enough, it will seem foolish to leave important decisions and tasks to a mere human brain.

Some of this future is already here.

Could big data and wearables help the fight against diseases?

Biologists and medical researchers are used to drinking data with a straw. Doctors measure heart rate, weight and blood pressure, one at a time, at a high cost. When patients suffer from serious diseases, like cancer, measurements are even more expensive. To make matters worse, measurements are usually not shared and reused. In fact, even the patients themselves can have a hard time accessing their own data.

How do we get medical data for research? Mostly through clinical trials or one-off studies. These are excessively expensive, narrowly focused, with often very few subjects. Out of all patients suffering from serious diseases, only a small percentage will ever contribute any research data points.

Today, nearly every aspect of cancer care is based on information gleaned from the roughly 3% of patients who participate in clinical trials. But new health technologies in development offer the ability to learn from every patient. These big data tools make it possible to aggregate, analyze, and learn from a wide range of medical data—electronic health records, genetic test results, and more—while protecting the security and confidentiality of a patient’s individual data. (Masters et al., 2015)

If my car breaks down and I bring it to the garage, they can talk to the onboard computer and have much of the relevant data necessary for a diagnosis. If I were to break down in the middle of writing this blog post, the hospital would have almost no data on me.

People are more complicated than cars. Nevertheless, it seems that we are at an inflection point where much will soon become possible.

  • We have entered the era of wearable computing. Everyone is wearing a computer these days, from Syrian refugees to elderly Alzheimer’s patients. These devices range from smartphones and smartwatches all the way to activity trackers (e.g., Fitbit). We can design smart fabrics, smart glasses… All these devices are constantly connected to the Internet and have more than enough power to process the data in situ if needed.
  • The range of non-invasive medical measurements that one can take continuously is expanding with every year that passes. Not long ago, just measuring your heartbeat in real time required annoying straps… Yet, today, anyone with an Apple Watch gets real-time heart rate monitoring. In case of cardiac problems, we can even set people up with constant 24-hour ECG monitoring if needed, and the result is reliable and practical. Google has designed glucose-tracking lenses, and they are working on cancer-tracking devices. There are apps available right now that can monitor your skin for cancer. Qualcomm has set up the Tricorder X Prize, which aims to build a Star Trek tricorder for mobile medical diagnostics. South Korean researchers have designed a brassiere that can detect breast cancer.
  • We have a fantastic cloud infrastructure that is secure and scalable. For all practical purposes, one can consider that we have access to infinite storage and computational power. If we had collected all possible data on you, and you were diagnosed with some problem, it would be a trivial matter to quickly identify similar people who had the same problem in the past, and to review the therapies that worked for them.
  • We know how to process the data in clever ways. Scientists can detect a stroke event from simple activity tracking.

So it seems that we should be entering a new era.

Conclusion: We spent the last decade with our smartest kids working in marketing for companies like Facebook. I hope that, in the next decade, they will apply their computer skills to curing the sick.

The “consensus” is sometimes wrong

Anyone who has a critical mind and who attended college long enough, knows not to trust textbooks. They are full of mistakes. Because textbooks tend to copy each other, you cannot even trust a fact that appears in multiple textbooks. Mistakes reproduce.

Some mistakes are conceptual. For example, hardly anyone ever needs calculus over continuous domains, but it still gets taught as if nearly everyone needed it. But many mistakes are also factual.

This is made worse by the fact that human beings are very sensitive to arguments from authority: if someone important said or wrote something, it must be true, surely? And influential people are not immune to this fallacy, so they sometimes fall like dominoes, quickly embracing the wrong ideas… leaving all of us stuck with a mistake.

From at least 1921 up to 1955, it was widely believed that human DNA was made of 24 pairs of chromosomes. This came about because a distinguished zoologist called Theophilus Shickel Painter estimated that this must be the case.

Then, in 1955, a relatively undistinguished research fellow (Joe Hin Tjio), who did not even have a PhD at the time, recognized the mistake for what it was and published a short paper in a relatively modest journal that dared to contradict the established wisdom.

This was enough, in this case, to shift our view, but… think about the fact that there were countless researchers at the time who were better equipped, better funded and more established. None of them dared to question the textbooks. It took someone like Tjio.

There are countless such examples, but we are quick to dismiss them as they do not fit well in our view of the world. Yet I think we should always keep in mind Burwell’s famous quote:

“Half of what we are going to teach you is wrong, and half of it is right. Our problem is that we don’t know which half is which.” (Charles Sidney Burwell)