Science and Technology links (January 9th 2021)

  1. The Earth is spinning faster and faster: The 28 fastest days on record (since 1960) all occurred in 2020, with Earth completing its rotation about its axis milliseconds faster than average.
  2. We are soon getting a new Wi-Fi standard called Wi-Fi 6: it supports data transmission at over 1 GB/s, nearly three times the speed of previous standards.
  3. Eli Dourado predicts:

    By the middle of the decade, augmented reality will be widely deployed, in the same way that smart watches are today. Glasses will be computing devices. Every big tech company has a glasses project at a relatively mature stage in the lab today. The need for the glasses to understand context could result in much smarter digital assistants than today’s Siri, Alexa, and so on.

    I agree with Dourado.

  4. Vitamin D3 may reduce the risk of developing advanced cancer.
  5. High-tech running shoes improve elite marathon performance.
  6. In our brains, neurons are connected together by their dendrites. It seems that, at least in human beings, large dendrites are capable of computation on their own.
  7. A study of 145 journals in various fields, spanning 1.7 million authors, revealed that there was no obvious bias against authors with female names and, in fact, female authors might even be slightly favored by referees. Such a study does not show that sexism does not exist. It does not show that journals have never been biased against female authors.

Memory access on the Apple M1 processor

When a program is mostly just accessing memory randomly, a standard cost model is to count the number of distinct random accesses. The general idea is that memory access is much slower than most other computational tasks.

Furthermore, the cost model can be extended to count “nearby” memory accesses as free. That is, if I read a byte at memory address x and then I read a byte at memory address x+1, I can assume that the second byte comes “for free”.

This naive memory-access model is often sensible. However, you should always keep in mind that it is merely a model. A model can fail to predict real performance.

How might it fail? A CPU core can issue multiple memory requests at once. So if I need to access 7 memory locations at once, I can issue 7 memory requests and wait for them. It is likely that waiting for 7 memory requests is slower than waiting for a single memory request, but is it likely to be 7 times slower?

The latest Apple laptop processor, the M1, apparently has a lot of memory-level parallelism. It looks like a single core has about 28 levels of memory parallelism, and possibly more.

Such a high degree of memory-level parallelism makes it less likely that our naive random-memory model applies.

To test it out, I designed the following benchmark where I compare three functions. The first one just grabs pairs of randomly selected bytes and computes a bitwise XOR between them before adding the result to a counter:

  for(size_t i = 0; i < 2*M; i+= 2) {
    answer += array[random[i]] ^ array[random[i + 1]];
  }

We compare against a 3-wise version of this function:

  for(size_t i = 0; i < 3*M; i+= 3) {
    answer += array[random[i]] ^ array[random[i + 1]] 
              ^ array[random[i + 2]];
  }

Our naive memory-access cost model predicts that the second function should be 50% more expensive. However, many other models (such as a simple instruction count) would also predict a 50% overhead.

To give our naive memory-access model a run for its money, let us throw in a 2-wise version that also accesses nearby values (with one-byte offset):

  for(size_t i = 0; i < 2*M; i+= 2) {
    int idx1 = random[i];
    int idx2 = random[i + 1];
    answer += array[idx1] ^ array[idx1 + 1] 
           ^ array[idx2]  ^ array[idx2 + 1];
  }

Our naive memory-access cost model would predict that the first and last functions should have about the same running time while the second function should be 50% more expensive.

Let us measure it out. I use a 1GB array and I report the average time spent in nanoseconds on each iteration.
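A minimal harness along these lines might look as follows; this is only a sketch with assumed array size and index generation (the actual benchmark code is linked below):

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

int main() {
  const size_t N = size_t(1) << 30;  // 1 GB array of bytes (assumed size)
  const size_t M = 1000000;          // number of pairs per pass (assumed)
  std::vector<uint8_t> array(N, 1);
  std::vector<uint32_t> random(3 * M);
  std::mt19937 gen(1234);
  std::uniform_int_distribution<uint32_t> dis(0, uint32_t(N - 2));
  for (auto &r : random) { r = dis(gen); }
  uint64_t answer = 0;
  auto start = std::chrono::steady_clock::now();
  // the 2-wise function: two independent random accesses per iteration
  for (size_t i = 0; i < 2 * M; i += 2) {
    answer += array[random[i]] ^ array[random[i + 1]];
  }
  auto stop = std::chrono::steady_clock::now();
  double ns = std::chrono::duration<double, std::nano>(stop - start).count() / M;
  std::printf("2-wise: %.1f ns per iteration (checksum %llu)\n", ns,
              (unsigned long long)answer);
  return 0;
}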

2-wise 8.9 ns
3-wise 13.0 ns
2-wise (nearby) 12.5 ns

At first glance, our naive memory-access model is validated: the 3-wise function is 46% more expensive than the 2-wise function. Yet we should not be surprised: most reasonable models would make such a prediction since, in almost every way, the function does 50% more work.

It is more interesting to compare the two 2-wise functions… the last one is 40% more expensive than the first 2-wise function. It contradicts our prediction. And so, at least in this instance, our simple memory-access cost model fails us on the Apple M1 processor.


  1. My source code is available. The run-to-run variability is relatively high on such a test, but the conclusion is robust, on my Apple M1 system.
  2. I posted the assembly online.
  3. Importantly, I do not predict that other systems will follow the same pattern. Please do not run this benchmark on your non-M1 PC and expect comparable results.
  4. This benchmark is meant to be run on an Apple MacBook with the M1 processor, compiled with Apple’s clang compiler. It is not meant to be used on other systems.

Peer-reviewed papers are getting increasingly boring

The number of researchers and peer-reviewed publications is growing exponentially. It has been estimated that the number of researchers in the world doubles every 16 years and that the number of research outputs is increasing even faster.

If you accept that published research papers are an accurate measure of our scientific output, then we should be quite happy. However, Cowen and Southwood take an opposing point of view and represent this growth as a growing cost without associated gains.

    (…) scientific inputs are being produced at a high and increasing rate, (…) It is a mistake, however, to infer that increases in these inputs are necessarily good news for progress in science (…) higher inputs are not valuable per se, but instead they are a measure of cost, namely how much is being invested in scientific activity. The higher the inputs, or the steeper the advance in investment, presumably we might expect to see progress in science be all the more impressive. If not, then perhaps we should be worried all the more.

So are these research papers that we are producing in greater numbers… the kind of research papers that represent real progress? Bhattacharya and Packalen conclude that though we produce more papers, science itself is stagnating because of worsened incentives which focus research on low-risk/no-reward ventures as opposed to genuine progress:

This emphasis on citations in the measurement of scientific productivity shifted scientist rewards and behavior on the margin toward incremental science and away from exploratory projects that are more likely to fail, but which are the fuel for future breakthroughs. As attention given to new ideas decreased, science stagnated.

Thurner et al. concur in the sense that they find that “out-of-the-box” papers are getting harder to find:

over the past decades the fraction of mainstream papers increases, the fraction of out-of-the-box decreases

Surely the scientists themselves have incentives to course-correct and to produce more important and exciting research papers?

Collison and Nielsen challenge scientists and institutions to tackle this perceived diminishing scientific productivity:

Most scientists strongly favor more research funding. They like to portray science in a positive light, emphasizing benefits and minimizing negatives. While understandable, the evidence is that science has slowed enormously per dollar or hour spent. That evidence demands a large-scale institutional response. It should be a major subject in public policy, and at grant agencies and universities. Better understanding the cause of this phenomenon is important, and identifying ways to reverse it is one of the greatest opportunities to improve our future.

If we believe that research papers are becoming worse, that fewer of them convey important information, then the rational approach is to downplay them. Whenever you encounter a scientist and they tell you about how many papers they have published, or where they were published, or how many citations they got… you should not mock the scientist in question, but you ought to bring the conversation to another level. What is the scientist working on and why is it important work? Dig below the surface.

Importantly, it does not mean that we should discourage people from publishing a lot of papers, any more than we generally discourage programmers from writing many lines of code. Everything else being equal, people who love what they are doing, and who are good at it, will do more of it. But nobody would mistake someone who writes a lot for a good writer if they aren’t.

We need to challenge the conventional peer-reviewed research paper, by which I refer to a publication that was reviewed by 2 to 5 peers before getting published. It is a relatively recent innovation that may not always be for the best. People like Einstein did not go through this process, at least not in their early years. Research used to be more like “blogging”. You would write up your ideas and share them. People could read them and criticize them. This communication process can be done with different means: some researchers broadcast their research meetings online.

The peer-reviewed research paper allows you to “measure” productivity. How many papers in top-tier venues did researcher X produce? And that is why it grew so strong.

There is nothing wrong with people seeking recognition. Incentives are good. But we should reward people for the content of their research, not for the shallow metadata we can derive from their resume. If you have not read and used someone’s work, you have no business telling us whether they are good or bad.

The other related problem is the incestuous relationship between researchers and assessment. Is the work on theory X important? “Let us ask people who work on theory X”. No. You have to have customers, users, people who have incentives to provide honest assessments. A customer is someone who uses your research in an objective way. If you design a mathematical theory or a machine-learning algorithm and an investment banker relies on it, they are your customer (whether they are paying you or not). If it fails, they will stop using it.

It might seem like the peer-reviewed research paper establishes this kind of customer-vendor relationship where you get a frank assessment. Unfortunately, it fails as you scale it up. The customers of the research paper are the independent readers, that much is true, but the reviewers are readers who have their own motivations.

You cannot easily “fake” customers. We do so sometimes, with movie critics, say. But movie critics have an incentive to give you recommendations you can trust.

We could try to emulate the movie critic model in science. I could start reviewing papers on my blog. I would have every incentive to be a good critic because, otherwise, my reputation might suffer. But it is an expensive process. Being a movie critic is a full time job. Being a research paper critic would also be a full time job.

What about citations? Well, citations are often granted by your nearest peers. If they are doing work that resembles yours, they have no incentive to take it down.

In conclusion, I do find it credible that science might be facing a sort of systemic stagnation brought forth by a set of poorly aligned incentives. The peer-reviewed paper accepted at a good venue as the ultimate metric seems to be at the core of the problem. Further, the whole web of assessment in modern science often seems broken. It seems that, on an individual basis, researchers ought to adopt the following principles:

  1. Seek objective feedback regarding the quality of your own work using “customers”: people who would tell you frankly if your work was not good. Do not mistake citations or “peer review” for such an assessment.
  2. When assessing another researcher’s work, try your best to behave as a customer who has some distance from the research. Do not count inputs and outputs as a quality metric. Nobody would describe Stephen King as a great writer merely because he has published many books. If you are telling me that Mr Smith is a great researcher, then you should be able to tell me about the research and why it is important.

Further reading:

My Science and Technology review for 2020

    1. The original PlayStation game console (1994) was revolutionary thanks in part to its CD drive that could read data at an astonishing 0.3 MB/s. In 2020, the PlayStation 5 came out with 5 GB/s of disk bandwidth, over 15,000 times the bandwidth of the original PlayStation.
    2. The Samsung S10+ phone can be purchased with 1 TB of storage. It is enough storage to record everything you hear in daily life for ten years or to store everything you see for several weeks. It is relatively easy to buy a 4 TB SSD for your PC, but difficult to go much higher.
    3. Drones are used to keep Europeans in check during the COVID-19 pandemic: they take people’s temperature and issue fines.
    4. In the state of New York, people can get married legally by videoconference.
    5. Virtually all kids and college students have taken online classes in 2020 in the developed world. It is now a widely held view (though not uncontested) that the future of colleges is online.
    6. UCLA researchers have achieved widespread rejuvenation in old mice through blood plasma dilution, a relatively simple process. They plan to conduct clinical trials in human beings “soon”. (Other reference.)

Science and Technology links (December 26th 2020)

  1. Researchers used a viral vector to manipulate eye cells genetically to improve the vision of human beings.
  2. Seemingly independently, researchers have reported significant progress regarding the solution of the Schrödinger equation using deep learning: Puppin et al., Hermann et al.
  3. The Dunning-Kruger Effect Is Probably Not Real. I am becoming quite upset by the many effects in psychology that fail to be independently verified. And I’d feel better if it were only a problem in psychology.
  4. Can the technology behind COVID-19 vaccines lead to other breakthroughs?

In 2011, I predicted that the iPhone would have 1TB of storage in 2020

Someone reminded me of a prediction I made in 2011:

At the time, an iPhone could hold at most 32 GB of data, so 1 TB sounded insane.

Unfortunately, Google Plus is no more, so you cannot see the plot showing my projection, and I have lost it as well. Yet we can build a table:

2010 iPhone 4 32 GB
2012 iPhone 5 64 GB
2014 iPhone 6 128 GB
2016 iPhone 7 256 GB
2018 iPhone XS 512 GB
2019 iPhone 11 Pro 512 GB
2020 iPhone 12 Pro 512 GB

How did my prediction fare? I got it wrong, of course, but I think it was remarkably prescient. It seems obvious that Apple could have gone with 1 TB but chose not to. The Samsung Galaxy S10+ comes with 1 TB of storage.

Some analysts predict that the iPhone 13 might have 1 TB of storage.

Science and Technology links (December 19th 2020)

    1. The Flynn effect is the idea that people get smarter over time (generation after generation). The negative Flynn effect refers to recent observations suggesting that people are getting dumber. It seems that there is no negative Flynn effect after all. We are not getting dumber.
    2. Year 536 was one of the worst years to be alive. Temperatures fell 1.5C to 2.5C during the summer and crops failed. Mass starvation soon followed. Cold weather is deadly.
    3. A drug reversed age-related cognitive decline in mice within a few days.
    4. Glucosamine, a popular supplement, reduces mortality. It may not do much against joint pain, however.
    5. Singapore will have flying electric taxi services.
    6. Japan’s population is projected to fall from a peak of 128 million in 2017 to less than 53 million by the end of the century.
    7. NASA spent $23.7 billion on the Orion spacecraft, which flew once. Meanwhile, the private company SpaceX received less than $20 billion in funding and executed more than 100 launches to orbit, made vertical landing work, and more.
    8. We are working far fewer hours.

Cognitive biases

One-sided bet: People commonly assume implicitly that their actions may only have good outcomes. For example, increasing the minimum wage in a country may only benefit the poor. Buying a lottery ticket only has the upside of possibly winning a lot of money. Believing in God can only have benefits. And so forth. In truth, most actions are two-sided. They have good and bad effects.

Politician’s syllogism: We must do something; this is something; therefore, we must do it. “We must fight climate change, we can tax oil, so we must tax oil.” If there is a problem, it is important to assess the actions we could take and not believe that because they are actions in response to a real problem, they are intrinsically good.

Confirmation bias: “I believe that there are extraterrestrials, I have collected 1000 reports confirming their presence” (but I am blind to all of the negative evidence). People tend to make up their minds first and then seek to rationalize their opinions, whereas they should do the opposite.

Historical pessimism bias: “Human life was so much better 2 centuries ago!” Yet by almost any measure, human beings have better lives today.

Virtual reality… millions but not tens of millions… yet

In February 2016, I placed a bet against Greg Linden in these terms:

within the next three years, starting in March of this year, we would sell at least 10 million VR units a year (12 continuous months) worldwide.

According to some sources, around 5 million units have been sold each year in 2019 and 2020. Strictly nobody is claiming that nearly 10 million units were sold in a single year. Thus I conceded the bet against Greg and paid $100 to the Wikimedia Foundation. Greg has a blog post on this bet.

I believe that both Greg and myself agree that though we have not reached the 10-million-unit threshold yet, we will in a few short years. You should expect a non-linear growth: as more headsets are sold, more applications are built, and thus more headsets are sold…

It is important to put yourself in the context where this bet was made. At the time, three VR headsets were about to be released (Facebook’s Oculus Rift, the HTC Vive and the PlayStation VR). As far as I know, neither Greg nor myself had any experience whatsoever with these headsets. The Oculus Rift was to ship with a game controller so we had reasons to be skeptical about the hardware quality.

I expected that selling 10 million units a year was a long shot. I expected, at best, a close call. Yet I still expected that we would sell millions of units even if I lost, which I believe is what happened. I expected that at least one of the current players (Oculus, Sony and HTC) would fold while at least one new player would enter the market. It seems that HTC initially bet the farm on this market but reduced its presence over time, while the Valve Index was a nice surprise.

I acquired several headsets. It turns out that the hardware exceeded my expectations. People who complain about the bulky headsets have often not followed the various iterations. Hardware can always be lighter and finer, but the progress has exceeded my expectations.

I also built a few software prototypes of my own, and it was remarkably easy. Both the software and the hardware aspects worked out much better than I expected, but the killer applications have not emerged yet.

My own laboratory acquired headsets and built prototypes. It took me months to reach rather elementary realizations. Explaining VR is harder than it sounds. No, it is not like moving from a 2D surface to a 3D one. It is an embodied experience. And that is where I conjecture the real difficulty lies. We are all familiar with video games and movies, and the web. But we have a much harder time thinking about VR and what it can and cannot do.

Let me revisit critically my statements from 2016:

  1. Virtual reality is a major next step so that backers will be generous and patient.
    It is unclear to me how much truth there was in this statement. Certainly Facebook, Valve and HTC have invested a lot, but I kept hearing about start-ups folding early. The fact that hardly anyone made a lot of money did not help. Meanwhile, a lot of the people working in VR can quickly switch to more profitable non-VR projects, so the talented individuals do not stick around.
  2. I’d be surprised if the existing Oculus Rift sold more than a few hundred thousand units. It is just too expensive. It is just not going to be on sale at Walmart.
    The Oculus Rift is on sale at Walmart for $300. But I am correct regarding the unit sales: the Oculus Rift did not sell in the millions of units.
  3. But within two years, we can almost guarantee that the hardware will either be twice as good or cost half as much. With any luck, in two years, you will be able to buy a computer with a good VR headset for a total of less than $1000 at Walmart.
    I did not foresee that standalone headsets like the Oculus Quest would essentially match the original PC headsets at a fraction of the cost. The Oculus Quest is under $500, cheaper than a game console. It is light (500 g) and it has a high resolution (1832×1920 pixels per eye). It has a low-latency 72 Hz display. Six degrees of freedom. Sadly, you must tie it to your Facebook account, which is a turn-off for many people. There are rumours of very good Chinese headsets but they have not been commercialized yet where I live.
  4. A company like Sony has more than enough time in three years to bring the prices down and get game designers interested. Will the technology be good enough to attract gamers? If it is, then it might just be possible to sell 10 million units in a year.
    Sony released the PlayStation 5 without stressing VR. Half-Life: Alyx was one of the best-selling games of 2020 but it did not sell in the millions. There are good VR video games but very few high-budget ventures.

Conclusion. VR did not see the same kind of explosive growth that other technologies have seen. But the infrastructure has been built and the growth will happen. Prices have fallen and quality has jumped up. Sooner than you think, VR will enter your life if it hasn’t yet.

Converting floating-point numbers to integers while preserving order

Many programming languages have a number type corresponding to the IEEE binary64. In many languages such as Java or C++, it is called a double. A double value uses 64 bits and it represents a significand (or mantissa) multiplied by a power of two: m * 2^p. There is also a sign bit.

A simpler data type is the 64-bit unsigned integer. It is a simple binary representation of all integers from 0 to 2^64 − 1.

In a low-level programming language like C++, you can access a double value as if it were an unsigned integer. After all, bits are bits. For some applications, it can be convenient to regard floating-point numbers as if they were simple 64-bit integers.

In C++, you can do the conversion as follows:

uint64_t to_uint64(double x) {
    uint64_t a;
    // copy the bits of the double into the integer (requires <cstring>)
    memcpy(&a, &x, sizeof(x));
    return a;
}

Though it looks expensive, an optimizing compiler might turn such code into something that is almost free.
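If C++20 is available, std::bit_cast expresses the same bit-level reinterpretation even more directly. A minimal sketch of this alternative:

#include <bit>      // std::bit_cast (C++20)
#include <cstdint>

uint64_t to_uint64_bitcast(double x) {
    // reinterpret the 64 bits of the double as an unsigned 64-bit integer
    return std::bit_cast<uint64_t>(x);
}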

In such an integer representation, a double value looks as follows:

  • The most significant bit is the sign bit. It has value 1 when the number is negative, and it has value 0 otherwise.
  • The next 11 bits are usually the exponent code (which determines p).
  • The other bits (52 of them) are the significand.

If you omit infinite values and the not-a-number code, a comparison between two floating-point numbers is almost trivially the same as a comparison between two integer values.

If you know that all of your numbers are positive and finite, then you are done. They are already in sorted order. The following comparison function should suffice:

bool is_smaller(double x1, double x2) {
    uint64_t i1 = to_uint64(x1);
    uint64_t i2 = to_uint64(x2);
    return i1 < i2;
}

If your values can be negative, then you minimally need to flip the sign bit, since it points the wrong way: we want large values to have their most significant bit set, and small values to have it unset. But just flipping one bit is not enough: you want negative values with a large absolute value to become small. To do so, you need to negate all the bits, but only when the sign bit is set. It turns out that some clever programmer has worked out an efficient solution:

uint64_t sign_flip(uint64_t x) {
   // credit
   // when the most significant bit is set, we need to
   // flip all bits
   uint64_t mask = uint64_t(int64_t(x) >> 63);
   // in all cases, we need to flip the most significant bit
   mask |= 0x8000000000000000;
   return x ^ mask;
}

You now have an efficient comparator between two floating-point values using integer arithmetic:

bool generic_comparator(double x1, double x2) {
    uint64_t i1 = sign_flip(to_uint64(x1));
    uint64_t i2 = sign_flip(to_uint64(x2));
    return i1 < i2;
}

For finite numbers, we have shown how to map floating-point numbers to integer values while preserving order. The map is also invertible.
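The inverse is not spelled out here, but a sketch of what it could look like follows: when the most significant bit of the encoded value is set, the original number was non-negative and only that bit needs to be flipped back; otherwise, every bit must be flipped back.

uint64_t sign_flip_inverse(uint64_t x) {
   // if the most significant bit is set, the original value was non-negative:
   // flip only that bit back; otherwise flip all the bits back
   uint64_t mask = uint64_t(int64_t(~x) >> 63) | 0x8000000000000000;
   return x ^ mask;
}

Copying the resulting bits back into a double (the reverse of to_uint64) completes the round trip.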

Sometimes you are working with floating-point numbers but would rather process integers. If you only need to preserve order, you can use such a map.

My source code is available.

ARM MacBook vs Intel MacBook: a SIMD benchmark

In my previous blog post, I compared the performance of my new ARM-based MacBook Pro with my 2017 Intel-based MacBook Pro. I used a number parsing benchmark. In some cases, the ARM-based MacBook Pro was nearly twice as fast as the older Intel-based MacBook Pro.

I think that the Apple M1 processor is a breakthrough in the laptop industry. It has allowed Apple to sell the first ARM-based laptop that is really good. It is not just the chip, of course. It is everything around it. For example, I fully expect that most people who buy these new ARM-based laptops will never realize that they are not Intel-based. The transition is that smooth.

I am excited because I think it will drive other laptop makers to rethink their designs. You can buy a thin laptop from Apple with a 20-hour battery life and the ability to do intensive computations like a much larger and heavier laptop would.

(This blog post has been updated after I corrected a methodological mistake: I was running the benchmark under x64 emulation on the Apple M1 processor.)

Yet I did not think that the new Apple processor was better than Intel processors in all things. One obvious caveat is that I am comparing the Apple M1 (a 2020 processor) with an older Intel processor (released in 2017). But I thought that even the older Intel processors could have an edge over the Apple M1 in some tasks and I wanted to make this clear. I did not think it was controversial. Yet I was criticized for making the following remark:

In some respect, the Apple M1 chip is far inferior to my older Intel processor. The Intel processor has nifty 256-bit SIMD instructions. The Apple chip has nothing of the sort as part of its main CPU. So I could easily come up with examples that make the M1 look bad.

This rubbed many readers the wrong way. They pointed out that ARM processors do have 128-bit SIMD instructions called NEON. They do. In some ways, the NEON instruction set is nicer than the x64 SSE/AVX one. Recent Apple ARM processors have four execution units capable of SIMD processing while Intel processors only have three. Furthermore, the Intel execution units have more restrictions. Thus 64-bit ARM NEON routines will outperform comparable SSE2 (128-bit SIMD) Intel routines despite the fact that they both work over 128-bit registers. In fact, I have a blog post making this point by using the iPhone’s processor.

But it does not follow that the 128-bit ARM NEON instructions are generally a match for the 256-bit SIMD instructions Intel and AMD offer.

Let us test out the issue. The simdjson library offers SIMD-heavy functions to minify JSON and validate UTF-8 inputs. I wrote a benchmark program that loads a file in memory and then repeatedly calls the minify and validate functions, looking for the best possible speed. Anyone with a MacBook and Xcode should be able to reproduce my results.
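As a rough sketch of what such calls might look like, assuming simdjson’s minify and validate_utf8 entry points (the benchmark itself adds timing and repetition):

#include <cstddef>
#include <vector>
#include "simdjson.h"

// minify a JSON document held in memory and check that it is valid UTF-8
void process(const char *data, size_t length) {
  std::vector<char> minified(length);
  size_t minified_length = 0;
  auto error = simdjson::minify(data, length, minified.data(), minified_length);
  if (error != simdjson::SUCCESS) { return; }
  bool is_valid_utf8 = simdjson::validate_utf8(data, length);
  (void)is_valid_utf8;
}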

The vectorized UTF-8 validation algorithm is described in Validating UTF-8 In Less Than One Instruction Per Byte (published in Software: Practice and Experience).

The simdjson library relies on an abstraction layer so that functions are implemented using higher-level C++ which gets translated into efficient SIMD intrinsic functions specific to the targeted system. That is, we are not comparing different hand-tuned assembly functions. You can check out the UTF-8 validation code for yourself online.

Let us look at the results:

                                     minify     UTF-8 validate
Apple M1 (2020 MacBook Pro)          6.6 GB/s   33 GB/s
Intel Kaby Lake (2017 MacBook Pro)   7.7 GB/s   29 GB/s
Intel/M1 ratio                       1.2        0.9

As you can see, the older Intel processor is slightly superior to the Apple M1 in the minify test.

Of course, it is only one set of benchmarks. There are many confounding factors. Did the algorithmic choices favour the AVX2 ISA? It is possible. Thankfully all of the source code is available so any such bias can be assessed.

ARM MacBook vs Intel MacBook

Up to yesterday, my laptop was a large 15-inch MacBook Pro. It contains an Intel Kaby Lake processor (3.8 GHz). I just got a brand-new 13-inch 2020 MacBook Pro with Apple’s M1 ARM chip (3.2 GHz).

How do they compare? I like precise data points.

Recently, I have been busy benchmarking number parsing routines where you convert a string into a floating-point number. That seems like an interesting comparison. In my basic tests, I generate random floating-point numbers in the unit interval (0,1) and I parse them back exactly. The decimal significand spans 17 digits.

I run the same benchmarking program on both machines. I am compiling both benchmarks identically, using Apple’s Xcode system with the LLVM C++ compiler. Evidently, the binaries will differ since one is an ARM binary and the other is an x64 binary. Both machines have been updated to the most recent compiler and operating system.

My results are as follows:

Intel x64 Apple M1 difference
strtod 80 MB/s 115 MB/s 40%
abseil 460 MB/s 580 MB/s 25%
fast_float 1000 MB/s 1800 MB/s 80%

My benchmarking software is available on GitHub. To reproduce, install Apple’s Xcode (with command line tools), CMake (install for command-line use) and type cmake -B build && cmake --build build && ./build/benchmarks/benchmark. It uses the default Release mode in CMake (flags -O3 -DNDEBUG).

I do not yet understand why the fast_float library is so much faster on the Apple M1. It contains no ARM-specific optimization.
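For reference, a call to the fast_float library looks roughly like the following sketch, assuming its from_chars interface (the benchmark repeats such calls over many generated numbers):

#include <cstdio>
#include <cstring>
#include <system_error>
#include "fast_float/fast_float.h"

int main() {
  const char *input = "0.7566540019960833";
  double value = 0;
  // parse the decimal string into a binary64 (double) value
  auto answer = fast_float::from_chars(input, input + std::strlen(input), value);
  if (answer.ec != std::errc()) { return 1; }
  std::printf("parsed %.17g\n", value);
  return 0;
}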

Note: I dislike benchmarking on laptops. In this case, the tests are short and I do not expect the processors to be thermally constrained.

Update. The original post had the following statement:

In some respect, the Apple M1 chip is far inferior to my older Intel processor. The Intel processor has nifty 256-bit SIMD instructions. The Apple chip has nothing of the sort as part of its main CPU. So I could easily come up with examples that make the M1 look bad.

This turns out to be false. See my post ARM MacBook vs Intel MacBook: a SIMD benchmark.

Science and Technology links (December 5th 2020)

  1. Researchers find that older people can lose weight just as easily as younger people.
  2. Google DeepMind claims to have solved the protein folding problem, an important problem in medicine. This breakthrough could greatly accelerate drug development and lead to new cures. Yet, not everyone is convinced that they actually solved the problem.
  3. “Indian Americans have risen to become the richest ethnicity in America, with an average household income of $126,891 (compared to the US average of $65,316). (…) Almost 40% of all Indians in the United States have a master’s, doctorate, or other professional degree, which is five times the national average.” (source)
  4. There is a popular idea in the US currently: we should just forgive all student debts. Catherine and Yannelis find that “universal and capped forgiveness policies are highly regressive, with the vast majority of benefits accruing to high-income individuals.”
  5. Researchers successfully deployed advanced genetic engineering techniques (based on CRISPR) against cancer in mice.
  6. Researchers rejuvenated the cells in the eyes of old mice, restoring their vision. (Source: Nature.)
  7. Remember all these studies claiming that birth order determined your fate, with older siblings going more into science and younger siblings going for more artistic careers? It seems that these results do not replicate very well under re-analysis. The effects are much weaker than initially believed and they do not necessarily go in the expected direction.
  8. Older people (over 70) have less zinc in their blood. Their zinc level predicts their mortality rate. The more zinc, the less likely they are to die.
  9. Shenzhen (China) has truly driverless cars on the roads.
  10. Centenarians have low levels of blood sugar, and they are less likely to suffer from diabetes than adults in general.
  11. We have an actual treatment to help people suffering from progeria, a crippling disease.
  12. Eating eggs is quite safe.
  13. The state-of-the-art in image processing includes convolutional neural networks (CNN). Though it gives good results, it is a computationally expensive approach. Google has adapted a technique from natural-language processing called transformers to the task and they report massive gains in computational efficiency.

Interview by Adam Gordon Bell

A few weeks ago, Adam Gordon Bell had me on his podcast. You can listen to it. Here is the abstract:

Did you ever meet somebody who seemed a little bit different than the rest of the world? Maybe they question things that others wouldn’t question or said things that others would never say. Daniel is a world-renowned expert on software performance, and one of the most popular open source developers. If you measure by GitHub followers. Today, he’s gonna share his story. It involves time at a research lab, teaching students in a new way. it will also involve upending people’s assumptions about IO performance. Elon Musk And Julia Roberts will come up a little bit more than you might expect.

I would not describe myself as “world renowned” about anything, but Adam needs to do a bit of promotion. My interview is right after an interview with Brian Kernighan: he is world renowned.

I also do not think that I am “different from the rest of the world”, though I have maybe given more thought than most to the need to be different. I have always been preoccupied with trying to do work that others do not do: sadly, it is much harder than it sounds.

I usually talk mostly about my work, but Adam wanted to go a bit personal, like how I was initially struggling at school.


Further reading: After giving this interview, I read Paul Graham’s latest essay. If you liked my interview, you will probably enjoy Graham’s essay. You might enjoy his essay in any case.

Java Buffer types versus native arrays: which is faster?

When programming in C, one has to allocate and de-allocate memory by hand. It is an error-prone process. In contrast, newer languages like Java often manage their memory automatically. Java relies on garbage collection. In effect, memory is allocated as needed by the programmer, and then Java figures out that some piece of data is no longer needed, and it reclaims the corresponding memory. The garbage collection process is fast and safe, but it is not free: despite decades of optimization, it can still cause major headaches for developers.

Java has native arrays (e.g., the int[] type). These arrays are typically allocated on the “Java heap”. That is, they are allocated and managed by Java as dynamic data, subject to garbage collection.

Java also has Buffer types such as the IntBuffer. These are high-level abstractions that can be backed by native Java arrays but also by other data sources, including data that is outside of the Java heap. Thus you can use Buffer types to avoid relying so much on the Java heap.

But my experience is that it comes with some performance penalty compared to native arrays. I would not say that Buffers are slow. In fact, given a choice between a Buffer and a stream (DataInputStream), you should strongly favour Buffer types. However, they are not as fast as native arrays in my experience.

I can create an array of 50,000 integers, either with “new int[50000]” or as “IntBuffer.allocate(50000)”. The latter should essentially create an array (on the Java heap) but wrapped with an IntBuffer “interface”.

A possible intuition is that wrapping an array with a high-level interface should be free. Though it is true that high-level abstractions can come with no performance penalty (and sometimes, even, performance gains), whether they do is an empirical matter. You should never just assume that your abstraction comes for free.

Because I am making an empirical statement, let us test it out empirically with the simplest test I can imagine. I am going to add one to every element in the array/IntBuffer.

// native int[] array
for(int k = 0; k < s.array.length; k++) { 
    s.array[k] += 1;
}
// IntBuffer
for(int k = 0; k < s.buffer.limit(); k++) { 
    s.buffer.put(k, s.buffer.get(k) + 1);
}

I get the following results on my desktop (OpenJDK 14, 4.2 GHz Intel processor):

int[] 2.5 µs
IntBuffer 12 µs

That is, arrays are over 4 times faster than IntBuffers in this test.

You can run the benchmark yourself if you’d like.

My expectation is that many optimizations that Java applies to arrays are not applied to Buffer types.

Of course, this tells us little about what happens when Buffers are used to map values from outside of the Java heap. My experience suggests that things can be even worse.

Buffer types have not made native arrays obsolete, at least not as far as performance is concerned.

Science and Technology links (November 28th 2020)

  • Homework favours kids with wealthier and better educated parents. My own kids have access to two parents with a college education, including a father who is publishing mathematically-intensive research papers. Do you think for a minute that it is fair to expect kids who have poorly educated parents to compete on homework assignments? (Not that I help my kids all that much…)
  • Though researchers have reported that animal populations are falling worldwide (presumably because of human beings), this trend is entirely driven by 3% of the animals that are strongly declining while most animals (vertebrates) are not in decline.
  • The expansion of parental leave and child care subsidies has not affected gender inequalities in the workplace. (That is not an argument for abolishing parental leave and child care subsidies.)
  • A hallucinogenic tea can help you grow new brain cells.
  • It appears that aging is partially caused by aging factors found in our blood. In mice, researchers achieved rejuvenation (improved cognition and reduced inflammation) by diluting blood plasma. It confirms earlier work on the topic but shows rejuvenation in the brain. It does not mean that we know how to rejuvenate human beings, but it gives you a new angle of attack that is safe and inexpensive.
  • A paper claims that hyperbaric oxygen therapy brings about rejuvenation in human beings. In effect, it shows a lengthening of the telomeres, this component of our DNA that grows shorter with each division. The lengthening is in some cells only. They also show a reduction of the number of senescent cells: these zombie cells that we tend to accumulate with age. The reduction in senescent cells is only for part of the body and it might be caused by the oxygen (that may kill the senescent cells). It is unclear how this expensive therapy compares with a good exercise regimen. We have reliable markers of biological age based on methylation and they were not used as part of this study.
  • Countries that adopt a flat tax system (as opposed to the more common progressive system) grow richer exponentially faster. That is, though it may seem intuitive that richer people should pay higher percentage of their income in taxes, it may come at a substantial cost with respect to overall wealth.
  • Diabetes is related to a dysfunction of the pancreas. Thankfully we can create insulin-producing cells, and we can even insert these cells in one’s pancreas. Sadly, they are soon attacked by the immune system and destroyed. It appears that progress is being made, and that viable cells have survived transplantation in the pancreas through a new technique that protects them from the immune system. It works in mice.
  • Cochrane, a credible source when it comes to medical research, published a review of the evidence regarding masks and hand washing with respect to respiratory viral infections:

    There is uncertainty about the effects of face masks. The low‐moderate certainty of the evidence means our confidence in the effect estimate is limited, and that the true effect may be different from the observed estimate of the effect. The pooled results of randomised trials did not show a clear reduction in respiratory viral infection with the use of medical/surgical masks during seasonal influenza. There were no clear differences between the use of medical/surgical masks compared with N95/P2 respirators in healthcare workers when used in routine care to reduce respiratory viral infection. Hand hygiene is likely to modestly reduce the burden of respiratory illness. Harms associated with physical interventions were under‐investigated.

    It does not follow that you should not wear masks or that you should avoid washing your hands. I do and I recommend you do too. However, you should be critical of any statement to the effect that science is telling us that masks and hand washing stop airborne viruses, especially when such statements are made in a political context.

How fast does interpolation search converge?

When searching in a sorted array, the standard approach is to rely on a binary search. If the input array contains N elements, after log(N) + 1 random queries in the sorted array, you will find the value you are looking for. The algorithm is well known, even by kids. You first guess that the value is in the middle, you check the value in the middle, you compare it against your target and go either to the upper half or the lower half of the array based on the result of the comparison.

Binary search only requires that the values be sorted. What if the values are not only sorted, but also follow a regular distribution? Maybe you are generating random values, uniformly distributed. Maybe you are using hash values.

In a classical paper, Perl et al. described a potentially more effective approach called interpolation search. It is applicable when you know the distribution of your data. The intuition is simple: instead of guessing that the target value is in the middle of your range, you adjust your guess based on the value. If the value is smaller than average, you aim near the beginning of the array. If the value is much larger than average, you guess that the index should be near the end.
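To make the intuition concrete, here is a minimal sketch of interpolation search over sorted 64-bit keys, assuming they are roughly uniformly distributed (a sketch, not necessarily the code used in the experiment below):

#include <cstdint>
#include <vector>

// returns the index of 'target' in the sorted array 'data', or data.size() if absent
size_t interpolation_search(const std::vector<uint64_t> &data, uint64_t target) {
  if (data.empty()) { return data.size(); }
  size_t low = 0, high = data.size() - 1;
  while (low <= high && target >= data[low] && target <= data[high]) {
    if (data[high] == data[low]) {
      return (data[low] == target) ? low : data.size();
    }
    // interpolate the likely position of the target within [low, high]
    size_t guess = low + size_t((double)(target - data[low]) /
                                (double)(data[high] - data[low]) * (high - low));
    if (data[guess] == target) {
      return guess;
    } else if (data[guess] < target) {
      low = guess + 1;
    } else {
      high = guess - 1;
    }
  }
  return data.size();
}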

The expected search time is then much better: log(log(N)). To gain some intuition, I quickly implemented interpolation search in C++ and ran a little experiment, generating large arrays and searching in them using interpolation search. As you can see, as you multiply the size of the array by 10, the number of hits or comparisons remains nearly constant. Furthermore, interpolation search is likely to quickly get very close to the target. Thus the results are better than they look if memory locality is a factor.

N hits
100 2.9
1000 3.5
10000 3.8
100000 4.0
1000000 4.5
10000000 4.6
100000000 4.9

You might object that such a result is inferior to a hash table, and I do expect well implemented hash tables to perform better, but you should be mindful that many hash table implementations gain performance at the expense of higher memory usage, and that they often lose the ability to visit the values in sorted order at high speed. It is also easier to merge two sorted arrays than to merge two hash tables.

This being said, I am not aware of interpolation search being actually used productively in software today. If you have a reference to such an artefact, please share!


Update: Some readers suggest that Bigtable relies on a form of interpolation search.

Update: It appears that interpolation search was tested out in git (1, 2). Credit: Jeff King.

Further reading: Interpolation search revisited by Muła

The disagreeable scientist conjecture

If you are a nerd, the Internet is a candy store… if only you stay away from mainstream sites. Some of the best scientists have blogs, YouTube channels, they post their papers online. When they review a paper, they speak frankly, openly. Is the work good or irrelevant? You can agree or disagree, but their points are clear and well stated.

You may expect that researchers always work in this manner. That they always speak their mind. Nothing could be further from the truth in my experience. We have a classical power structure with a few people deciding on the Overton window: here are the subjects we can discuss, here are the relevant topics. We have added layers and layers of filters to protect us against disruption. That is, there is free discussion… as long as you follow the beaten path. Here are some of the things that you must never discuss:

    • These people in field X are getting nowhere. I think that their work is no good. We should move on and leave them behind.
    • We have this theoretical model but it does not seem to help us very much in the real world, maybe we should drop it.

I find that the most interesting researchers break both of these barriers from time to time. In other words, they are not very reasonable.

My conjecture is that it is not an accident. To be precise, my conjecture is that the best scientists are disagreeable people. It is a technical statement: I am saying that they have the courage to offend as intellectuals.

The business of research is bureaucratic. In a bureaucracy, the day to day goes much smoother if you are agreeable. But being disagreeable at times might help career-wise: you can demand to be respected, demand to be credited. That is certainly valuable to get ahead and be promoted.

But I am not thinking about the business of science, I am thinking about science itself. The progress of scientific knowledge needs disagreeable people. The statement itself is obvious: to bring a new idea into the fold, someone must first champion it, and since new ideas tend to displace old ideas, the champion is bound to displease those invested in the old ones. And so if you fear to displease others, you will never bring anything disruptive to the table. But that is not what I mean. Or it is not the only thing that I mean.

When we are thinking of new ideas, deciding whether to spend time on them, we weight many factors in our head. If you are a strong conformist, you will automatically, without thinking, prune out really disruptive ideas. There are some papers you will even refuse to read for fear that you might get in trouble, be rejected by some of your peers.

I believe that it takes disagreeable people to pick up the dangerous ideas and pursue them. Science needs risk taking, but the risks are disproportionately taken by a few disagreeable people. To be clear, again, I use the term disagreeable in a technical manner: I do not mean that these people are not fun to have around.

My conjecture is falsifiable. I believe that after controlling for the potential benefits to one’s career of being disagreeable (insisting on credit and fighting for oneself), we will find a strong correlation between breakthrough/disruptive research findings and being disagreeable.

It is a population-level prediction. I do not predict that a given individual will become known as the new Einstein. This being said, I have to wonder whether Einstein would have a YouTube channel where he voiced controversial opinions if he lived today. I bet he would.

My conjecture also leads to a cultural-level prediction, though it becomes harder to formalize. I believe that cultures that more strongly protect freedom of speech in the scientific domain will contribute disproportionately to science. And that is because a culture of freedom of speech encourages and supports open dissent with established ideas.

Programming is social

Software programming looks at a glance like work best done in isolation. Nothing could be further from the truth in my experience. Though you may be working on your little program alone, you should not dismiss the social component of the work.

I often say that “programming is social” to justify the fact that I know and practice multiple programming languages. I also use this saying to justify the popularity of programming languages like JavaScript, Go and even C and Java.

Let me elaborate on what I mean by “programming is social”:

  1. Programmers reuse each other’s work on a massive scale. Programmers are lazy and refuse to do the same task again and again. So they code frequently needed operations into packages. They tend to distribute these packages. The most popular programming languages tend to have free, ready-made components to solve most problems already. JavaScript and Python have free and high-quality libraries and extensions for most things. So it pays to know popular programming languages.
  2. Most programmers encounter similar issues over time. Some programming difficulties are particularly vexing. Yet programmers are great at sharing questions and answers. Your ability to ask clear questions, to provide clear answers, and to read and understand both, is important to your ongoing success as a programmer. Some programming languages have an advantage as they benefit from an accumulated set of knowledge. A programming language like Java does well in this respect. It pays to use well documented languages.
  3. Programming code is also, literally, a language. It is not uncommon that I will ask someone to code up their idea so I can understand it. Programming languages that are easy to read win: Go and Python. Often, it pays to use the programming language that your community favours, even if you share no code with them, just so you can communicate more easily. It may be possible to write an Android application in Go, for example. But you would be wiser to use something like Kotlin or Java, just because that is what your peers use.
  4. If you do great work, at some point you may need to teach others about how they can continue your work or use your work. Teaching requires good communication. It is helpful to have clear code in a language that many people know.


Double-blind peer review is a bad idea

When you submit a manuscript to a journal or to a conference, you do not know who reviews your manuscript. Increasingly, due to concerns with biases and homophily, journals and conferences are moving to a double-blind peer review where you have to submit your paper without disclosing your identity. There is also a competing move toward more openness where everyone’s identity is disclosed.

The intuition behind double-blind review is that it is harder to discriminate against people if you do not know their name and affiliation. Of course, editors and chairs still get to know your identity. The intuition behind open peer review is that if your reviews are published, you will be kept in check and may get punished if you are too biased. But people are concerned about their reviews or the reviews of their papers being published.

There are many undesirable biases involved in a professional setting. Of course, there are undesirable biases against some minorities and women. There are other biases as well. There are indications that the prestige of the author can be a determining factor when judging a piece of work. People generally tend to review people who are like themselves more highly. There are undesirable orthodoxy biases as well: uncommon ideas are far more difficult to defend even when the most common ideas have not been revisited lately. Conventional affiliations are more highly rated than unconventional affiliations.

Yet we should not immediately accept that hiding the identity of the author is the solution. The mere fact that we recognize a problem, and that there is some action related to the problem, does not imply that we must proceed with that action. Our tendency to do so relies on a fallacy known as the politician’s syllogism.

The Australian government, motivated by a study that claimed blind auditions helped women, conducted an extensive evaluation of blind interviews and found the following:

This study assessed whether women and minorities are discriminated against in the early stages of the recruitment process for senior positions in the Australian Public Service (APS). It also tested the impact of implementing a ‘blind’ or de-identified approach to reviewing candidates. Over 2,100 public servants from 15 agencies participated in the trial. They completed an exercise in which they shortlisted applicants for a hypothetical senior role in their agency. Participants were randomly assigned to receive application materials for candidates in standard form or in de-identified form (with information about candidate gender, race and ethnicity removed). Overall, the results indicate the need for caution when moving towards ’blind’ recruitment processes in the APS, as de-identification may frustrate efforts aimed at promoting diversity.

To be clear, what they found was the reverse of what they were expecting: blinding interviews made things slightly worse for women.

And this study that shows that blind interviews helped women get hired by orchestras? Its statistical analysis does not stand up to scrutiny. And the left-leaning New York Times has recently published an essay arguing that blind interviews make orchestras less diverse.

Clearly, we believe that we can effectively combat undesirable prejudices in hiring since most employers do not hire based on a double-blind process. PhD students submit their thesis for review without hiding their name. Nobody is advocating that research papers be published anonymously as a rule. Nobody is advocating that we stop broadcasting the name of our employers, where we got our degrees and so forth. Nobody is advocating that when we report on a research result, we hide the name of the journal… Yet if we wanted to present pure research results, that is what we would do: hide affiliations, journal names, author names.

So why would we not want to hide the identity of the researchers during peer review despite the apparent advantages?

Firstly, the evidence for the benefits of double-blind peer review is a set of anecdotes. Double-blind experiments can bring biases to light the same way a microscope can show you a bacterium: they are great inquiry tools, but not necessarily cures. What is scientific fact is that people have biases and homophily, and that you can, up to a point, anonymize content. However, the evidence for benefits is mixed. It is not clear that it helps women, for example. Do we get more participation from people outside the major universities over time under double-blind peer review? We do not know. Major conferences that did switch to double-blind peer review, like NeurIPS, are heavily dominated by a few elite institutions with almost no outsiders.

Secondly, telling someone from a poorly known organization, from a poor or non-English country, or with a non-dominant gender identity that they need to hide who they are to be treated fairly is not entirely a positive message. I certainly want to live in a world where a woman can publish her work as a woman. Stressing biases without properly addressing them can render fields unattractive to those who might suffer from these biases.

Another concern is that double-blind review renders open scholarship difficult. I have been posting most of my papers online, prior to peer review, on arXiv or other servers, sometimes years before they are even submitted. I write all my software openly, engaging freely with multiple engineers and researchers. I practice what I call open scholarship. Obviously, it means I cannot reasonably take part in double-blind venues. Making open scholarship more difficult seems like a step backward. You can argue that you can still anonymize your contributions, in a bureaucratic manner, for the few days that the review lasts. But such a proposal dismisses the fact that open scholarship is primarily a cultural practice founded on the idea that the research happens in free and open networks.

And what happens after the work has been accepted? When the referees are biased, why would the readers not be biased as well? What is more important, the readers or the reviewers? Do we write papers to be published or to be read? I vote for the latter without hesitation. Yet, at best, double-blind peer review might help with getting papers accepted, but it does nothing for post-publication assessment. It is almost as if we thought that the end goal of the game was to get the research published in prestigious venues. Are we all about maximizing the impact factor or do we care to produce impactful research? If you are to be consistent with your beliefs, then if you promote double-blind peer review, you should also demand that we stop cataloguing and broadcasting affiliations. At a minimum, we should downplay the names of the authors: if we include them at all, they should be at the end of the paper, in small characters. If you are consistent with your beliefs, you should never, ever, give lists of names with affiliations. It seems logically incoherent for someone from an elite institution to be arguing for double-blind peer review while visibly broadcasting their elite institution. In part, I believe that they end up with such an illogical result because they start from a fallacy, the politician’s syllogism.

The San Francisco Declaration on Research Assessment tells us: “When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than publication metrics.” Focusing on how papers get accepted misses the point of what we want to value. Yet a direct consequence of double-blind peer review is to make highly selective paper acceptance socially and politically more sustainable.

There is no free lunch. Double-blind peer review is not without cost.

Blank reported that authors from outside academia have a lower acceptance rate under double-blind peer review, presumably because reviewers, when they can, tend to give a chance to outsiders despite the fact that outsiders do not conform to the field’s orthodoxy as well as insiders may. Moreover, Blank indicates that double-blind peer review is overall harsher.

This “harsh” nature has been replicated and quantified. Double-blind peer review manuscripts are less likely to be successful than single-blind peer review manuscripts.

So there are unintended consequences to double-blind peer review. Having harsher reviews and lower acceptance rates may not be a positive. A student may think: “Why continue to seek approval, when you can leave science and do something else where you’ll be appreciated?”

And is the harsh nature entirely a side-effect? The introduction of double-blind peer review is partly justified by the mission we give the reviewers: select only the very best work. Once we relax this constraint on reviewers, double-blind peer review becomes much less necessary. In some sense, double-blind peer review is a way to make socially acceptable an elitist system.

If we want, for example, to increase the representation of women, there are potentially other means that are less intrusive and more positive, like, for example, including more women in the peer review process as reviewers, editors and so forth. The same applies to other biases. For example, you should ensure that people from small colleges are represented, or from poorer or non-English countries. And what about including people who have less orthodox ideas? What about including more outsiders? What about what Stonebraker might call “consumers of the research”? Look at the most desirable conferences in computer science that have adopted double-blind peer review. How many are chaired by people from non-elite institutions? When they organize plenary talks, how many are from non-elite institutions?

At a minimum, if we want to get more constructive reviews, we should give serious consideration to the demand that pre-publication peer reviews be published. Transparency is a good, practical strategy to fight undesirable biases and get people to be more constructive. We should be mindful that blinding a process, everything else being equal, makes it less transparent. In an open system, if I give raving reviews to my friends, and harsh reviews to ideas that I hate, I risk being exposed. In a fully blinded process, I can always claim impartiality. But if everyone is blinded bureaucratically, people with unacceptable biases can maintain plausible deniability should they ever be caught.

And here is another idea. Do we need the crazy low acceptance rates? In computer science, it is common that fewer than 15% of all papers are accepted. Do we realize that the outcome is unavoidably a power hierarchy controlled by a select few who pick the winners? By accepting more papers, we would necessarily make biases in peer review less harmful. We would reduce the power of the select few. Open access journals like PLOS One have shown that you can turn peer review away from a selection of the winners toward a pruning of the bad research, with good results. The argument used to be that the conference was to be held in a hotel with only so many rooms, but Zoom and YouTube have millions of rooms. Of course, the downside then is that hiring and promotion committees cannot simply count the number of papers at prestigious venues; they must read the papers and discuss them. It is hard work. And the candidate can no longer just offer a list of papers, they have to explain why their work matters in a way that we can understand.

I do not think that the initial submission is the right time to judge the importance of a piece of work. If you look at even the best venues, most of the accepted papers are not impactful. That’s not the authors’ fault. It is just that really impactful work is rare and unpredictable. And it often takes time before we can recognize it. And different people will value different papers. By insisting that referees can reliably select the very best work, we fail to take into account the thoroughly documented limitations of pre-publication peer review. In some sense, by making it look more objective, we make things worse. We should just acknowledge that pre-publication reviews are intrinsically limited and build the system with these limitations in mind.

Though the problems that double-blind peer review seeks to address are real and significant, double-blind peer review is itself a rather crude and pessimistic solution that has several undesirable consequences. We can do better.

(Presented at the ACM Publications Board Meeting, November 19th 2020)

Further reading: Gender and peer review

Update: I love Peer Review: Implementing a “publish, then review” model of publishing

Appendix: Some selected reactions from twitter…