How fast can you pipe a large file to a C++ program?

Under many operating systems, you can send data from one process to another using ‘pipes’. The term ‘pipe’ is probably used by analogy with plumbing and we often use the symbol ‘|‘ to represent a pipe (it looks like a vertical pipe).

Thus, for example, you can sort a file and send it as input to another process:

sort file | dosomething

The operating system takes care of the data. It can be more convenient than sending the data to a file first. You can have a long sequence of pipes, processing the data in many steps.

How efficient is it computationally?

The speed of a pipe depends on the program providing the data. So let us build a program that just outputs a lot of spaces very quickly:

  constexpr size_t buflength = 16384;
  std::vector<char> buffer(buflength, ' ');
  for (size_t i = 0; i < repeat; i++) {
    std::cout.write(buffer.data(), buflength);
  }

For the receiving program, let us write a simple program that reads the data and does little else:

  constexpr size_t cache_length = 16384;
  char cachebuffer[cache_length];
  size_t howmany = 0;
  while (std::cin) {
    std::cin.read(cachebuffer, cache_length);
    howmany += std::cin.gcount();
  }

You could play with the buffer sizes: I use relatively large buffers to minimize the pipe overhead.

I am sure you could write more efficient programs, but I believe that most software using pipes is going to be less efficient than these two programs.

I get speeds that are quite good under Linux but rather depressing under macOS:

macOS (Big Sur, Apple M1) 0.04 GB/s
Linux (Centos 7, ARM Rome) 2 GB/s to 6.5 GB/s

Your results will be different: please run my benchmark. It might be possible to go faster with larger inputs and larger buffers.

Even if the results are good under Linux, the bandwidth is not infinite. You will get better results passing data from within a program, even if you need to copy it.

As observed by one of the readers of this blog, you can fix the performance problem under macOS by falling back on a C API:

  size_t howmany = 0;
  size_t tr;
  while ((tr = read(0, cachebuffer, cache_length))) {
    howmany += tr;
  }

You lose portability, but you gain a lot of performance. I achieve a peak performance of 7 GB/s or above, which is much more comparable to the cost of copying the data within a process.
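To get a sense of the numbers without a shell, you can measure a pipe within a single program: create a POSIX pipe, fork a producer that writes spaces, and read everything back in the parent. The following is a sketch under POSIX assumptions (Linux, macOS); the function name and sizes are mine, not from the benchmark:

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <cstddef>
#include <cstring>

// Push `total_bytes` (a multiple of 16 kB) through a POSIX pipe and
// return how many bytes the reading end received.
size_t pipe_round_trip(size_t total_bytes) {
  int fds[2];
  if (pipe(fds) != 0) { return 0; }
  constexpr size_t buflength = 16384;
  pid_t pid = fork();
  if (pid == 0) { // child: the producer, writes spaces
    close(fds[0]);
    char buffer[buflength];
    memset(buffer, ' ', buflength);
    size_t written = 0;
    while (written < total_bytes) {
      ssize_t ret = write(fds[1], buffer, buflength);
      if (ret <= 0) { break; }
      written += size_t(ret);
    }
    close(fds[1]);
    _exit(0);
  }
  close(fds[1]); // parent: the consumer
  char cachebuffer[buflength];
  size_t howmany = 0;
  ssize_t ret;
  while ((ret = read(fds[0], cachebuffer, buflength)) > 0) {
    howmany += size_t(ret);
  }
  close(fds[0]);
  waitpid(pid, nullptr, 0);
  return howmany;
}
```

Timing this call for a large `total_bytes` gives a bandwidth estimate comparable to the two-process benchmark.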

It is not uncommon for standard C++ approaches to disappoint performance-wise.

Science and Technology links (July 31st 2021)

  1. Researchers built a microscope that might be 10 times better than the best available microscopes.
  2. Subsidizing college education can lower earnings due to lower job experience:

    The Post 9/11 GI Bill (PGIB) is among the largest and most generous college subsidies enacted thus far in the U.S. (…) the introduction of the PGIB raised college enrollment by 0.17 years and B.A. completion by 1.2 percentage points. But, the PGIB reduced average annual earnings nine years after separation from the Army.

  3. Better looking academics have more successful careers. If the current pandemic reduces in-person meetings, this effect might become weaker.
  4. It appears that women frequently rape men:

    (…) male rape happens about as often as female rape, and possibly exceeds it. Evidence also shows that 80% of those who rape men are women.

  5. In the past, Greenland experienced several sudden warming episodes by as much as 16 degrees, without obvious explanation.
  6. Researchers took stem cells, turned them into ovarian follicles, and ended up with viable mice offspring. Maybe such amazing technology could come to a fertility clinic near you one day.
  7. We may soon benefit from a breakthrough that allows us to grow rice and potatoes with 50% more yield.
  8. American trucks sold today are often longer than military tanks used in the second world war.
  9. Organic food may not be better:

    If England and Wales switched 100 per cent to organic it would actually increase the greenhouse gas emissions associated with our food supply because of the greater need for imports. Scaling up organic agriculture might also put at risk the movement’s core values in terms of promoting local, fresh produce and small family farms.

  10. You can transmit data at over 300 terabits per second over the Internet.

    Not only have the researchers in Japan blown the 2020 record out of the proverbial water, but they’ve done so with a novel engineering method capable of integrating into modern-day fiber optic infrastructure with minimal effort.

    It suggests that we are far away from upper limits in our everyday Internet use and that there are still fantastic practical breakthroughs to come. What could you do with a nearly infinite data bandwidth?

  11. We are using robots to sculpt marble.
  12. Nuclear fusion might bring unlimited energy supplies. It seems that we might be close to a practical breakthrough.
  13. We still do not know why human females have permanent large breasts.
  14. It is unclear whether influenza vaccines are effective.
  15. Some ants effectively never age, because of a parasite living in them.

Measuring memory usage: virtual versus real memory

Software developers are often concerned with the memory usage of their applications, and rightly so. Software that uses too much memory can fail, or be slow.

Memory allocation will not work the same way under all systems. However, at a high level, most modern operating systems have virtual memory and physical memory (RAM). When you write software, you read and write memory at addresses. On modern systems, these addresses are 64-bit integers. For all practical purposes, you have an infinite number of these addresses: each running program could access hundreds of terabytes.

However, this memory is virtual. It is easy to forget what virtual means. It means that we simulate something that is not really there. So if you are programming in C or C++ and you allocate 100 MB, you may not use 100 MB of real memory at all. The following line of code may not cost any real memory at all:

  constexpr size_t N = 100000000;
  char *buffer = new char[N]; // allocate 100MB

Of course, if you write or read memory at these ‘virtual’ memory addresses, some real memory will come into play. You may think that if you allocate an object that spans 32 bytes, your application might receive 32 bytes of real memory. But operating systems do not work with such fine granularity. Rather, they allocate memory in units of “pages”. How big a page is depends on your operating system and on the configuration of your running process. On PCs, a page might often be as small as 4 kB, but it is often larger on ARM systems. Operating systems allow you to request large pages (e.g., one gigabyte). Your application receives “real” memory in units of pages. You can never just get “32 bytes” of memory from the operating system.
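On POSIX systems, you can query the page size at runtime. A minimal sketch (the function name is mine):

```cpp
#include <unistd.h>

// Returns the size of a memory page in bytes: typically 4096 on
// x64 Linux PCs and 16384 on Apple's ARM systems.
long page_size() { return sysconf(_SC_PAGESIZE); }
```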

It means that there is no sense in micro-optimizing the memory usage of your application: you should think in terms of pages. Furthermore, receiving pages of memory is a relatively expensive process. So you probably do not want to constantly grab and release memory if efficiency is important to you.

Once we have allocated virtual memory, can we predict the actual (real) memory usage within the following loop?

  for (size_t i = 0; i < N; i++) {
    buffer[i] = 1;
  }

The result will depend on your system. But a simple model is as follows: count the number of consecutive pages you have accessed, assuming that your pointer begins at the start of a page. The memory used by the pages is a lower-bound on the memory usage of your process, assuming that the system does not use other tricks (like memory compression or other heuristics).
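The model can be written down directly. Assuming the pointer begins at the start of a page, touching the first bytes of the allocation commits whole pages (a sketch; the function name is mine):

```cpp
#include <cstddef>

// Lower bound on real memory use after touching the first
// `bytes_touched` bytes of a page-aligned allocation: every page
// crossed is committed in full.
constexpr size_t predicted_real_memory(size_t bytes_touched,
                                       size_t page_size) {
  return ((bytes_touched + page_size - 1) / page_size) * page_size;
}
```

With 4 kB pages, touching a single byte already costs a full 4096 bytes, and touching all 100 MB costs 24415 pages (100003840 bytes).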

I wrote a little C++ program under Linux which prints out the memory usage at regular intervals within the loop. I use about 100 samples. As you can see in the following figure, my model (indicated by the green line) is an excellent predictor of the actual memory usage of the process.
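My program is Linux-specific: the resident (real) memory of the current process can be sampled from /proc/self/statm, which reports sizes in pages. A sketch (names are mine):

```cpp
#include <unistd.h>
#include <fstream>

// Resident set size of the current process, in bytes. The second
// field of /proc/self/statm is the number of resident pages.
long resident_memory_bytes() {
  std::ifstream statm("/proc/self/statm");
  long total_pages = 0;
  long resident_pages = 0;
  statm >> total_pages >> resident_pages;
  return resident_pages * sysconf(_SC_PAGESIZE);
}
```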

Thus a reasonable way to think about your memory usage is to count the pages that you access. The larger the pages, the higher the cost in this model. It may thus seem that if you want to be frugal with memory usage, you would use smaller pages. Yet a mobile operating system like Apple’s iOS has relatively larger pages (16 kB) than most PCs (4 kB). Given a choice, I would almost always opt for bigger pages because they make memory allocation and access cheaper. Furthermore, you should probably not worry too much about virtual memory. Do not blindly count the address ranges that your application has requested. It might have little to no relation with your actual memory usage.

Modern systems have a lot of memory and very clever memory allocation techniques. It is wise to be concerned with the overall memory usage of your application, but you are more likely to fix your memory issues at the software architecture level than by micro-optimizing the problem.

Faster sorted array unions by reducing branches

When designing an index, a database or a search engine, you frequently need to compute the union of two sorted sets. When I am not using fancy low-level instructions, I have most commonly computed the union of two sorted sets using the following approach:

    v1 = first value in input 1
    v2 = first value in input 2
    while(...) {
        if(v1 < v2) {
            output v1
            advance v1
        } else if (v1 > v2) {
            output v2
            advance v2
        } else {
           output v1 == v2
           advance v1 and v2
        }
    }

I wrote this code while trying to minimize the load instructions: each input value is loaded exactly once (it is optimal). It is not that load instructions themselves are expensive, but they introduce some latency. It is not clear whether having fewer loads should help, but there is a chance that having more loads could harm the speed if they cannot be scheduled optimally.
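Concretely, the pseudocode above might look as follows in C++ (a sketch; the function name is mine, and the leftover tail of the longer input is copied at the end):

```cpp
#include <cstddef>
#include <cstdint>

// Union of two sorted, duplicate-free arrays; returns the number of
// values written to `output`.
size_t union2(const uint32_t *input1, size_t size1, const uint32_t *input2,
              size_t size2, uint32_t *output) {
  size_t pos1 = 0, pos2 = 0, pos = 0;
  while ((pos1 < size1) && (pos2 < size2)) {
    uint32_t v1 = input1[pos1];
    uint32_t v2 = input2[pos2];
    if (v1 < v2) {
      output[pos++] = v1;
      pos1++;
    } else if (v1 > v2) {
      output[pos++] = v2;
      pos2++;
    } else {
      output[pos++] = v1; // v1 == v2
      pos1++;
      pos2++;
    }
  }
  while (pos1 < size1) { output[pos++] = input1[pos1++]; }
  while (pos2 < size2) { output[pos++] = input2[pos2++]; }
  return pos;
}
```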

One defect with this algorithm is that it requires many branches. Each mispredicted branch comes with a severe penalty on modern superscalar processors with deep pipelines. By the nature of the problem, it is difficult to avoid the mispredictions since the data might be random.

Branches are not necessarily bad. When we try to load data at an unknown address, speculating might be the right strategy: when we get it right, we have our data without any latency! Suppose that I am merging values from [0,1000] with values from [2000,3000], then the branches are perfectly predictable and they will serve us well. But too many mispredictions and we might be on the losing end. You will get a lot of mispredictions if you are trying this algorithm with random data.

Inspired by Andrey Pechkurov, I decided to revisit the problem. Can we use fewer branches?

Mispredicted branches in the above routine will tend to occur when we conditionally jump to a new address in the program. We can try to entice the compiler to favour ‘conditional move’ instructions. Such instructions change the value of a register based on some condition. They avoid the jump and they reduce the penalties due to mispredictions. Given sorted arrays, with no duplicated element, we consider the following code:

while ((pos1 < size1) & (pos2 < size2)) {
    v1 = input1[pos1];
    v2 = input2[pos2];
    output_buffer[pos++] = (v1 <= v2) ? v1 : v2;
    pos1 = (v1 <= v2) ? pos1 + 1 : pos1;
    pos2 = (v1 >= v2) ? pos2 + 1 : pos2;
}

You can verify by using the assembly output that compilers are good at using conditional-move instructions with this sort of code. In particular, LLVM (clang) does what I would expect. There are still branches, but they are only related to the ‘while’ loop and they are not going to cause a significant number of mispredictions.

Of course, the processor still needs to load the right data. The address only becomes available in a definitive form just as you need to load the value. Yet we need several cycles to complete the load. It is likely to be a bottleneck, even more so in the absence of branches that can be speculated.

Our second algorithm has fewer branches, but it has more loads. Twice as many loads in fact! Modern processors can sustain more than one load per cycle, so it should not be a bottleneck if it can be scheduled well.
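Assembled into a complete function, with the leftover tails copied after the main loop, the branchless version might read as follows (a sketch; the function name is mine):

```cpp
#include <cstddef>
#include <cstdint>

// Branchless-leaning union of two sorted, duplicate-free arrays; the
// loop body favours conditional moves over conditional jumps.
size_t union2_branchless(const uint32_t *input1, size_t size1,
                         const uint32_t *input2, size_t size2,
                         uint32_t *output_buffer) {
  size_t pos1 = 0, pos2 = 0, pos = 0;
  while ((pos1 < size1) & (pos2 < size2)) {
    uint32_t v1 = input1[pos1];
    uint32_t v2 = input2[pos2];
    output_buffer[pos++] = (v1 <= v2) ? v1 : v2;
    pos1 = (v1 <= v2) ? pos1 + 1 : pos1;
    pos2 = (v1 >= v2) ? pos2 + 1 : pos2;
  }
  while (pos1 < size1) { output_buffer[pos++] = input1[pos1++]; }
  while (pos2 < size2) { output_buffer[pos++] = input2[pos2++]; }
  return pos;
}
```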

Testing this code in the abstract is a bit tricky. Ideally, you’d want code that stresses all code paths. In practice, if you just use random data, the intersection between the sets will often be small. Thus the branches are more predictable than they could be. Still, it is maybe good enough for a first benchmarking attempt.

I wrote a benchmark and ran it on the recent Apple processors as well as on an AMD Rome (Zen2) Linux box. I report the average number of nanoseconds per produced element so smaller values are better. With LLVM, there is a sizeable benefit (over 10%) on both the Apple (ARM) processor and the Zen 2 processor. However, GCC fails to produce efficient code in the branchless mode. Thus if you plan to use the branchless version, you should definitely try compiling with LLVM. If you are a clever programmer, you might find a way to get GCC to produce code like LLVM does: if you do, please share.

system conventional union ‘branchless’ union
Apple M1, LLVM 12 2.5 2.0
AMD Zen 2, GCC 10 3.4 3.7
AMD Zen 2, LLVM 11 3.4 3.0

I expect that this code retires relatively few instructions per cycle. It means that you can probably add extra functionality for free, such as bound checking, because you have cycles to spare. You should be careful not to introduce extra work that gets in the way of the critical path, however.

As usual, your results will vary depending on your compiler and processor. Importantly, I do not claim that the branchless version will always be faster, or even that it is preferable in the real world. For real-world usage, we would like to test on actual data. My C++ code is available: you can check how it works out on your system. You should be able to modify my code to run on your data.

You should expect such a branchless approach to work well when you have lots of mispredicted branches to begin with. If your data is so regular that a union is effectively trivial, or nearly so, then a conventional approach (with branches) will work better. In my benchmark, I merge ‘random’ data, hence the good results for the branchless approach under the LLVM compiler.

Further reading: For high speed, one would like to use SIMD instructions. If it is interesting to you, please see section 4.3 (Vectorized Unions Between Arrays) in Roaring Bitmaps: Implementation of an Optimized Software Library, Software: Practice and Experience 48 (4), 2018. Unfortunately, SIMD instructions are not always readily available.

Science and Technology links (July 10th 2021)

  1. We use CRISPR, a state-of-the-art gene editing technique, to edit the genes of live human patients in clinical trials.
  2. A clinical trial has begun regarding an HIV vaccine.
  3. If you choose to forgo meat to fight climate change, you may lower your individual lifetime warming contribution by 2 to 4%.
  4. Age-related testosterone loss may strongly contribute to brain dysfunction in older men.
  5. Israel has used autonomous drone swarms to hunt down its adversaries.
  6. Cleaner air has improved agricultural outputs.
  7. Drinking alcohol could extend or reduce your life depending on how you go about it. Drinking to excess could be harmful but moderate alcohol consumption with meals could be beneficial.
  8. Injectable gene editing compounds have extended the lifespan of mice by more than 30% while improving physical performance.
  9. A gene therapy may get your heart to regenerate after a heart attack.
  10. Ocean acidification does not appear to harm coral reef fishes:

    Together, our findings indicate that the reported effects of ocean acidification on the behaviour of coral reef fishes are not reproducible, suggesting that behavioural perturbations will not be a major consequence for coral reef fishes in high CO2 oceans. (Source: Nature)

  11. It appears that when an embryo is formed, it briefly undergoes a rejuvenation effect. It would explain how old people can give birth to a young child.
  12. We have long believed that Alzheimer’s was caused by the accumulation of misfolded proteins in the brain. It appears that it could be, instead, due to the disappearance of soluble proteins in the brain. If so, the cure for Alzheimer’s would entail restoring a normal level of proteins instead of removing misfolded proteins.
  13. A new weight-loss drug was approved to fight against obesity.
  14. Unity Biotechnology announced some positive results in a clinical trial using senolytics (drugs that kill old cells) to treat macular degeneration (a condition that makes old people blind).
  15. Lobsters are believed to experience negligible senescence, meaning that they do not age the way we do. They also appear to avoid cancer.

Compressing JSON: gzip vs zstd

JSON is the de facto standard for exchanging data on the Internet. It is a relatively simple text format inspired by JavaScript. I say “relatively simple” because you can read and understand the entire JSON specification in minutes.

Though JSON is a concise format, it is also better used over a slow network in compressed mode. Without any effort, you can often compress JSON files by a factor of ten or more.

Compressing files adds an overhead. It takes time to compress the file, and it takes time again to uncompress it. However, it may be many times faster to send over the network a file that is many times smaller. The benefits of compression go down as the network bandwidth increases. Given the large gains we have experienced in the last decade, compression may be less important today. The bandwidth between nodes in a cloud setting (e.g., AWS) can be gigabytes per second. Having fast decompression is important.

There are many compression formats. The conventional approach, supported by many web servers, is gzip. There are also more recent and faster alternatives. I pick one popular choice: zstd.

For my tests, I chose a JSON file that is representative of real-world JSON: twitter.json. It is an output from the Twitter API.

Generally, you should expect zstd to compress slightly better than gzip. My results are as follows using standard Linux command-line tools with default settings:

uncompressed 617 KB
gzip (default) 51 KB
zstd (default) 48 KB

To test the decompression performance, I repeatedly uncompress the same file. Because it is a relatively small file, we should expect disk accesses to be buffered and fast.

Without any tweaking, I get twice the performance with zstd compared to the standard command-line gzip (which may differ from what your web server uses) while also having better compression. It is win-win. Modern compression algorithms like zstd can be really fast. For a fairer comparison, I have also included Eric Biggers’ libdeflate utility. It comes out ahead of zstd which stresses once more the importance of using good software!

gzip 175 MB/s
gzip (Eric Biggers) 424 MB/s
zstd 360 MB/s

My script is available. I run it under an Ubuntu system. I can create a RAM disk and the numbers go up slightly.

I expect that I understate the benefits of fast compression routines:

    1. I use a docker container. If you use containers, then disk and network accesses are slightly slower.
    2. I use the standard command-line tools. With a tight integration of the software libraries within your software, you can probably avoid many system calls and bypass the disk entirely.

Thus my numbers are somewhat pessimistic. In practice, you are even more bounded by computational overhead and by the choice of algorithm.

The lesson is that there can be large differences in decompression speed and that these differences matter. You ought to benchmark.

What about parsing the uncompressed JSON? We have demonstrated that you can often parse JSON at 3 GB/s or better. I expect that, in practice, you can make JSON parsing almost free compared to compression, disk and network delays.

Update: This blog post was updated to include Eric Biggers’ libdeflate utility.

Note: There have been many requests to expand this blog post with various parameters and so forth. The purpose of the blog post was to illustrate that there are large performance differences, not to provide a survey of the best techniques. It is simply out of the scope of the current blog post to identify the best approach. I mean to encourage you to run your own benchmarks.

See also: Cloudflare has its own implementation of the algorithm behind gzip. They claim massive performance gains. I have not tested it.

Further reading: Parsing Gigabytes of JSON per Second, VLDB Journal 28 (6), 2019

Science and Technology links (June 26th 2021)

  1. Reportedly, half of us own a smartphone.
  2. It is often reported that women or visible minorities earn less money. However, ugly people are doing comparatively even more poorly.
  3. We have highly efficient and cheap solar panels. However, they must be disposed of after a few years, which leads to an increasing volume of trash. Handling that trash is not cheap:

    The totality of these unforeseen costs could crush industry competitiveness. (…) By 2035, discarded panels would outweigh new units sold by 2.56 times. In turn, this would catapult the [cost] to four times the current projection. The economics of solar — so bright-seeming from the vantage point of 2021 — would darken quickly as the industry sinks under the weight of its own trash.

  4. Ultrasounds may be a viable therapy against Alzheimer’s.
  5. Axolotls regenerate any part of their body, including their spinal cord. Though they only weigh between 60 g and 110 g, they may live 20 years.
  6. Some dinosaurs once thrived in the Arctic.
  7. It was once believed that hair greying was essentially an irreversible process. Researchers looked at actual hair samples and found that in many cases, a hair that once turned gray could regain its colour.
  8. At least as far as your muscles go, you can remain fit up to an advanced age. As supportive evidence, researchers found that human muscle stem cells are refractory to aging:

    We find no measurable difference in regeneration across the range of ages investigated up to 78 years of age.

    Further work shows that you can promote muscle regeneration by reprogramming the cells in muscle fibers, thus potentially activating these muscle stem cells.

  9. Many people who are overweight suffer from the metabolic syndrome which includes abdominal obesity, high blood pressure, and high blood sugar. It affects about 25% of us worldwide. Researchers have found that a particular hormone, Asprosin, is elevated in people suffering from the metabolic syndrome. Suppressing asprosin in animal models reduced appetite and increased leanness, maybe curing the metabolic syndrome. We may hope that this work will quickly lead to human trials.
  10. During the current decade, the share of Canadians that are 65 years and older will rise from 17.4 per cent to 22.5 per cent of the population.
  11. The Internet probably uses less energy than you think and most projections are pessimistic.

How long should you work on a problem?

Lev Reyzin says that working too long on a problem might be unproductive:

I, personally, have diminishing (or negative?) returns to my creative work as I explicitly work on a problem past some amount of time. I often have insights coming to me out of nowhere while I’m relaxing or enjoying hobbies on nights or weekends.

Whenever one considers innovative endeavors and their productivity, one must consider that innovation is fundamentally wasteful. The problem with innovation is not about how to get as much of it for as little of a cost as possible. It is to get innovation at all. By optimizing your production function, you risk losing all of it. I sooner blame someone for his publication list being too long than being too short, said Dijkstra.

My view is that we tend to underestimate “intellectual latency”. There is a delay between the time you approach a new idea and the time you have fully considered it.

Thus our brains are not unlike computers. Your processor might be able to run at 4 GHz and be able to retire 4 instructions per cycle… but very rarely are you able to reach such a throughput while working on a single task. The productive intellectuals that I know tend to work on a few ideas at once. Maybe they are writing a book while building a piece of software and writing an essay.

So you should not focus on one unique task in the hope of finishing it faster. You may complete it slightly faster if you omit everything else, but the sum total of your productivity might be much lower.

There is also a social component to human cognition. If you hold on to a problem for very long, working tirelessly on it, you may well deprive yourself of the input of others. You should do good work and then quickly invite others to improve on it. No matter how smart you think you are, you cannot come close to the superior ingenuity of the open world.

Energy and sanity are essential ingredients of sustained intellectual productivity. Hammering at a single problem for a long time is both maddening and draining. Our brains are wired to like learning about new ideas. Your brain wants to be free to explore.

And finally, the most important reason to limit the amount of work you invest on a single task is that it is a poor strategy even if you can do it with all the energy and intelligence in the world. Sadly, most of what you do is utterly useless. You are like an investor in a stock market where almost all stocks are losers. Putting all your money on one stock would ensure your ruin. You cannot know, at any given time, what will prove useful. Maybe going outside and playing with your son sounds like a waste of your potential right now, but it might be the one step that puts your life on a trajectory of greatness. You want to live your life diversely, touching many people, trying many things. Learn to cook. Make cocktails. Dance. Go to the theatre. Play video games. Write assembly code. Craft a novel.

Many years ago I started to blog. I also started publishing my software as open source in a manner that could be useful to others. I started posting my research papers as PDFs that anyone could download. None of these decisions seemed wise at first. They took time away from “important problems”. I was ridiculed at one point or another for all of them. Yet these three decisions ended up being extremely beneficial to me.

Further reading: Peer-reviewed papers are getting increasingly boring

Science and Technology links (June 12th 2021)

    1. We completed the sequencing of the human genome.
    2. AstraZeneca’s drug Lynparza cut combined risk of recurrence of breast cancer or death by 42% among women in study.
    3. Glycine and N-acetylcysteine supplementation improves muscle strength and cognition.
    4. We found Egypt’s ancient capital. It had been lost for 3,400 years. It is unclear why it was abandoned.
    5. We estimate that over 6 million Americans have Alzheimer’s. Currently, Alzheimer’s is effectively incurable and no drug is known to reverse or halt it. Once you have Alzheimer’s, you start an irreversible and drastic cognitive decline. The USA has approved the first new Alzheimer’s drug in 20 years. The American government decided to approve the new drug, aducanumab, even though it is unclear whether it genuinely stops Alzheimer’s. It clears the brain from accumulated proteins and might slow the cognitive decline, but that latter claim is uncertain. The approval is controversial as the company producing aducanumab stands to make a lot of money while, possibly, providing no value whatsoever to the patient (and the drug might even have negative side-effects). Yet by deploying the drug today, we stand to learn much about its potential benefits and, if you are affected by Alzheimer’s, you may feel that you do not have much to lose.
    6. Trials begin on lozenge that rebuilds tooth enamel.
    7. The Google folks founded an anti-aging company called Calico a few years ago. One of its star employees is Cynthia Kenyon, who is famous for showing that aging is malleable. Their latest paper suggests that we might be able to rejuvenate individual cells within the body safely.
    8. Upcoming disks (SSDs) have a sequential read speed of up to 14 GB/s (PCIe 5.0).
    9. We are curing the blind: “researchers added light-sensitive proteins to the man’s retina, giving him a blurry view of objects”. (Source: New York Times)
    10. You might think that government research grants are given on merit. Maybe not always. Applicants who shared both a home and a host organization with one panellist received a grant 40% more often than average. (Source: Nature)
    11. The Roman Empire thrived under a climate that was much warmer. The Empire declined when the temperature got colder:

      This record comparison consistently shows the Roman as the warmest period of the last 2 kyr, about 2 °C warmer than average values for the late centuries for the Sicily and Western Mediterranean regions. After the Roman Period a general cooling trend developed in the region with several minor oscillations. We hypothesis the potential link between this Roman Climatic Optimum and the expansion and subsequent decline of the Roman Empire.

      (Source: Nature)

    12. Reportedly, China is increasing its coal power capacity at a rate of 21% per year. Its yearly increase alone is six times Germany’s entire coal-fired capacity.

Computing the number of digits of an integer even faster

In my previous blog post, I documented how one might proceed to compute the number of digits of an integer quickly. E.g., given the integer 999, you want 3 but given the integer 1000, you want 4. It is effectively the integer logarithm in base 10.

On computers, you can quickly compute the integer logarithm in base 2, and it follows that you can move from one to the other rather quickly. You just need a correction which you can implement with a table. A very good solution found in references such as Hacker’s Delight is as follows:

    static uint32_t table[] = {9, 99, 999, 9999, 99999,
    999999, 9999999, 99999999, 999999999};
    int y = (9 * int_log2(x)) >> 5;
    y += x > table[y];
    return y + 1;

Except for the computation of the integer logarithm, it involves a multiplication by 9, a shift, a conditional move, a table lookup and an increment. Can you do even better? You might! Kendall Willets found an even more economical solution.

int fast_digit_count(uint32_t x) {
  static uint64_t table[] = {
      4294967296,  8589934582,  8589934582,  8589934582,  12884901788,
      12884901788, 12884901788, 17179868184, 17179868184, 17179868184,
      21474826480, 21474826480, 21474826480, 21474826480, 25769703776,
      25769703776, 25769703776, 30063771072, 30063771072, 30063771072,
      34349738368, 34349738368, 34349738368, 34349738368, 38554705664,
      38554705664, 38554705664, 41949672960, 41949672960, 41949672960,
      42949672960, 42949672960};
  return (x + table[int_log2(x)]) >> 32;
}

If I omit the computation of the integer logarithm in base 2, it requires just a table lookup, an addition and a shift:

add     rax, qword ptr [8*rcx + table]
shr     rax, 32

The table contains the numbers ceil(log10(2^j)) * 2^32 + 2^32 - 10^ceil(log10(2^j)) for j from 2 to 30, and then just ceil(log10(2^j)) * 2^32 for j = 31 and j = 32. The first value is 2^32.

My implementation of Kendall’s solution is available.

Using modern C++, you can compute the table using constant expressions.
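For instance, here is a hypothetical compile-time construction of the table (a sketch assuming C++17 and the GCC/clang builtin; the names are mine). For each j, let d be the number of decimal digits of 2^j and p = 10^d: when p fits in 32 bits, the entry (d + 1) * 2^32 - p encodes the digit count and corrects for crossing a power of ten; otherwise, no crossing can occur for 32-bit inputs and d * 2^32 suffices. The first entry comes out as 2 * 2^32 - 10 rather than the published 2^32, but both count the digits of 0 and 1 correctly.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// One table entry per value of int_log2(x), computed at compile time.
constexpr uint64_t digit_table_entry(int j) {
  uint64_t power = uint64_t(1) << j; // 2^j
  uint64_t d = 1, p = 10;            // d = digits of 2^j, p = 10^d
  while (p <= power) { d++; p *= 10; }
  return (p <= (uint64_t(1) << 32)) ? ((d + 1) << 32) - p : d << 32;
}

constexpr std::array<uint64_t, 32> make_digit_table() {
  std::array<uint64_t, 32> t{};
  for (int j = 0; j < 32; j++) { t[size_t(j)] = digit_table_entry(j); }
  return t;
}

int fast_digit_count_constexpr(uint32_t x) {
  static constexpr std::array<uint64_t, 32> table = make_digit_table();
  return int((x + table[size_t(31 - __builtin_clz(x | 1))]) >> 32);
}
```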

Further reading: Josh Bleecher Snyder has a blog post on this topic which tells the whole story.

Computing the number of digits of an integer quickly

Suppose I give you an integer. How many decimal digits would you need to write it out? The number ‘100’ takes 3 digits whereas the number ’99’ requires only two.

You are effectively trying to compute the integer logarithm in base 10 of the number. I say ‘integer logarithm’ because you need to round up to the nearest integer.

Computers represent numbers in binary form, so it is easy to compute the logarithm in base two. In C using GCC or clang, you can do so as follows using a count-leading-zeroes function:

int int_log2(uint32_t x) { return 31 - __builtin_clz(x|1); }

Though it looks ugly, it is efficient. Most optimizing compilers on most systems will turn this into a single instruction.

How do you convert the logarithm in base 2 into the logarithm in base 10? From elementary mathematics, we have that log10(x) = log2(x) / log2(10). So all you need is to divide by log2(10)… or get close enough. You do not want to actually divide, especially not by a floating-point value, so you want to multiply and shift instead. Multiplying and shifting is a standard technique to emulate division.

You can get pretty close to a division by log2(10) if you multiply by 9 and then divide by 32 (2 to the power of 5). The division by a power of two is just a shift. (I initially used a division by a much larger power but readers corrected me.)
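As a quick sanity check (my own addition, not from the original post), you can verify by brute force that the multiply-by-9-shift-by-5 approximation never undershoots floor(log10(2^j)) by more than one for j from 0 to 31; that gap is exactly what an off-by-one correction can absorb:

```cpp
#include <cassert>
#include <cstdint>

// floor(log10(2^j)) computed exactly with integer arithmetic:
// count the powers of ten that are at most 2^j.
int floor_log10_pow2(int j) {
  uint64_t p = uint64_t(1) << j;
  int d = 0;
  for (uint64_t ten = 10; ten <= p; ten *= 10) d++;
  return d;
}

// The multiply-and-shift approximation from the post: 9/32 = 0.28125,
// a slight underestimate of log10(2) = 0.30103...
int approx_log10_pow2(int j) { return (9 * j) >> 5; }
```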

Unfortunately, that is not quite good enough because we do not actually have the logarithm in base 2, but rather a truncated version of it. Thus you may need an off-by-one correction. The following code works:

    static uint32_t table[] = {9, 99, 999, 9999, 99999, 
    999999, 9999999, 99999999, 999999999};
    int y = (9 * int_log2(x)) >> 5;
    y += x > table[y];
    return y + 1;

It might compile to the following assembly:

        or      eax, 1
        bsr     eax, eax
        lea     eax, [rax + 8*rax]
        shr     eax, 5
        cmp     dword ptr [4*rax + table], edi
        adc     eax, 0
        add     eax, 1

Loading from the table probably incurs multiple cycles of latency (e.g., 3 or 4). The x64 bsr instruction also has a long latency of 3 or 4 cycles. My code is available.

You can port this function to Java as follows if you assume that the number is non-negative:

        static final int[] table = {9, 99, 999, 9999, 99999,
            999999, 9999999, 99999999, 999999999};
        int digitCount(int num) {
            int l2 = 31 - Integer.numberOfLeadingZeros(num|1);
            int ans = ((9*l2)>>>5);
            if (num > table[ans]) { ans += 1; }
            return ans + 1;
        }

I wrote this blog post to answer a question by Chris Seaton on Twitter. After writing it up, I found that the always-brilliant Travis Downs had proposed a similar solution with a table lookup. I believe he requires a larger table. Robert Clausecker once posted a solution that might be close to what Travis had in mind.

Furthermore, if the number of digits is predictable, then you can write code with branches and get better results in some cases. However, you should be concerned that a single branch miss can cost you 15 cycles and tens of instructions.

Update: There is a follow-up to this blog post… Computing the number of digits of an integer even faster

Further reading: Converting binary integers to ASCII characters and Integer log 10 of an unsigned integer — SIMD version

Note: Victor Zverovich stated on Twitter that the fmt C++ library relies on a similar approach. Pete Cawley showed that you could achieve the same result that I got initially by multiplying by 77 and then shifting by 8, instead of my initially larger constants. He implemented his solution for LuaJIT. Giulietti pointed out to me by email that almost exactly the same routine appears in Hacker’s Delight at the end of chapter 11.

All models are wrong

All models are wrong, but some are useful is a common saying in statistics. It does not merely apply to statistics, however; it is a general observation. Box (1976) wrote an influential article on the topic. He says that you make progress by successively making models followed by experiments, followed by more models, and then more experiments. And so forth. Importantly, a more sophisticated model might be worse and lead to mediocrity.

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

Kelly echoed this sentiment in his essay on thinkism:

Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems.

Once you take for granted that all models are incorrect, the next question to ask is which models are useful, says Box:

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an “ideal” gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules. For such a model there is no need to ask the question “Is the model true?”. If “truth” is to be the “whole truth” the answer must be “No”. The only question of interest is “Is the model illuminating and useful?”.

How do you know if something is useful? By testing it out in the real world!

Why would all models be incorrect? Part of the answer has to do with the fact that pure logic, pure mathematics only works locally. It does not scale. It does not mean that pure logic is ‘bad’, only that its application is limited.

Programmers and other system designers are ‘complexity managers’. If you are working with very strict rules in a limited domain, you can make pure logic prevail. A programmer can prove that a given function is correct. However, at scale, all software, all laws, all processes, all theories, all constitutions are incorrect. You assess whether they are useful. You check that they are correct in the way you care about. You cannot run a country or a large business with logic alone. In a sense, pure logic does not scale. Too many people underestimate the forces that push us toward common law and away from top-down edicts.

If you are a programmer, you should therefore not seek to make your software flawless and perfect. You may end up with worse software in the process. It may become overengineered. To get good software, test it out in practice. See how well it meets the needs of your clients. Then revise it, again and again.

If you are doing research, you should not work from models alone. Rather, you should start from a model, test it out in a meaningful manner, refine it again (based on your experience) and so forth, in a virtuous circle.

Evidently, running a business has to follow the same paradigm. All business plans are wrong, some are useful. Start with a plan, but revise it at the end of the first day.

In whatever you do, focus on what works. Try to test out your ideas as much as possible. Adjust your models frequently.

Never, never accept an untested model no matter how smart and clever its authors are.

Credit: I am grateful to Peter Turney for links and ideas.

Further reading:  Gödel’s loophole.

Science and Technology links (May 22nd 2021)

  1. Most computer chips today in flagship phones and computers use a process based on a 5 nm or larger resolution. Finer resolutions usually translate into lower energy usage and lower heat production. Given that many of our systems are limited by heat or power, finer resolutions lead to higher performance. The large Taiwanese chip maker (TSMC) announced a breakthrough that might allow much finer resolutions (down to 1 nm). IBM recently reported a similar but less impressive breakthrough. It is unclear whether the American giant, Intel, is keeping up.
  2. Good science is reproducible. If other researchers follow whatever you describe in your research article, they should get the same results. That is, what you report should be an objective truth and not the side-effect of your beliefs or of plain luck. Unfortunately, we rarely try to reproduce results. When we do, it is common to be unable to reproduce the results from peer-reviewed research papers. The system is honor-based: we trust that people do their best to check their own results. What happens when mistakes happen? Over time, other researchers will find out. Unfortunately, reporting such failures is typically difficult. Nobody likes to make enemies and the burden of proof is always on you when you want to denounce other people’s research. The problem is so common that we have a name for it: the replication crisis. The replication crisis has attracted more and more attention because it is becoming an existential threat: if a system produces research that cannot be trusted, the whole institution might fall. We see the replication crisis in psychology, cancer research and machine learning. Researchers now report that unreproducible research can be cited 100 times more than reproducible research. It suggests that people who produce unreproducible research might have an advantage in their careers and that they might go up the ranks faster.
  3. Recent PCs and tablets store data on solid-state drives (SSDs) that can be remarkably fast. The latest Sony PlayStation has an SSD with a bandwidth exceeding 5 GB/s. Conventional (spinning) disks have lagged behind with a bandwidth of about 200 MB/s. However, conventional disks can be much larger. It seems, however, that conventional disks might be getting faster. The hard drive maker Seagate has been selling conventional disks that have a bandwidth of 500 MB/s.
  4. As you age, you accumulate cells that are dysfunctional and should otherwise die: these are called senescent cells. We are currently developing therapies to remove them. Martinez-Zamudio et al. report that a large fraction of some cells in the immune system of older human beings are senescent (e.g., 64%). Clearing these senescent cells could have a drastic effect. We shall soon know.
  5. There are 50 billion birds and 1.6 billion sparrows.
  6. Computer scientists train software neural networks (for artificial intelligence) using backpropagation. It seems that people believe that such a mechanism (backpropagation) is not likely to exist in biology. Furthermore, people seem to believe that in biological brains, learning is “local” (at the level of the synapse). Recently, researchers have shown that we can train software neural networks using another technique that is ‘biologically plausible’, called zero-divergence inference learning. The implicit assumption is that these software systems are thus a plausible model for biological brains. It is unclear to me whether that’s a valid scientific claim: is it falsifiable?
  7. Ancient Romans used lead for everything. It appears that Roman children suffered from lead poisoning and had a related high mortality rate.
  8. Knight et al. found strong evidence to support the hypothesis that vitamin D could help prevent breast cancer. Taking walks outside in the sun while not entirely covered provides your body with vitamin D.
  9. Mammal hearts do not regenerate very well. Hence, if your heart is damaged, it may never repair itself. It appears that some specific stem cells can survive when grafted to living hearts and induce regeneration.
  10. Persistent short sleep duration (6 hours or less) is associated with a 30% increased dementia risk. (Note that this finding does not imply that people sleeping a lot are in good health. It also does not imply that you are sick if you are sleeping little.)
  11. Researchers have rejuvenated the blood cells of old mice using a form of vitamin B3.
  12. There are 65 animal species that can laugh.
  13. Between the years 1300 and 1400, the area near Greenland became relatively free from ice, as it was effectively exporting its ice to subarctic regions. It seems to match the beginning of the Little Ice Age, a time when, at least in Europe, cold temperatures prevailed.

Counting the number of matching characters in two ASCII strings

Suppose that you give me two ASCII strings having the same number of characters. I wish to compute efficiently the number of matching characters (same position, same character). E.g., the strings ‘012c’ and ‘021c’ have two matching characters (‘0’ and ‘c’).

The conventional approach in C would look as follows:

uint64_t standard_matching_bytes(char * c1, char * c2, size_t n) {
    size_t count = 0;
    size_t i = 0;
    for(; i < n; i++) {
        if(c1[i] == c2[i]) { count++; }
    }
    return count;
}

There is nothing wrong with this code. An optimizing compiler can auto-vectorize this code so that it will do far fewer than one instruction per byte, given long enough strings.

However, it does appear that the routine looks at every character, one by one. So it looks like you are loading two values, then you are comparing and then incrementing a counter, for each character. So it might compile to over 5 instructions per character (prior to auto-vectorization).

What you can do instead is load the data in blocks of 8 bytes, into 64-bit integers, as in the following code. Do not be misled by the apparently expensive memcpy calls: an optimizing compiler will turn these function calls into a single load instruction.

uint64_t matching_bytes(char * c1, char * c2, size_t n) {
    size_t count = 0;
    size_t i = 0;
    uint64_t x, y;
    for(; i + sizeof(uint64_t) <= n; i += sizeof(uint64_t)) {
      memcpy(&x, c1 + i, sizeof(uint64_t));
      memcpy(&y, c2 + i, sizeof(uint64_t));
      count += matching_bytes_in_word(x,y);
    }
    for(; i < n; i++) {
        if(c1[i] == c2[i]) { count++; }
    }
    return count;
}

So we just need a function that can compare two 64-bit integers and find how many matching bytes there are. Thankfully there are fairly standard techniques to do so such as the following. (I borrowed part of the routine from Wojciech Muła.)

uint64_t matching_bytes_in_word(uint64_t x, uint64_t y) {
  uint64_t xor_xy = x ^ y;
  const uint64_t t0 = (~xor_xy & 0x7f7f7f7f7f7f7f7fllu) + 0x0101010101010101llu;
  const uint64_t t1 = (~xor_xy & 0x8080808080808080llu);
  uint64_t zeros = t0 & t1;
  return ((zeros >> 7) * 0x0101010101010101ULL) >> 56;
}

With this routine, you can bring down the instruction count to about 2 per character, including all the overhead and the data loading. It is strictly better than what you could do with character-by-character processing, by a factor of two (for long strings).
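As a sanity check (my own test, not from the original source), the word-level routine can be exercised on its own; the helper below (`count_word`, a name of my choosing) loads eight bytes from each string and counts the matches:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

uint64_t matching_bytes_in_word(uint64_t x, uint64_t y) {
  uint64_t xor_xy = x ^ y;
  // Matching bytes produce a zero byte in xor_xy. The classic SWAR trick
  // below sets bit 7 of every zero byte, then sums those bits into the
  // top byte with a multiplication.
  const uint64_t t0 = (~xor_xy & 0x7f7f7f7f7f7f7f7fllu) + 0x0101010101010101llu;
  const uint64_t t1 = (~xor_xy & 0x8080808080808080llu);
  uint64_t zeros = t0 & t1;
  return ((zeros >> 7) * 0x0101010101010101ULL) >> 56;
}

// Compare the first 8 bytes of two strings (both must have length >= 8).
uint64_t count_word(const char *a, const char *b) {
  uint64_t x, y;
  memcpy(&x, a, sizeof(x));  // compiles to a single 64-bit load
  memcpy(&y, b, sizeof(y));
  return matching_bytes_in_word(x, y);
}
```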

Though I seem to restrict the problem to ASCII inputs, my code actually counts the number of matching bytes. If you know that the input is ASCII, you can further optimize the routine.

I leave it as an exercise for the reader to write a function that counts the number of matching characters within a range, or to determine whether all characters in a given range match.

The proper way to solve this problem is with SIMD instructions, and most optimizing compilers should do that for you starting from a simple loop. However, if it is not possible and you have relatively long strings, then the approach I described could be beneficial.

My source code is available.

Converting binary integers to ASCII characters: Apple M1 vs AMD Zen2

Programmers often need to write integers as characters. Thus given the 32-bit value 1234, you might need a function that writes the characters 1234. We can use the fact that the ASCII numeral characters are in sequence in the ASCII table: ‘0’+0 is ‘0’, ‘0’+1 is ‘1’ and so forth. With this optimization in mind, the standard integer-to-string algorithm looks as follows:

while(n >= 10)
  p = n / 10
  r = n % 10
  write '0' + r
  n = p
write '0' + n

This algorithm writes the digits in reverse. So actual C/C++ code will use a pointer that you decrement (and not increment):

  while (n >= 10) {
    const uint32_t p = n / 10;
    const char r = char(n % 10);
    n = p;
    *c-- = '0' + r;
  }
  *c-- = '0' + char(n);

You can bound the size of the string (10 characters for 32-bit integers, 20 characters for 64-bit integers). If you have signed integers, you can detect the sign initially and make the integer value non-negative, write out the digits and finish with the sign character if needed. If you know that your strings are long, you can do better by writing out the characters two at a time using lookup tables.
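Putting the pieces together, a self-contained sketch might look as follows (`itoa_backward` and `to_string_demo` are hypothetical names of my own; production code would manage buffers differently and would write digits two at a time):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Writes the decimal digits of n so that the last digit lands at end - 1,
// and returns a pointer to the first digit. The caller must provide at
// least 10 bytes before `end`, the maximum for a 32-bit value.
char *itoa_backward(uint32_t n, char *end) {
  char *c = end;
  while (n >= 10) {
    const uint32_t p = n / 10;
    *--c = char('0' + (n % 10));
    n = p;
  }
  *--c = char('0' + n);
  return c;
}

std::string to_string_demo(uint32_t n) {
  char buf[10];
  char *start = itoa_backward(n, buf + sizeof(buf));
  return std::string(start, buf + sizeof(buf));
}
```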

How fast is this function? It is going to take dozens of instructions and CPU cycles. But where is the bottleneck?

If you look at the main loop, and pay only attention to the critical data dependency, you divide your numerator by 10, then you check its value, and so forth. So your performance is bounded by the speed at which you can divide the numerator by 10.

The division instruction is relatively slow, but most compilers will convert it into a multiplication and a shift. It implies that the whole loop has a latency of about 5 cycles if you count three cycles for the multiplication and one cycle for the shift, with one cycle for the loop overhead. Of course, the function must also compute the remainder and write out the result, but their cost is maybe less important. It is not that these operations are themselves free: computing the remainder is more expensive than computing the quotient. However, we may get them almost for free because they are not on the critical data dependency path.
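For instance, an unsigned 32-bit division by 10 can be carried out with the magic constant 3435973837 = ceil(2^35/10); this sketch (my own illustration, not from the post) shows the kind of code the compiler generates for you:

```cpp
#include <cassert>
#include <cstdint>

// Division by 10 as a multiply and a shift: because
// 3435973837 = ceil(2^35 / 10), we have
// (n * 3435973837) >> 35 == n / 10 for every 32-bit unsigned n.
// You would not write this by hand; the compiler does it for you.
uint32_t div10(uint32_t n) {
  return uint32_t((uint64_t(n) * 3435973837ULL) >> 35);
}
```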

How correct is this analysis? How likely is it that you are just bounded by the division by 10? The wider your processor, the more instructions it can retire per cycle, the more true you’d expect this analysis to be. Our commodity processors are already quite wide. Conventional Intel/AMD processors can retire about 4 instructions per cycle. The Apple M1 processor can retire up to 8 instructions per cycle.

To test it out, let us add a function which only writes out the most significant digit.

  while (n >= 10) {
    n /= 10;
  }
  *c = '0' + char(n);

Here is the number of nanoseconds required per integer on average according to a benchmark I wrote. The benchmark is designed to measure the latency.

function     Apple M1, clang 12    AMD Zen2, gcc 10
fake itoa    11.6 ns/int           10.9 ns/int
real itoa    12.1 ns/int           12.0 ns/int

According to these numbers, my analysis seems correct on both processors. The numbers are a bit closer in the case of the Apple M1 processor, but my analysis is not sufficiently fine to ensure that this difference is significant.

Hence, at least in this instance, your best chance of speeding up this function is either by dividing by 10 faster (in latency) or else by reducing the number of iterations (by processing the data in large chunks). The latter is already found in production code.

In the comments, Travis Downs remarks that you can also try to break the chain of dependencies (e.g., by dividing the task in two).

Further reading: Faster Remainder by Direct Computation: Applications to Compilers and Software Libraries, Software: Practice and Experience 49 (6), 2019

Science and Technology links (May 15th 2021)

  1. There were rainforests near the south pole 90 million years ago.
  2. Though commercial exchanges are typically win-win for both the buyer and the seller, people tend to view the buyer as more likely to be taken advantage of.
  3. People with low self-esteem are more likely to blame the political system for their personal problems.
  4. Moscona finds compelling evidence that the introduction of patents for plants in 1985 was followed by increased innovation. This suggests that government interventions in agriculture can help increase productivity and entice further research and development.
  5. Ahmed et al. find that employers favour female candidates. Men are especially well advised to stay away from female-dominated fields:

    Male applicants were about half as likely as female applicants to receive a positive employer response in female-dominated occupations.

    Female applicants do not suffer from such discrimination according to this study. Note that this new study only supports earlier findings. For example, Williams and Ceci find that academic female applicants have an enormous advantage over academic male applicants:

    Contrary to prevailing assumptions, men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males with matching lifestyles (single, married, divorced), with the exception of male economists, who showed no gender preference.

    We also view managers as less moral when they fire women according to Reynolds et al.

  6. Working from home may not reduce output, but it seems to reduce productivity in the sense that workers need more hours to get the same work done. Importantly, this remains true even after accounting for the reduction in commute time. That is, it would appear that though people do not have to commute, they reinvest all of that time, and more, into their work. It applies to both people with children and people without children though the negative effect is more pronounced with people having children. (It is only a single study so it should be taken with some skepticism.)
  7. The gut of infants is free from microbial colonization before birth.
  8. Worldwide, since 2000, we have gained the equivalent of France in forest ground.
  9. We can transplant fresh ovaries in mice, and we might soon be able to do so in women.
  10. The amount of insulin in your blood increases as you age, robustly, irrespective of other factors. In layman’s terms, you are getting more and more diabetic over time.
  11. Scientists create early embryos that are part human, part monkey.


Constructing arrays of Boolean values in Java

It is not uncommon that we need to represent an array of Boolean (true or false) values. There are multiple ways to do it.

The most natural way could be to construct an array of booleans (the native Java type). It is likely that when stored in an array, Java uses a byte per value.

boolean[] array = new boolean[listSize];
for(int k = 0; k < listSize; k++) {
  array[k] = ((k & 1) == 0) ? true : false;
}

You may also use a byte type:

byte[] array = new byte[listSize];
for(int k = 0; k < listSize; k++) {
  array[k] = ((k & 1) == 0) ? (byte)1 : (byte)0;
}

You can get more creative and you could do it using an array of strings:

String[] array = new String[listSize];
for(int k = 0; k < listSize; k++) {
  array[k] = ((k & 1) == 0) ? "Found" : "NotFound";
}

In theory, Java could optimize the array so that it requires only one bit per entry. In practice, each reference to a string value will use either 32 bits or 64 bits. The string values themselves use extra memory, but Java is probably smart enough not to store multiple times in memory the string “Found”. It might store it just once.

And then you can do it using a BitSet, effectively using about a bit per value:

BitSet bitset = new BitSet(listSize);
for(int k = 0; k < listSize; k++) {
  if((k & 1) == 0) { bitset.set(k); }
}

The BitSet has tremendous performance advantages: low memory usage, fancy algorithms that benefit from word-level parallelism, and so forth.

Typically, you do not just construct such an array, you also use it. But let us say that I just want to construct it as fast as possible: how do these techniques differ? I am going to use a 64K array with OpenJDK 8 on an Apple M1 processor.

My source code is available. In my benchmark, the content of the arrays is known at compile time, which is an optimistic case (the compiler could just precompute the results!). My results are as follows:

boolean 23 us
byte 23 us
String 60 us
BitSet 50 us

You may divide by 65536 to get the cost in nanoseconds per entry. You may further multiply by 3.2 (the clock frequency in GHz) to get the number of cycles per entry.

We should not be surprised that the boolean (and byte) arrays are fastest. It may require just one instruction to set the value. The BitSet is about twice as slow due to bit manipulations. It will also use 8 times less memory.

I was pleasantly surprised by the performance of the String approach. It will use between 4 and 8 times more memory than the simple array, and thus 32 to 64 times more memory than the BitSet approach, but it is reasonably competitive with the BitSet approach. Yet maybe we should not be surprised: the string values are known at compile time. Storing a reference to a string should be more computationally expensive than storing a byte value. These numbers tell us that Java can and will optimize String assignments.

I would still disapprove strongly of the use of String instances to store Boolean values. Java may not be able to always optimize away the computational overhead of class instances.

Furthermore, if you do not care about the extra functionality of the BitSet class, with its dense representation, then an array of boolean values (the native type) is probably quite sane.

Science and Technology links (May 1st 2021)

  1. Growing your own food could lower your carbon footprint by 3-5%.
  2. In recent years, we have acquired the ability to measure biological age: your chronological age does not necessarily match your biological age since some people age faster. We measure biological aging with gene expression. Researchers found that an eight-week program of diet, exercise, and meditation could reduce biological age by two years. The study was small and short.
  3. You may have heard that younger and older people are happier, with middle-aged people reporting less happiness. Kratz suggests that such studies might not have been methodologically sound.
  4. Our brain is not good at producing new neurons. However, we have a rather abundant supply of another brain cell, astrocytes. Researchers took astrocytes and converted them into fully functional, integrated neurons (in mice). (Source: Nature)
  5. A beer might produce between 200,000 and 2 million bubbles before going flat.
  6. We now have the technology to edit your genes, or the genes of your babies. CRISPR-Cas9 allows us to edit individual genes in a cell. But editing genes might often be unnecessary for medical purposes: it might suffice to silence or express the gene. Researchers have come up with a new technique called CRISPRoff which might do just that.
  7. The USA is approving the use of drones over people and at night.
  8. Omega 3 supplements may lower inflammation and boost repair mechanisms. (Source: Nature)
  9. We may finally have an effective vaccine against malaria.
  10. It appears that bimekizumab might be a remarkably effective drug against psoriasis or other related diseases.
  11. New anti-depressants appear to increase suicide risks among teenagers.
  12. When seeking venture capital, female and Asian entrepreneurs may have slightly more luck with investors.
  13. As your cells divide, it is believed that small mutations are introduced. Recent research suggests that even cells that never divide may mutate.
  14. Adult mammals heal from injuries by forming scars. A new drug may prevent scars. It works in mice.
  15. Worms live longer when they have less food. It appears that smelling food is enough to make this effect go away. (Source: Nature)
  16. Reducing your blood pressure is always a good thing.

Ideal divisors: when a division compiles down to just a multiplication

The division instruction is one of the most expensive instructions in your CPU. Thus optimizing compilers often compile divisions by known constants down to a multiplication followed by a shift. However, in some lucky cases, the compiler does not even need a shift. I call the corresponding divisors ideal. For the math geeks, they are related to Fermat numbers.

For 32-bit unsigned integers, we have two such divisors (641 and 6700417). For 64-bit unsigned integers, we have two different ones (274177 and 67280421310721). They are factors of 2^32 + 1 and 2^64 + 1 respectively. They are prime numbers.

So you have that

n/274177 = ( n * 67280421310721 ) >> 64


n/67280421310721 = ( n * 274177 ) >> 64.

In these expressions, the multiplication is the full multiplication (to a 128-bit result). It looks like there is still a ‘shift’ by 64 bits, but the ‘shift’ disappears in practice after compilation.

Of course, not all compilers may be able to pull this trick, but many do. Here is the assembly code produced by GCC when compiling n/274177 and n/67280421310721 respectively for an x64 target.

        movabs  rdx, 67280421310721
        mov     rax, rdi
        mul     rdx
        mov     rax, rdx
        mov     rax, rdi
        mov     edx, 274177
        mul     rdx
        mov     rax, rdx

You get similar results with ARM. It looks like ARM works hard to build the constant, but it is mostly a distraction again.

        mov     x1, 53505
        movk    x1, 0xf19c, lsl 16
        movk    x1, 0x3d30, lsl 32
        umulh   x0, x0, x1
        mov     x1, 12033
        movk    x1, 0x4, lsl 16
        umulh   x0, x0, x1

What about remainders?

What a good compiler will do is first compute the quotient, and then do a multiplication and a subtraction to derive the remainder. It is the general strategy. Thus, maybe surprisingly, it is more expensive to compute a remainder than a quotient in many cases!

You can do a bit better in some cases. There is a trick from our Faster Remainder by Direct Computation paper that compilers do not know about. You can compute the remainder directly, using exactly two multiplications (and a few move instructions):

n % 274177 = (uint64_t( n * 67280421310721 ) * 274177) >> 64


n % 67280421310721 = (uint64_t( n * 274177 ) * 67280421310721) >> 64.

In other words, the following two C++ functions are strictly equivalent:

// computes n % 274177
uint64_t div1(uint64_t n) {
    return n % 274177;
}

// computes n % 274177
uint64_t div2(uint64_t n) {
    return (uint64_t( n * 67280421310721 )
              * __uint128_t(274177)) >> 64;
}

Though the second function is more verbose and uglier, it will typically compile to more efficient code involving just two multiplications, back to back. It may seem like a lot, but it is likely better than what the compiler will do.
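You can convince yourself of the equivalence by brute force; the following sketch (my own sanity check, relying on the GCC/clang __uint128_t extension as above) compares the two functions:

```cpp
#include <cassert>
#include <cstdint>

// computes n % 274177 with the usual operator
uint64_t div1(uint64_t n) {
  return n % 274177;
}

// computes n % 274177 with two multiplications: since
// 274177 * 67280421310721 == 2^64 + 1, the direct-remainder
// trick from the paper is exact for all 64-bit n.
uint64_t div2(uint64_t n) {
  return uint64_t((uint64_t(n * 67280421310721ULL)
                   * __uint128_t(274177)) >> 64);
}
```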

In any case, if you are asked to pick a prime number and you expect to have to divide by it, you might consider these ideal divisors.

Further reading. Integer Division by Constants: Optimal Bounds

Some useful regular expressions for programmers

In my blog post, My programming setup, I stressed how important regular expressions are to my programming activities.

Regular expressions can look intimidating and outright ugly. However, they should not be underestimated.

Someone asked for examples of regular expressions that I rely upon. Here are a few.

  1. It is commonly considered a faux pas to include ‘trailing white space’ in code. That is, your lines should end with the line-return control characters and nothing else. In a regular expression, the end of the string (or line) is marked by the ‘$’ symbol, and a white-space can be indicated with ‘\s’, and a sequence of one or more white space is ‘\s+’. Thus if I search for ‘\s+$‘, I will locate all offending lines.
  2. It is often best to avoid non-ASCII characters in source code. Indeed, in some cases, there is no standard way to tell the compiler about your character encoding, so non-ASCII might trigger problems. To check all non-ASCII characters, you may do [^\x00-\x7F].
  3. Sometimes you insert too many spaces between a variable or an operator. Multiple spaces are fine at the start of a line, since they can be used for indentation, but other repeated spaces are usually in error. You can check for them with the expression \b\s{2,}. The \b indicates a word boundary.
  4. I use spaces to indent my code, but I always use an even number of spaces (2, 4, 8, etc.). Yet I might get it wrong and insert an odd number of spaces in some places. To detect these cases, I use the expression ^(\s\s)*\s[^\s]. To delete the extra space, I can select it with look-ahead and look-behind expressions such as (?<=^(\s\s)*)\s(?=[^\s]).
  5. I do not want a space after the opening parenthesis nor before the closing parenthesis. I can check for such a case with (\(\s|\s\)). If I want to remove the spaces, I can detect them with a look-behind expression such as (?<=\()\s.
  6. Suppose that I want to identify all instances of a variable. I can search for \bmyname\b. By using word boundaries, I ensure that I do not catch instances of the string inside other functions or variable names. Similarly, if I want to select all variables that end with some expression, I can do it with an expression like \b\w*myname\b.
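Several of these expressions carry over directly to code. Here is a small sketch (my own, not from the post) using C++ std::regex with its default ECMAScript grammar, covering the trailing-whitespace and repeated-space checks; note that std::regex lacks look-behind, so the deletion variants above remain editor-only:

```cpp
#include <cassert>
#include <regex>
#include <string>

// Expression 1 from the list: one or more whitespace characters
// anchored at the end of the line.
bool has_trailing_whitespace(const std::string &line) {
  static const std::regex re("\\s+$");
  return std::regex_search(line, re);
}

// Expression 3 from the list: a word boundary followed by two or more
// whitespace characters. Leading indentation does not match because
// there is no word boundary before it.
bool has_repeated_inner_spaces(const std::string &line) {
  static const std::regex re("\\b\\s{2,}");
  return std::regex_search(line, re);
}
```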

The great thing with regular expressions is how widely applicable they are.

Many of my examples have to do with code reformatting. Some people wonder why I do not simply use code reformatters. I do use such tools all of the time, but they are not always a good option. If you are going to work with other people who have other preferences regarding code formatting, you do not want to trigger hundreds of formatting changes just to contribute a new function. It is a major faux pas to do so. Hence you often need to keep your reformatting in check.