New release of the simdjson library: version 1.0

The most popular data format on the web is arguably JSON. It is a simple and convenient format. Most web services let you send and receive data in JSON.

Unfortunately, parsing JSON can consume a lot of time and energy. Back in 2019, we released the simdjson library. It broke speed records and it is still one of the fastest and most efficient JSON parsing libraries. It makes few compromises: it provides exact float parsing, exact unicode validation and so forth.

An independent benchmark compares it with other fast C++ libraries and demonstrates that it can use far less energy.

simdjson              2.1 J
RapidJSON             6.8 J
JSON for Modern C++  41 J

We have recently released version 1.0. It took us two years to get to that point. It is an important step for us.

There are many ways a library can give you access to the JSON data. A convenient approach is the DOM tree (DOM stands for document object model). In a DOM-based approach, the document is parsed entirely and materialized as an in-memory tree. For some applications, it is the right model, but in other instances, it is a wasteful step.

We also have streaming-based approaches. In such approaches, you have an event-based interface where the library calls user-provided functions when encountering different elements. Though it can be highly efficient, in part because it sidesteps the need to construct a DOM tree, it is a challenging programming paradigm.

Another approach is a simple serialization-deserialization. You provide a native data structure and you ask the library to either write it out in JSON or to turn the JSON into your data structure. It is often a great model. However, it has limited flexibility.

In simdjson, we are proposing a new approach which we call On Demand. The On Demand approach feels like a DOM approach, but it sidesteps the construction of the DOM tree. It is entirely lazy: it decodes only the parts of the document that you access.

With On Demand, you can write clean code as follows:

#include <iostream>
#include "simdjson.h"
using namespace simdjson;
int main(void) {
    ondemand::parser parser;
    padded_string json = padded_string::load("twitter.json");
    ondemand::document tweets = parser.iterate(json);
    std::cout << uint64_t(tweets["search_metadata"]["count"]) << " results." << std::endl;
}

In such an example, the library accesses only the content that you require, doing only minimal validation and indexing of the whole document.

With On Demand, if you open a file containing 1000 numbers and you need just one of these numbers, only one number is parsed. If you need to put the numbers into your own data structure, they are materialized there directly, without being first written to a temporary tree. Thus we expect that simdjson's On Demand might often provide superior performance when you do not need the intermediate materialized view of a DOM tree. The On Demand front-end was primarily developed by John Keiser.

In release 1.0 of the simdjson library, the On Demand front-end is our default, though we also support a DOM-based approach.

Release 1.0 adds several key features:

  1. In big data analytics, it is common to serialize large sets of records as multiple JSON documents separated by white space. You can now get the benefits of On Demand while parsing almost infinitely long streams of JSON records (see the sketch after this list). At each step, you have access to the current document, but a secondary thread indexes the following block. You can thus access enormous files while using a small amount of memory and achieve record-breaking speeds.
  2. Given an On Demand instance (value, array, object, etc.), you can now convert it to a JSON string using the to_json_string method, which returns a string view into the original document for unbeatable speeds.
  3. The On Demand front-end now supports the JSON Pointer specification. You can request a specific value using a JSON Pointer within a large document.
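
To make these features concrete, here is a rough sketch of how they might be used together. It follows the style of the earlier example; to_json_string and JSON Pointer support are named above, while iterate_many (for document streams) and the file records.json are my assumptions about the API and input, not verbatim simdjson documentation.

#include <iostream>
#include "simdjson.h"
using namespace simdjson;
int main(void) {
    ondemand::parser parser;
    padded_string json = padded_string::load("twitter.json");

    // (3) JSON Pointer against an On Demand document.
    ondemand::document tweets = parser.iterate(json);
    std::cout << uint64_t(tweets.at_pointer("/search_metadata/count")) << " results." << std::endl;

    // (2) to_json_string: a string view over the raw JSON of a value, without copying.
    ondemand::document tweets_again = parser.iterate(json); // re-iterate: a parser holds one document at a time
    std::string_view metadata = to_json_string(tweets_again["search_metadata"]);
    std::cout << metadata << std::endl;

    // (1) A stream of whitespace-separated JSON documents (records.json is hypothetical).
    padded_string records = padded_string::load("records.json");
    ondemand::document_stream stream = parser.iterate_many(records);
    size_t count = 0;
    for (auto doc : stream) { (void)doc; count++; }
    std::cout << count << " records" << std::endl;
}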

Release 1.0 is robust. We have extended and improved our documentation, and we have added many more tests.

The simdjson library is the result of the work of many people. I would like to thank Nicolas Boyer for working with me over the summer on finishing this version.

You can find simdjson on GitHub. You can use it by adding two files to your project (simdjson.h and simdjson.cpp), or as a CMake dependency or using many popular package managers.

Science and Technology links (September 18th 2021)

    1. 4.5% of us are psychopaths.
    2. U.S. per capita CO2 emissions are lower than they were in 1918.
    3. 9/10 of People With Alzheimer’s Lose Some of Their Sense of Smell.
    4. Graphene-based hard drives could have ten times the storage capacity.
    5. Ageing yields improvements as well as declines across attention and executive functions.
    6. Progress toward an Alzheimer’s vaccine.
    7. The world’s first recorded case of an autonomous drone attacking humans took place in March 2020.
    8. Abstinence from alcohol is associated with an increased risk for all-cause dementia.

Random identifiers are poorly compressible

It is common in data engineering to find that we have too much data. Thus engineers commonly seek compression routines.

At the same time, random identifiers are handy. Maybe you have many users or transactions and you want to assign each one of them a unique identifier. It is not uncommon for people to use wide randomized identifiers, e.g. 64-bit integers.

By definition, if your identifiers are random, they are hard to compress. Compression fundamentally works by finding and eliminating redundancy.

Thus, if you want your identifiers to be compressible, you should make sure that they are not random. For example, you can use local identifiers that are sequential or nearly sequential (1, 2,…). Or you may try to use as few bits as possible per identifier: if you have only a couple of billions of identifiers, then you may want to limit yourself to 32-bit identifiers.

Often people do not want to design their systems by limiting the identifiers. They start with wide random identifiers (64-bit, 128-bit or worse) and they seek to engineer compressibility back into the system.

If you generate distinct identifiers, some limited compression is possible. Indeed, if the same integer cannot occur twice, then knowing one of them gives you some information on the others. In the extreme case where you have many, many distinct identifiers, they may become highly compressible. For example, if I tell you that I have 2^64 distinct 64-bit integers, then you know exactly what they are (all of them). But you are unlikely to ever have 2^64 distinct elements in your computer.

If you have relatively few distinct 64-bit integers, how large is the possible compression?

We can figure it out with a simple information-theoretical analysis. Let n be the number of possible identifiers (say 2^64), and let k be the number of distinct identifiers in your system. There are "n choose k" (the binomial coefficient) different possibilities. You can estimate "n choose k" with n^k/k! when k is small compared to n and n is large. If I want to figure out how many bits are required to index a value among n^k/k! possibilities, I need to compute the logarithm. I get k log n – log k! where the log is in base 2. Without compression, we can trivially use log n bits per entry. We see that with compression, I can save about (1/k) log k! bits per entry. By Stirling's approximation, that is no more than log k. So if you have k unique identifiers, you can save about log k bits per identifier with compression.

My analysis is rough, but it should be in the right ballpark. If you have 10,000 unique 64-bit identifiers then, at best, you should require about 52 bits per identifier… so about a 20% saving. It may not be practically useful. With a million unique 64-bit identifiers, things are a bit better and you can reach a saving of about 30%. You probably can get close to this ideal compression ratio with relatively simple techniques (e.g., binary packing).
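
To make the arithmetic concrete, here is a small sketch (my own illustration) that computes the ideal per-identifier saving, (1/k) log2 k!, using the lgamma function:

#include <cmath>
#include <cstdio>
#include <initializer_list>

// Ideal number of bits saved per 64-bit identifier when storing k distinct
// identifiers: (1/k) * log2(k!), with log2(k!) computed via lgamma.
double bits_saved_per_identifier(double k) {
  return (std::lgamma(k + 1) / std::log(2.0)) / k;
}

int main() {
  for (double k : {10000.0, 1000000.0}) {
    double saved = bits_saved_per_identifier(k);
    std::printf("k = %.0f: save about %.1f bits out of 64 (%.0f%%)\n",
                k, saved, 100.0 * saved / 64.0);
  }
  return 0;
}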

To get some compression, you need to group many of your identifiers. That is not always convenient. Furthermore, the compression itself may trade away some processing speed.

How I debate

Many of us feel that the current intellectual climate is difficult to bear. When I first noticed the phenomenon, people told me that it was because of Donald Trump. He just made it impossible to debate calmly. But now that Trump is gone, the climate is just as bad and, if anything, much worse.

Debates are essential in a free society. The alternative to debate is force. Either you convince your neighbour to do as you think they should, or else you send men with guns to their place.

It is tempting, when you have the upper hand, to use force and aggressive tactics against your opponents. However, this leaves little choice to your opponents: they have to return the favour. And if you both live long enough, chances are that they will.

Civility is a public good. We have to all commit to it.

I do not pretend to be the perfect debater. I make mistakes all the time. However, I try to follow these rules.

  1. Do not hope to change people’s core stance. This rarely, if ever, happens. That is not why we debate. If someone is in favour of Brexit and you are not, you can argue until you are blue in the face and they won’t change their stance. One of the core reasons to debate is to find common ground. People will naturally shy away from arguments that are weak. You can see a debate as a friendly battleground. Once the battle is over, you have probably not taken the other person’s moat, but if you did your job, you have enticed them to drop bad arguments. And they have done the same: they have exposed weaknesses in your models. It implies that the debate should bear on useful elements, like arguments and facts. It also implies that debate can and should be productive… even if it never changes anyone’s stance.
  2. Your goal in a debate is neither to demonstrate that the other person is bad nor that you are good. Leave people's character out of the debate. This includes your own character. For example, never argue that you are a good person. Reject character assassination, either of yourself or of others. The most popular character assassination tactic is "by association": "your employer once gave money to Trump so you are a racist". "You read this news source so you are part of their cult." You must reject such arguments, whether they are applied to you or to others. Another popular tactic is to question people's motives. Maybe someone works for a big oil company, so that explains why they are in favour of Brexit. Maybe someone is a member of the communist party, and that's why they want to give the government more power. It is true that people's motives impact their opinions, but motives have no place in civil debate. You can privately think that a given actor is "sold out", but you should not say it.
  3. Shy away from authority-based arguments. Saying that such and such is true because such and such individual says so is counterproductive: the other side can do the same and the debate becomes sterile. You can and should provide references and sources, but for the facts and arguments that they carry, not for their authority.

I believe that a case can be made that without a good intellectual climate, liberalism is bound to fade away. If you want to live in a free society, you have to help enforce good debates. If you are witnessing bad debates, speak up. Remind people of the rules. In fact, if I deviate from these rules, remind me: I will thank you.

Further reading: Arne Næss.

The big-load anti-pattern

When doing data engineering, it is common for engineers to want to first load all of the data in memory before processing the data. If you have sufficient memory and the loaded data is not ephemeral, or if you have small volumes, it is a sensible approach. After all, that is how a spreadsheet typically works: you load the whole spreadsheet in memory.

But what if your application falls in the "extract-transform-load" category: as you scan the data, you discard it? What if you have large volumes of data? Then I consider the "load everything in memory at once" approach a performance anti-pattern when you are doing high-performance data engineering.

The most obvious problem with a big load is scalability. As your data inputs get larger and larger, you consume more and more memory. Though it is true that over time we get more memory, we also tend to get more processing cores, and RAM is a shared resource. Currently, in 2021, some of the most popular instance types on the popular cloud system AWS have 4 GB per virtual CPU. If you have the means, AWS will provide you with memory-optimized virtual nodes that have 24 TB of RAM. However, these nodes have 448 logical processors sharing that memory.

Frequent and large memory allocations are somewhat risky. A single memory allocation failure may force an entire process to terminate. On a server, it might be difficult to anticipate what other processes do and how much memory they use. Thus each process should seek to keep its memory usage predictable, if not stable. Simply put, it is nicer to build your systems so that, as much as possible, they use a constant amount of memory irrespective of the input size. If you are designing a web service, and you put a hard limit on the size of a single result, you will help engineers build better clients.

You may also encounter various limits which reduce your portability. Not every cloud framework will allow you to upload a 40 GB file at once, without fragmentation. And, of course, on-device processing in a mobile setting becomes untenable if you have no bound on the data inputs.

But what about the performance? If you have inefficient code (maybe written in JavaScript or bad C++), then you should have no worries. If you are using a server that is not powerful, then you will typically have little RAM and a big load is a practical problem irrespective of the performance: you may just run out of memory. But if you are concerned with performance and you have lots of resources, the story gets more intricate.

If you are processing the data in tiny increments, you can keep most of the data that you are consuming in CPU cache. However, if you are using a big-load, then you need to allocate a large memory region, initialize it, fill it up and then read it again. The data goes from the CPU to the RAM and back again.

The process is relatively expensive. To illustrate the point, I wrote a little benchmark. I consider a function which allocates memory and populates an array of integers with the values 0, 1, 2…

  int * content = new int[volume/sizeof(int)]; // allocate `volume` bytes
  init(content, volume);                       // fill with the values 0, 1, 2, ...
  delete[] content;

It is a silly function: everything you would do that involves memory allocation is likely far slower. So how fast is this fast function? I get the following numbers on two of my machines. I pick the best results within each run.

                                       1 MB      1 GB
alloc-init (best) – AMD Rome Linux     33 GB/s   7 GB/s
alloc-init (best) – Apple M1           30 GB/s   9 GB/s

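For reference, here is a minimal, self-contained sketch of the kind of measurement involved (not my exact benchmark code; the trial count and the timer are arbitrary choices):

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <initializer_list>

// Fill the buffer with the values 0, 1, 2, ...
void init(int *content, size_t volume) {
  size_t n = volume / sizeof(int);
  for (size_t i = 0; i < n; i++) { content[i] = int(i); }
}

// Allocate `volume` bytes, initialize them, free them, and return the speed in GB/s.
double alloc_init_speed(size_t volume) {
  auto start = std::chrono::high_resolution_clock::now();
  int *content = new int[volume / sizeof(int)];
  init(content, volume);
  delete[] content;
  auto stop = std::chrono::high_resolution_clock::now();
  double seconds = std::chrono::duration<double>(stop - start).count();
  return volume / seconds / 1e9;
}

int main() {
  for (size_t volume : {size_t(1) << 20, size_t(1) << 30}) { // 1 MB and 1 GB
    double best = 0;
    for (int trial = 0; trial < 10; trial++) { // keep the best of several runs
      double speed = alloc_init_speed(volume);
      if (speed > best) { best = speed; }
    }
    std::printf("%zu bytes: %.1f GB/s\n", volume, best);
  }
  return 0;
}
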
Simply put, allocating memory and pushing data into it gets slower and slower with the volume. We can explain it in terms of CPU cache and RAM, but the principle is entirely general.

You may consider 7 GB/s or 9 GB/s to be a good speed, and indeed these processors and operating systems are efficient. However, consider that it is actually your starting point. We haven't read the data yet. If you need to actually "read" that data, let alone transform it or do any kind of reasoning over it, you must then bring it back from RAM to cache. So you have the full cache to RAM and RAM to cache cycle. In practice, it is typically worse: you load the whole huge file into memory. Then you allocate memory for an in-memory representation of the content. You then rescan the file and put it in your data structure, and then you scan your data structure again. Unavoidably, your speed will start to fall… 5 GB/s, 2 GB/s… and soon you will be in the megabytes per second.

Your pipeline could be critically bounded because it is built out of slow software (e.g., JavaScript code) or because you are relying on slow networks and disk. To be fair, if the rest of your pipeline runs in the megabytes per second, then memory allocation might as well be free from a speed point of view. That is why I qualify the big-load to be an anti-pattern for high-performance data engineering.

In a high-performance context, for efficiency, you should stream through the data as much as possible, reading it in chunks that are roughly the size of your CPU cache (e.g., megabytes). The best chunk size depends on many parameters, but it is typically neither tiny (kilobytes) nor huge (gigabytes). If your system's architecture rules out this kind of streaming, you may hit hard limits on your performance later.

It is best to view the processor as a dot at the middle of a sequence of concentric circles. The processor is hard of hearing: they can only communicate with people in the inner circle. But there is limited room in each circle. The further you are from the processor, the more expensive it is for the processor to talk to you because you first need to move to the center, possibly pushing out some other folks. The room close to the processor is crowded and precious. So if you can, you should have your guests come into the center once, and then exit forever. What a big load tends to do is to get people into the inner circle, and then out to some remote circle, and then back again into the inner circle. It works well when there are few guests because everyone gets to stay in the inner circle or nearby, but as more and more people come in, it becomes less and less efficient.

It does not matter how your code looks: if you need to fully deserialize all of a large data file before you process it, you have a case of big load. Whether or not you are using fancy techniques such as memory-mapped files does not change the equation. Some parameters like the size of your pages may help, but they do not affect the core principles.

Adding more memory to your system is likely to make the problem relatively worse. Indeed, systems with lots of memory can often pre-empt or buffer input/output accesses. It means that their best achievable throughput is higher, and thus the big-load penalty relatively worse.

How may you avoid the big-load anti-pattern?

    • Within the files themselves, you should have some kind of structure so that you do not need to consume the whole file at once when it is large. It comes naturally with popular formats such as CSV where you can often consume just one line at a time. If you are working with JSON data files, you may want to adopt JSON streaming for an equivalent result. Most data-engineering formats will support some concept of chunk or page to help you (see the sketch after this list).
    • Consider splitting your data. If you have a database engine, you may consider sharding. If you are working with large files, you may want to use smaller files. You should be cautious not to fall for the small-load anti-pattern. E.g., do not store only a few bytes per file and do not fragment your web applications into 400 loadable resources.
    • When compressing data, try to make sure you can uncompress small usable chunks (a few megabytes).
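
Here is the sketch promised above: a minimal example of chunked processing in C++ (the 1 MB chunk size and the line-counting "work" are arbitrary placeholders, not recommendations):

#include <fstream>
#include <iostream>
#include <vector>

int main(int argc, char **argv) {
  if (argc < 2) { std::cerr << "usage: " << argv[0] << " <file>" << std::endl; return 1; }
  constexpr size_t chunk_size = 1 << 20; // 1 MB: small enough to be cache-friendly
  std::vector<char> chunk(chunk_size);
  std::ifstream in(argv[1], std::ios::binary);
  size_t lines = 0;
  while (in) {
    in.read(chunk.data(), chunk.size());
    std::streamsize got = in.gcount(); // number of bytes actually read
    for (std::streamsize i = 0; i < got; i++) {
      if (chunk[i] == '\n') { lines++; }
    }
  }
  std::cout << lines << " lines" << std::endl;
  return 0;
}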


Note: If you are an experienced data engineer, you might object that everything I wrote is obvious. I would agree. This post is not meant to be controversial.

How fast can you pipe a large file to a C++ program?

Under many operating systems, you can send data from one process to another using 'pipes'. The term 'pipe' is probably used by analogy with plumbing and we often use the symbol '|' to represent a pipe (it looks like a vertical pipe).

Thus, for example, you can sort a file and send it as input to another process:

sort file | dosomething

The operating system takes care of the data. It can be more convenient than sending the data to a file first. You can have a long sequence of pipes, processing the data in many steps.

How efficient is it computationally?

The speed of a pipe depends on the program providing the data. So let us build a program that just outputs a lot of spaces very quickly:

  constexpr size_t buflength = 16384;
  std::vector<char> buffer(buflength, ' ');
  // `repeat` controls how many 16 kB blocks (and thus how much data) we write
  for(size_t i = 0; i < repeat; i++) {
    std::cout.write(buffer.data(), buflength);
  }

For the receiving program, let us write a simple program that receives the data, little else:

  constexpr size_t cache_length = 16384;
  char cachebuffer[cache_length];
  size_t howmany = 0;
  while(std::cin) {
    std::cin.read(cachebuffer, cache_length);
    howmany += std::cin.gcount(); // gcount() reports how many bytes the last read got
  }

You could play with the buffer sizes: I use relatively large buffers to minimize the pipe overhead.

I am sure you could write more efficient programs, but I believe that most software using pipes is going to be less efficient than these two programs.

I get speeds that are quite good under Linux but rather depressing under macOS:

macOS (Big Sur, Apple M1)     0.04 GB/s
Linux (CentOS 7, AMD Rome)    2 GB/s to 6.5 GB/s

Your results will be different: please run my benchmark. It might be possible to go faster with larger inputs and larger buffers.

Even if the results are good under Linux, the bandwidth is not infinite. You will get better results passing data from within a program, even if you need to copy it.

As observed by one of the readers of this blog, you can fix the performance problem under macOS by falling back on a C API:

  size_t howmany = 0;
  ssize_t tr; // read() is declared in <unistd.h>; it returns -1 on error and 0 at end of input
  while((tr = read(0, cachebuffer, cache_length)) > 0) {
    howmany += tr;
  }

You lose portability, but you gain a lot of performance. I achieve a peak performance of 7 GB/s or above which is much more comparable to the cost of copying the data within a process.

It is not uncommon for standard C++ approaches to disappoint performance-wise.

Science and Technology links (July 31st 2021)

  1. Researchers built a microscope that might be 10 times better than the best available microscopes.
  2. Subsidizing college education can lower earnings due to lower job experience:

    The Post 9/11 GI Bill (PGIB) is among the largest and most generous college subsidies enacted thus far in the U.S. (…) the introduction of the PGIB raised college enrollment by 0.17 years and B.A. completion by 1.2 percentage points. But, the PGIB reduced average annual earnings nine years after separation from the Army.

  3. Better looking academics have more successful careers. If the current pandemic reduces in-person meetings, it could be that this effect might become weaker?
  4. It appears that women frequently rape men:

    (…) male rape happens about as often as female rape, and possibly exceeds it. Evidence also shows that 80% of those who rape men are women.

  5. In the past, Greenland experienced several sudden warming episodes by as much as 16 degrees, without obvious explanation.
  6. Researchers took stem cells, turned them into ovarian follicles, and ended up with viable mice offspring. Maybe such amazing technology could come to a fertility clinic near you one day.
  7. We may soon benefit from a breakthrough that allows us to grow rice and potatoes with 50% more yield.
  8. American trucks sold today are often longer than military tanks used in the second world war.
  9. Organic food may not be better:

    If England and Wales switched 100 per cent to organic it would actually increase the greenhouse gas emissions associated with our food supply because of the greater need for imports. Scaling up organic agriculture might also put at risk the movement’s core values in terms of promoting local, fresh produce and small family farms.

  10. You can transmit data at over 300 terabits per second over the Internet.

    Not only have the researchers in Japan blown the 2020 record out of the proverbial water, but they’ve done so with a novel engineering method capable of integrating into modern-day fiber optic infrastructure with minimal effort.

    It suggests that we are far away from upper limits in our everyday Internet use and that there are still fantastic practical breakthroughs to come. What could you do with a nearly infinite data bandwidth?

  11. We are using robots to sculpt marble.
  12. Nuclear fusion might bring unlimited energy supplies. It seems that we might be close to a practical breakthrough.
  13. We still do not know why human females have permanent large breasts.
  14. It is unclear whether influenza vaccines are effective.
  15. Some ants effectively never age, because of a parasite living in them.

Measuring memory usage: virtual versus real memory

Software developers are often concerned with the memory usage of their applications, and rightly so. Software that uses too much memory can fail, or be slow.

Memory allocation will not work the same way under all systems. However, at a high level, most modern operating systems have virtual memory and physical memory (RAM). When you write software, you read and write memory at addresses. On modern systems, these addresses are 64-bit integers. For all practical purposes, you have an infinite number of these addresses: each running program could access hundreds of terabytes.

However, this memory is virtual. It is easy to forget what virtual means. It means that we simulate something that is not really there. So if you are programming in C or C++ and you allocate 100 MB, you may not use 100 MB of real memory at all. The following line of code may not cost any real memory at all:

  constexpr size_t N = 100000000;
  char *buffer = new char[N]; // allocate 100MB

Of course, if you write or read memory at these 'virtual' memory addresses, some real memory will come into play. You may think that if you allocate an object that spans 32 bytes, your application might receive 32 bytes of real memory. But operating systems do not work with such fine granularity. Rather they allocate memory in units of "pages". How big a page is depends on your operating system and on the configuration of your running process. On PCs, a page might often be as small as 4 kB, but it is often larger on ARM systems. Operating systems allow you to request large pages (e.g., one gigabyte). Your application receives "real" memory in units of pages. You can never just get "32 bytes" of memory from the operating system.

It means that there is no sense in micro-optimizing the memory usage of your application: you should think in terms of pages. Furthermore, receiving pages of memory is a relatively expensive process. So you probably do not want to constantly grab and release memory if efficiency is important to you.

Once we have allocated virtual memory, can we predict the actual (real) memory usage within the following loop?

  for (size_t i = 0; i < N; i++) {
    buffer[i] = 1;
  }

The result will depend on your system. But a simple model is as follows: count the number of consecutive pages you have accessed, assuming that your pointer begins at the start of a page. The memory used by the pages is a lower-bound on the memory usage of your process, assuming that the system does not use other tricks (like memory compression or other heuristics).

I wrote a little C++ program under Linux which prints out the memory usage at regular intervals within the loop. I use about 100 samples. As you can see in the following figure, my model (indicated by the green line) is an excellent predictor of the actual memory usage of the process.
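
Here is a minimal sketch of how such a measurement can be made under Linux (not the exact program I used): it touches a 100 MB buffer and samples the resident memory reported by /proc/self/statm about 100 times.

#include <fstream>
#include <iostream>
#include <unistd.h>

// Resident memory (in bytes) of the current process under Linux: the second
// field of /proc/self/statm is the number of resident pages.
size_t resident_memory() {
  std::ifstream statm("/proc/self/statm");
  size_t total_pages = 0, resident_pages = 0;
  statm >> total_pages >> resident_pages;
  return resident_pages * size_t(sysconf(_SC_PAGESIZE));
}

int main() {
  constexpr size_t N = 100000000;
  char *buffer = new char[N]; // allocate 100MB of virtual memory
  for (size_t i = 0; i < N; i++) {
    buffer[i] = 1;
    if (i % (N / 100) == 0) { // roughly 100 samples
      std::cout << i << " bytes touched, " << resident_memory()
                << " bytes resident" << std::endl;
    }
  }
  delete[] buffer;
  return 0;
}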

Thus a reasonable way to think about your memory usage is to count the pages that you access. The larger the pages, the higher the cost in this model. It may thus seem that if you want to be frugal with memory usage, you would use smaller pages. Yet a mobile operating system like Apple's iOS has relatively larger pages (16 kB) than most PCs (4 kB). Given a choice, I would almost always opt for bigger pages because they make memory allocation and access cheaper. Furthermore, you should probably not worry too much about virtual memory. Do not blindly count the address ranges that your application has requested. It might have little to no relation with your actual memory usage.

Modern systems have a lot of memory and very clever memory allocation techniques. It is wise to be concerned with the overall memory usage of your application, but you are more likely to fix your memory issues at the software architecture level than by micro-optimizing the problem.

Faster sorted array unions by reducing branches

When designing an index, a database or a search engine, you frequently need to compute the union of two sorted sets. When I am not using fancy low-level instructions, I have most commonly computed the union of two sorted sets using the following approach:

    v1 = first value in input 1
    v2 = first value in input 2
    while(....) {
        if(v1 < v2) {
            output v1
            advance v1
        } else if (v1 > v2) {
            output v2
            advance v2
        } else {
           output v1 == v2
           advance v1 and v2
        }
    }

I wrote this code while trying to minimize the load instructions: each input value is loaded exactly once (it is optimal). It is not that load instructions themselves are expensive, but they introduce some latency. It is not clear whether having fewer loads should help, but there is a chance that having more loads could harm the speed if they cannot be scheduled optimally.

One defect with this algorithm is that it requires many branches. Each mispredicted branch comes with a severe penalty on modern superscalar processors with deep pipelines. By the nature of the problem, it is difficult to avoid the mispredictions since the data might be random.

Branches are not necessarily bad. When we try to load data at an unknown address, speculating might be the right strategy: when we get it right, we have our data without any latency! Suppose that I am merging values from [0,1000] with values from [2000,3000], then the branches are perfectly predictable and they will serve us well. But too many mispredictions and we might be on the losing end. You will get a lot of mispredictions if you are trying this algorithm with random data.

Inspired by Andrey Pechkurov, I decided to revisit the problem. Can we use fewer branches?

Mispredicted branches in the above routine will tend to occur when we conditionally jump to a new address in the program. We can try to entice the compiler to favour ‘conditional move’ instructions. Such instructions change the value of a register based on some condition. They avoid the jump and they reduce the penalties due to mispredictions. Given sorted arrays, with no duplicated element, we consider the following code:

while ((pos1 < size1) & (pos2 < size2)) {
    v1 = input1[pos1];
    v2 = input2[pos2];
    output_buffer[pos++] = (v1 <= v2) ? v1 : v2;
    pos1 = (v1 <= v2) ? pos1 + 1 : pos1;
    pos2 = (v1 >= v2) ? pos2 + 1 : pos2;
 }

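For completeness, here is how a full routine built around this loop might look (a sketch, not my exact benchmark code): the branchless loop handles the region where both inputs still have elements, and two trailing loops copy whatever remains.

#include <cstddef>
#include <cstdint>

// Union of two sorted, duplicate-free arrays. The output buffer must have room
// for size1 + size2 values; the function returns the number of values written.
size_t union_branchless(const uint32_t *input1, size_t size1,
                        const uint32_t *input2, size_t size2,
                        uint32_t *output_buffer) {
  size_t pos1 = 0, pos2 = 0, pos = 0;
  while ((pos1 < size1) & (pos2 < size2)) {
    uint32_t v1 = input1[pos1];
    uint32_t v2 = input2[pos2];
    output_buffer[pos++] = (v1 <= v2) ? v1 : v2;
    pos1 = (v1 <= v2) ? pos1 + 1 : pos1;
    pos2 = (v1 >= v2) ? pos2 + 1 : pos2;
  }
  // At most one of these two loops does any work.
  while (pos1 < size1) { output_buffer[pos++] = input1[pos1++]; }
  while (pos2 < size2) { output_buffer[pos++] = input2[pos2++]; }
  return pos;
}
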
You can verify by using the assembly output that compilers are good at using conditional-move instructions with this sort of code. In particular, LLVM (clang) does what I would expect. There are still branches, but they are only related to the ‘while’ loop and they are not going to cause a significant number of mispredictions.

Of course, the processor still needs to load the right data. The address only becomes available in a definitive form just as you need to load the value. Yet we need several cycles to complete the load. It is likely to be a bottleneck, even more so in the absence of branches that can be speculated.

Our second algorithm has fewer branches, but it has more loads. Twice as many loads in fact! Modern processors can sustain more than one load per cycle, so it should not be a bottleneck if it can be scheduled well.

Testing this code in the abstract is a bit tricky. Ideally, you'd want code that stresses all code paths. In practice, if you just use random data, you will often find that the intersection between the sets is small. Thus the branches are more predictable than they could be. Still, it is maybe good enough for a first benchmarking attempt.

I wrote a benchmark and ran it on the recent Apple processors as well as on an AMD Rome (Zen2) Linux box. I report the average number of nanoseconds per produced element so smaller values are better. With LLVM, there is a sizeable benefit (over 10%) on both the Apple (ARM) processor and the Zen 2 processor. However, GCC fails to produce efficient code in the branchless mode. Thus if you plan to use the branchless version, you definitely should try compiling with LLVM. If you are a clever programmer, you might find a way to get GCC to produce code like LLVM does: if you do, please share.

system conventional union ‘branchless’ union
Apple M1, LLVM 12 2.5 2.0
AMD Zen 2, GCC 10 3.4 3.7
AMD Zen 2, LLVM 11 3.4 3.0

I expect that this code retires relatively few instructions per cycle. It means that you can probably add extra functionality for free, such as bound checking, because you have cycles to spare. You should be careful not to introduce extra work that gets in the way of the critical path, however.

As usual, your results will vary depending on your compiler and processor. Importantly, I do not claim that the branchless version will always be faster, or even that it is preferable in the real world. For real-world usage, we would like to test on actual data. My C++ code is available: you can check how it works out on your system. You should be able to modify my code to run on your data.

You should expect such a branchless approach to work well when you had lots of mispredicted branches to begin with. If your data is so regular that a union is effectively trivial, or nearly so, then a conventional approach (with branches) will work better. In my benchmark, I merge 'random' data, hence the good results for the branchless approach under the LLVM compiler.

Further reading: For high speed, one would like to use SIMD instructions. If it is interesting to you, please see section 4.3 (Vectorized Unions Between Arrays) in Roaring Bitmaps: Implementation of an Optimized Software Library, Software: Practice and Experience 48 (4), 2018. Unfortunately, SIMD instructions are not always readily available.

Science and Technology links (July 10th 2021)

  1. We use CRISPR, a state-of-the-art gene editing technique, to edit the genes of live human patients in clinical trials.
  2. A clinical trial has begun regarding an HIV vaccine.
  3. If you choose to forgo meat to fight climate change, you may lower your individual lifetime warming contribution by 2 to 4%.
  4. Age-related testosterone loss may strongly contribute to brain dysfunction in older men.
  5. Israel has used autonomous drone swarms to hunt down its adversaries.
  6. Cleaner air has improved agricultural outputs.
  7. Drinking alcohol could extend or reduce your life depending on how you go about it. Drinking to excess could be harmful but moderate alcohol consumption with meals could be beneficial.
  8. Injectable gene editing compounds have extended the lifespan of mice by more than 30% while improving physical performance.
  9. A gene therapy may get your heart to regenerate after a heart attack.
  10. Ocean acidification does not appear to harm coral reef fishes:

    Together, our findings indicate that the reported effects of ocean acidification on the behaviour of coral reef fishes are not reproducible, suggesting that behavioural perturbations will not be a major consequence for coral reef fishes in high CO2 oceans. (Source: Nature)

  11. It appears that when an embryo is formed, it briefly undergoes a rejuvenation effect. It would explain how old people can give birth to a young child.
  12. We have long believed that Alzheimer’s was caused by the accumulation of misfolded proteins in the brain. It appears that it could be, instead, due to the disappearance of soluble proteins in the brain. If so, the cure for Alzheimer’s would entail restoring a normal level of proteins instead of removing misfolded proteins.
  13. A new weight-loss drug was approved to fight against obesity.
  14. Unity Biotechnology announced some positive results in a clinical trial using senolytics (drugs that kill old cells) to treat macular degeneration (a condition that makes old people blind).
  15. Lobsters are believed to experience negligible senescence, meaning that they do not age the way we do. They also appear to avoid cancer.

Compressing JSON: gzip vs zstd

JSON is the de facto standard for exchanging data on the Internet. It is a relatively simple text format inspired by JavaScript. I say "relatively simple" because you can read and understand the entire JSON specification in minutes.

Though JSON is a concise format, it is also better used over a slow network in compressed mode. Without any effort, you can often compress JSON files by a factor of ten or more.

Compressing files adds an overhead. It takes time to compress the file, and it takes time again to uncompress it. However, it may be many times faster to send over the network a file that is many times smaller. The benefits of compression go down as the network bandwidth increases. Given the large gains we have experienced in the last decade, compression is maybe less important today. The bandwidth between nodes in a cloud setting (e.g., AWS) can be gigabytes per second. Having fast decompression is important.

There are many compression formats. The conventional approach, supported by many web servers, is gzip. There are also more recent and faster alternatives. I pick one popular choice: zstd.

For my tests, I choose a JSON file that is representative of real-world JSON: twitter.json. It is an output from the Twitter API.

Generally, you should expect zstd to compress slightly better than gzip. My results are as follows using standard Linux command-line tools with default settings:

uncompressed 617 KB
gzip (default) 51 KB
zstd (default) 48 KB

To test the decompression performance, I repeatedly uncompress the same file. Because it is a relatively small file, we should expect disk accesses to be buffered and fast.

Without any tweaking, I get twice the performance with zstd compared to the standard command-line gzip (which may differ from what your web server uses) while also having better compression. It is win-win. Modern compression algorithms like zstd can be really fast. For a fairer comparison, I have also included Eric Biggers’ libdeflate utility. It comes out ahead of zstd which stresses once more the importance of using good software!

gzip 175 MB/s
gzip (Eric Biggers) 424 MB/s
zstd 360 MB/s

My script is available. I run it under a Ubuntu system. I can create a RAM disk and the numbers go up slightly.

I expect that I understate the benefits of fast compression routines:

    1. I use a docker container. If you use containers, then disk and network accesses are slightly slower.
    2. I use the standard command-line tools. With a tight integration of the software libraries within your software, you can probably avoid many system calls and bypass the disk entirely (a sketch follows this list).
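
To illustrate what such a tighter integration could look like, here is a hedged sketch using libzstd's one-shot API to decompress a buffer entirely in memory (my own illustration, not the code behind the numbers above):

#include <zstd.h>
#include <stdexcept>
#include <string>

// Decompress a zstd frame held in memory. This assumes the frame records its
// decompressed size, which the zstd command-line tool does by default.
std::string zstd_decompress(const std::string &compressed) {
  unsigned long long size =
      ZSTD_getFrameContentSize(compressed.data(), compressed.size());
  if (size == ZSTD_CONTENTSIZE_ERROR || size == ZSTD_CONTENTSIZE_UNKNOWN) {
    throw std::runtime_error("unknown or invalid zstd frame");
  }
  std::string output(size_t(size), '\0');
  size_t written = ZSTD_decompress(&output[0], output.size(),
                                   compressed.data(), compressed.size());
  if (ZSTD_isError(written)) {
    throw std::runtime_error(ZSTD_getErrorName(written));
  }
  output.resize(written);
  return output;
}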

Thus my numbers are somewhat pessimistic. In practice, you are even more bounded by computational overhead and by the choice of algorithm.

The lesson is that there can be large differences in decompression speed and that these differences matter. You ought to benchmark.

What about parsing the uncompressed JSON? We have demonstrated that you can often parse JSON at 3 GB/s or better. I expect that, in practice,  you can make JSON parsing almost free compared to compression, disk and network delays.

Update: This blog post was updated to include Eric Biggers’ libdeflate utility.

Note: There have been many requests to expand this blog post with various parameters and so forth. The purpose of the blog post was to illustrate that there are large performance differences, not to provide a survey of the best techniques. It is simply out of the scope of the current blog post to identify the best approach. I mean to encourage you to run your own benchmarks.

See also: Cloudflare has its own implementation of the algorithm behind gzip. They claim massive performance gains. I have not tested it.

Further reading: Parsing Gigabytes of JSON per Second, VLDB Journal 28 (6), 2019

Science and Technology links (June 26th 2021)

  1. Reportedly, half of us own a smartphone.
  2. It is often reported that women or visible minorities earn less money. However, ugly people are doing comparatively even more poorly.
  3. We have highly efficient and cheap solar panels. However, we must also quickly dispose of them after a few years which leads to trash. Handling an increasing volume of trash is not cheap:

    The totality of these unforeseen costs could crush industry competitiveness. (…) By 2035, discarded panels would outweigh new units sold by 2.56 times. In turn, this would catapult the [cost] to four times the current projection. The economics of solar — so bright-seeming from the vantage point of 2021 — would darken quickly as the industry sinks under the weight of its own trash.

  4. Ultrasounds may be a viable therapy against Alzheimer’s.
  5. Axolotls regenerate any part of their body, including their spinal cord. Though they only weigh between 60 g and 110 g, they may live 20 years.
  6. Some dinosaurs once thrived in the Arctic.
  7. It was once believed that hair greying was essentially an irreversible process. Researchers looked at actual hair samples and found that in many cases, a hair that once turned gray could regain its colour.
  8. At least as far as your muscles go, you can remain fit up to an advanced age. As supportive evidence, researchers found that human muscle stem cells are refractory to aging:

    We find no measurable difference in regeneration across the range of ages investigated up to 78 years of age.

    Further work shows that you can promote muscle regeneration by reprogramming the cells in muscle fibers, thus potentially activating these muscle stem cells.

  9. Many people who are overweight suffer from the metabolic syndrome which includes abdominal obesity, high blood pressure, and high blood sugar. It affects about 25% of us worldwide. Researchers have found that a particular hormone, Asprosin, is elevated in people suffering from the metabolic syndrome. Suppressing asprosin in animal models reduced appetite and increased leanness, maybe curing the metabolic syndrome. We may hope that this work will quickly lead to human trials.
  10. During the current decade, the share of Canadians that are 65 years and older  will rise from 17.4 per cent to 22.5 per cent of the population.
  11. The Internet probably uses less energy than you think and most projections are pessimistic.

How long should you work on a problem?

Lev Reyzin says that working too long on a problem might be unproductive:

I, personally, have diminishing (or negative?) returns to my creative work as I explicitly work on a problem past some amount of time. I often have insights coming to me out of nowhere while I’m relaxing or enjoying hobbies on nights or weekends.

Whenever one considers innovative endeavors and their productivity, one must consider that innovation is fundamentally wasteful. The problem with innovation is not about how to get as much of it for as little of a cost as possible. It is to get innovation at all. By optimizing your production function, you risk losing all of it. I sooner blame someone for his publication list being too long than being too short, said Dijkstra.

My view is that we tend to underestimate “intellectual latency”. There is a delay between the time you approach a new idea and the time you have fully considered it.

Thus our brains are not unlike computers. Your processor might be able to run at 4 GHz and be able to retire 4 instructions per cycle… but very rarely are you able to reach such a throughput while working on a single task. The productive intellectuals that I know tend to work on a few ideas at once. Maybe they are writing a book while building a piece of software and writing an essay.

So you should not focus on one unique task in the hope of finishing it faster. You may complete it slightly faster if you omit everything else but the sum total of your productivity might be much lower.

There is also a social component to human cognition. If you hold on to a problem for very long, working tirelessly on it, you may well deprive yourself of the input of others. You should do good work and then quickly invite others to improve on your work. No matter how smart you think you are, you cannot come close to the superior ingenuity of the open world.

Energy and sanity are essential ingredients of sustained intellectual productivity. Hammering at a single problem for a long time is both maddening and draining. Our brains are wired to like learning about new ideas. Your brain wants to be free to explore.

And finally, the most important reason to limit the amount of work you invest on a single task is that it is a poor strategy even if you can do it with all the energy and intelligence in the world. Sadly, most of what you do is utterly useless. You are like an investor in a stock market where almost all stocks are losers. Putting all your money on one stock would ensure your ruin. You cannot know, at any given time, what will prove useful. Maybe going outside and playing with your son sounds like a waste of your potential right now, but it might be the one step that puts your life on a trajectory of greatness. You want to live your life diversely, touching many people, trying many things. Learn to cook. Make cocktails. Dance. Go to the theatre. Play video games. Write assembly code. Craft a novel.

Many years ago I started to blog. I also started publishing my software as open source in a manner that could be useful to others. I started posting my research papers as PDFs that anyone could download. None of these decisions seemed wise at first. They took time away from “important problems”. I was ridiculed at one point or another for all of them. Yet these three decisions ended up being extremely beneficial to me.

Further reading: Peer-reviewed papers are getting increasingly boring

Science and Technology links (June 12th 2021)

    1. We completed the sequencing of the human genome.
    2. AstraZeneca’s drug Lynparza cut combined risk of recurrence of breast cancer or death by 42% among women in study.
    3. Glycine and N-acetylcysteine supplementation improves muscle strength and cognition.
    4. We found Egypt's ancient capital. It had been lost for 3,400 years. It is unclear why it was abandoned.
    5. We estimate that over 6 million Americans have Alzheimer’s. Currently, Alzheimer’s is effectively incurable and no drug is known to reverse or halt it. Once you have Alzheimer’s, you start an irreversible and drastic cognitive decline. The USA has approved the first Alzheimer’s new drug in 20 years. The American government decided to approve the new drug, aducanumab, even though it is unclear whether it genuinely stops Alzheimer’s. It clears the brain from accumulated proteins and might slow the cognitive decline, but that latter claim is uncertain. The approval is controversial as the company producing aducanumab stands to make a lot of money while, possibly, providing no value whatsoever to the patient (and the drug might even have negative side-effects). Yet by deploying the drug today, we stand to learn much about its potential benefits and, if you are affected by Alzheimer’s, you may feel that you do not have much to lose.
    6. Trials begin on lozenge that rebuilds tooth enamel.
    7. The Google folks founded an anti-aging company called Calico a few years ago. One of its star employees is Cynthia Kenyon, who is famous for showing that aging is malleable. Their latest paper suggests that we might be able to rejuvenate individual cells within the body safely.
    8. The incoming new disks (SSD) have a sequential read speed of up to 14 GB/s (PCIe 5.0).
    9. We are curing the blind: "researchers added light-sensitive proteins to the man's retina, giving him a blurry view of objects". (Source: New York Times)
    10. You might think that government research grants are given on merit. Maybe not always. Applicants who shared both a home and a host organization with one panellist received a grant 40% more often than average. (Source: Nature)
    11. The Roman Empire thrived under a climate that was much warmer. The Empire declined when the temperature got colder:

      This record comparison consistently shows the Roman as the warmest period of the last 2 kyr, about 2 °C warmer than average values for the late centuries for the Sicily and Western Mediterranean regions. After the Roman Period a general cooling trend developed in the region with several minor oscillations. We hypothesis the potential link between this Roman Climatic Optimum and the expansion and subsequent decline of the Roman Empire.

      (Source: Nature)

    12. Reportedly, China is increasing its coal power capacity at a rate of 21% per year. Its yearly increase alone is six times Germany's entire coal-fired capacity.

Computing the number of digits of an integer even faster

In my previous blog post, I documented how one might proceed to compute the number of digits of an integer quickly. E.g., given the integer 999, you want 3 but given the integer 1000, you want 4. It is effectively the integer logarithm in base 10.

On computers, you can quickly compute the integer logarithm in base 2, and it follows that you can move from one to the other rather quickly. You just need a correction which you can implement with a table. A very good solution found in references such as Hacker’s Delight is as follows:

    // int_log2(x) is the integer logarithm in base 2 (defined in the next post, below)
    static uint32_t table[] = {9, 99, 999, 9999, 99999,
    999999, 9999999, 99999999, 999999999};
    int y = (9 * int_log2(x)) >> 5;
    y += x > table[y];
    return y + 1;

Except for the computation of the integer logarithm, it involves a multiplication by 9, a shift, a conditional move, a table lookup and an increment. Can you do even better? You might! Kendall Willets found an even more economical solution.

// int_log2(x) is the integer logarithm in base 2 of x (see the next post, below)
int fast_digit_count(uint32_t x) {
  static uint64_t table[] = {
      4294967296,  8589934582,  8589934582,  8589934582,  12884901788,
      12884901788, 12884901788, 17179868184, 17179868184, 17179868184,
      21474826480, 21474826480, 21474826480, 21474826480, 25769703776,
      25769703776, 25769703776, 30063771072, 30063771072, 30063771072,
      34349738368, 34349738368, 34349738368, 34349738368, 38554705664,
      38554705664, 38554705664, 41949672960, 41949672960, 41949672960,
      42949672960, 42949672960};
  return (x + table[int_log2(x)]) >> 32;
}

If I omit the computation of the integer logarithm in base 2, it requires just a table lookup, an addition and a shift:

add     rax, qword ptr [8*rcx + table]
shr     rax, 32

The table contains the numbers ceil(log10(2^j)) * 2^32 + 2^32 – 10^ceil(log10(2^j)) for j from 2 to 30, and then just ceil(log10(2^j)) * 2^32 for j = 31 and j = 32. The first value is 2^32.

My implementation of Kendall’s solution is available.

Using modern C++, you can compute the table using constant expressions.
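
For instance, here is a sketch of a constexpr generator (my own illustration, C++17). It does not reproduce the exact constants above, but it produces a functionally equivalent table: for any 32-bit x, (x + table[int_log2(x)]) >> 32 still yields the digit count.

#include <array>
#include <cstdint>

// Count the decimal digits of v (usable in constant expressions).
constexpr int decimal_digits(uint64_t v) {
  int d = 1;
  while (v >= 10) { v /= 10; d++; }
  return d;
}

constexpr std::array<uint64_t, 32> make_digit_table() {
  std::array<uint64_t, 32> table{};
  for (int j = 0; j < 32; j++) {
    uint64_t low = (j == 0) ? 1 : (uint64_t(1) << j);  // smallest x with int_log2(x) == j
    uint64_t high = (uint64_t(1) << (j + 1)) - 1;      // largest such x
    int d = decimal_digits(low);
    if (decimal_digits(high) == d) {
      table[j] = uint64_t(d) << 32;                    // digit count is constant over the range
    } else {
      uint64_t pow10 = 1;
      for (int i = 0; i < d; i++) { pow10 *= 10; }     // 10^d is where the digit count jumps
      table[j] = (uint64_t(d + 1) << 32) - pow10;
    }
  }
  return table;
}

constexpr auto digit_table = make_digit_table();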

Further reading: Josh Bleecher Snyder has a blog post on this topic which tells the whole story.

Computing the number of digits of an integer quickly

Suppose I give you an integer. How many decimal digits would you need to write it out? The number ‘100’ takes 3 digits whereas the number ’99’ requires only two.

You are effectively trying to compute the integer logarithm in base 10 of the number. I say ‘integer logarithm’ because you need to round up to the nearest integer.

Computers represent numbers in binary form, so it is easy to compute the logarithm in base two. In C using GCC or clang, you can do so as follows using a count-leading-zeroes function:

int int_log2(uint32_t x) { return 31 - __builtin_clz(x|1); }

Though it looks ugly, it is efficient. Most optimizing compilers on most systems will turn this into a single instruction.

How do you convert the logarithm in base 2 into the logarithm in base 10? From elementary mathematics, we have that log10(x) = log2(x) / log2(10). So all you need is to divide by log2(10)… or get close enough. You do not want to actually divide, especially not by a floating-point value, so you want to multiply and shift instead. Multiplying and shifting is a standard technique to emulate a division.

You can get pretty close to a division by log2(10) if you multiply by 9 and then divide by 32 (2 to the power of 5). The division by a power of two is just a shift. (I initially used a division by a much larger power but readers corrected me.)

Unfortunately, that is not quite good enough because we do not actually have the logarithm in base 2, but rather a truncated version of it. Thus you may need to do an off-by-one correction. The following code works:

    static uint32_t table[] = {9, 99, 999, 9999, 99999, 
    999999, 9999999, 99999999, 999999999};
    int y = (9 * int_log2(x)) >> 5;
    y += x > table[y];
    return y + 1;

It might compile to the following assembly:

        or      eax, 1
        bsr     eax, eax
        lea     eax, [rax + 8*rax]
        shr     eax, 5
        cmp     dword ptr [4*rax + table], edi
        adc     eax, 0
        add     eax, 1

Loading from the table probably incurs multiple cycles of latency (e.g., 3 or 4). The x64 bsr instruction also has a long latency of 3 or 4 cycles. My code is available.

You can port this function to Java as follows if you assume that the number is non-negative:

        int l2 = 31 - Integer.numberOfLeadingZeros(num|1);
        int ans = ((9*l2)>>>5);
        if (num > table[ans]) { ans += 1; }
        return ans + 1;

I wrote this blog post to answer a question by Chris Seaton on Twitter. After writing it up, I found that the always-brilliant Travis Downs had proposed a similar solution with a table lookup. I believe he requires a larger table. Robert Clausecker once posted a solution that might be close to what Travis has in mind.

Furthermore, if the number of digits is predictable, then you can write code with branches and get better results in some cases. However, you should be concerned with the fact that a single branch miss can cost you 15 cycles and  tens of instructions.

Update: There is a follow-up to this blog post: Computing the number of digits of an integer even faster

Further reading: Converting binary integers to ASCII character and Integer log 10 of an unsigned integer — SIMD version

Note: Victor Zverovich stated on Twitter that the fmt C++ library relies on a similar approach. Pete Cawley showed that you could achieve the same result that I got initially by multiplying by 77 and then shifting by 8, instead of my initially larger constants. He implemented his solution for LuaJIT. Giulietti pointed out to me by email that almost exactly the same routine appears in Hacker's Delight at the end of chapter 11.

All models are wrong

All models are wrong, but some are useful is a common saying in statistics. It does not merely apply to statistics, however. It is a general observation. Box (1976) wrote an influential article on the topic. He says that you make progress by successively making models followed by experiments, followed by more models, and then more experiments. And so forth. Importantly, a more sophisticated model might be worse and lead to mediocrity.

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

Kelly echoed this sentiment in his essay on thinkism:

Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems.

Once you take for granted that all models are incorrect, the next question to ask is which models are useful, says Box:

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an “ideal” gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules. For such a model there is no need to ask the question “Is the model true?”. If “truth” is to be the “whole truth” the answer must be “No”. The only question of interest is “Is the model illuminating and useful?”.

How do you know if something is useful? By testing it out in the real world!

Why would all models be incorrect? Part of the answer has to do with the fact that pure logic, pure mathematics only works locally. It does not scale. It does not mean that pure logic is ‘bad’, only that its application is limited.

Programmers and other system designers are ‘complexity managers’. If you are working with very strict rules in a limited domain, you can make pure logic prevail. A programmer can prove that a given function is correct. However, at scale, all software, all laws, all processes, all theories, all constitutions are incorrect. You assess whether they are useful. You check that they are correct in the way you care about. You cannot run a country or a large business with logic alone. In a sense, pure logic does not scale. Too many people underestimate the forces that push us toward common law and away from top-down edicts.

If you are a programmer, you should therefore not seek to make your software flawless and perfect. You may end up with worse software in the process. It may become overengineered. To get good software, test it out in practice. See how well it meets the needs of your clients. Then revise it, again and again.

If you are doing research, you should not work from models alone. Rather, you should start from a model, test it out in a meaningful manner, refine it again (based on your experience) and so forth, in a virtuous circle.

Evidently, running a business has to follow the same paradigm. All business plans are wrong, some are useful. Start with a plan, but revise it at the end of the first day.

In whatever you do, focus on what works. Try to test out your ideas as much as possible. Adjust your models frequently.

Never, never accept an untested model no matter how smart and clever its authors are.

Credit: I am grateful to Peter Turney for links and ideas.

Further reading: Gödel’s loophole.

Science and Technology links (May 22nd 2021)

  1. Most computer chips today in flagship phones and computers use a process based on a 5 nm or larger resolution. Finer resolutions usually translate into lower energy usage and lower heat production. Given that many of our systems are limited by heat or power, finer resolutions lead to higher performance. The large Taiwanese chip maker (TSMC) announced a breakthrough that might allow much finer resolutions (down to 1 nm). IBM recently reported a similar but less impressive breakthrough. It is unclear whether the American giant, Intel, is keeping up.
  2. Good science is reproducible. If other researchers follow whatever you describe in your research article, they should get the same results. That is, what you report should be an objective truth and not the side-effect of your beliefs or of plain luck. Unfortunately, we rarely try to reproduce results. When we do, it is common to be unable to reproduce the results of peer-reviewed research papers. The system is honor-based: we trust that people do their best to check their own results. What happens when mistakes happen? Over time, other researchers will find out. Unfortunately, reporting such failures is typically difficult. Nobody likes to make enemies and the burden of proof is always on you when you want to denounce other people’s research. The problem is so common that we have a name for it: the replication crisis. The replication crisis has attracted more and more attention because it is becoming an existential threat: if a system produces research that cannot be trusted, the whole institution might fall. We see the replication crisis in psychology, cancer research and machine learning. Researchers now report that unreproducible research can be cited 100 times more than reproducible research. It suggests that people who produce unreproducible research might have an advantage in their careers and that they might go up the ranks faster.
  3. Recent PCs and tablets store data on solid-state drives (SSDs) that can be remarkably fast. The latest Sony PlayStation has an SSD with a bandwidth exceeding 5 GB/s. Conventional (spinning) disks have lagged behind with a bandwidth of about 200 MB/s. However, conventional disks can be much larger. It seems that conventional disks might be getting faster, though: the hard drive maker Seagate has been selling conventional disks with a bandwidth of 500 MB/s.
  4. As you age, you accumulate dysfunctional cells that should otherwise die; they are called senescent cells. We are currently developing therapies to remove them. Martinez-Zamudio et al. report that a large fraction of some cell types in the immune system of older human beings are senescent (e.g., 64%). Clearing these senescent cells could have a drastic effect. We shall soon know.
  5. There are 50 billion birds and 1.6 billion sparrows.
  6. Computer scientists train software neural networks (for artificial intelligence) using backpropagation. It seems that people believe that such a mechanism (backpropagation) is unlikely to exist in biology. Furthermore, people seem to believe that in biological brains, learning is “local” (at the level of the synapse). Recently, researchers have shown that we can train software neural networks using another technique that is ‘biologically plausible’ called zero-divergence inference learning. The implicit assumption is that these software systems are thus a plausible model for biological brains. It is unclear to me whether that’s a valid scientific claim: is it falsifiable?
  7. Ancient Romans used lead for everything. It appears that Roman children suffered from lead poisoning and had a related high mortality rate.
  8. Knight et al. found strong evidence to support the hypothesis that vitamin D could help prevent breast cancer. Taking walks outside in the sun while not entirely covered provides your body with vitamin D.
  9. Mammal hearts do not regenerate very well. Hence, if your heart is damaged, it may never repair itself. It appears that some specific stem cells can survive when grafted to living hearts and induce regeneration.
  10. Persistent short sleep duration (6 hours or less) is associated with a 30% increased dementia risk. (Note that this finding does not imply that people sleeping a lot are in good health. It also does not imply that you are sick if you are sleeping little.)
  11. Researchers have rejuvenated the blood cells of old mice using a form of vitamin B3.
  12. There are 65 animal species that can laugh.
  13. Between the years 1300 and 1400, the area near Greenland became relatively free from ice, as it was effectively exporting its ice to subarctic regions. This seems to match the beginning of the Little Ice Age, a time when, at least in Europe, cold temperatures prevailed.

Counting the number of matching characters in two ASCII strings

Suppose that you give me two ASCII strings having the same number of characters. I wish to compute efficiently the number of matching characters (same position, same character). E.g., the strings ‘012c’ and ‘021c’ have two matching characters (‘0’ and ‘c’).

The conventional approach in C would look as follows:

uint64_t standard_matching_bytes(char * c1, char * c2, size_t n) {
    size_t count = 0;
    size_t i = 0;
    for(; i < n; i++) {
        if(c1[i]  == c2[i]) { count++; }
    }
    return count;
}

There is nothing wrong with this code. An optimizing compiler can auto-vectorize this code so that it will do far fewer than one instruction per byte, given long enough strings.

However, as written, the routine looks at every character, one by one: for each character, you load two values, compare them and possibly increment a counter. It might compile to over 5 instructions per character (prior to auto-vectorization).

What you can do instead is load the data in blocks of 8 bytes, into 64-bit integers as in the following code. Do not be misled by the apparently expensive memcpy calls: an optimizing compiler will turn these function calls into a single load instruction.

uint64_t matching_bytes(char * c1, char * c2, size_t n) {
    size_t count = 0;
    size_t i = 0;
    uint64_t x, y;
    for(; i + sizeof(uint64_t) <= n; i+= sizeof(uint64_t)) {
      memcpy(&x, c1 + i, sizeof(uint64_t) );
      memcpy(&y, c2 + i, sizeof(uint64_t) );
      count += matching_bytes_in_word(x,y);
    }
    for(; i < n; i++) {
        if(c1[i]  == c2[i]) { count++; }
    }
    return count;
}

So we just need a function that can compare two 64-bit integers and find how many matching bytes there are. Thankfully there are fairly standard techniques to do so such as the following. (I borrowed part of the routine from Wojciech Muła.)

uint64_t matching_bytes_in_word(uint64_t x, uint64_t y) {
  uint64_t xor_xy = x ^ y; // a byte of xor_xy is zero exactly when the bytes match
  // t0 has its high bit set in each byte whose low 7 bits (in xor_xy) are all zero
  const uint64_t t0 = (~xor_xy & 0x7f7f7f7f7f7f7f7fllu) + 0x0101010101010101llu;
  // t1 has its high bit set in each byte whose high bit (in xor_xy) is zero
  const uint64_t t1 = (~xor_xy & 0x8080808080808080llu);
  uint64_t zeros = t0 & t1; // 0x80 in each byte of xor_xy that is entirely zero
  // sum the per-byte flags: the multiplication accumulates them in the top byte
  return ((zeros >> 7) * 0x0101010101010101ULL) >> 56;
}

With this routine, you can bring the instruction count down to about 2 per character, including all the overhead and the data loading. For long strings, it is better than character-by-character processing by a factor of about two.

Though I seem to restrict the problem to ASCII inputs, my code actually counts the number of matching bytes. If you know that the input is ASCII, you can further optimize the routine.
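To illustrate the kind of simplification that becomes possible, here is a sketch of my own (an assumption, not part of my published code) of an ASCII-only variant: since ASCII bytes never have their high bit set, the high bit of each byte of x ^ y is always zero, so the t1 mask above is always 0x8080…80 and can be dropped:

#include <stdint.h>

// assumes both inputs contain only ASCII bytes (values 0 to 127)
uint64_t matching_bytes_in_word_ascii(uint64_t x, uint64_t y) {
  uint64_t xor_xy = x ^ y; // each byte is at most 0x7f for ASCII inputs
  // the high bit is set in each byte whose low 7 bits are all zero,
  // which, for ASCII inputs, means the whole byte is zero (the characters match)
  uint64_t zeros = ((~xor_xy & 0x7f7f7f7f7f7f7f7fULL) + 0x0101010101010101ULL)
                   & 0x8080808080808080ULL;
  return ((zeros >> 7) * 0x0101010101010101ULL) >> 56; // sum the per-byte flags
}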

I leave it as an exercise for the reader to write a function that counts the number of matching characters within a range, or to determine whether all characters in a given range match.

The proper way to solve this problem is with SIMD instructions, and most optimizing compilers should do that for you starting from a simple loop. However, if it is not possible and you have relatively long strings, then the approach I described could be beneficial.
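For reference, a minimal hand-written SSE2 sketch might look as follows. It is my own illustration rather than the code from my benchmark; it assumes an x64 target and the GCC/clang __builtin_popcount builtin. It compares 16 bytes at a time and counts the matches with a population count:

#include <emmintrin.h> // SSE2 intrinsics
#include <stddef.h>
#include <stdint.h>

uint64_t sse2_matching_bytes(const char *c1, const char *c2, size_t n) {
  uint64_t count = 0;
  size_t i = 0;
  for (; i + 16 <= n; i += 16) {
    __m128i a = _mm_loadu_si128((const __m128i *)(c1 + i));
    __m128i b = _mm_loadu_si128((const __m128i *)(c2 + i));
    // one bit per byte: set when the corresponding bytes are equal
    int mask = _mm_movemask_epi8(_mm_cmpeq_epi8(a, b));
    count += (uint64_t)__builtin_popcount(mask);
  }
  for (; i < n; i++) { // handle the last few bytes
    if (c1[i] == c2[i]) { count++; }
  }
  return count;
}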

My source code is available.

Converting binary integers to ASCII characters: Apple M1 vs AMD Zen2

Programmers often need to write integers as characters. Thus given the 32-bit value 1234, you might need a function that writes the characters 1234. We can use the fact that the ASCII numeral characters are in sequence in the ASCII table: ‘0’+0 is ‘0’, ‘0’+1 is ‘1’ and so forth. With this fact in mind, the standard integer-to-string algorithm looks as follows:

while(n >= 10)
  p = n / 10
  r = n % 10
  write '0' + r
  n = p
write '0' + n

This algorithm writes the digits in reverse. So actual C/C++ code will use a pointer that you decrement (and not increment):

  while (n >= 10) {
    const auto p = n / 10;
    const auto r = n % 10;
    n = p;
    *c-- = '0' + char(r);
  }
  *c-- = '0' + char(n);

You can bound the size of the string (10 characters for 32-bit integers, 20 characters for 64-bit integers). If you have signed integers, you can detect the sign initially and make the integer value non-negative, write out the digits and finish with the sign character if needed. If you know that your strings are long, you can do better by writing out the characters two at a time using lookup tables.
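Here is a rough sketch of the two-digits-at-a-time idea (my own illustration, with a hypothetical write_backwards helper, not code from my benchmark): a 200-character table stores the two ASCII characters for each value from 00 to 99, so each division by 100 produces two characters at once:

#include <stdint.h>

static const char two_digits[] =
    "00010203040506070809" "10111213141516171819"
    "20212223242526272829" "30313233343536373839"
    "40414243444546474849" "50515253545556575859"
    "60616263646566676869" "70717273747576777879"
    "80818283848586878889" "90919293949596979899";

// writes the decimal digits of n just before the pointer c (moving backwards)
// and returns a pointer to the first character written
char *write_backwards(char *c, uint32_t n) {
  while (n >= 100) {
    const uint32_t r = n % 100;
    n /= 100;
    c -= 2;
    c[0] = two_digits[2 * r];
    c[1] = two_digits[2 * r + 1];
  }
  if (n >= 10) { // two digits left
    c -= 2;
    c[0] = two_digits[2 * n];
    c[1] = two_digits[2 * n + 1];
  } else { // a single digit left
    *--c = (char)('0' + n);
  }
  return c;
}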

How fast is this digit-by-digit function? It is going to take dozens of instructions and CPU cycles. But where is the bottleneck?

If you look at the main loop, and pay only attention to the critical data dependency, you divide your numerator by 10, then you check its value, and so forth. So your performance is bounded by the speed at which you can divide the numerator by 10.

The division instruction is relatively slow, but most compilers will convert it into a multiplication and a shift. It implies that the whole loop has a latency of about 5 cycles if you count three cycles for the multiplication and one cycle for the shift, with one cycle for the loop overhead. Of course, the function must also compute the remainder and write out the result, but their cost matters less. It is not that these operations are themselves free: computing the remainder is more expensive than computing the quotient. However, we may get them almost for free because they are not on the critical data dependency path.
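As a rough sanity check (my own back-of-the-envelope estimate): a ten-digit number goes through about nine loop iterations, so roughly 9 × 5 ≈ 45 cycles of latency; at a clock frequency of a few GHz, that is on the order of 10 to 15 ns, which is consistent with the measurements reported below.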

How correct is this analysis? How likely is it that you are just bounded by the division by 10? The wider your processor (that is, the more instructions it can retire per cycle), the more likely this analysis is to hold. Our commodity processors are already quite wide. Conventional Intel/AMD processors can retire about 4 instructions per cycle, while the Apple M1 processor can retire up to 8 instructions per cycle.

To test it out, let us add a function which only writes out the most significant digit.

  while (n >= 10) {
    n /= 10;
    c--;
  }
  c--;
  *c = '0' + char(n);

Here is the number of nanoseconds required per integer on average according to a benchmark I wrote. The benchmark is designed to measure the latency.

function     Apple M1 (clang 12)    AMD Zen2 (gcc 10)
fake itoa    11.6 ns/int            10.9 ns/int
real itoa    12.1 ns/int            12.0 ns/int

According to these numbers, my analysis seems correct on both processors. The numbers are a bit closer in the case of the Apple M1 processor, but my analysis is not sufficiently fine to ensure that this difference is significant.

Hence, at least in this instance, your best chance of speeding up this function is either by dividing by 10 faster (in latency) or else by reducing the number of iterations (by processing the data in large chunks). The latter is already found in production code.

In the comments, Travis Downs remarks that you can also try to break the chain of dependencies (e.g., by dividing the task in two).
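For instance, here is a sketch of my own of that idea (an illustration, not Travis’s code), restricted to a zero-padded variant that writes exactly ten digits: a single division splits the value into two five-digit halves whose division chains then proceed independently:

#include <stdint.h>

// writes n as exactly 10 zero-padded ASCII digits into c[0..9]
void write_ten_digits(char *c, uint32_t n) {
  uint32_t high = n / 100000; // upper five digits
  uint32_t low = n % 100000;  // lower five digits
  // the two loops below form independent dependency chains
  for (int i = 4; i >= 0; i--) {
    c[5 + i] = (char)('0' + (low % 10));
    low /= 10;
  }
  for (int i = 4; i >= 0; i--) {
    c[i] = (char)('0' + (high % 10));
    high /= 10;
  }
}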

Further reading: Faster Remainder by Direct Computation: Applications to Compilers and Software Libraries, Software: Practice and Experience 49 (6), 2019