Filtering numbers quickly with SVE on Amazon Graviton 3 processors

I have had access to Amazon’s latest ARM processors (Graviton 3) for a few weeks. To my knowledge, these are the first widely available processors supporting the Scalable Vector Extension (SVE).

SVE is part of the Single Instruction/Multiple Data paradigm: a single instruction can operate on many values at once. Thus, for example, you may add N integers with N other integers using a single instruction.

What is unique about SVE is that you work with vectors of values, but without knowing specifically how long the vectors are. This is in contrast with conventional SIMD instructions (ARM NEON, x64 SSE, AVX) where the size of the vector is hardcoded. Not only do you write your code without knowing the size of the vector, but even the compiler may not know. This means that the same binary executable could work over different blocks (vectors) of data, depending on the processor. The benefit of this approach is that your code might get magically much more efficient on new processors.

It is a daring proposal. It is possible to write code that would work on one processor but fail on another processor, even though we have the same instruction set.

But is SVE on Graviton 3 processors fast? To test it out, I wrote a small benchmark. Suppose you want to prune all of the negative integers from an array. A textbook implementation might look as follows:

void remove_negatives_scalar(const int32_t *input, 
        int64_t count, int32_t *output) {
  int64_t i = 0;
  int64_t j = 0;
  for(; i < count; i++) {
    if(input[i] >= 0) {
      output[j++] = input[i];
    }
  }
}

However, the compiler will probably generate a branch, and if your input has a random distribution, the resulting code can be inefficient. To help matters, you may rewrite your code in a manner that is more likely to generate a branchless binary:

  for(; i < count; i++) {
    output[j] = input[i];
    j += (input[i] >= 0);
  }

Though it looks less efficient (because every input value is written out), such a branchless version is often faster in practice.

I ported this last implementation to SVE using ARM intrinsic functions. At each step, we load a vector of integers (svld1_s32), we compare them with zero (svcmpge_n_s32), we remove the negative values (svcompact_s32) and we store the result (svst1_s32). During most iterations, we have a full vector of integers. During the last iteration, some values will be missing, but we simply ignore them thanks to the while_mask variable, which indicates which integer values are ‘active’. The whole sequence is done using SVE instructions: there is no need to process the end of the sequence separately, as would be required with conventional SIMD instruction sets.

#include <arm_sve.h>
void remove_negatives(const int32_t *input, int64_t count, int32_t *output) {
  int64_t i = 0;
  int64_t j = 0;
  svbool_t while_mask = svwhilelt_b32(i, count);
  do {
    svint32_t in = svld1_s32(while_mask, input + i);
    svbool_t positive = svcmpge_n_s32(while_mask, in, 0);
    svint32_t in_positive = svcompact_s32(positive, in);
    svst1_s32(while_mask, output + j, in_positive);
    i += svcntw();
    j += svcntp_b32(while_mask, positive);
    while_mask = svwhilelt_b32(i, count);
  } while (svptest_any(svptrue_b32(), while_mask));
}

Using a graviton 3 processor and GCC 11 on my benchmark, I get the following results:

                    cycles/integer   instructions/integer   instructions/cycle
scalar              9.0              6.000                  0.7
branchless scalar   1.8              8.000                  4.4
SVE                 0.7              1.125                  1.6

The SVE code uses far fewer instructions. In this particular test, SVE is 2.5 times faster than the best competitor (branchless scalar). Furthermore, it might use even fewer instructions on future processors, as the underlying registers get wider.

Of course, my code is surely suboptimal, but I am pleased that the first SVE benchmark I wrote turns out so well. It suggests that SVE might do well in practice.

Credit: Thanks to Robert Clausecker for the related discussion.

Memory-level parallelism: Intel Ice Lake versus Amazon Graviton 3

One of the most expensive operations in a processor and memory system is a random memory access. If you try to read a value in memory, it can take tens of nanoseconds on average or more. If you are waiting on the memory content for further action, your processor is effectively stalled. While processors have generally become faster, memory latency has not improved quickly, and the latency can even be higher on some of the most expensive processors. For this reason, a modern processor core can issue multiple memory requests at a given time. That is, the processor tries to load one memory element, keeps going, and can issue another load (even though the previous load is not completed), and so forth. Not long ago, Intel processor cores could support about 10 independent memory requests at a time. I benchmarked some small ARM cores that could barely issue 4 memory requests.

Today, the story is much nicer. The powerful processor cores can all sustain many memory requests. They support better memory-level parallelism.

To measure the performance of the processor, we use a pointer-chasing scheme where you ask a C program to load a memory address which contains the next memory address, and so forth. If a processor could only sustain a single memory request, such a test would use all available resources. We then modify this test so that we have two interleaved pointer-chasing schemes, then three, then four, and so forth. We call each new interleaved pointer-chasing component a ‘lane’.
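
To make the idea concrete, here is a minimal sketch of such a multi-lane pointer-chasing loop. This is an illustration written for this post, not the actual benchmark code: each lane is an independent dependency chain, so the processor can keep one cache miss in flight per lane.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <random>
#include <vector>

// Chase 'lanes' independent chains through a randomly permuted array for
// 'steps' iterations; more lanes means more memory-level parallelism exposed.
uint64_t chase(size_t n, size_t lanes, size_t steps) {
  std::vector<uint32_t> next(n);
  std::iota(next.begin(), next.end(), 0);
  // A random permutation; the real benchmark would ensure one long cycle.
  std::shuffle(next.begin(), next.end(), std::mt19937(1234));
  std::vector<uint32_t> cursor(lanes);
  for (size_t l = 0; l < lanes; l++) { cursor[l] = next[l]; }
  for (size_t s = 0; s < steps; s++) {
    for (size_t l = 0; l < lanes; l++) {
      cursor[l] = next[cursor[l]]; // one outstanding load per lane
    }
  }
  uint64_t sum = 0;
  for (size_t l = 0; l < lanes; l++) { sum += cursor[l]; }
  return sum; // return the result so the loads are not optimized away
}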

As you add more lanes, you should see better performance, up to a maximum. The faster the performance goes up as you add lanes, the more memory-level parallelism your processor core has. The best Amazon (AWS) servers come with either Intel Ice Lake or Amazon’s very own Graviton 3. I benchmarked both of them, using a core of each type. The Intel processor has the upper hand in absolute terms. We achieve a 12 GB/s maximal bandwidth compared to 9 GB/s for the Graviton 3. The one-lane latency is 120 ns for the Graviton 3 server versus 90 ns for the Intel processor. The Graviton 3 appears to sustain about 19 simultaneous loads per core against about 25 for the Intel processor.

Thus Intel wins, but the Graviton 3 has nice memory-level parallelism… much better than the older Intel chips (e.g., Skylake) and much better than the early attempts at ARM-based servers.

The source code is available. I am using Ubuntu 22.04 and GCC 11. All machines have small page sizes (4kB). I chose not to tweak the page size for these experiments.

Prices for Graviton 3 are 2.32 $US/hour (64 vCPU) compared to 2.448 $US/hour for Ice Lake. So Graviton 3 appears to be marginally cheaper than the Intel chips.

When I write these posts, comparing one product to another, there is always hate mail afterward. So let me be blunt. I love all chips equally.

If you want to know which system is best for your application: run benchmarks. Comprehensive benchmarks found that Amazon’s ARM hardware could be advantageous for storage-intensive tasks.

Further reading: I enjoyed Graviton 3: First Impressions.

Data structure size and cache-line accesses

On many systems, memory is accessed in fixed blocks called “cache lines”. On Intel systems, the cache line spans 64 bytes. That is, if you access memory at byte address 64, 65… up to 127… it is all on the same cache line. The next cache line starts at address 128, and so forth.

In turn, data in software is often organized in data structures having a fixed size (in bytes). We often organize these data structures in arrays. In general, a data structure may reside on more than one cache line. For example, if I put a 5-byte data structure at byte address 127, then it will occupy the last byte of one cache line, and four bytes in the next cache line.

When loading a data structure from memory, a naive model of the cost is the number of cache lines that are accessed. If your data structure spans 32 bytes or 64 bytes, and you have aligned the first element of an array, then you only ever need to access one cache line every time you load a data structure.

What if my data structure spans 5 bytes? Suppose that I packed instances in an array, using only 5 bytes per instance. If I pick one at random… how many cache lines do I touch? As you might expect, the answer is barely more than 1 cache line on average.

Let us generalize.

Suppose that my data structure spans z bytes. Let g be the greatest common divisor of z and 64. Suppose that you load one instance of the data structure at random from a large array. In general, the expected number of additional cache-line accesses is (z – g)/64. The expected total number of cache-line accesses is one more: 1 + (z – g)/64. You can check that it works for z = 32: g is then 32, and (z – g)/64 is (32 – 32)/64, or zero.
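
As a sanity check, we can compare the formula against a direct simulation. The following short program (written for this post, not part of the original) walks over the possible positions of a z-byte structure in a packed array that starts on a cache-line boundary:

#include <cstdio>
#include <numeric> // std::gcd (C++17)

int main() {
  for (int z = 1; z <= 64; z++) {
    double total = 0;
    // In a packed array, element i starts at byte i*z; its offset within a
    // 64-byte cache line is (i*z) % 64, and the offsets repeat with a period
    // dividing 64, so averaging over 64 consecutive elements is enough.
    for (int i = 0; i < 64; i++) {
      int offset = (i * z) % 64;
      total += (offset + z - 1) / 64 + 1; // cache lines touched by this element
    }
    double simulated = total / 64;
    double formula = 1.0 + (z - std::gcd(z, 64)) / 64.0;
    printf("z=%2d simulated=%.5f formula=%.5f\n", z, simulated, formula);
  }
  return 0;
}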

I created the following table for all data structures no larger than a cache line. The worst-case scenario is a data structure spanning 63 bytes: you then almost always touch two cache lines.

I find it interesting that you have the same expected number of cache-line accesses for data structures of size 17, 20 and 24. It does not follow that the computational cost of a data structure spanning 24 bytes is the same as that of a data structure spanning 17 bytes. Everything else being identical, a smaller data structure should fare better, as it can fit more easily in CPU cache.

size of data structure (z)   expected cache-line accesses
1 1.0
2 1.0
3 1.03125
4 1.0
5 1.0625
6 1.0625
7 1.09375
8 1.0
9 1.125
10 1.125
11 1.15625
12 1.125
13 1.1875
14 1.1875
15 1.21875
16 1.0
17 1.25
18 1.25
19 1.28125
20 1.25
21 1.3125
22 1.3125
23 1.34375
24 1.25
25 1.375
26 1.375
27 1.40625
28 1.375
29 1.4375
30 1.4375
31 1.46875
32 1.0
33 1.5
34 1.5
35 1.53125
36 1.5
37 1.5625
38 1.5625
39 1.59375
40 1.5
41 1.625
42 1.625
43 1.65625
44 1.625
45 1.6875
46 1.6875
47 1.71875
48 1.5
49 1.75
50 1.75
51 1.78125
52 1.75
53 1.8125
54 1.8125
55 1.84375
56 1.75
57 1.875
58 1.875
59 1.90625
60 1.875
61 1.9375
62 1.9375
63 1.96875
64 1.0

Thanks to Maximilian Böther for the motivation of this post.

Parsing JSON faster with Intel AVX-512

Many recent Intel processors benefit from a new family of instructions called AVX-512. These instructions operate over wide registers (up to 512 bits) and follow the Single instruction, multiple data (SIMD) paradigm. These new AVX-512 instructions allow you to break some speed records, such as decoding base64 data at the speed of a memory copy.

Most modern processors have SIMD instructions. The AVX-512 instructions are wider (more bits per register), but that is not necessarily their main appeal. If you merely take existing SIMD algorithms and apply them to AVX-512, you will probably not benefit as much as you would like. It is true that wider registers are beneficial, but in superscalar processors (processors that can issue several instructions per cycle), the number of instructions you can issue per cycle matters as much if not more. Typically, 512-bit AVX-512 instructions are more expensive and the processor can issue fewer of them per cycle. To fully benefit from AVX-512, you need to carefully design your code. It is made more challenging by the fact that Intel is releasing these instructions progressively: the recent processors have many new powerful AVX-512 instructions that were not initially available. Thus, AVX-512 is not “one thing” but rather a family of instruction sets.

Furthermore, early implementations of the AVX-512 instructions often led to measurable downclocking: the processor would reduce its frequency for a time following the use of these instructions. Thankfully, the latest Intel processors to support AVX-512 (Rocket Lake and Ice Lake) have done away with this systematic frequency throttling. Conveniently, it is easy to detect these recent processors at runtime.

Amazon’s powerful Intel servers are based on Ice Lake. Thus if you are deploying your software applications to the cloud on powerful servers, you probably have pretty good support for AVX-512 already!

A few years ago, we released a really fast C++ JSON parser called simdjson. It is somewhat unique as a parser in that it relies critically on SIMD instructions. On several metrics, it was and still is the fastest JSON parser, though other interesting competitors have emerged.

Initially, I had written a quick and dirty AVX-512 kernel for simdjson. We never merged it and after a time, I just deleted it. I then forgot about it.

Thanks to contributions from talented Intel engineers (Fangzheng Zhang and Weiqiang Wan) as well as indirect contributions from readers of this blog (Kim Walisch and Jatin Bhateja), we produced a new and shiny AVX-512 kernel. As always, keep in mind that simdjson is the work of many people, a whole community of dozens of contributors. I must express my gratitude to Fangzheng Zhang who first wrote to me about an AVX-512 port.

We have just released it in the latest version of simdjson. It breaks new speed records.

Let us consider an interesting test where you seek to scan a whole file (spanning kilobytes) to find a value corresponding to some identifier. In simdjson, the code is as follows:

   auto doc = parser.iterate(json);    
   for (auto tweet : doc.find_field("statuses")) {
      if (uint64_t(tweet.find_field("id")) == find_id) {
        result = tweet.find_field("text");
        return true;
      }
    }
    return false;
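
For context, the doc object in this snippet comes from simdjson’s On Demand front-end; the setup is roughly as follows (a sketch based on the library’s documented usage, with twitter.json as the input file, so double-check it against your simdjson version):

#include "simdjson.h"
using namespace simdjson;

ondemand::parser parser;
padded_string json = padded_string::load("twitter.json"); // throws on I/O error
// The snippet above then starts with: auto doc = parser.iterate(json);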

On a Tiger Lake processor, with GCC 11, I get a 60% increase in the processing speed, expressed by the number of input bytes processed per second.

simdjson (512-bit SIMD): new 7.4 GB/s
simdjson (256-bit SIMD): old 4.6 GB/s

The speed gain is so large because, in this task, we mostly just read the data and do relatively little secondary processing: we do not create a tree out of the JSON data, we do not create a data structure.

The simdjson library has a minify function which just strips unnecessary spaces from the input. Maybe surprisingly, we are more than twice as fast as the previous baseline:

simdjson (512-bit SIMD): new 12 GB/s
simdjson (256-bit SIMD): old 4.3 GB/s
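
For reference, here is roughly how the minifier is invoked. This is a sketch following the library’s documented simdjson::minify interface; treat the exact signature as something to verify against your simdjson version.

#include "simdjson.h"
#include <cstring>
#include <memory>

bool minify_example(const char *input) {
  size_t length = std::strlen(input);
  std::unique_ptr<char[]> buffer(new char[length]); // minified output is never longer
  size_t new_length{};
  auto error = simdjson::minify(input, length, buffer.get(), new_length);
  return error == simdjson::SUCCESS; // buffer[0..new_length) holds the minified JSON
}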

Another reasonable benchmark is to fully parse the input into a DOM tree with full validation. Parsing a standard JSON file (twitter.json), I get nearly a 30% gain:

simdjson (512-bit SIMD): new 3.6 GB/s
simdjson (256-bit SIMD): old 2.8 GB/s

While 30% may sound unexciting, we are starting from a fast baseline.

Could we do better? Assuredly. There are many AVX-512 instructions that we are not using yet. We do not use ternary Boolean operations (vpternlog). We are not using the new powerful shuffle functions (e.g., vpermt2b). We have an example of coevolution: better hardware requires new software which, in turn, makes the hardware shine.

Of course, to get these new benefits, you need recent Intel processors with adequate AVX-512 support and, evidently, you also need a relatively recent C++ compiler. Some of the recent laptop-class Intel processors do not support AVX-512, but you should be fine if you rely on AWS and have big Intel nodes.

You can grab our release directly or wait for it to reach one of the standard package managers (MSYS2, conan, vcpkg, brew, debian, FreeBSD, etc.).

Avoid exception throwing in performance-sensitive code

There are various ways in software to handle error conditions. In C or Go, one returns error codes. Other programming languages like C++ or Java prefer to throw exceptions. One benefit of using exceptions is that it keeps your code mostly clean since the error-handling code is often separate.

It is debatable whether handling exceptions is better than dealing with error codes. I will happily use one or the other.

What I will object to, however, is the use of exceptions for control flow. It is fine to throw an exception when a file cannot be opened, unexpectedly. But you should not use exceptions to branch on the type of a value.

Let me illustrate.

Suppose that my code expects integers to be always positive. I might then have a function that checks such a condition:

#include <stdexcept>

int get_positive_value(int x) {
    if(x < 0) { throw std::runtime_error("it is not positive!"); }
    return x;
}

So far, so good. I am assuming that the exception is normally never thrown. It gets thrown if I have some kind of error.

If I want to sum the absolute values of the integers contained in an array, the following branching code is fine:

    int sum = 0;
    for (int x : a) {
        if(x < 0) {
            sum += -x;
        } else {
            sum += x;
        }
    }

Unfortunately, I often see solutions abusing exceptions:

    int sum = 0;
    for (int x : a) {
        try {
            sum += get_positive_value(x);
        } catch (...) {
            sum += -x;
        }
    }

The latter is obviously ugly and hard-to-maintain code. But what is more, it can be highly inefficient. To illustrate, I wrote a small benchmark over random arrays containing a few thousand elements. I use the LLVM clang 12 compiler on a Skylake processor. The normal code is 10000 times faster in my tests!

normal code 0.05 ns/value
exception 500 ns/value

Your results will differ but it is generally the case that using exceptions for control flow leads to suboptimal performance. And it is ugly too!

Faster bitset decoding using Intel AVX-512

I refer to “bitset decoding” as the action of finding the positions of the 1s in a stream of bits. For example, given the integer value 0b11011 (or 27 in decimal),  I want to find 0,1,3,4.

In my previous post, Fast bitset decoding using Intel AVX-512, I explained how you can use Intel’s new instructions, from the AVX-512 family, to decode bitsets faster. The AVX-512 instructions, as the name implies, often can process 512-bit (or 64-byte) registers.

At least two readers (Kim Walisch and Jatin Bhateja) pointed out that you could do better if you used the very latest AVX-512 instructions available on Intel processors with the Ice Lake or Tiger Lake microarchitectures. These processors support VBMI2 instructions including the vpcompressb instruction and its corresponding intrinsics (such as _mm512_maskz_compress_epi8). What this instruction does is take a 64-bit word and a 64-byte register, and it outputs (in a packed manner) only the bytes corresponding to set bits in the 64-bit word. Thus if you use as the 64-bit word the value 0b11011 and you provide a 64-byte register with the values 0,1,2,3,4… you will get as a result 0,1,3,4. That is, the instruction effectively does the decoding already, with the caveat that it will only write bytes. In practice, you often want the indexes as 32-bit integers. Thankfully, you can go from packed bytes to packed 32-bit integers easily. One possibility is to extract successive 128-bit subwords (using the vextracti32x4 instruction or its intrinsic _mm512_extracti32x4_epi32), and expand them (using the vpmovzxbd instruction or its intrinsic _mm512_cvtepu8_epi32). You get the following result:

void vbmi2_decoder_cvtepu8(uint32_t *base_ptr, uint32_t &base,
                                           uint32_t idx, uint64_t bits) {
  __m512i indexes = _mm512_maskz_compress_epi8(bits, _mm512_set_epi32(
    0x3f3e3d3c, 0x3b3a3938, 0x37363534, 0x33323130,
    0x2f2e2d2c, 0x2b2a2928, 0x27262524, 0x23222120,
    0x1f1e1d1c, 0x1b1a1918, 0x17161514, 0x13121110,
    0x0f0e0d0c, 0x0b0a0908, 0x07060504, 0x03020100
  ));
  __m512i t0 = _mm512_cvtepu8_epi32(_mm512_castsi512_si128(indexes));
  __m512i t1 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 1));
  __m512i t2 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 2));
  __m512i t3 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 3));
  __m512i start_index = _mm512_set1_epi32(idx);
  
  _mm512_storeu_si512(base_ptr + base, _mm512_add_epi32(t0, start_index));
  _mm512_storeu_si512(base_ptr + base + 16, _mm512_add_epi32(t1, start_index));
  _mm512_storeu_si512(base_ptr + base + 32, _mm512_add_epi32(t2, start_index));
  _mm512_storeu_si512(base_ptr + base + 48, _mm512_add_epi32(t3, start_index));
  
  base += _popcnt64(bits);
}

If you try to use this approach unconditionally, you will write 256 bytes of data for each 64-bit word you decode. In practice, if your word contains mostly just zeroes, you will be writing a lot of zeroes.

Branching is bad for performance, but only when it is hard to predict. However, it should be rather easy for the processor to predict whether we have fewer than 16 bits set in the provided word, or fewer than 32 bits, and so forth. So some level of branching is adequate. The following function should do:

void vbmi2_decoder_cvtepu8_branchy(uint32_t *base_ptr, uint32_t &base,
                                           uint32_t idx, uint64_t bits) {
  if(bits == 0) { return; }

  __m512i indexes = _mm512_maskz_compress_epi8(bits, _mm512_set_epi32(
    0x3f3e3d3c, 0x3b3a3938, 0x37363534, 0x33323130,
    0x2f2e2d2c, 0x2b2a2928, 0x27262524, 0x23222120,
    0x1f1e1d1c, 0x1b1a1918, 0x17161514, 0x13121110,
    0x0f0e0d0c, 0x0b0a0908, 0x07060504, 0x03020100
  ));
  __m512i start_index = _mm512_set1_epi32(idx);
  
  int count = _popcnt64(bits);
  __m512i t0 = _mm512_cvtepu8_epi32(_mm512_castsi512_si128(indexes));
  _mm512_storeu_si512(base_ptr + base, _mm512_add_epi32(t0, start_index));
  
  if(count > 16) {   
    __m512i t1 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 1));
    _mm512_storeu_si512(base_ptr + base + 16, _mm512_add_epi32(t1, start_index));
    if(count > 32) {   
      __m512i t2 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 2));
      _mm512_storeu_si512(base_ptr + base + 32, _mm512_add_epi32(t2, start_index));
      if(count > 48) {   
        __m512i t3 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 3));
        _mm512_storeu_si512(base_ptr + base + 48, _mm512_add_epi32(t3, start_index));
      }
    }
  }
  base += count;
}

The results will vary depending on the input data, but I already have a realistic case with moderate density (about 10% of the bits are set) that I am reusing. Using a Tiger-Lake processor and GCC 9, I get the following timings per set value, when using a sizeable input:

nanoseconds/value
basic 0.95
unrolled (simdjson) 0.74
AVX-512 (previous post) 0.57
AVX-512 (new) 0.29

My code is available.

That is a rather remarkable performance, especially considering how we do not need any large table or sophisticated algorithm. All we need are fancy AVX-512 instructions.

Fast bitset decoding using Intel AVX-512

In software, we often use ‘bitsets’: you work with arrays of bits to represent sets of small integers. It is a concise and fast data structure. Sometimes you want to go from the bitset (e.g., 0b110011) to the integers (e.g., 0, 1, 4, 5 in this instance). We consider the case of ‘average’ density (e.g., more than a handful of bits set per 64-bit word).

You could check the value of each bit, but a better option is to use the fact that processors have fast instructions to compute the number of “trailing zeros”. Given 0b10001100100, this instruction would give you 2. This gives you the first index. Then you need to unset this least significant bit using code such as word & (word - 1).

  while (word != 0) {
    result[i] = trailingzeroes(word);
    word = word & (word - 1);
    i++;
  }

The problem with this code is that the number of iterations might be hard to predict, thus you might often cause your processor to mispredict the number of branches. A misprediction is expensive on modern processors. You can do better by further unrolling this loop. I describe how in an earlier blog post.
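
The gist of the unrolled approach (a rough sketch of the idea, not simdjson’s actual code) is to emit indexes in blocks of eight, unconditionally, and only advance by the true count, so the hard-to-predict branch occurs once per eight set bits rather than once per bit:

#include <cstdint>
#include <immintrin.h> // _tzcnt_u64, _mm_popcnt_u64 (requires BMI1 and POPCNT)

// Writes the positions of the set bits of 'word' to 'out' and returns how many
// there are. The output buffer needs room for the count rounded up to a
// multiple of 8 (and at least 8), because we always write full blocks of 8.
size_t decode_unrolled(uint64_t word, uint32_t *out) {
  size_t count = (size_t)_mm_popcnt_u64(word);
  size_t i = 0;
  do {
    for (int k = 0; k < 8; k++) {
      out[i + k] = (uint32_t)_tzcnt_u64(word); // harmless garbage (64) once word is zero
      word &= word - 1;                        // clear the lowest set bit
    }
    i += 8;
  } while (i < count);
  return count; // only the first 'count' entries are meaningful
}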

Intel’s latest processors have new instruction sets (AVX-512) that are quite powerful. In this instance, they allow us to do the decoding without any branch and with few instructions. The key is the vpcompressd instruction and its corresponding C/C++ Intel function (_mm512_mask_compressstoreu_epi32). What it does is that, given up to 16 integers, it only selects the ones corresponding to a bit set in a bitset. Thus given the array 0,1,2,3…,15 and given the bitset 0b111010, you would generate the output 1,3,4,5. The function does not tell you how many relevant values are written out, but you can just count the number of ones, and conveniently, we have a fast instruction for that, available through the _popcnt64 function. So the following code sequence would process 16-bit masks and write them out to a pointer (base_ptr).

  __m512i base_index = _mm512_setr_epi32(0,1,2,3,4,5,
    6,7,8,9,10,11,12,13,14,15);

  _mm512_mask_compressstoreu_epi32(base_ptr, 
    mask, base_index);

  base_ptr += _popcnt64(mask);

The full function which processes 64-bit masks is somewhat longer, but it is essentially just 4 copies of the simple sequence.

void avx512_decoder(uint32_t *base_ptr, uint32_t &base,
    uint32_t idx, uint64_t bits) {
  __m512i start_index = _mm512_set1_epi32(idx);
  __m512i base_index = _mm512_setr_epi32(0,1,2,3,4,5,
    6,7,8,9,10,11,12,13,14,15);
  base_index = _mm512_add_epi32(base_index, start_index);
  uint16_t mask;
  mask = bits & 0xFFFF;
  _mm512_mask_compressstoreu_epi32(base_ptr + base, 
    mask, base_index);
  base += _popcnt64(mask);
  const __m512i constant16 = _mm512_set1_epi32(16);
  base_index = _mm512_add_epi32(base_index, constant16);
  mask = (bits>>16) & 0xFFFF;
  _mm512_mask_compressstoreu_epi32(base_ptr + base, 
     mask, base_index);
  base += _popcnt64(mask);
  base_index = _mm512_add_epi32(base_index, constant16);
  mask = (bits>>32) & 0xFFFF;
  _mm512_mask_compressstoreu_epi32(base_ptr + base, 
    mask, base_index);
  base += _popcnt64(mask);
  base_index = _mm512_add_epi32(base_index, constant16);
  mask = bits>>48;
  _mm512_mask_compressstoreu_epi32(base_ptr + base, 
    mask, base_index);
  base += _popcnt64(mask);
}

There is a downside to using AVX-512: for a short time, the processor might reduce its frequency when wide registers (512 bits) are used. You can still use the same instructions on shorter registers (e.g., use _mm256_mask_compressstoreu_epi32 instead of _mm512_mask_compressstoreu_epi32) but in this instance, it doubles the number of instructions.

On a Skylake-X processor with GCC, my benchmark reveals that the new AVX-512 approach is superior even with frequency throttling. Compared to the basic approach above, the AVX-512 approach uses about 45% fewer cycles and 33% less time. We report the number of instructions, cycles and nanoseconds per value set in the bitset. The AVX-512 approach generates no branch mispredictions.

                     instructions/value   cycles/value   nanoseconds/value
basic                9.3                  4.4            1.5
unrolled (simdjson)  9.9                  3.6            1.2
AVX-512              6.2                  2.4            1.0

My code is available.

The AVX-512 routine has record-breaking speed. It is also possible that the routine could be improved.

Removing characters from strings faster with AVX-512

In software, it is a common problem to want to remove specific characters from a string. To make the problem precise, let us consider the removal of all ASCII control characters and spaces. In practice, it means the removal of all byte values smaller than or equal to 32.
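
As a point of reference, a straightforward scalar routine (a sketch I am adding for illustration, not part of the despacer library) simply copies the bytes it wants to keep:

#include <cstddef>

// Removes all byte values <= 32 (ASCII control characters and the space),
// in place, and returns the new length.
size_t despace_scalar(char *bytes, size_t howmany) {
  size_t pos = 0;
  for (size_t i = 0; i < howmany; i++) {
    unsigned char c = (unsigned char)bytes[i];
    if (c > 32) {
      bytes[pos++] = (char)c;
    }
  }
  return pos;
}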

I covered a related problem before, the removal of all spaces from strings. At the time, I concluded that the fastest approach might be to use SIMD instructions coupled with a large lookup table. A SIMD instruction can operate on many values at once: most commodity processors have instructions able to operate on 16 bytes at a time. Thus, using a single instruction, you can compare 16 consecutive bytes and identify the location of all spaces, for example. Once it is done, you must somehow move the unwanted characters. Most instruction sets do not have instructions for that purpose; however, x64 processors have an instruction (pshufb) that can move bytes around as long as you have a precomputed shuffle mask. ARM NEON has similar instructions as well. Thus you proceed in the following manner:

  1. Identify all unwanted characters in a block (e.g., 16 bytes).
  2. Lookup a shuffle mask in a large table.
  3. Move the unwanted bytes using the shuffle mask.

Such an approach is fast but it requires possibly large tables.  Indeed, if you load 16 bytes, you need a table with 65536 shuffle masks. Storing such large tables is not very practical.

Recent Intel processors have handy new instructions that do exactly what we want: they prune out unwanted bytes (vpcompressb). It requires a recent processor with AVX-512 VBMI2 such as Ice Lake, Rocket Lake, Alder Lake, or Tiger Lake processors. Intel makes it difficult to figure out which features are available on which processor, so you need to do some research to find out whether your favorite Intel processor supports the desired instructions. AMD processors do not support VBMI2.

On top of the new instructions, AVX-512 also allows you to process the data in larger blocks (64 bytes). Using Intel intrinsics, the code is almost readable. I create a register containing only the space byte, and I then iterate over my data, each time loading 64 bytes of data. I compare it with the space: I only want to keep values that are larger (in byte value) than the space. I then call the compress instruction which takes out the unwanted bytes. I read at regular intervals (every 64 bytes) but I write a variable number of bytes, so I advance the write pointer by the number of set bits in my mask: I count those using a fast instruction (popcnt).

  __m512i spaces = _mm512_set1_epi8(' ');
  size_t i = 0;
  for (; i + 63 < howmany; i += 64) {
    __m512i x = _mm512_loadu_si512(bytes + i);
    __mmask64  notwhite = _mm512_cmpgt_epi8_mask  (x, spaces);
    _mm512_mask_compressstoreu_epi8  (bytes + pos, notwhite, x);
    pos += _popcnt64(notwhite);
  }

I have updated the despacer library and its benchmark. With a Tiger Lake processor (3.3 GHz) and GCC 9 (Linux), I get the following results:

function speed
conventional (despace32) 0.4 GB/s
SIMD with large lookup (sse42_despace_branchless) 2.0 GB/s
AVX-512 (vbmi2_despace) 8.5 GB/s

Your results will differ. Nevertheless, we find that AVX-512 is highly useful for this task and the related function surpasses all other such functions. It is not merely the raw speed, it is also the fact that we do not require a lookup table and that the code does not rely on branch prediction: there are no hard-to-predict branches that may harm your speed in practice.

The result should not surprise us since, for the first time, we almost have direct hardware support for the operation (“pruning unwanted bytes”). The downside is that few processors support the desired instruction set. And it is not clear whether AMD will ever support these fancy instructions.

I should conclude with Linus Torvalds’ take regarding AVX-512:

I hope AVX-512 dies a painful death, and that Intel starts fixing real problems instead of trying to create magic instructions to then create benchmarks that they can look good on

I cannot predict what will happen to Intel or AVX-512, but if the past is any indication, specialized and powerful instructions have a bright future.

An overview of version control in programming

In practice, computer code is constantly being transformed. At the beginning of a project, the computer code often takes the form of sketches that are gradually refined. Later, the code can be optimized or corrected, sometimes for many years.

Soon enough, programmers realized that they needed not only to store files, but also to keep track of the different versions of a given file. It is no accident that we are all familiar with the fact that software is often associated with versions. It is necessary to distinguish the different versions of the computer code in order to keep track of updates.

We might think that after developing a new version of a software, the previous versions could be discarded. However, it is practical to keep a copy of each version of the computer code for several reasons:

  1. A change in the code that we thought was appropriate may cause problems: we need to be able to go back quickly.
  2. Sometimes different versions of the computer code are used at the same time and it is not possible for all users to switch to the latest version. If an error is found in a previous version of the computer code, it may be necessary for the programmer to go back and correct the error in an earlier version of the computer code without changing the current code. In this scenario, the evolution of the software is not strictly linear. It is therefore possible to release version 1.0, followed by version 2.0, and then release version 1.1.
  3. It is sometimes useful to be able to go back in time to study the evolution of the code in order to understand the motivation behind a section of code. For example, a section of code may have been added without much comment to quickly fix a new bug. The attentive programmer will be able to better understand the code by going back and reading the changes in context.
  4. Computer code is often modified by different programmers working at the same time. In such a social context, it is often useful to be able to quickly determine who made what change and when. For example, if a problem is caused by a segment of code, we may want to question the programmer who last worked on that segment.

Programmers quickly realized that they needed version control systems. The basic functions that a version control system provides are rollback and a history of changes made. Over time, the concept of version control has spread. There are even several variants intended for the general public, such as Dropbox, where various files, not only computer code, are stored.

The history of software version control tools dates back to the 1970s (Rochkind, 1975). In 1972, Rochkind developed the SCCS (Source Code Control System) at Bell Laboratories. This system made it possible to create, update and track changes in a software project. SCCS remained a reference from the end of the 1970s until the 1980s. One of the constraints of SCCS is that it does not allow collaborative work: only one person can modify a given file at a given time.

In the early 1980s, Tichy proposed the RCS (Revision Control System), which innovated with respect to SCCS by storing only the differences between the different versions of a file in backward order, starting from the latest file. In contrast, SCCS stored differences in forward order starting from the first version. For typical use where we access the latest version, RCS is faster.

In programming, we typically store computer code within text files. Text files most often use ASCII or Unicode (UTF-8 or UTF-16) encoding. Lines are separated by a sequence of special characters that identify the end of a line and the beginning of a new line. Two characters are often used for this purpose: “carriage return” (CR) and “line feed” (LF). In ASCII and UTF-8, these characters are represented with the byte having the value 13 and the byte having the value 10 respectively. In Windows, the sequence is composed of the CR character followed by the LF character, whereas in Linux and macOS, only the LF character is used. In most programming languages, we can represent these two characters with the escape sequences \r and \n respectively. So the string “a\nb\nc” has three lines in most programming languages under Linux or macOS: the lines “a”, “b” and “c”.

When a text file is edited by a programmer, usually only a small fraction of all lines are changed. Some lines may also be inserted or deleted. It is convenient to describe the differences as succinctly as possible by identifying the new lines, the deleted lines and the modified lines.

The calculation of differences between two text files is often done first by breaking the text files into lines. We then treat a text file as a list of lines. Given two versions of the same file, we want to associate as many lines in the first version as possible with an identical line in the second version. We also assume that the order of the lines is not reversed.

We can formalize this problem by looking for the longest common subsequence. Given a list, a subsequence simply takes a part of the list, excluding some elements. For example, (a,b,d) is a subsequence of the list (a,b,c,d,e). Given two lists, we can find a common subsequence, e.g. (a,b,d) is a subsequence of the list (a,b,c,d,e) and of the list (z,a,b,d). The longest common subsequence between two lists of text lines represents the list of lines that have not been changed between the two versions of a text file. It might be difficult to solve this problem using brute force. Fortunately, we can compute the longest common subsequence by dynamic programming. Indeed, we can make the following observations.

  1. If we have two strings with a longest common subsequence of length k, and we add at the end of each of the two strings the same character, the new strings will have a longest common subsequence of length k+1.
  2. If we have two strings of lengths m and n, ending in distinct characters (for example, “abc” and “abd”), then the longest subsequence of the two strings is the longest subsequence of the two strings after removing the last character from one of the two strings. In other words, to determine the length of the longest subsequence between two strings, we can take the maximum of the length of the subsequence after amputating one character from the first string while keeping the second unchanged, and the length of the subsequence after amputating one character from the second string while keeping the first unchanged.
    These two observations are sufficient to allow an efficient calculation of the length of the longest common subsequence. It is sufficient to start with strings comprising only the first character and to add progressively the following characters. In this way, one can calculate all the longest common subsequences between the truncated strings. It is then possible to reverse this process to build the longest subsequence starting from the end. If two strings end with the same character, we know that the last character will be part of the longest subsequence. Otherwise, one of the two strings is cut off from its last character, making our choice in such a way as to maximize the length of the longest common subsequence.

The following function illustrates a possible solution to this problem. Given two arrays of strings, the function returns the longest common subsequence. If the first string has length m and the second n, then the algorithm runs in O(m*n) time.

// max is needed because the builtin max only exists in Go 1.21 and later.
func max(x, y uint) uint {
	if x > y {
		return x
	}
	return y
}

func longest_subsequence(file1, file2 []string) []string {
	m, n := len(file1), len(file2)
	P := make([]uint, (m+1)*(n+1))
	for i := 1; i <= m; i++ {
		for j := 1; j <= n; j++ {
			if file1[i-1] == file2[j-1] {
				P[i*(n+1)+j] = 1 + P[(i-1)*(n+1)+(j-1)]
			} else {
				P[i*(n+1)+j] = max(P[i*(n+1)+(j-1)], P[(i-1)*(n+1)+j])
			}
		}
	}
	longest := P[m*(n+1)+n]
	i, j := m, n
	subsequence := make([]string, longest)
	for k := longest; k > 0; {
		if P[i*(n+1)+j] == P[i*(n+1)+(j-1)] {
			j-- // line file2[j-1] is not part of the subsequence
		} else if P[i*(n+1)+j] == P[(i-1)*(n+1)+j] {
			i-- // line file1[i-1] is not part of the subsequence
		} else if P[i*(n+1)+j] == 1+P[(i-1)*(n+1)+(j-1)] {
			subsequence[k-1] = file1[i-1] // the two files end with the same line
			k--; i--; j--
		}
	}
	return subsequence
}

Once the subsequence has been calculated, we can quickly calculate a description of the difference between the two text files. Simply move forward in each of the text files, line by line, stopping as soon as you reach a position corresponding to an element of the longest sub-sequence. The lines that do not correspond to the subsequence in the first file are considered as having been deleted, while the lines that do not correspond to the subsequence in the second file are considered as having been added. The following function illustrates a possible solution.

func difference(file1, file2 []string) []string {
    subsequence := longest_subsequence(file1, file2)
    i, j, k := 0, 0, 0
    answer := make([]string, 0)
    for j < len(subsequence) {
        if file1[i] == subsequence[j] && file2[k] == subsequence[j] {
            answer = append(answer, "'"+file2[k]+"'\n")
            i++; j++; k++
        } else {
            if file1[i] != subsequence[j] {
                answer = append(answer, "deleted: '"+file1[i]+"'\n")
                i++
            }
            if file2[k] != subsequence[j] {
                answer = append(answer, "added: '"+file2[k]+"'\n")
                k++
            }
        }
    }
    for ; i < len(file1); i++ {
        answer = append(answer, "deleted: '"+file1[i]+"'\n")
    }
    for ; k < len(file2); k++ {
        answer = append(answer, "added: '"+file2[k]+"'\n")
    }
    return answer
}


The function we propose as an illustration for computing the longest subsequence uses O(m*n) memory elements. It is possible to reduce the memory usage of this function and simplify it (Hirschberg, 1975). Several other improvements are possible in practice (Miller and Myers, 1985). We can then represent the changes between the two files in a concise way.

Suggested reading: article Diff (wikipedia)

Like SCCS, RCS does not allow multiple programmers to work on the same file at the same time. The need to own a file to the exclusion of all other programmers while working on it may have seemed a reasonable constraint at the time, but it can make the work of a team of programmers much more cumbersome.

In 1986 Grune developed the Concurrent Versions System (CVS). Unlike previous systems, CVS allows multiple programmers to work on the same file simultaneously. It also adopts a client-server model that allows a single repository to be hosted on a network, accessible by multiple programmers simultaneously. A programmer can work on a file locally, but as long as they have not transmitted their version to the server, it remains invisible to the other developers.

The remote server also serves as a de facto backup for the programmers. Even if all the programmers’ computers are destroyed, it is possible to start over with the code on the remote server.

In a version control system, there is usually always a single latest version. All programmers make changes to this latest version. However, such a linear approach has its limits. An important innovation popularized by CVS is the concept of a branch. A branch allows you to organize sets of versions that can evolve in parallel. In this model, the same file is virtually duplicated. There are then two versions of the file (or more than two) capable of evolving in parallel. By convention, there is usually one main branch that is used by default, accompanied by several secondary branches. Programmers can create new branches whenever they want. Branches can then be merged: if a branch A is divided into two branches (A and B) which are modified, it is then possible to bring all the modifications into a single branch (merging A and B). The branch concept is useful in several contexts:

  1. Some software development is speculative. For example, a programmer may explore a new approach without being sure that it is viable. In such a case, it may be better to work in a separate branch and merge with the main branch only if successful.
  2. The main branch may be restricted to certain programmers for security reasons. In such a case, programmers with reduced access may be restricted to separate branches. A programmer with privileged access may then merge the secondary branch after a code inspection.
  3. A branch can be used to explore a particular bug and its fix.
  4. A branch can be used to update a previous version of the code. Such a version may be kept up to date because some users depend on that earlier version and want to receive certain fixes. In such a case, the secondary branch may never be integrated with the main branch.

One of the drawbacks of CVS is poor performance when projects include many files and many versions. In 2000, Subversion (SVN) was proposed as an alternative to CVS that meets the same needs, but with better performance.

CVS and Subversion benefit from a client-server approach, which allows multiple programmers to work simultaneously with the same central repository. Yet programmers often want to be able to use several separate remote repositories.

To meet these needs, various “distributed version control systems” (DVCS) have been developed. The most popular one is probably the Git system developed by Torvalds (2005). Torvalds was trying to solve a problem of managing Linux source code. Git became the dominant version control tool. It has been adopted by Google, Microsoft, etc. It is free software.

In a distributed model, a programmer who has a local copy of the code can synchronize it with one remote repository or another. They can easily create a new copy of the remote repository on a new server. Such flexibility is considered essential in many complex projects such as the Linux operating system kernel.

Several companies offer Git-based services including GitHub. Founded in 2008, GitHub has tens of millions of users. In 2018, Microsoft acquired GitHub for $7.5 billion.

For CVS and Subversion, there is only one set of software versions. With a distributed approach, multiple sets can coexist on separate servers. The net result is that a software project can evolve differently, under the responsibility of different teams, with possible future reconciliation.

In this sense, Git is distributed. Although many users rely on GitHub (for example), your local copy can be attached to any remote repository, and it can even be attached to multiple remote repositories. The verb “clone” is sometimes used to describe retrieving a Git project locally, since it is a complete copy of all files, changes, and branches.

If a copy of the project is attached to another remote repository, it is called a fork. We often distinguish between branches and forks. A branch always belongs to the main project. A fork is originally a complete copy of the project, including all branches. It is possible for a fork to rejoin the main project, but it is not essential.

Given a publicly available Git repository, anyone can clone it and start working on it and contributing to it. We can create a new fork. From a fork, we can submit a pull request that invites people to integrate our changes. This allows a form of permissionless innovation. Indeed, it becomes possible to retrieve the code, modify it and propose a new version without ever having to interact directly with the authors.

Systems like CVS and Subversion could become inefficient and take several minutes to perform certain operations. Git, in contrast, is generally efficient and fast, even for huge projects. Git is robust and does not get “corrupted” easily. However, it is not recommended to use Git for huge files such as multimedia content: Git’s strength lies in text files. It should be noted that the implementation of Git has improved over time and includes sophisticated indexing techniques.

Git is often used on the command line. It is possible to use graphical clients. Services like GitHub make Git a little easier.

The basic logical unit of Git is the commit, which is a set of changes to multiple files. A commit includes a reference to at least one parent, except for the first commit which has no parent. A single commit can be the parent of several children: several branches can be created from a commit and each subsequent commit becomes a child of the initial commit. Furthermore, when several branches are merged, the resulting commit will have several parents. In this sense, the commits form a “directed acyclic graph”.

With Git, we want to be able to refer to a commit in an easy way, using a unique identifier. That is, we want to have a short numeric value that corresponds to one commit and one commit only. We could assign each commit a version number (1.0, 2.0, etc.). Unfortunately, such an approach is difficult to reconcile with the fact that commits do not form a linear chain where a commit has one and only one parent. As an alternative, we use a hash function to compute the unique identifier. A hash function takes elements as parameters and calculates a numerical value (hash value). There are several simple hash functions. For example, we can iterate over the bytes contained in a message from a starting value h, by computing h = 31 * h + b where b is the byte value. For example, a message containing the bytes 3 and 4 would have the hash value 31 * (31 * 0 + 3) + 4 = 97 if we start with h = 0. Such a simple approach is effective in some cases, but it allows malicious users to create collisions: it would be possible to create a fake commit that has the same hash value and thus create security holes. For this reason, Git uses more sophisticated hashing techniques (SHA-1, SHA-256) developed by cryptographic specialists. Commits are identified using a hash value (for example, the hexadecimal numeric value 921103db8259eb9de72f42db8b939895f5651489) which is calculated from the date and time, the comment left by the programmer, the user’s name, the parents and the nature of the change. In theory, two commits could have the same hash value, but this is an unlikely event given the hash functions used by Git. It’s not always practical to reference a hexadecimal code. To make things easier, Git allows you to identify a commit with a label (e.g., v1.0.0). The following command will do: git tag -a v1.0.0 -m "version 1.0.0".

Though tags can be any string, tags often contain sequences of numbers indicating a version. There is no general agreement among programmers on how to assign version numbers. However, tags sometimes take the form of three numbers separated by dots: MAJOR.MINOR.PATCH (for example, 1.2.3). With each new version, 1 is added to at least one of the three numbers. The first number often starts at 1 while the next two start at 0.

  • The first number (MAJOR) must be increased when you make major changes to the code. The other two numbers (MINOR and PATCH) are often reset to zero. For example, you can go from version 1.2.3 to version 2.0.0.
  • The second number (MINOR) is increased for minor changes (for example, adding a function). When increasing the second number, the first number (MAJOR) is usually kept unchanged and the last number (PATCH) is reset to zero.
  • The last number (PATCH) is increased when fixing bugs. The other two numbers are not increased.
    There are finer versions of this convention like “semantic versioning“.

With Git, the programmer can have a local copy of the commit graph. They can add new commits. In a subsequent step, the programmer must “push” their changes to a remote repository so that the changes become visible to other programmers. The other programmers can fetch the changes using a “pull”.

Git has advanced collaborative features. For example, the git blame command lets you know who last modified a given piece of code.

Conclusion

Version control in computing is a sophisticated approach that has benefited from many years of work. It is possible to store multiple versions of the same file at low cost and navigate from one version to another quickly.

If you develop code without using a version control tool like Git or the equivalent, you are bypassing proven practices. It’s likely that if you want to work on complex projects with multiple programmers, your productivity will be much lower without version control.

Floats have 15-digit accuracy in their normal range

In programming languages like JavaScript or Python, numbers are typically represented using 64-bit IEEE number types (binary64). For these numbers, we have 15 digits of accuracy. It means that you can pick a 15-digit number, such as 1.23456789012345e100, and it can be represented exactly: there exists a floating-point number that has exactly these 15 most significant digits. In this particular case, it is the number 6355009312518497 * 2^280. Having 15 digits of accuracy is excellent: few engineering processes can ever exceed this accuracy.
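
We can verify this decomposition programmatically; the following small check (added for this post, assuming IEEE-754 binary64 doubles) extracts the integer significand and power of two, and prints the value back with 15 significant digits:

#include <cinttypes>
#include <cmath>
#include <cstdint>
#include <cstdio>

int main() {
  double x = 1.23456789012345e100;
  int exponent;
  double mantissa = std::frexp(x, &exponent);                // x = mantissa * 2^exponent, 0.5 <= mantissa < 1
  uint64_t significand = (uint64_t)std::ldexp(mantissa, 53); // 53-bit integer significand
  printf("%" PRIu64 " * 2^%d\n", significand, exponent - 53); // 6355009312518497 * 2^280
  printf("%.15g\n", x);                                      // prints back the 15-digit value
  return 0;
}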

This 15-digit accuracy fails for numbers that are outside the valid range. For example, the number 1e500 is too large and cannot be directly represented using standard 64-bit floating-point numbers. Similarly, 1e-500 is too small and it can only be represented as zero.

The range of 64-bit floating-point numbers might be defined as going from 4.94e-324 to 1.8e308 and from -1.8e308 to -4.94e-324, together with exactly 0. However, this range includes subnormal numbers where the relative accuracy can be small. For example, the number 5.00000000000000e-324 is best represented as 4.94065645841247e-324, meaning that we have zero-digit accuracy.

For the 15-digit accuracy rule to hold, you must remain in the normal range, e.g., from 2.225e-308 to 1.8e308 and from -1.8e308 to -2.225e-308. There are other good reasons to remain in the normal range, such as avoiding the poor performance and low accuracy of the subnormal range.

To summarize, standard floating-point numbers have excellent accuracy (at least 15 digits) as long as you remain in their normal range, which is between 2.225e-308 and 1.8e308 for positive numbers.

String representations are not unique: learn to normalize!

Most strings in software today are represented using the unicode standard. The unicode standard can represent most human-readable strings. Unicode works by representing each ‘character’ as a numerical value (called a code point) between 0 and 1 114 111.

Thus the character é is typically represented as the numerical value 233 (or 0xe9 in hexadecimal). Thus in Python, JavaScript and many other programming languages, you get the following:

>>> "\u00e9"
'é'

Unfortunately, unicode does not ensure that there is a unique way to encode every visual character. For example, you can combine the letter ‘e’ (code point 0x65) with the ‘acute accent’ (code point 0x0301):

>>> "\u0065\u0301"
'é'

Unfortunately, in most programming languages, these strings will not be considered to be the same even though they look the same to us:

>>> "\u0065\u0301"=="\u00e9"
False

For obvious reasons, this can be a problem within a computer system. What if you are searching a database for a name with the character ‘é’ in it?

The standard solution is to normalize your strings. In effect, you transform them so that strings that are semantically equal are written with the same code points. In Python, you may do it as follows:

>>> import unicodedata
>>> unicodedata.normalize('NFC',"\u00e9") == unicodedata.normalize('NFC',"\u0065\u0301")
True

There are multiple ways to normalize your strings, and there are nuances.

In JavaScript and other programming languages, there are equivalent functions:

> "\u0065\u0301".normalize() == "\u00e9".normalize()
true

Though you should expect normalization to be efficient, it is unlikely to be computationally free. Thus you should not repeatedly normalize your strings, as I have done. Rather you should probably normalize the strings as they enter your system, so that each string is normalized only once.

Normalization alone does not solve all of your problems, evidently. There are multiple complicated issues with internationalization, but if you are at least aware of the normalization problem, many perplexing issues are easily explained.

Further reading: Internationalization for Turkish: Dotted and Dotless Letter “I”

Converting integers to decimal strings faster with AVX-512

In most systems, integers are stored using a fixed binary representation. It is common to store integers using 32-bit or 64-bit words. You sometimes need to convert them to strings. For example, the integer 12345 might need to be converted to the five characters ‘12345’.

In an earlier blog post, I presented the simpler problem of converting integers to fixed-digit strings, using exactly 16 characters with leading zeroes as needed. For example, the integer 12345 becomes the string ‘0000000000012345’.

For this problem, the most practical approach might be a tree-based version with a small table. The core idea is to start from the integer, compute an integer representing the 8 most significant decimal digits, and another integer representing the 8 least significant decimal digits. Then we repeat, dividing each eight-digit integer into two four-digit integers, and so forth, until we get to two-digit integers, in which case we use a small table to convert them to a decimal representation. The code in C++ might look as follows:

void to_string_tree_table(uint64_t x, char *out) {
  static const char table[200] = {
      0x30, 0x30, 0x30, 0x31, 0x30, 0x32, 0x30, 0x33, 0x30, 0x34, 0x30, 0x35,
      0x30, 0x36, 0x30, 0x37, 0x30, 0x38, 0x30, 0x39, 0x31, 0x30, 0x31, 0x31,
      0x31, 0x32, 0x31, 0x33, 0x31, 0x34, 0x31, 0x35, 0x31, 0x36, 0x31, 0x37,
      0x31, 0x38, 0x31, 0x39, 0x32, 0x30, 0x32, 0x31, 0x32, 0x32, 0x32, 0x33,
      0x32, 0x34, 0x32, 0x35, 0x32, 0x36, 0x32, 0x37, 0x32, 0x38, 0x32, 0x39,
      0x33, 0x30, 0x33, 0x31, 0x33, 0x32, 0x33, 0x33, 0x33, 0x34, 0x33, 0x35,
      0x33, 0x36, 0x33, 0x37, 0x33, 0x38, 0x33, 0x39, 0x34, 0x30, 0x34, 0x31,
      0x34, 0x32, 0x34, 0x33, 0x34, 0x34, 0x34, 0x35, 0x34, 0x36, 0x34, 0x37,
      0x34, 0x38, 0x34, 0x39, 0x35, 0x30, 0x35, 0x31, 0x35, 0x32, 0x35, 0x33,
      0x35, 0x34, 0x35, 0x35, 0x35, 0x36, 0x35, 0x37, 0x35, 0x38, 0x35, 0x39,
      0x36, 0x30, 0x36, 0x31, 0x36, 0x32, 0x36, 0x33, 0x36, 0x34, 0x36, 0x35,
      0x36, 0x36, 0x36, 0x37, 0x36, 0x38, 0x36, 0x39, 0x37, 0x30, 0x37, 0x31,
      0x37, 0x32, 0x37, 0x33, 0x37, 0x34, 0x37, 0x35, 0x37, 0x36, 0x37, 0x37,
      0x37, 0x38, 0x37, 0x39, 0x38, 0x30, 0x38, 0x31, 0x38, 0x32, 0x38, 0x33,
      0x38, 0x34, 0x38, 0x35, 0x38, 0x36, 0x38, 0x37, 0x38, 0x38, 0x38, 0x39,
      0x39, 0x30, 0x39, 0x31, 0x39, 0x32, 0x39, 0x33, 0x39, 0x34, 0x39, 0x35,
      0x39, 0x36, 0x39, 0x37, 0x39, 0x38, 0x39, 0x39,
  };
  uint64_t top = x / 100000000;
  uint64_t bottom = x % 100000000;
  uint64_t toptop = top / 10000;
  uint64_t topbottom = top % 10000;
  uint64_t bottomtop = bottom / 10000;
  uint64_t bottombottom = bottom % 10000;
  uint64_t toptoptop = toptop / 100;
  uint64_t toptopbottom = toptop % 100;
  uint64_t topbottomtop = topbottom / 100;
  uint64_t topbottombottom = topbottom % 100;
  uint64_t bottomtoptop = bottomtop / 100;
  uint64_t bottomtopbottom = bottomtop % 100;
  uint64_t bottombottomtop = bottombottom / 100;
  uint64_t bottombottombottom = bottombottom % 100;
  //
  memcpy(out, &table[2 * toptoptop], 2);
  memcpy(out + 2, &table[2 * toptopbottom], 2);
  memcpy(out + 4, &table[2 * topbottomtop], 2);
  memcpy(out + 6, &table[2 * topbottombottom], 2);
  memcpy(out + 8, &table[2 * bottomtoptop], 2);
  memcpy(out + 10, &table[2 * bottomtopbottom], 2);
  memcpy(out + 12, &table[2 * bottombottomtop], 2);
  memcpy(out + 14, &table[2 * bottombottombottom], 2);
}

It compiles down to dozens of instructions.

Could you do better without using a much larger table?

It turns out that you can do much better if you have a recent Intel processor with the appropriate AVX-512 instructions (IFMA, VBMI), as demonstrated by an Internet user called InstLatX64.

We rely on the observation that you can compute directly the quotient and the remainder of the division using a series of multiplications and shifts (Lemire et al. 2019).
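To get a feel for this observation on a scalar value, here is a small sketch in Go (not the post's code): the multiplier 0xCCCCCCCD is the constant compilers commonly use to divide an unsigned 32-bit value by 10 with a multiplication and a shift.

package main

import "fmt"

// divmod10 returns n/10 and n%10 for any 32-bit unsigned n,
// using one 64-bit multiplication and one shift instead of a division.
func divmod10(n uint32) (q, r uint32) {
    q = uint32((uint64(n) * 0xCCCCCCCD) >> 35) // n / 10
    r = n - 10*q                               // n % 10
    return q, r
}

func main() {
    fmt.Println(divmod10(12345)) // 1234 5
}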

The code is a bit technical but, remarkably, it does not require a table. And it generates several times fewer instructions. For the sake of simplicity, I merely provide an implementation using Intel intrinsics. You are not expected to follow the code in detail, but you should notice that it is rather short.

void to_string_avx512ifma(uint64_t n, char *out) {
  uint64_t n_15_08  = n / 100000000;
  uint64_t n_07_00  = n % 100000000;
  __m512i bcstq_h   = _mm512_set1_epi64(n_15_08);
  __m512i bcstq_l   = _mm512_set1_epi64(n_07_00);
  __m512i zmmzero   = _mm512_castsi128_si512(_mm_cvtsi64_si128(0x1A1A400));
  __m512i zmmTen    = _mm512_set1_epi64(10);
  __m512i asciiZero = _mm512_set1_epi64('0');

  __m512i ifma_const	= _mm512_setr_epi64(0x00000000002af31dc, 0x0000000001ad7f29b, 
    0x0000000010c6f7a0c, 0x00000000a7c5ac472, 0x000000068db8bac72, 0x0000004189374bc6b,
    0x0000028f5c28f5c29, 0x0000199999999999a);
  __m512i permb_const	= _mm512_castsi128_si512(_mm_set_epi8(0x78, 0x70, 0x68, 0x60, 0x58,
    0x50, 0x48, 0x40, 0x38, 0x30, 0x28, 0x20, 0x18, 0x10, 0x08, 0x00));
  __m512i lowbits_h	= _mm512_madd52lo_epu64(zmmzero, bcstq_h, ifma_const);
  __m512i lowbits_l	= _mm512_madd52lo_epu64(zmmzero, bcstq_l, ifma_const);
  __m512i highbits_h	= _mm512_madd52hi_epu64(asciiZero, zmmTen, lowbits_h);
  __m512i highbits_l	= _mm512_madd52hi_epu64(asciiZero, zmmTen, lowbits_l);
  __m512i perm          = _mm512_permutex2var_epi8(highbits_h, permb_const, highbits_l);
  __m128i digits_15_0	= _mm512_castsi512_si128(perm);
  _mm_storeu_si128((__m128i *)out, digits_15_0);
}

Remarkably, the AVX-512 version is 3.5 times faster than the table-based approach:

function time per conversion
table 8.8 ns
AVX-512 2.5 ns

I use GCC 9 and an Intel Tiger Lake processor (3.30 GHz). My benchmarking code is available.

A downside of this nifty approach is that it is (obviously) non-portable. There are still few Intel processors supporting these extensions, and they are currently limited to Intel: no AMD or ARM processor can do the same right now. However, the gain might be sufficient to make it worth deploying in some instances.

 

Writing out large arrays in Go: binary.Write is inefficient for large arrays

Programmers often need to write data structures to disk or to networks. The data structure then needs to be interpreted as a sequence of bytes. Regarding integer values, most computer systems adopt a “little endian” encoding whereby an 8-byte integer is written out least significant bytes first. In the Go programming language, you can write an array of integers to a buffer as follows:

var data []uint64
var buf *bytes.Buffer = new(bytes.Buffer)

...

err := binary.Write(buf, binary.LittleEndian, data)

Until recently, I assumed that the binary.Write function did not allocate memory. Unfortunately, it does. The function converts the input array into a new, temporary byte array.

Instead, you can create a small buffer just big enough to hold a single 8-byte integer and write that small buffer repeatedly:

var item = make([]byte, 8)
for _, x := range data {
    binary.LittleEndian.PutUint64(item, x)
    buf.Write(item)
}

Sadly, this might have poor performance on disks or networks where each write/read has a high overhead. To avoid this problem, you can use Go’s buffered writer and write the integers one by one. Internally, Go will allocate a small buffer.

writer := bufio.NewWriter(buf)
var item = make([]byte, 8)
for _, x := range data {
	binary.LittleEndian.PutUint64(item, x)
	writer.Write(item)
}
writer.Flush()

I wrote a small benchmark that writes an array of 100M integers to memory.

function memory usage time
binary.Write 1.5 GB 1.2 s
one-by-one 0 0.87 s
buffered one-by-one 4 kB 1.2 s

(Timings will vary depending on your hardware and testing procedure. I used Go 1.16.)

The buffered one-by-one approach is not beneficial with respect to speed in this instance, but it would be more helpful in other cases. In my benchmark, the simple one-by-one approach is fastest and uses the least memory. For small inputs, binary.Write would be faster. The ideal function might have a fast path for small arrays and more careful handling of larger inputs.
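Such a hybrid might look like the following sketch, where smallArrayLimit is a hypothetical threshold that would need tuning (assuming the encoding/binary and io packages are imported):

func writeUint64s(w io.Writer, data []uint64) error {
    const smallArrayLimit = 1024 // hypothetical cut-off, to be tuned
    if len(data) <= smallArrayLimit {
        // binary.Write allocates a temporary buffer, which is acceptable for small inputs.
        return binary.Write(w, binary.LittleEndian, data)
    }
    item := make([]byte, 8)
    for _, x := range data {
        binary.LittleEndian.PutUint64(item, x)
        if _, err := w.Write(item); err != nil {
            return err
        }
    }
    return nil
}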

Enforcement by software

At my university, one of our internal software systems allows a professor to submit a revision to a course. The professor might change the content or the objectives of the course.

In a university, professors have extensive freedom regarding course content. As long as you reasonably meet the course objectives, you can do whatever you like. You can pick the textbook you prefer, write your own and so forth.

However, the staff that built our course revision system decided to make it so that every single change should go through all layers of approvals. So if I want to change the title of an assignment, according to this tool, I need the department to approve.


When I first encountered this new tool, I immediately started to work around it. And because I am department chair, I brought my colleagues along for the ride. So we ‘pretend’ to get approval, submitting fake documents. The good thing about a bureaucracy is that most people are too bored to check the fine print.

Surprisingly, it appears that no other department has been routing around the damage that is this new tool. I should point out that I am not doing anything illegal or against the rules. I am a good soldier. I just route around the software system. But it makes people uneasy.

And there lies the scary point. People are easily manipulated by computing.

People seem to think that if the software requires some document, then surely the rules require the document in question. That is, human beings believe that the software must be an accurate embodiment of the law.

In some sense, software does the policing. It enforces the rules. But like the actual police, software can go far beyond the law… and most people won’t notice.

An actual policeman can be intimidating. However, a policeman is a human being. If they ask something that does not make sense, you are likely to question them. You are also perhaps more likely to think that a policeman could be mistaken. Software is like a deaf policeman. And people want software to be correct.

Suppose you ran a university and you wanted all professors to include a section on religion in all their courses. You could not easily achieve such a result by the means of law. Changing the university regulations to add such a requirement would be difficult at a secular institution. However, if you simply make it that all professors must fill out a section on religion when registering a course, then professors would probably do it without question.

Of course, you can achieve the same result with bureaucracy. You just change the forms and the rules. But it takes much effort. Changing software is comparatively easier. There is no need to document the change very much. There is no need to train the staff.

I think that there is great danger in some of the recent ‘digital ID’ initiatives that various governments are pushing. Suppose, for example, that your driver’s license is on your mobile phone. It seems reasonable, at first, for the government to be able to activate it and deactivate it remotely. You no longer need to go to a government office to get your new driver’s license. However, it now makes it possible for a civil servant to decide that you cannot drive your car on Tuesdays. They do not need a new law, they do not need your consent, they can just switch a flag inside a server.

You may assume then that people would complain loudly, and they may. However, they are much less likely to complain than if it is a policeman that comes to their door on Tuesdays to take away their driver’s license. As human beings, we have a bias toward accepting software enforcement without question.

It can be used for good. For example, the right software can probably help you lose weight. However, software can enable arbitrary enforcement. For crazy people like myself, it will fail. Sadly, not everyone is as crazy as I am.

The Canadian Common CV and the captured academy

Most Canadian academics have to write their resumes using a government online tool called the Common CV. When it was first introduced, it was described as a time-saving tool: instead of writing your resume multiple times for different grant agencies, you would write it just once and be done with it. In practice, it turned into something of a nightmare for many of us. You have to adapt it each time for each grant application, sometimes in convoluted ways, using a clunky interface.

What the Common CV does do is provide much power to middle-managers who can insert various bureaucratic requirements. You have to use their tool, and they can tailor it administratively without your consent. It is part of an ongoing technocratic invasion.

How did Canadian academics react? Did they revolt? Not at all. In fact, they are embracing it. When I recently had to formally submit my resume as part of a routine internal review process, they asked for my Common CV. That is, instead of fighting the techno-bureaucratic process, academics extend its application to every aspect of their lives, including internal functions. And it is not that everyone enjoys it: in private, many people despise the Common CV.

So why won’t they dissent?

One reason might be that they are demoralized. Why resist the Common CV when every government agency providing funding to professors requires it?

If so, they are confused. We dissent as an appeal to the intelligence of a future day. A dissent today is a message to the future people who will have the power to correct our current mistakes. These messages from today are potential tools in the future. “The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.” (Shaw)

The lack of dissent is hardly new of course. Only a minority of academics questioned the Vietnam war (Schreiber, 1973), and much of the resistance came when it became safe to speak out. The scientists described by Freeman Dyson in The Scientist as Rebel have always been a fringe.

Chomsky lamented this state of affairs:

IT IS THE RESPONSIBILITY of intellectuals to speak the truth and to expose lies. This, at least, may seem enough of a truism to pass over without comment. Not so, however. For the modern intellectual, it is not at all obvious. Thus we have Martin Heidegger writing, in a pro-Hitler declaration of 1933, that “truth is the revelation of that which makes a people certain, clear, and strong in its action and knowledge”; it is only this kind of “truth” that one has a responsibility to speak. Americans tend to be more forthright. When Arthur Schlesinger was asked by The New York Times in November, 1965, to explain the contradiction between his published account of the Bay of Pigs incident and the story he had given the press at the time of the attack, he simply remarked that he had lied; and a few days later, he went on to compliment the Times for also having suppressed information on the planned invasion, in “the national interest,” as this term was defined by the group of arrogant and deluded men of whom Schlesinger gives such a flattering portrait in his recent account of the Kennedy Administration. It is of no particular interest that one man is quite happy to lie in behalf of a cause which he knows to be unjust; but it is significant that such events provoke so little response in the intellectual community—for example, no one has said that there is something strange in the offer of a major chair in the humanities to a historian who feels it to be his duty to persuade the world that an American-sponsored invasion of a nearby country is nothing of the sort. And what of the incredible sequence of lies on the part of our government and its spokesmen concerning such matters as negotiations in Vietnam? The facts are known to all who care to know. The press, foreign and domestic, has presented documentation to refute each falsehood as it appears. But the power of the government’s propaganda apparatus is such that the citizen who does not undertake a research project on the subject can hardly hope to confront government pronouncements with fact.

How many digits in a product?

We often represent integers with digits. E.g., the integer 1234 has 4 digits. By extension, we use ‘binary’ digits, called bits, within computers. Thus the integer  7 requires three bits: 0b111.

If I have two integers that use 3 digits, say, how many digits will their product have?

Mathematically, we might count the number of digits of an integer using the formula ceil(log(x+1)) where the log is taken in the base you are interested in. In base 10, the integers with three digits go from 100 to 999, or from 10^2 to 10^3 - 1, inclusively. For example, to compute the number of digits in base 10, you might use the Python expression ceil(log10(x+1)). More generally, an integer has d digits in base b if it is between b^(d-1) and b^d - 1, inclusively. By convention, the integer 0 has no digit in this model.

The product between an integer having d1 digits and an integer having d2 digits is between b^(d1+d2-2) and b^(d1+d2) - b^d1 - b^d2 + 1 (inclusively). Thus the product has either d1+d2-1 digits or d1+d2 digits.

To illustrate, let us consider the product between two integers having three digits. In base 10, the smallest product is 100 times 100 or 10,000, so it requires 5 digits. The largest product is 999 times 999 or 998,001 so 6 digits.
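A short script can confirm these counts; here is a minimal sketch in Go, with a small digits helper introduced for illustration:

package main

import "fmt"

// digits returns the number of decimal digits of x; by convention 0 has no digit.
func digits(x uint64) int {
    d := 0
    for ; x > 0; x /= 10 {
        d++
    }
    return d
}

func main() {
    fmt.Println(digits(100*100), digits(999*999)) // 5 6
}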

Thus if you multiply a 32-bit number with another 32-bit number, you get a number that has at most 64 binary digits. The maximum value will be 2^64 - 2^33 + 1.

It seems slightly counter-intuitive that the product of two 32-bit numbers does not span the full range of 64-bit numbers: it cannot exceed 2^64 - 2^33 + 1. A related observation is that a given product may correspond to several possible pairs of 32-bit numbers. For example, the product 4 can be achieved by multiplying 1 with 4 or by multiplying 2 with 2. Furthermore, many 64-bit values cannot be produced from two 32-bit values: e.g., any prime number larger than or equal to 2^32 and smaller than 2^64.

Further reading: Computing the number of digits of an integer even faster

The end of the monopolistic web?

Except maybe in totalitarian states, you cannot have a single publisher. Most large cities had multiple independent newspapers.

In recent years, we saw a surge of concentration in newspaper and television ownership. However, this was accompanied by a surge of online journalism. The total number of publishers increased, if nothing else.

You can more easily have a single carrier/distributor than a monopolistic publisher. For example, the same delivery service provides me my newspaper as well as a range of competing newspapers. The delivery man does not much care for the content of my newspaper. A few concentrated Internet providers support diverse competing services.

The current giants (Facebook, Twitter and Google) were built initially as neutral distributors. Google was meant to give you access to all of the web’s information. If the search engine is neutral, there is no reason to have more than one. If Twitter welcomes everyone, then there is no reason to have competing services. Newspapers have fact-checking services, but newspaper delivery services do not.

Of course, countries like Russia and China often had competing services, but most of the rest of the world fell back on American-based large corporations for their web infrastructure. Even the Taliban use Twitter.

It has now become clear that Google search results are geared toward favouring some of their own services. Today, we find much demand for services like Facebook and Twitter to more closely vet their content. Effectively, they are becoming publishers. They are no longer neutral. It is undeniable that they now see their role as arbiters of content. They have fact-checking services and they censor individuals.

If my mental model is correct, then we will see the emergence of strong competitors. I do not predict the immediate downfall of Facebook and Twitter. However, much of their high valuation was due to them being considered neutral carriers. The difference in value between a monopoly and a normal player can be significant. People who know more about online marketing than I do also tell me that online advertisement might be overrated. And advertisement on a platform that is no longer universal is less valuable: the pie is shared. Furthermore, I would predict that startups that were dead on arrival ten years ago might be appealing businesses today. Thus, at the margin, it makes it more appealing for a young person to go work for a small web startup.

I should stress that this is merely a model. I do not claim to be right. I am also not providing investment or job advice.

Further reading: Stop spending so much time being trolled by billionaire corporations!

SWAR explained: parsing eight digits

It is common to want to parse long strings of digits into integer values. Because it is such a frequent task, we want to optimize it as much as possible.

In the blog post Quickly parsing eight digits, I presented a very quick way to parse eight ASCII characters representing an integer (e.g., 12345678) into the corresponding binary value. I want to come back to it and explain it a bit more, to show that it is not magic. This works in most programming languages, but I will stick with C for this blog post.

To recap, the long way is a simple loop:

uint32_t parse_eight_digits(const unsigned char *chars) {
  uint32_t x = chars[0] - '0';
  for (size_t j = 1; j < 8; j++)
    x = x * 10 + (chars[j] - '0');
  return x;
}

We use the fact that in ASCII, the digits 0, 1, … have consecutive byte values. The character ‘0’ is 0x30 (or 48 in decimal), the character ‘1’ is 0x31 (49 in decimal) and so forth. At each step in the loop, we multiply the running value by 10 and add the value of the next digit.

It assumes that all characters are in the valid range (from ‘0’ to ‘9’): other code should check that it is the case.

An optimizing compiler will probably unroll the loop and produce code that might look like this in assembly:

        movzx   eax, byte ptr [rdi]
        lea     eax, [rax + 4*rax]
        movzx   ecx, byte ptr [rdi + 1]
        lea     eax, [rcx + 2*rax]
        lea     eax, [rax + 4*rax]
        movzx   ecx, byte ptr [rdi + 2]
        lea     eax, [rcx + 2*rax]
        lea     eax, [rax + 4*rax]
        movzx   ecx, byte ptr [rdi + 3]
        lea     eax, [rcx + 2*rax]
        lea     eax, [rax + 4*rax]
        movzx   ecx, byte ptr [rdi + 4]
        lea     eax, [rcx + 2*rax]
        lea     eax, [rax + 4*rax]
        movzx   ecx, byte ptr [rdi + 5]
        lea     eax, [rcx + 2*rax]
        lea     eax, [rax + 4*rax]
        movzx   ecx, byte ptr [rdi + 6]
        lea     eax, [rcx + 2*rax]
        lea     eax, [rax + 4*rax]
        movzx   ecx, byte ptr [rdi + 7]
        lea     eax, [rcx + 2*rax]
        add     eax, -533333328

Notice how there are many loads, and a whole lot of operations.

We can substantially shorten the resulting code, down to something that looks like the following:

        imul    rax, qword ptr [rdi], 2561
        movabs  rcx, -1302123111085379632
        add     rcx, rax
        shr     rcx, 8
        movabs  rax, 71777214294589695
        and     rax, rcx
        imul    rax, rax, 6553601
        shr     rax, 16
        movabs  rcx, 281470681808895
        and     rcx, rax
        movabs  rax, 42949672960001
        imul    rax, rcx
        shr     rax, 32

How do we do it? We use a technique called SWAR, which stands for SIMD within a register. The intuition behind it is that modern computers have 64-bit registers. Processing eight consecutive bytes as eight distinct words, as in the code above, is inefficient given how wide our registers are.

The first step is to load all eight characters into a 64-bit register. In C, you might do it in this manner:

uint64_t val; 
memcpy(&val, chars, 8);

It may look expensive, but most compilers will translate the memcpy call into a single load when compiling with optimizations turned on.

Most commodity computers (x64 and 64-bit ARM systems as typically configured) store values in little-endian order. This means that the first byte you encounter is going to be used as the least significant byte, and so forth.

Then we want to subtract the character ‘0’ (or 0x30 in hexadecimal) from each of the eight bytes. We can do it with a single operation:

val = val - 0x3030303030303030;

So if you had the string ‘12345678’, you will now have the value 0x0807060504030201.

Next we are going to do a kind of pyramidal computation. We add pairs of successive bytes, then pairs of successive 16-bit values, and then pairs of successive 32-bit values.

It goes something like this. Suppose that you have the sequence of digit values b1, b2, b3, b4, b5, b6, b7, b8. You want to compute…

  • add pairs of bytes: 10*b1+b2, 10*b3+b4, 10*b5+b6, 10*b7+b8
  • combine first and third sum: 1000000*(10*b1+b2) + 100*(10*b5+b6)
  • combine second and fourth sum: 10*b7+b8 + 10000*(10*b3+b4)

I will only explain the first step (pairs of bytes) as the other two steps are similar. Consider the least significant two bytes, which have value 256*b2 + b1. We multiply the whole word by 10, we add the word shifted right by 8 bits, and we get 10*b1+b2 in the least significant byte. We can compute all four such sums with a single multiplication, shift and addition…

val = (val * 10) + (val >> 8);

The next two steps are similar:

val1 = ((val & 0x000000FF000000FF) * (100 + (1000000ULL << 32)));

val2 = (((val >> 16) & 0x000000FF000000FF) 
          * (1 + (10000ULL << 32)));

The desired result is then (val1 + val2) >> 32, and the overall code looks as follows…

uint32_t  parse_eight_digits_unrolled(uint64_t val) {
  const uint64_t mask = 0x000000FF000000FF;
  const uint64_t mul1 = 0x000F424000000064; // 100 + (1000000ULL << 32)
  const uint64_t mul2 = 0x0000271000000001; // 1 + (10000ULL << 32)
  val -= 0x3030303030303030;
  val = (val * 10) + (val >> 8); // val = (val * 2561) >> 8;
  val = (((val & mask) * mul1) + (((val >> 16) & mask) * mul2)) >> 32;
  return val;
}
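As an aside, a related SWAR trick can implement the validation step mentioned earlier, namely checking that all eight characters are digits. Here is a sketch (in Go rather than C): for each byte, the first mask checks the high nibble, and the second checks that adding 6 does not carry into the high nibble (as it would for byte values just above ‘9’); the combined result must equal 0x33 in every byte.

package main

import (
    "encoding/binary"
    "fmt"
)

// isEightDigits reports whether all eight bytes are ASCII digits ('0' to '9').
func isEightDigits(chars []byte) bool {
    val := binary.LittleEndian.Uint64(chars)
    return ((val & 0xF0F0F0F0F0F0F0F0) |
        (((val + 0x0606060606060606) & 0xF0F0F0F0F0F0F0F0) >> 4)) ==
        0x3333333333333333
}

func main() {
    fmt.Println(isEightDigits([]byte("12345678"))) // true
    fmt.Println(isEightDigits([]byte("1234x678"))) // false
}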

Appendix: You can do much the same in C# starting with a byte pointer (byte* chars):

ulong val = Unsafe.ReadUnaligned<ulong>(chars);
const ulong mask = 0x000000FF000000FF;
const ulong mul1 = 0x000F424000000064; 
// 100 + (1000000ULL << 32)
const ulong mul2 = 0x0000271000000001; 
// 1 + (10000ULL << 32)
val -= 0x3030303030303030;
val = (val * 10) + (val >> 8); // val = (val * 2561) >> 8;
val = (((val & mask) * mul1) + (((val >> 16) & mask) * mul2)) >> 32;
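And here is a rough Go equivalent, as a sketch; it loads the eight input bytes with the standard encoding/binary package:

package main

import (
    "encoding/binary"
    "fmt"
)

func parseEightDigitsUnrolled(chars []byte) uint32 {
    const mask = 0x000000FF000000FF
    const mul1 = 0x000F424000000064 // 100 + (1000000 << 32)
    const mul2 = 0x0000271000000001 // 1 + (10000 << 32)
    val := binary.LittleEndian.Uint64(chars) // load eight bytes
    val -= 0x3030303030303030               // subtract '0' from each byte
    val = (val * 10) + (val >> 8)            // pairs of digits
    val = (((val & mask) * mul1) + (((val >> 16) & mask) * mul2)) >> 32
    return uint32(val)
}

func main() {
    fmt.Println(parseEightDigitsUnrolled([]byte("12345678"))) // 12345678
}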

What is the ‘range’ of a number type?

In programming, we often represent numbers using types that have specific ranges. For example, 64-bit signed integer types can represent all integers between -9223372036854775808 and 9223372036854775807, inclusively. All integers inside this range are valid, all integers outside are “out of range”. It is simple.

What about floating-point numbers? The nuance with floating-point numbers is that they cannot represent all numbers within a continuous range. For example, the real number 1/3 cannot be represented using binary floating-point numbers. So the convention is that given a textual representation, say “1.1e100”, we seek the closest approximation.

Still, are there ranges of numbers that you should not represent using floating-point numbers? That is, are there numbers that you should reject?

It seems that there are two different interpretations:

  • My own interpretation is that floating-point types can represent all numbers from -infinity to infinity, inclusively. It means that ‘infinity’ or 1e9999 are indeed “in range”. For 64-bit IEEE floating-point numbers, this means that numbers smaller than 4.94e-324 but greater than 0 can be represented as 0, and that numbers greater than 1.8e308 should be infinity. To recap, all numbers are always in range.
  • For 64-bit numbers, another interpretation is that only numbers in the ranges 4.94e-324 to 1.8e308 and -1.8e308 to -4.94e-324, together with exactly 0, are valid. Numbers that are too small (less than 4.94e-324 but greater than 0) or numbers that are larger than 1.8e308 are “out of range”. Common implementations of the strtod function or of the C++ equivalent follow this convention.

This matters because the C++ specification for the from_chars functions states that

If the parsed value is not in the range representable by the type of value, value is unmodified and the member ec of the return value is equal to errc​::​result_­out_­of_­range.

I am not sure programmers have a common understanding of this specification.
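For what it is worth, Go mixes the two interpretations: strconv.ParseFloat returns an infinite value for an overly large input while also flagging a range error. A minimal sketch:

package main

import (
    "fmt"
    "strconv"
)

func main() {
    f, err := strconv.ParseFloat("1e9999", 64)
    fmt.Println(f) // +Inf
    if ne, ok := err.(*strconv.NumError); ok && ne.Err == strconv.ErrRange {
        fmt.Println("out of range") // printed
    }
}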

How programmers make sure that their software is correct

Our most important goal in writing software is that it be correct. The software must do what the programmer wants it to do. It must meet the needs of the user.

In the business world, double-entry bookkeeping is the idea that transactions are recorded in at least two accounts (debit and credit). One of the advantages of double-entry bookkeeping, compared to a more naive approach, is that it allows for some degree of auditing and error finding. If we compare accounting and software programming, we could say that double-entry accounting and its subsequent auditing is equivalent to software testing.

For an accountant, converting a naive accounting system into a double-entry system is a difficult task in general. In many cases, one would have to reconstruct it from scratch. In the same manner, it can be difficult to add tests to a large application that has been developed entirely without testing. And that is why testing should be first on your mind when building serious software.

A hurried or novice programmer can quickly write a routine, compile and run it and be satisfied with the result. A cautious or experienced programmer will know not to assume that the routine is correct.

Common software errors can cause problems ranging from a program that abruptly terminates to database corruption. The consequences can be costly: a software bug caused the explosion of an Ariane 5 rocket in 1996 (Dowson, 1997). The error was caused by the conversion of a floating point number to a signed integer represented with 16 bits. Only small integer values could be represented. Since the value could not be represented, an error was detected and the program stopped because such an error was unexpected. The irony is that the function that triggered the error was not required: it had simply been integrated as a subsystem from an earlier model of the Ariane rocket. In 1996 U.S. dollars, the estimated cost of this error is almost $400 million.

The importance of producing correct software has long been understood. The best scientists and engineers have been trying to do this for decades.

There are several common strategies. For example, if we need to do a complex scientific calculation, then we can ask several independent teams to produce an answer. If all the teams arrive at the same answer, we can then conclude that it is correct. Such redundancy is often used to prevent hardware-related faults (Yeh, 1996). Unfortunately, it is not practical to write multiple versions of your software in general.

Many of the early programmers had advanced mathematical training. They hoped that we could prove that a program is correct. By putting aside the hardware failures, we could then be certain that we would not encounter any errors. And indeed, today we have sophisticated software that allows us to sometimes prove that a program is correct.

Let us consider an example of formal verification to illustrate our point. We can use the z3 library from Python (De Moura and Bjørner, 2008). If you are not a Python user, don’t worry: you don’t have to be to follow the example. We can install the necessary library with the command pip install z3-solver or the equivalent. Suppose we want to be sure that the inequality ( 1 + y ) / 2 < y holds for all 32-bit integers. We can use the following script:

import z3
y = z3.BitVec("y", 32)
s = z3.Solver()
s.add( ( 1 + y ) / 2 >= y )
if(s.check() == z3.sat):
    model = s.model()
    print(model)

In this example we construct a 32-bit word (BitVec) to represent our example. By default, the z3 library interprets the values that can be represented by such a variable as ranging from -2147483648 to 2147483647 (from \(-2^{31}\) to \(2^{31}-1\) inclusive). We enter the inequality opposite to the one we wish to show (( 1 + y ) / 2 >= y). If z3 does not find a counterexample, then we will know that the inequality ( 1 + y ) / 2 < y holds.

When running the script, Python displays the integer value 2863038463, which indicates that z3 has found a counterexample. The z3 library always reports a non-negative integer and it is up to us to interpret it correctly. The number 2147483648 becomes -2147483648, the number 2147483649 becomes -2147483647 and so on. This representation is often called two’s complement. Thus, the number 2863038463 is in fact interpreted as a negative number. Its exact value is not important: what matters is that our inequality (( 1 + y ) / 2 < y) is incorrect when the variable is negative. We can check this by giving the variable the value -1: we then get 0 < -1, which is false. When the variable takes the value 0, the inequality is also false (0<0). We can also check that the inequality is false when the variable takes the value 1. So let us add as a condition that the variable is greater than 1 (s.add( y > 1 )):

import z3
y = z3.BitVec("y", 32)
s = z3.Solver()
s.add( ( 1 + y ) / 2 >= y )
s.add( y > 1 )

if(s.check() == z3.sat):
    model = s.model()
    print(model)

Since the latter script does not display anything on the screen when it is executed, we can conclude that the inequality is satisfied as long as the variable is greater than 1.

Since we have shown that the inequality ( 1 + y ) / 2 < y is true, perhaps the inequality ( 1 + y ) < 2 * y is true too? Let’s try it:

import z3
y = z3.BitVec("y", 32)
s = z3.Solver()
s.add( ( 1 + y ) >= 2 * y )
s.add( y > 1 )

if(s.check() == z3.sat):
    model = s.model()
    print(model)

This script will display 1412098654: its double, 2824197308, overflows the 32-bit signed range and is interpreted by z3 as a negative value. To avoid this problem, let us add a new condition so that the double of the variable can still be interpreted as a positive value:


import z3
y = z3.BitVec("y", 32)
s = z3.Solver()
s.add( ( 1 + y ) >= 2 * y )
s.add( y > 1 )
s.add( y < 2147483647/2)

if(s.check() == z3.sat):
    model = s.model()
    print(model)

This time the result is verified. As you can see, such a formal approach requires a lot of work, even in relatively simple cases. It may have been possible to be more optimistic in the early days of computer science, but by the 1970s, computer scientists like Dijkstra were expressing doubts:

we see automatic program verifiers verifying toy programs and one observes the honest expectation that with faster machines with lots of concurrent processing, the life-size problems will come within reach as well. But, honest as these expectations may be, are they justified? I sometimes wonder… (Dijkstra, 1975)

It is impractical to apply such a mathematical method on a large scale. Errors can take many forms, and not all of these errors can be concisely presented in mathematical form. Even when it is possible, even when we can accurately represent the problem in a mathematical form, there is no reason to believe that a tool like z3 will always be able to find a solution: when problems become difficult, computational times can become very long. An empirical approach is more appropriate in general.

Over time, programmers have come to understand the need to test their software. It is not always necessary to test everything: a prototype or an example can often be provided without further validation. However, any software designed in a professional context and having to fulfill an important function should be at least partially tested. Testing allows us to reduce the probability that we will have to face a disastrous situation.

There are generally two main categories of tests.

  • There are unit tests. These are designed to test a particular component of a software program. For example, a unit test can be performed on a single function. Most often, unit tests are automated: the programmer can execute them by pressing a button or typing a command. Unit tests often avoid the acquisition of valuable resources, such as creating large files on a disk or making network connections. Unit testing does not usually involve reconfiguring the operating system.
  • Integration tests aim to validate a complete application. They often require access to networks and access to sometimes large amounts of data. Integration tests sometimes require manual intervention and specific knowledge of the application. Integration testing may involve reconfiguring the operating system and installing software. They can also be automated, at least in part. Most often, integration tests are based on unit tests that serve as a foundation.

Unit tests are often part of a continuous integration process (Kaiser et al., 1989). Continuous integration often automatically performs specific tasks including unit testing, backups, applying cryptographic signatures, and so on. Continuous integration can be done at regular intervals, or whenever a change is made to the code.

Unit tests are used to structure and guide software development. Tests can be written before the code itself, in which case we speak of test-driven development. Often, tests are written after developing the functions. Tests can be written by programmers other than those who developed the functions. It is sometimes easier for independent developers to provide tests that are capable of uncovering errors because they do not share the same assumptions.

It is possible to integrate tests into functions or an application. For example, an application may run a few tests when it starts. In such a case, the tests will be part of the distributed code. However, it is more common not to publish unit tests. They are a component reserved for programmers and they do not affect the functioning of the application. In particular, they do not pose a security risk and they do not affect the performance of the application.

Experienced programmers often consider tests to be as important as the original code. It is therefore not uncommon to spend half of one’s time on writing tests. The net effect is to substantially reduce the initial speed of writing computer code. Yet this apparent loss of time often saves time in the long run: setting up tests is an investment. Software that is not well tested is often more difficult to update. The presence of tests allows us to make changes or extensions with less uncertainty.

Tests should be readable, simple and they should run quickly. They often use little memory.

Unfortunately, it is difficult to define exactly how good tests are. There are several statistical measures. For example, we can count the lines of code that execute during tests. We then talk about test coverage. A coverage of 100% implies that all lines of code are concerned by the tests. In practice, this coverage measure can be a poor indication of test quality.

Consider this example:

package main

import (
    "testing"
)


func Average(x, y uint16) uint16 {
   return (x + y)/2
}

func TestAverage(t *testing.T) {
    if Average(2,4) != 3 {
        t.Error(Average(2,4))
    }
}

In the Go language, we can run tests with the command go test. We have an Average function with a corresponding test. In our example, the test will run successfully. The coverage is 100%.

Unfortunately, the Average function may not be as correct as we would expect. If we pass the integers 40000 and 40000 as parameters, we would expect the average value of 40000 to be returned. But the integer 40000 added to the integer 40000 cannot be represented with a 16-bit integer (uint16): the result will instead be (40000+40000)%65536=14464. So the function will return 7232, which may be surprising. The following test will fail:


func TestAverage(t *testing.T) {
    if Average(40000,40000) != 40000 {
        t.Error(Average(40000,40000))
    }
}

When it is possible and fast enough, we can test the code more exhaustively, as in this example where we check all possible pairs of values:

package main

import (
    "testing"
)


func Average(x, y uint16) uint16 {
   if y > x {
     return (y - x)/2 + x
   } else {
     return (x - y)/2 + y
   }
}

func TestAverage(t *testing.T) {
  for x := 0; x < 65536; x++ {
    for y := 0; y < 65536; y++ {
      m := int(Average(uint16(x),uint16(y)))
      if x < y {
        if m < x || m > y {
          t.Error("error ", x, " ", y)
        }           
      } else {
        if m < y || m > x {
          t.Error("error ", x, " ", y)
        }  
      }
    }
  } 
}

In practice, it is rare that we can do exhaustive tests. We can instead use pseudo-random tests. For example, we can generate pseudo-random numbers and use them as parameters. In the case of random tests, it is important to keep them deterministic: each time the test runs, the same values are tested. This can be achieved by providing a fixed seed to the random number generator as in this example:

package main

import (
    "math/rand"
    "testing"
)


func Average(x, y uint16) uint16 {
   if y > x {
     return (y - x)/2 + x
   } else {
     return (x - y)/2 + y
   }
}

func TestAverage(t *testing.T) {
  rand.Seed(1234)
  for test := 0; test < 1000; test++ {
    x := rand.Intn(65536)
    y := rand.Intn(65536)
    m := int(Average(uint16(x),uint16(y)))
    if x < y {
      if m < x || m > y {
        t.Error("error ", x, " ", y)
      }           
    } else {
      if m < y || m > x {
        t.Error("error ", x, " ", y)
      }  
    }
  } 
}

Tests based on random exploration are part of a strategy often called fuzzing (Miller et al., 1990).

We generally distinguish two types of tests. Positive tests aim at verifying that a function or component behaves in an agreed way. Thus, the first test of our Average function was a positive test. Negative tests verify that the software behaves correctly even in unexpected situations. We can produce negative tests by providing our functions with random data (fuzzing). Our second example can be considered a negative test if the programmer expected small integer values.

Good tests should fail when the code is modified incorrectly (Budd et al., 1978). On this basis, we can also develop more sophisticated quality measures by introducing random changes in the code and ensuring that such changes often cause tests to fail.

Some programmers choose to generate tests automatically from the code. In such a case, a component is tested and the result is captured. For example, in our example of calculating the average, we could have captured the fact that Average(40000,40000) has the value 7232. If a subsequent change occurs that changes the result of the operation, the test will fail. Such an approach saves time since the tests are generated automatically. We can quickly and effortlessly achieve 100% code coverage. On the other hand, such tests can be misleading. In particular, it is possible to capture incorrect behaviour. Furthermore, the objective when writing tests is not so much their number as their quality. The presence of several tests that do not contribute to validate the essential functions of our software can even become harmful. Irrelevant tests can waste programmers’ time in subsequent revisions.

Finally, we review the benefits of testing: tests help us organize our work, they are a measure of quality, they help us document the code, they avoid regression, they help debugging and they can produce more efficient code.

Organization

Designing sophisticated software can take weeks or months of work. Most often, the work will be broken down into separate units. It can be difficult, until you have the final product, to judge the outcome. Writing tests as we develop the software helps to organize the work. For example, a given component can be considered complete when it is written and tested. Without the test writing process, it is more difficult to estimate the progress of a project since an untested component may still be far from being completed.

Quality

Tests are also used to show the care that the programmer has put into his work. They also make it possible to quickly evaluate the care taken with the various functions and components of a software program: the presence of carefully composed tests can be an indication that the corresponding code is reliable. The absence of tests for certain functions can serve as a warning.

Some programming languages are quite strict and have a compilation phase that validates the code. Other programming languages (Python, JavaScript) leave more freedom to the programmer. Some programmers consider that tests can help to overcome the limitations of less strict programming languages by imposing on the programmer a rigour that the language does not require.

Documentation

Software programming should generally be accompanied by clear and complete documentation. In practice, the documentation is often partial, imprecise, erroneous or even non-existent. Tests are therefore often the only technical specification available. Reading tests allows programmers to adjust their expectations of software components and functions. Unlike documentation, tests are usually up-to-date, if they are run regularly, and they are accurate to the extent that they are written in a programming language. Tests can therefore provide good examples of how the code is used.
Even if we want to write high-quality documentation, tests can also play an important role. To illustrate computer code, examples are often used. Each example can be turned into a test. So we can make sure that the examples included in the documentation are reliable. When the code changes, and the examples need to be modified, a procedure to test our examples will remind us to update our documentation. In this way, we avoid the frustrating experience of readers of our documentation finding examples that are no longer functional.
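In Go, for instance, such documentation examples can be written as Example functions in test files: go test runs them and compares their printed output against the // Output: comment, so a stale example makes the test suite fail. A minimal sketch for our Average function:

package main

import "fmt"

func ExampleAverage() {
    fmt.Println(Average(2, 4))
    // Output: 3
}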

Regression

Programmers regularly fix flaws in their software. It often happens that the same problem occurs again. The same problem may come back for various reasons: sometimes the original problem has not been completely fixed. Sometimes another change elsewhere in the code causes the error to return. Sometimes the addition of a new feature or software optimization causes a bug to return, or a new bug to be added. When software acquires a new flaw, it is called a regression. To prevent such regressions, it is important to accompany every bug fix or new feature with a corresponding test. In this way, we can quickly become aware of regressions by running the tests. Ideally, the regression is identified while the code is still being modified, so it never ships. In order to convert a bug into a simple and effective test, it is useful to reduce it to its simplest form. For example, in our previous example with Average(40000,40000), we can add the detected error as an additional test:

package main

import (
    "testing
)


func Average(x, y uint16) uint16 {
   if y > x {
     return (y - x)/2 + x
   } else {
     return (x - y)/2 + y
   }
}

func TestAverage(t *testing.T) {
   if Average(2,4) != 3 {
     t.Error("error 1")
   } 
   if Average(40000,40000) != 40000 {
     t.Error("error 2")
   }           
}

Bug fixing

In practice, the presence of an extensive test suite makes it possible to identify and correct bugs more quickly. This is because testing reduces the extent of errors and provides the programmer with several guarantees. To some extent, the time spent writing tests saves time when errors are found while reducing the number of errors.
Furthermore, an effective strategy to identify and correct a bug involves writing new tests. It can be more efficient in the long run than other debugging strategies such as stepping through the code. Indeed, after your debugging session is completed, you are left with new unit tests in addition to a corrected bug.

Performance

The primary function of tests is to verify that functions and components produce the expected results. However, programmers are increasingly using tests to measure the performance of components. For example, the execution speed of a function, the size of the executable or the memory usage can be measured. It is then possible to detect a loss of performance following a modification of the code. You can compare the performance of your code against a reference code and check for differences using statistical tests.
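In Go, for example, such performance tests can be written as benchmark functions that the testing package runs repeatedly (go test -bench=.). A minimal sketch for our Average function, with a package-level sink to prevent the call from being optimized away:

package main

import "testing"

var sink uint16

func BenchmarkAverage(b *testing.B) {
    for i := 0; i < b.N; i++ {
        sink = Average(uint16(i), uint16(i/2))
    }
}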

Conclusion

All computer systems have flaws. Hardware can fail at any time. And even when the hardware is reliable, it is almost impossible for a programmer to predict all the conditions under which the software will be used. No matter who you are, and no matter how hard you work, your software will not be perfect. Nevertheless, you should at least try to write code that is generally correct: it most often meets the expectations of users.
It is possible to write correct code without writing tests. Nevertheless, the benefits of a test suite are tangible in difficult or large-scale projects. Many experienced programmers will refuse to use a software component that has been built without tests.
The habit of writing tests probably makes you a better programmer. Psychologically, you are more aware of your human limitations if you write tests. When you interact with other programmers and with users, you may be better able to take their feedback into account if you have a test suite.

Suggested reading

  • James Whittaker, Jason Arbon, Jeff Carollo, How Google Tests Software, Addison-Wesley Professional; 1st edition (March 23 2012)
  • Lisa Crispin, Janet Gregory, Agile Testing: A Practical Guide for Testers and Agile Teams, Addison-Wesley Professional; 1st edition (Dec 30 2008)

Credit

The following Twitter users contributed ideas: @AntoineGrodin, @dfaranha, @Chuckula1, @EddyEkofo, @interstar, @Danlark1, @blattnerma, @ThuggyPinch, @ecopatz, @rsms, @pdimov2, @edefazio, @punkeel, @metheoryt, @LoCtrl, @richardstartin, @metala, @franck_guillaud, @__Achille__, @a_n__o_n, @atorstling, @tapoueh, @JFSmigielski, @DinisCruz, @jsonvmiller, @nickblack, @ChrisNahr, @ennveearr1, @_vkaku, @kasparthommen, @mathjock, @feO2x, @pshufb, @KishoreBytes, @kspinka, @klinovp, @jukujala, @JaumeTeixi