Science and Technology links (July 23 2022)

    1. Compared to 1800, we eat less saturated fat and far more processed food and vegetable oils, and this shift does not seem to be good for us:

      Saturated fats from animal sources declined while polyunsaturated fats from vegetable oils rose. Non-communicable diseases (NCDs) rose over the twentieth century in parallel with increased consumption of processed foods, including sugar, refined flour and rice, and vegetable oils. Saturated fats from animal sources were inversely correlated with the prevalence of non-communicable diseases.

      Kang et al. found that a higher intake of saturated fat is associated with a lower risk of stroke:

      a higher consumption of dietary saturated fat is associated with a lower risk of stroke, and every 10 g/day increase in saturated fat intake is associated with a 6% relative risk reduction in the rate of stroke.

      Saturated fats come from meat and dairy products (e.g., butter). A low-fat diet can significantly increase the risk of coronary heart disease events.
      Leroy and Cofnas argue against a reduction of red meat consumption:

      The IARC’s (2015) claim that red meat is “probably carcinogenic” has never been substantiated. In fact, a risk assessment by Kruger and Zhou (2018) concluded that this is not the case. (…) a meta-analysis of RCTs has shown that meat eating does not lead to deterioration of cardiovascular risk markers (O’Connor et al., 2017). The highest category of meat eating even paralleled a potentially beneficial increase in HDL-C level. Whereas plant-based diets indeed seem to lower total cholesterol and LDL-C in intervention studies, they also increase triglyceride levels and decrease HDL-C (Yokoyama et al., 2017), which are now often regarded as superior markers of cardiovascular risk (Jeppesen et al., 2001). (…) We believe that a large reduction in meat consumption, such as has been advocated by the EAT-Lancet Commission (Willett et al., 2019), could produce serious harm. Meat has long been, and continues to be, a primary source of high-quality nutrition. The theory that it can be replaced with legumes and supplements is mere speculation. While diets high in meat have proved successful over the long history of our species, the benefits of vegetarian diets are far from being established, and its dangers have been largely ignored by those who have endorsed it prematurely on the basis of questionable evidence.

    2. People dislike research that appears to favour males:

      In both studies, both sexes reacted less positively to differences favouring males; in contrast to our earlier research, however, the effect was larger among female participants. Contrary to a widespread expectation, participants did not react less positively to research led by a female. Participants did react less positively, though, to research led by a male when the research reported a male-favouring difference in a highly valued trait. Participants judged male-favouring research to be lower in quality than female-favouring research, apparently in large part because they saw the former as more harmful.

    3. During the Jurassic era, atmospheric CO2 was very high and forests extended all the way to the North Pole. Even so, there were freezing winters:

      Forests were present all the way to the Pangean North Pole and into the southern latitudes as far as land extended. Although there may have been other contributing factors, the leading hypothesis is that Earth was in a “greenhouse” state because of very high atmospheric PCO2 (partial pressure of CO2), the highest of the past 420 million years. Despite modeling results indicating freezing winter temperatures at high latitudes, empirical evidence for freezing has been lacking. Here, we provide empirical evidence showing that, despite extraordinary high PCO2, freezing winter temperatures did characterize high Pangean latitudes based on stratigraphically widespread lake ice-rafted debris (L-IRD) in early Mesozoic strata of the Junggar Basin, northwest China. Traditionally, dinosaurs have been viewed as thriving in the warm and equable early Mesozoic climates, but our results indicate that they also endured freezing winters.

    4. In mice, researchers found that injecting stem-cell-derived “conditioned medium” protects against neurodegeneration:

      Neuronal cell death is causal in many neurodegenerative diseases, including age-related loss of memory and dementias (such as Alzheimer’s disease), Parkinson’s disease, strokes, as well as diseases that afflict broad ages, i.e., traumatic brain injury, spinal cord injury, ALS, and spinal muscle atrophy. These diseases are characterized by neuroinflammation and oxidative cell damage, many involve perturbed proteostasis and all are devastating and without a cure. Our work describes a feasible meaningful disease-minimizing treatment for ALS and suggests a clinical capacity for treating a broad class of diseases of neurodegeneration, and excessive cell apoptosis.

    5. Greenland and northern Europe were once warmer than they are today: the Medieval Warm Period (950 to 1250) overlaps with the Viking age (800–1300). Bajard et al. (2022) suggest that the Vikings were quite adept at adapting their agricultural practices:

      (…) the period from The Viking Age to the High Middle Ages was a period of expansion with the Viking diaspora, increasing trade, food and goods production and the establishment of Scandinavian towns. This period also sees a rapid increase in population and settlements, mainly due to a relatively stable warm climate (…) temperature was the main driver of agricultural practices in Southeastern Norway during the Late Antiquity. Direct comparison between the reconstructed temperature variability and palynological data from the same sediment sequence shows that small changes in temperature were synchronous with changes in agricultural practices (…) We conclude that the pre-Viking age society in Southwestern Scandinavia made substantial changes in their way of living to adapt to the climate variability of this period.

      The Vikings grew barley in Greenland, a crop that normally grows in a temperate climate. In contrast, agriculture in Greenland today is nearly non-existent due to the harsh climate.

    6. Ashkenazi Jews often score exceptionally well on intelligence tests, and they achieve extraordinary results in several intellectual pursuits.
      Nevertheless, Wikipedia editors deleted the article on Ashkenazi intelligence. Tezuka argues that this deletion is the result of an ideological bias that leads to systematic censorship.
    7. You can rejuvenate old human skin by grafting it onto young mice.
    8. Tabarrok reminds us that research funding through competitions might result in total waste through rent dissipation…

      A scientist who benefits from a 2-million-dollar NIH grant is willing to spend a million dollars of their time working on applications or incur the cost of restricting their research ideas in order to get it. Importantly, even though only one scientist will get the grant, hundreds of scientists are spending resources in competition to get it. So the gains we might be seeing from transferring resources to one researcher are dissipated multiplicatively across all the scientists who spent time and money competing for the grant but didn’t get it. The aggregate time costs to our brightest minds from this application contest system are quantifiably large, possibly entirely offsetting the total scientific value of the research that the funding supports.

    9. Corporate tax cuts lead to an increase in productivity and research over the long term. Conversely, increases in taxation reduce long-term productivity as well as research and development.

Negative incentives in academic research

In the first half of the twentieth century, there were relatively few scientists, and they were generally not lavishly funded. Yet it has been convincingly argued that these scientists were massively more productive. Today we face a major replication crisis, where important results in fields such as psychology and medicine cannot be reproduced independently. Academic researchers routinely fail even to report the results of clinical trials. We have entered a ‘dark age’ in which we are mostly stagnant. It is not that there is no progress at all, but progress is slow, uncommon and expensive.

Why might that be? I believe that it has to do with important ‘negative incentives’ that we have introduced. In effect, we have made scientists less productive. We did so through several means, but two effects stand out: the widespread introduction of research competitions and the addition of extrinsic motivations.

  1. Prior to 1960, there were hardly any formal research funding competitions. Today, by some estimates, it takes about 40 working days to prepare a single new grant application and about 30 working days for a resubmission, and the success rate is often low, which means that hundreds of days may be spent on the acquisition of funding for a single successful research grant. This effect is known in economics as rent dissipation. Suppose that I offer to give you $100 to support your research if you enter a competition. How much time are you willing to spend? Maybe you are willing to spend the equivalent of $50, if the success rate is 50%. The net result is that two researchers may each waste $50 in time so that one of them acquires $100 of support. There may be no net gain! Furthermore, if grant applications are valued enough (e.g., needed to get a promotion), scientists may be willing to spend even more time than is rational, and the introduction of a new grant competition may in fact reduce the overall research output. You should not underestimate the effect that constant administration and grant writing can have on a researcher: many graduate students will tell you of their disappointment upon encountering high-status scientists who cannot seem to do actual research anymore. It can cause a vicious form of accelerated aging. If Albert Einstein had been stuck writing grant applications and reporting on the results of his team, history might have taken a different turn.
  2. We have massively increased the number and importance of ‘extrinsic motivations’ in science. Broadly speaking, we can distinguish between two types of motivations: intrinsic and extrinsic. Winning prizes or securing prestigious positions are extrinsic motivations. Solving an annoying problem or pursuing a personal quest are intrinsic motivations. We repeatedly find that intrinsic motivations are positively correlated with long-term productivity, whereas extrinsic motivations are negatively correlated with it (e.g., Horodnic and Zaiţ, 2015). In fact, extrinsic motivations can even cancel out intrinsic motivations (Wrzesniewski et al., 2014). Extrinsically motivated individuals tend to focus on superficial gains rather than genuine advances. The addition of extrinsic motivations may also create a selection effect: the field tends to recruit people who seek prestige for its own sake rather than people with a genuine interest in scientific pursuits. Thus, creating prestigious prizes, positions and conferences may end up being detrimental to scientific productivity.

Many people object that the easiest explanation for our stagnation is that most of the easy discoveries have already been made. However, at all times in history, there were people making this point. In 1894, Michelson said:

While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice.

Silverstein, in “The End is Near!”: The Phenomenon of the Declaration of Closure in a Discipline, carefully documents how, historically, many people wrongly predicted that their discipline was at an end.

How quickly can you convert floats to doubles (and back)?

Many programming languages have two binary floating-point types: float (32-bit) and double (64-bit). This reflects the fact that most general-purpose processors support both data types natively.

Often we need to convert between the two types. Both ARM and x64 processors can do it with a single inexpensive instruction. For example, ARM systems may use the fcvt instruction.

The details may differ, but most current processors can convert one number (from float to double, or from double to float) per CPU cycle. The latency is small (e.g., 3 or 4 cycles).

A typical processor might run at 3 GHz, which gives us 3 billion cycles per second, so we can convert about 3 billion numbers per second. A 64-bit number uses 8 bytes, so that is a throughput of 24 gigabytes per second.

It is therefore unlikely that this type conversion is a performance bottleneck in general. If you would like to measure the speed on your own system, I have written a small C++ benchmark.
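
To give a concrete sense of what is being measured, here is a minimal sketch (not the actual benchmark) of the kind of conversion loops one might time. Compilers typically lower each per-element conversion to a single instruction (e.g., fcvt on ARM, cvtss2sd/cvtps2pd on x64).

#include <cstddef>

// Widen 32-bit floats to 64-bit doubles, one conversion per element.
void widen(const float *input, double *output, size_t count) {
  for (size_t i = 0; i < count; i++) {
    output[i] = static_cast<double>(input[i]);
  }
}

// Narrow 64-bit doubles back to 32-bit floats.
void narrow(const double *input, float *output, size_t count) {
  for (size_t i = 0; i < count; i++) {
    output[i] = static_cast<float>(input[i]);
  }
}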

Filtering numbers faster with SVE on Graviton 3 processors

Processors come, roughly, in two large families: x64 processors from Intel and AMD, and ARM processors from Apple, Samsung, and many other vendors. For a long time, ARM processors mostly occupied the embedded market (the computer running your fridge at home), while the ‘big’ processors were almost exclusively x64.

Reportedly, the Apple CEO (Steve Jobs) went to see Intel back when Apple was designing the iPhone to ask for a processor deal. Intel turned Apple down. So Apple went with ARM.

Today, we use ARM processors for everything: game consoles (Nintendo Switch), powerful servers (Amazon and Google), mobile phones, embedded devices, and so forth.

Amazon makes available its new ARM-based processors (Graviton 3). These processors have sophisticated SIMD instructions (SIMD stands for Single Instruction Multiple Data) called SVE (Scalable Vector Extensions). With these instructions, we can greatly accelerate software. It is a form of single-core parallelism, as opposed to the parallelism that one gets by using multiple cores for one task. The SIMD parallelism, when it is applicable, is often far more efficient than multicore parallelism.

Amazon’s Graviton 3 appears to have 32-byte registers, since it is based on the ARM Neoverse V1 design. You can fit eight 32-bit integers in one register. Mainstream ARM processors (e.g., the ones that Apple makes) have SIMD instructions too (NEON), but with shorter registers (16 bytes). Having wider registers, and instructions capable of operating over these wide registers, allows you to reduce the total number of instructions. Executing fewer instructions is a very good way to accelerate code.

To investigate SVE, I looked at a simple problem where you want to remove all negative integers from an array. That is, you read an array containing random signed integers and you want to write only the non-negative integers to an output array. Normal C code might look as follows:

void remove_negatives_scalar(const int32_t *input, 
        int64_t count, int32_t *output) {
  int64_t i = 0;
  int64_t j = 0;
  for(; i < count; i++) {
    if(input[i] >= 0) {
      output[j++] = input[i];
    }
  }
}

 

Replacing this code with new code that relies on special SVE functions made it go much faster (2.5 times faster). At the time, I suggested that my code was probably not nearly optimal. It processed 32 bytes per loop iteration, using 9 instructions. A sizeable fraction of these 9 instructions has to do with managing the loop, and few do the actual number crunching. A reader, Samuel Lee, proposed effectively unrolling my loop. He predicted much better performance (at least when the array is large enough) due to lower loop overhead. I include his proposed code in the appendix below.

Using a Graviton 3 processor and GCC 11 on my benchmark, I get the following results:

                    cycles/int   instr./int   instr./cycle
scalar              9.0          6.000        0.7
branchless scalar   1.8          8.000        4.4
SVE                 0.7          1.125        ~1.6
unrolled SVE        0.4385       0.71962      ~1.6

The new unrolled SVE code uses about 23 instructions to process 128 bytes (or 32 32-bit integers), hence about 0.71875 instructions per integer. That’s roughly 10 times fewer instructions than the scalar versions and, in terms of CPU cycles, roughly 4 times faster than the branchless scalar code.

The number of instructions retired per cycle is about the same for the two SVE functions, and it is relatively low, somewhat higher than 1.5 instructions retired per cycle.

Often the argument in favour of SVE is that it does not require special code to finish the tail of the processing. That is, you can process an entire array with SVE instructions, even if its length is not divisible by the register size (here 8 integers). I find Lee’s code interesting because it illustrates that you might actually need to handle the end of a long array differently, for efficiency reasons.

Overall, I think that we can see that SVE works well for the problem at hand (filtering out 32-bit integers).

Appendix: Samuel Lee’s code.

void remove_negatives(const int32_t *input, int64_t count, int32_t *output) {
  int64_t j = 0;
  const int32_t* endPtr = input + count;
  const uint64_t vl_u32 = svcntw();

  svbool_t all_mask = svptrue_b32();
  while(input <= endPtr - (4*vl_u32))
  {
      svint32_t in0 = svld1_s32(all_mask, input + 0*vl_u32);
      svint32_t in1 = svld1_s32(all_mask, input + 1*vl_u32);
      svint32_t in2 = svld1_s32(all_mask, input + 2*vl_u32);
      svint32_t in3 = svld1_s32(all_mask, input + 3*vl_u32);

      svbool_t pos0 = svcmpge_n_s32(all_mask, in0, 0);
      svbool_t pos1 = svcmpge_n_s32(all_mask, in1, 0);
      svbool_t pos2 = svcmpge_n_s32(all_mask, in2, 0);
      svbool_t pos3 = svcmpge_n_s32(all_mask, in3, 0);

      in0 = svcompact_s32(pos0, in0);
      in1 = svcompact_s32(pos1, in1);
      in2 = svcompact_s32(pos2, in2);
      in3 = svcompact_s32(pos3, in3);

      svst1_s32(all_mask, output + j, in0);
      j += svcntp_b32(all_mask, pos0);
      svst1_s32(all_mask, output + j, in1);
      j += svcntp_b32(all_mask, pos1);
      svst1_s32(all_mask, output + j, in2);
      j += svcntp_b32(all_mask, pos2);
      svst1_s32(all_mask, output + j, in3);
      j += svcntp_b32(all_mask, pos3);

      input += 4*vl_u32;
  }

  int64_t i = 0;
  count = endPtr - input;

  svbool_t while_mask = svwhilelt_b32(i, count);
  do {
    svint32_t in = svld1_s32(while_mask, input + i);
    svbool_t positive = svcmpge_n_s32(while_mask, in, 0);
    svint32_t in_positive = svcompact_s32(positive, in);
    svst1_s32(while_mask, output + j, in_positive);
    i += svcntw();
    j += svcntp_b32(while_mask, positive);
    while_mask = svwhilelt_b32(i, count);
  } while (svptest_any(svptrue_b32(), while_mask));
}

Go generics are not bad

When programming, we often need to write ‘generic’ functions where the exact data type is not important. For example, you might want to write a simple function that sums up numbers.

Go long lacked this notion, but generics were added in version 1.18. So I took them out for a spin.

In Java, generics work well enough as long as you need “generic” containers (arrays, maps) and as long as you stick with functional idioms. But Java will not let me code the way I would prefer. Here is how I would write a function that sums up numbers:

    int sum(int[] v) {
        int summer = 0;
        for(int k = 0; k < v.length; k++) {
            summer += v[k];
        }
        return summer;
    }

What if I need to support various number types? Then I would like to write the following generic function, but Java won’t let me, because Java generics only work with classes, not primitive types.

    // this Java code won't compile
    static <T extends Number>  T sum(T[] v) {
        T summer = 0;
        for(int k = 0; k < v.length; k++) {
            summer += v[k];
        }
        return summer;
    }

Go is not object oriented per se, so there is no ‘Number’ class. However, you can create your own generic ‘interface’, which serves the same purpose. Here is how you solve the same problem in Go:

type Number interface {
  uint | int | float32 | float64
}


func sum[T Number](a []T) T {
    var summer T
    for _, v := range a {
        summer += v
    }
    return summer
}

So, at least in this one instance, Go generics are more expressive than Java generics. What about performance?

If I apply the above code to an array of integers, I get the following tight loop in assembly:

pc11:
        MOVQ    (AX)(DX*8), SI
        INCQ    DX
        ADDQ    SI, CX
        CMPQ    BX, DX
        JGT     pc11

As far as Go is concerned, this is as efficient as it gets.

So far, I am giving an A to Go generics.

Looking at assembly code with gdb

Most of us write code using higher-level languages (Go, C++), but if you want to understand the code that matters to your processor, you need to look at the ‘assembly’ version of your code. Assembly is just a series of instructions.

At first, assembly code looks daunting, and I discourage you from writing sizeable programs in assembly. However, with a little training, you can learn to count instructions and spot branches, which can give you a deeper insight into how your program works. Let me illustrate what you can learn by looking at assembly. Consider the following C++ code:

long f(int x) {
    long array[] = {1,2,3,4,5,6,7,8,999,10};
    return array[x];
}

long f2(int x) {
    long array[] = {1,2,3,4,5,6,7,8,999,10};
    return array[x+1];
}

This code contains two 80-byte arrays, but they are identical. Is this a source of worry? If you look at the assembly code produced by most compilers, you will find that identical constants are generally ‘compressed’: just one copy is stored. If I compile these two functions with gcc or clang using the -S flag, I can plainly see the compression because the array occurs just once. (Do not study every instruction… just scan the code.)

	.text
	.file	"f.cpp"
	.globl	_Z1fi                           // -- Begin function _Z1fi
	.p2align	2
	.type	_Z1fi,@function
_Z1fi:                                  // @_Z1fi
	.cfi_startproc
// %bb.0:
	adrp	x8, .L__const._Z2f2i.array
	add	x8, x8, :lo12:.L__const._Z2f2i.array
	ldr	x0, [x8, w0, sxtw #3]
	ret
.Lfunc_end0:
	.size	_Z1fi, .Lfunc_end0-_Z1fi
	.cfi_endproc
                                        // -- End function
	.globl	_Z2f2i                          // -- Begin function _Z2f2i
	.p2align	2
	.type	_Z2f2i,@function
_Z2f2i:                                 // @_Z2f2i
	.cfi_startproc
// %bb.0:
	adrp	x8, .L__const._Z2f2i.array
	add	x8, x8, :lo12:.L__const._Z2f2i.array
	add	x8, x8, w0, sxtw #3
	ldr	x0, [x8, #8]
	ret
.Lfunc_end1:
	.size	_Z2f2i, .Lfunc_end1-_Z2f2i
	.cfi_endproc
                                        // -- End function
	.type	.L__const._Z2f2i.array,@object  // @__const._Z2f2i.array
	.section	.rodata,"a",@progbits
	.p2align	3
.L__const._Z2f2i.array:
	.xword	1                               // 0x1
	.xword	2                               // 0x2
	.xword	3                               // 0x3
	.xword	4                               // 0x4
	.xword	5                               // 0x5
	.xword	6                               // 0x6
	.xword	7                               // 0x7
	.xword	8                               // 0x8
	.xword	999                             // 0x3e7
	.xword	10                              // 0xa
	.size	.L__const._Z2f2i.array, 80

	.ident	"Ubuntu clang version 14.0.0-1ubuntu1"
	.section	".note.GNU-stack","",@progbits
	.addrsig

However, if you modify the constants even slightly, this compression typically does not happen (e.g., if you append one integer value to one of the arrays, the compiler will emit both arrays in full).
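
For instance, here is a hypothetical variation (my own sketch, not from the original example) where the two arrays differ by a single value; with such a change, the compiler can no longer merge the constants and typically emits two separate 80-byte tables:

long g(int x) {
    long array[] = {1,2,3,4,5,6,7,8,999,10};
    return array[x];
}

long g2(int x) {
    long array[] = {1,2,3,4,5,6,7,8,999,11}; // last constant differs
    return array[x+1];
}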

To assess the performance of a code routine, my first line of attack is always to count instructions. Keeping everything the same, if you can rewrite your code so that it generates fewer instructions, it should be faster. I also like to spot conditional jumps because that is often where your code can suffer, if the branch is hard to predict.

It is easy to convert a whole set of functions to assembly, but it becomes impractical as your projects grow larger. Under Linux, the standard debugger (gdb) is a great tool for looking selectively at the assembly code produced by the compiler. Let us consider my previous blog post, Filtering numbers quickly with SVE on Amazon Graviton 3 processors. In that post, I present several functions which I implemented in a short C++ file. To examine the result, I simply load the compiled binary into gdb:

$ gdb ./filter

Then I can examine functions… such as the remove_negatives function:

(gdb) set print asm-demangle
(gdb) disas remove_negatives
Dump of assembler code for function remove_negatives(int const*, long, int*):
   0x00000000000022e4 <+0>:	mov	x4, #0x0                   	// #0
   0x00000000000022e8 <+4>:	mov	x3, #0x0                   	// #0
   0x00000000000022ec <+8>:	cntw	x6
   0x00000000000022f0 <+12>:	whilelt	p0.s, xzr, x1
   0x00000000000022f4 <+16>:	nop
   0x00000000000022f8 <+20>:	ld1w	{z0.s}, p0/z, [x0, x3, lsl #2]
   0x00000000000022fc <+24>:	cmpge	p1.s, p0/z, z0.s, #0
   0x0000000000002300 <+28>:	compact	z0.s, p1, z0.s
   0x0000000000002304 <+32>:	st1w	{z0.s}, p0, [x2, x4, lsl #2]
   0x0000000000002308 <+36>:	cntp	x5, p0, p1.s
   0x000000000000230c <+40>:	add	x3, x3, x6
   0x0000000000002310 <+44>:	add	x4, x4, x5
   0x0000000000002314 <+48>:	whilelt	p0.s, x3, x1
   0x0000000000002318 <+52>:	b.ne	0x22f8 <remove_negatives(int const*, long, int*)+20>  // b.any
   0x000000000000231c <+56>:	ret
End of assembler dump.

At offset 52, we conditionally jump back to offset 20, so we have a total of 9 instructions in our main loop. In my benchmarks (see the previous blog post), I measured 1.125 instructions per 32-bit word, which is consistent with each iteration processing 8 32-bit words (9/8 = 1.125).

Another way to assess performance is to look at branches. Let us disassemble remove_negatives_scalar, a branchy function:

(gdb) disas remove_negatives_scalar
Dump of assembler code for function remove_negatives_scalar(int const*, long, int*):
   0x0000000000002320 <+0>:	cmp	x1, #0x0
   0x0000000000002324 <+4>:	b.le	0x234c <remove_negatives_scalar(int const*, long, int*)+44>
   0x0000000000002328 <+8>:	add	x4, x0, x1, lsl #2
   0x000000000000232c <+12>:	mov	x3, #0x0                   	// #0
   0x0000000000002330 <+16>:	ldr	w1, [x0]
   0x0000000000002334 <+20>:	add	x0, x0, #0x4
   0x0000000000002338 <+24>:	tbnz	w1, #31, 0x2344 <remove_negatives_scalar(int const*, long, int*)+36>
   0x000000000000233c <+28>:	str	w1, [x2, x3, lsl #2]
   0x0000000000002340 <+32>:	add	x3, x3, #0x1
   0x0000000000002344 <+36>:	cmp	x4, x0
   0x0000000000002348 <+40>:	b.ne	0x2330 <remove_negatives_scalar(int const*, long, int*)+16>  // b.any
   0x000000000000234c <+44>:	ret
End of assembler dump.

We see the branch at offset 24 (the tbnz instruction): it conditionally jumps over the next two instructions. We had an equivalent ‘branchless’ function called remove_negatives_scalar_branchless. Let us see if it is indeed branchless:

(gdb) disas remove_negatives_scalar_branchless
Dump of assembler code for function remove_negatives_scalar_branchless(int const*, long, int*):
   0x0000000000002350 <+0>:	cmp	x1, #0x0
   0x0000000000002354 <+4>:	b.le	0x237c <remove_negatives_scalar_branchless(int const*, long, int*)+44>
   0x0000000000002358 <+8>:	add	x4, x0, x1, lsl #2
   0x000000000000235c <+12>:	mov	x3, #0x0                   	// #0
   0x0000000000002360 <+16>:	ldr	w1, [x0], #4
   0x0000000000002364 <+20>:	str	w1, [x2, x3, lsl #2]
   0x0000000000002368 <+24>:	eor	x1, x1, #0x80000000
   0x000000000000236c <+28>:	lsr	w1, w1, #31
   0x0000000000002370 <+32>:	add	x3, x3, x1
   0x0000000000002374 <+36>:	cmp	x0, x4
   0x0000000000002378 <+40>:	b.ne	0x2360 <remove_negatives_scalar_branchless(int const*, long, int*)+16>  // b.any
   0x000000000000237c <+44>:	ret
End of assembler dump.
(gdb)

Other than the conditional jump produced by the loop (offset 40), the code is indeed branchless.

In this particular instance, with one small binary file, it is easy to find the functions I need. What if I load a large binary with many compiled functions?

Let me examine the benchmark binary from the simdutf library. It has many functions, but let us assume that I am looking for a function that might validate UTF-8 inputs. I can use info functions to find all functions matching a given pattern.

(gdb) info functions validate_utf8
All functions matching regular expression "validate_utf8":

Non-debugging symbols:
0x0000000000012710  event_aggregate simdutf::benchmarks::BenchmarkBase::count_events<simdutf::benchmarks::Benchmark::run_validate_utf8(simdutf::implementation const&, unsigned long)::{lambda()#1}>(simdutf::benchmarks::Benchmark::run_validate_utf8(simdutf::implementation const&, unsigned long)::{lambda()#1}, unsigned long) [clone .constprop.0]
0x0000000000012b54  simdutf::benchmarks::Benchmark::run_validate_utf8(simdutf::implementation const&, unsigned long)
0x0000000000018c90  simdutf::fallback::implementation::validate_utf8(char const*, unsigned long) const
0x000000000001b540  simdutf::arm64::implementation::validate_utf8(char const*, unsigned long) const
0x000000000001cd84  simdutf::validate_utf8(char const*, unsigned long)
0x000000000001d7c0  simdutf::internal::unsupported_implementation::validate_utf8(char const*, unsigned long) const
0x000000000001e090  simdutf::internal::detect_best_supported_implementation_on_first_use::validate_utf8(char const*, unsigned long) const

You see that info functions gives me both the function name and the function address. I am interested in simdutf::arm64::implementation::validate_utf8. At that point, it becomes easier to refer to the function by its address:

(gdb) disas 0x000000000001b540
Dump of assembler code for function simdutf::arm64::implementation::validate_utf8(char const*, unsigned long) const:
   0x000000000001b540 <+0>:	stp	x29, x30, [sp, #-144]!
   0x000000000001b544 <+4>:	adrp	x0, 0xa0000
   0x000000000001b548 <+8>:	cmp	x2, #0x40
   0x000000000001b54c <+12>:	mov	x29, sp
   0x000000000001b550 <+16>:	ldr	x0, [x0, #3880]
   0x000000000001b554 <+20>:	mov	x5, #0x40                  	// #64
   0x000000000001b558 <+24>:	movi	v22.4s, #0x0
   0x000000000001b55c <+28>:	csel	x5, x2, x5, cs  // cs = hs, nlast
   0x000000000001b560 <+32>:	ldr	x3, [x0]
   0x000000000001b564 <+36>:	str	x3, [sp, #136]
   0x000000000001b568 <+40>:	mov	x3, #0x0                   	// #0
   0x000000000001b56c <+44>:	subs	x5, x5, #0x40
   0x000000000001b570 <+48>:	b.eq	0x1b7b8 <simdutf::arm64::implementation::validate_utf8(char const*, unsigned long) const+632>  // b.none
   0x000000000001b574 <+52>:	adrp	x0, 0x86000
   0x000000000001b578 <+56>:	adrp	x4, 0x86000
   0x000000000001b57c <+60>:	add	x6, x0, #0x2f0
   0x000000000001b580 <+64>:	adrp	x0, 0x86000
...

I have cut short the output because it is too long. When single functions become large, I find it more convenient to redirect the output to a file which I can process elsewhere.

gdb -q ./benchmark -ex "set pagination off" -ex "set print asm-demangle" -ex "disas 0x000000000001b540" -ex quit > gdbasm.txt

Sometimes I am just interested in doing some basic statistics such as figuring out which instructions are used by the function:

$ gdb -q ./benchmark -ex "set pagination off" -ex "set print asm-demangle" -ex "disas 0x000000000001b540" -ex quit | awk '{print $3}' | sort |uniq -c | sort -r | head
     32 and
     24 tbl
     24 ext
     18 cmhi
     17 orr
     16 ushr
     16 eor
     14 ldr
     13 mov
     10 movi

We see that the most common instruction in this code is and. It reassures me that the code was properly compiled. I can look up each of the generated instructions, and they all seem like adequate choices given the code that I wrote.

The general lesson is that looking at the generated assembly is not so difficult, and, with a little training, it can make you a better programmer.

Tip: It helps sometimes to disable pagination (set pagination off).

Filtering numbers quickly with SVE on Amazon Graviton 3 processors

I have had access to Amazon’s latest ARM processors (Graviton 3) for a few weeks. To my knowledge, these are the first widely available processors supporting the Scalable Vector Extension (SVE).

SVE is part of the Single Instruction/Multiple Data paradigm: a single instruction can operate on many values at once. Thus, for example, you may add N integers with N other integers using a single instruction.

What is unique about SVE is that you work with vectors of values, but without knowing specifically how long the vectors are. This is in contrast with conventional SIMD instructions (ARM NEON, x64 SSE, AVX) where the size of the vector is hardcoded. Not only do you write your code without knowing the size of the vector, but even the compiler may not know. This means that the same binary executable could work over different blocks (vectors) of data, depending on the processor. The benefit of this approach is that your code might get magically much more efficient on new processors.

It is a daring proposal. It is possible to write code that would work on one processor but fail on another processor, even though we have the same instruction set.
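
Concretely, the vector length is not a compile-time constant: your code queries it at runtime. Here is a tiny sketch of mine (not part of the benchmark) using the standard ACLE intrinsics; on Graviton 3 it should report 32 bytes (8 lanes of 32 bits), while a hypothetical 512-bit SVE machine would report 64 bytes (16 lanes).

#include <arm_sve.h>
#include <cstdio>

int main() {
  // svcntb() returns the number of bytes per SVE vector,
  // svcntw() the number of 32-bit lanes; both are runtime values.
  printf("SVE vector length: %llu bytes (%llu 32-bit lanes)\n",
         (unsigned long long)svcntb(), (unsigned long long)svcntw());
  return 0;
}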

But is SVE on Graviton 3 processors fast? To test it out, I wrote a small benchmark. Suppose you want to prune all of the negative integers out of an array. A textbook implementation might look as follows:

void remove_negatives_scalar(const int32_t *input, 
        int64_t count, int32_t *output) {
  int64_t i = 0;
  int64_t j = 0;
  for(; i < count; i++) {
    if(input[i] >= 0) {
      output[j++] = input[i];
    }
  }
}

However, the compiler will probably generate a branch, and if your input has a random distribution, this can be inefficient. To help matters, you may rewrite your code in a manner that is more likely to produce a branchless binary:

  for(; i < count; i++) {
    output[j] = input[i];
    j += (input[i] >= 0);
  }

Though it looks less efficient (because every input value is written out), such a branchless version is often faster in practice.
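
For reference, here is a sketch of what the full branchless routine might look like (the remove_negatives_scalar_branchless function that gets disassembled in the gdb post above); it simply combines the scalar signature with the loop fragment shown just before.

#include <stdint.h>

void remove_negatives_scalar_branchless(const int32_t *input,
        int64_t count, int32_t *output) {
  int64_t j = 0;
  for (int64_t i = 0; i < count; i++) {
    output[j] = input[i];      // always write the value
    j += (input[i] >= 0);      // advance the output index only for non-negative values
  }
}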

I ported this last implementation to SVE using ARM intrinsic functions. At each step, we load a vector of integers (svld1_s32), compare them with zero (svcmpge_n_s32), remove the negative values (svcompact_s32) and store the result (svst1_s32). During most iterations, we have a full vector of integers. During the last iteration, some values will be missing, but we simply ignore them with the while_mask variable, which indicates which integer values are ‘active’. The whole sequence is written entirely with SVE instructions: there is no need to process the end of the sequence separately, as would be required with conventional SIMD instruction sets.

#include <arm_sve.h>
void remove_negatives(const int32_t *input, int64_t count, int32_t *output) {
  int64_t i = 0;
  int64_t j = 0;
  svbool_t while_mask = svwhilelt_b32(i, count);
  do {
    svint32_t in = svld1_s32(while_mask, input + i);
    svbool_t positive = svcmpge_n_s32(while_mask, in, 0);
    svint32_t in_positive = svcompact_s32(positive, in);
    svst1_s32(while_mask, output + j, in_positive);
    i += svcntw();
    j += svcntp_b32(while_mask, positive);
    while_mask = svwhilelt_b32(i, count);
  } while (svptest_any(svptrue_b32(), while_mask));
}

Using a Graviton 3 processor and GCC 11 on my benchmark, I get the following results:

                    cycles/integer   instructions/integer   instructions/cycle
scalar              9.0              6.000                  0.7
branchless scalar   1.8              8.000                  4.4
SVE                 0.7              1.125                  1.6

The SVE code uses far fewer instructions. In this particular test, SVE is 2.5 times faster than the best competitor (branchless scalar). Furthermore, it might use even fewer instructions on future processors, as the underlying registers get wider.

Of course, my code is surely suboptimal, but I am pleased that the first SVE benchmark I wrote turns out so well. It suggests that SVE might do well in practice.

Credit: Thanks to Robert Clausecker for the related discussion.

Memory-level parallelism: Intel Ice Lake versus Amazon Graviton 3

One of the most expensive operations in a processor and memory system is a random memory access. If you try to read a value in memory, it can take tens of nanoseconds on average, or more. If you are waiting on the memory content before taking further action, your processor is effectively stalled. While our processors have generally become faster, memory latency has not improved quickly, and the latency can even be higher on some of the most expensive processors. For this reason, a modern processor core can issue multiple memory requests at a time. That is, the processor tries to load one memory element, keeps going, and can issue another load (even though the previous load is not completed), and so forth. Not long ago, Intel processor cores could support about 10 independent memory requests at a time. I benchmarked some small ARM cores that could barely issue 4.

Today, the story is much nicer. The powerful processor cores can all sustain many memory requests. They support better memory-level parallelism.

To measure memory-level parallelism, we use a pointer-chasing scheme where a C program loads a memory address which contains the next memory address to load, and so forth. If a processor could only sustain a single memory request at a time, such a test would use all of its available resources. We then modify this test so that we have two interleaved pointer-chasing chains, then three, then four, and so forth. We call each interleaved pointer-chasing chain a ‘lane’.
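
Here is a minimal sketch of the idea (my own illustration, not the actual benchmark, and it assumes at most 64 lanes): the array next holds a precomputed random cycle, each lane follows its own chain, and the loads of different lanes are independent, so the processor can issue them concurrently.

#include <stdint.h>
#include <stddef.h>

// 'next' is a precomputed random cycle over the array indexes; each lane
// chases pointers independently, so up to 'lanes' cache misses can be in flight.
uint64_t chase(const uint32_t *next, size_t lanes, size_t steps) {
  uint32_t pos[64];
  for (size_t l = 0; l < lanes; l++) {
    pos[l] = (uint32_t)l;                 // start each lane at a distinct index
  }
  for (size_t s = 0; s < steps; s++) {
    for (size_t l = 0; l < lanes; l++) {
      pos[l] = next[pos[l]];              // dependent load within a lane, independent across lanes
    }
  }
  uint64_t sum = 0;
  for (size_t l = 0; l < lanes; l++) {
    sum += pos[l];                        // keep the result so the loops are not optimized away
  }
  return sum;
}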

As you add more lanes, you should see better performance, up to a maximum. The faster the performance rises as you add lanes, the more memory-level parallelism your processor core has. The best Amazon (AWS) servers come with either Intel Ice Lake or Amazon’s very own Graviton 3. I benchmarked both, using one core of each type. The Intel processor has the upper hand in absolute terms: we achieve a maximal bandwidth of 12 GB/s compared to 9 GB/s for the Graviton 3. The one-lane latency is 120 ns for the Graviton 3 server versus 90 ns for the Intel processor. The Graviton 3 appears to sustain about 19 simultaneous loads per core against about 25 for the Intel processor.

Thus Intel wins, but the Graviton 3 has nice memory-level parallelism… much better than the older Intel chips (e.g., Skylake) and much better than the early attempts at ARM-based servers.

The source code is available. I am using Ubuntu 22.04 and GCC 11. All machines have small page sizes (4kB). I chose not to tweak the page size for these experiments.

Prices for Graviton 3 are US$2.32/hour (64 vCPUs) compared to US$2.448/hour for Ice Lake. So Graviton 3 appears to be marginally cheaper than the Intel chips.

When I write these posts, comparing one product to another, there is always hate mail afterward. So let me be blunt. I love all chips equally.

If you want to know which system is best for your application: run benchmarks. Comprehensive benchmarks found that Amazon’s ARM hardware could be advantageous for storage-intensive tasks.

Further reading: I enjoyed Graviton 3: First Impressions.

Data structure size and cache-line accesses

On many systems, memory is accessed in fixed blocks called “cache lines”. On Intel systems, the cache line spans 64 bytes. That is, if you access memory at byte address 64, 65… up to 127… it is all on the same cache line. The next cache line starts at address 128, and so forth.

In turn, data in software is often organized in data structures having a fixed size (in bytes). We often organize these data structures in arrays. In general, a data structure may reside on more than one cache line. For example, if I put a 5-byte data structure at byte address 127, then it will occupy the last byte of one cache line, and four bytes in the next cache line.

When loading a data structure from memory, a naive model of the cost is the number of cache lines that are accessed. If your data structure spans 32 bytes or 64 bytes, and you have aligned the first element of an array, then you only ever need to access one cache line every time you load a data structure.

What if my data structure spans 5 bytes? Suppose that I pack such structures in an array, using only 5 bytes per instance. If I pick one at random… how many cache lines do I touch? As you might expect, the answer is barely more than 1 cache line on average (1.0625, as the table below shows).

Let us generalize.

Suppose that my data structure spans z bytes. Let g be the greatest common divisor of z and 64. Suppose that you load one instance of the data structure at random from a large array. In general, the expected number of additional cache-line accesses is (z – g)/64, so the expected total number of cache-line accesses is one more: 1 + (z – g)/64. You can check that it works for z = 32: g is then 32 and (z – g)/64 = (32 – 32)/64 = 0.
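
If you want to check the numbers yourself, here is a small sketch of mine (not part of the original post) that evaluates 1 + (z – g)/64 for every size up to 64 bytes:

#include <cstdio>
#include <numeric>   // std::gcd (C++17)

int main() {
  for (int z = 1; z <= 64; z++) {
    int g = std::gcd(z, 64);
    double expected = 1.0 + (z - g) / 64.0;  // expected cache-line accesses
    printf("%2d %g\n", z, expected);
  }
  return 0;
}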

I created the following table for all data structures no larger than a cache line. The worst-case scenario is a data structure spanning 63 bytes: you then almost always touch two cache lines.

I find it interesting that you get the same expected number of cache-line accesses for data structures of sizes 17, 20 and 24 bytes. It does not follow that the computational cost of a data structure spanning 24 bytes is the same as the cost of one spanning 17 bytes. Everything else being identical, a smaller data structure should fare better, as more instances fit in CPU cache.

size of data structure (z)    expected cache-line accesses
1 1.0
2 1.0
3 1.03125
4 1.0
5 1.0625
6 1.0625
7 1.09375
8 1.0
9 1.125
10 1.125
11 1.15625
12 1.125
13 1.1875
14 1.1875
15 1.21875
16 1.0
17 1.25
18 1.25
19 1.28125
20 1.25
21 1.3125
22 1.3125
23 1.34375
24 1.25
25 1.375
26 1.375
27 1.40625
28 1.375
29 1.4375
30 1.4375
31 1.46875
32 1.0
33 1.5
34 1.5
35 1.53125
36 1.5
37 1.5625
38 1.5625
39 1.59375
40 1.5
41 1.625
42 1.625
43 1.65625
44 1.625
45 1.6875
46 1.6875
47 1.71875
48 1.5
49 1.75
50 1.75
51 1.78125
52 1.75
53 1.8125
54 1.8125
55 1.84375
56 1.75
57 1.875
58 1.875
59 1.90625
60 1.875
61 1.9375
62 1.9375
63 1.96875
64 1.0

Thanks to Maximilian Böther for the motivation of this post.

Parsing JSON faster with Intel AVX-512

Many recent Intel processors benefit from a new family of instructions called AVX-512. These instructions operate over wide registers (up to 512 bits) and follow the Single instruction, multiple data (SIMD) paradigm. These new AVX-512 instructions allow you to break some speed records, such as decoding base64 data at the speed of a memory copy.

Most modern processors have SIMD instructions. The AVX-512 instructions are wider (more bits per register), but that is not necessarily their main appeal. If you merely take existing SIMD algorithms and apply them to AVX-512, you will probably not benefit as much as you would like. It is true that wider registers are beneficial, but in superscalar processors (processors that can issue several instructions per cycle), the number of instructions you can issue per cycle matters as much if not more. Typically, 512-bit AVX-512 instructions are more expensive and the processor can issue fewer of them per cycle. To fully benefit from AVX-512, you need to carefully design your code. It is made more challenging by the fact that Intel is releasing these instructions progressively: the recent processors have many new powerful AVX-512 instructions that were not initially available. Thus, AVX-512 is not “one thing” but rather a family of instruction sets.

Furthermore, early implementations of the AVX-512 instructions often led to measurable downclocking: the processor would reduce its frequency for a time following the use of these instructions. Thankfully, the latest Intel processors to support AVX-512 (Rocket Lake and Ice Lake) have done away with this systematic frequency throttling, and it is easy to detect these recent processors at runtime.

Amazon’s powerful Intel servers are based on Ice Lake. Thus, if you are deploying your software applications to the cloud on powerful servers, you probably have pretty good support for AVX-512 already!

A few years ago, we released a really fast C++ JSON parser called simdjson. It is somewhat unique as a parser in that it relies critically on SIMD instructions. On several metrics, it was and still is the fastest JSON parser, though other interesting competitors have emerged.

Initially, I had written a quick and dirty AVX-512 kernel for simdjson. We never merged it and after a time, I just deleted it. I then forgot about it.

Thanks to contributions from talented Intel engineers (Fangzheng Zhang and Weiqiang Wan) as well as indirect contributions from readers of this blog (Kim Walisch and Jatin Bhateja), we produced a new and shiny AVX-512 kernel. As always, keep in mind that simdjson is the work of many people, a whole community of dozens of contributors. I must express my gratitude to Fangzheng Zhang, who first wrote to me about an AVX-512 port.

We just released it in the latest version of simdjson. It breaks new speed records.

Let us consider an interesting test where you seek to scan a whole file (spanning kilobytes) to find a value corresponding to some identifier. In simdjson, the code is as follows:

    auto doc = parser.iterate(json);
    for (auto tweet : doc.find_field("statuses")) {
      if (uint64_t(tweet.find_field("id")) == find_id) {
        result = tweet.find_field("text");
        return true;
      }
    }
    return false;

On a Tiger Lake processor, with GCC 11, I get a 60% increase in processing speed, expressed as the number of input bytes processed per second.

simdjson (512-bit SIMD): new 7.4 GB/s
simdjson (256-bit SIMD): old 4.6 GB/s

The speed gain is so large because, in this task, we mostly just read the data and do relatively little secondary processing: we do not build a tree out of the JSON data, and we do not create a data structure.

The simdjson library has a minify function which simply strips unnecessary spaces from the input. Perhaps surprisingly, we are more than twice as fast as the previous baseline:

simdjson (512-bit SIMD): new 12 GB/s
simdjson (256-bit SIMD): old 4.3 GB/s

Another reasonable benchmark is to fully parse the input into a DOM tree with full validation. Parsing a standard JSON file (twitter.json), I get nearly a 30% gain:

simdjson (512-bit SIMD): new 3.6 GB/s
simdjson (256-bit SIMD): old 2.8 GB/s

While 30% may sound unexciting, we are starting from a fast baseline.

Could we do better? Assuredly. There are many AVX-512 instructions that we are not using yet. We do not use ternary Boolean operations (vpternlog). We are not using the new powerful shuffle functions (e.g., vpermt2b). We have an example of coevolution: better hardware requires new software which, in turn, makes the hardware shine.

Of course, to get these new benefits, you need recent Intel processors with adequate AVX-512 support and, evidently, a relatively recent C++ compiler. Some of the recent laptop-class Intel processors do not support AVX-512, but you should be fine if you rely on AWS and have big Intel nodes.

You can grab our release directly or wait for it to reach one of the standard package managers (MSYS2, conan, vcpkg, brew, debian, FreeBSD, etc.).

Avoid exception throwing in performance-sensitive code

There are various ways in software to handle error conditions. In C or Go, one returns error codes. Other programming languages, like C++ or Java, prefer to throw exceptions. One benefit of using exceptions is that they keep your code mostly clean, since the error-handling code is often kept separate.

It is debatable whether handling exceptions is better than dealing with error codes. I will happily use one or the other.

What I will object to, however, is the use of exceptions for control flow. It is fine to throw an exception when a file cannot be opened, unexpectedly. But you should not use exceptions to branch on the type of a value.

Let me illustrate.

Suppose that my code expects integers to be always positive. I might then have a function that checks such a condition:

#include <stdexcept>

int get_positive_value(int x) {
    if(x < 0) { throw std::runtime_error("it is not positive!"); }
    return x;
}

So far, so good. I am assuming that the exception is normally never thrown. It gets thrown if I have some kind of error.

If I want to sum the absolute values of the integers contained in an array, the following branching code is fine:

    int sum = 0;
    for (int x : a) {
        if(x < 0) {
            sum += -x;
        } else {
            sum += x;
        }
    }

Unfortunately, I often see solutions abusing exceptions:

    int sum = 0;
    for (int x : a) {
        try {
            sum += get_positive_value(x);
        } catch (...) {
            sum += -x;
        }
    }

The latter is obviously ugly and hard-to-maintain code. What is more, it can be highly inefficient. To illustrate, I wrote a small benchmark over random arrays containing a few thousand elements. I use the LLVM clang 12 compiler on a Skylake processor. The normal code is 10,000 times faster in my tests!

normal code 0.05 ns/value
exception 500 ns/value

Your results will differ but it is generally the case that using exceptions for control flow leads to suboptimal performance. And it is ugly too!

Faster bitset decoding using Intel AVX-512

I refer to “bitset decoding” as the action of finding the positions of the 1s in a stream of bits. For example, given the integer value 0b11011 (or 27 in decimal), I want to find 0, 1, 3, 4.

In my previous post, Fast bitset decoding using Intel AVX-512, I explained how you can use Intel’s new instructions, from the AVX-512 family, to decode bitsets faster. The AVX-512 instructions, as the name implies, often can process 512-bit (or 64-byte) registers.

At least two readers (Kim Walisch and Jatin Bhateja) pointed out that you could do better if you used the very latest AVX-512 instructions available on Intel processors with the Ice Lake or Tiger Lake microarchitectures. These processors support VBMI2 instructions, including the vpcompressb instruction and its corresponding intrinsics (such as _mm512_maskz_compress_epi8). This instruction takes a 64-bit word and a 64-byte register, and it outputs (in a packed manner) only the bytes corresponding to set bits in the 64-bit word. Thus, if you use the value 0b11011 as the 64-bit word and you provide a 64-byte register with the values 0,1,2,3,4… you will get 0,1,3,4 as a result. That is, the instruction effectively does the decoding already, with the caveat that it only writes bytes. In practice, you often want the indexes as 32-bit integers. Thankfully, you can go from packed bytes to packed 32-bit integers easily. One possibility is to extract successive 128-bit subwords (using the vextracti32x4 instruction or its intrinsic _mm512_extracti32x4_epi32) and expand them (using the vpmovzxbd instruction or its intrinsic _mm512_cvtepu8_epi32). You get the following result:

void vbmi2_decoder_cvtepu8(uint32_t *base_ptr, uint32_t &base,
                                           uint32_t idx, uint64_t bits) {
  __m512i indexes = _mm512_maskz_compress_epi8(bits, _mm512_set_epi32(
    0x3f3e3d3c, 0x3b3a3938, 0x37363534, 0x33323130,
    0x2f2e2d2c, 0x2b2a2928, 0x27262524, 0x23222120,
    0x1f1e1d1c, 0x1b1a1918, 0x17161514, 0x13121110,
    0x0f0e0d0c, 0x0b0a0908, 0x07060504, 0x03020100
  ));
  __m512i t0 = _mm512_cvtepu8_epi32(_mm512_castsi512_si128(indexes));
  __m512i t1 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 1));
  __m512i t2 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 2));
  __m512i t3 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 3));
  __m512i start_index = _mm512_set1_epi32(idx);
  
  _mm512_storeu_si512(base_ptr + base, _mm512_add_epi32(t0, start_index));
  _mm512_storeu_si512(base_ptr + base + 16, _mm512_add_epi32(t1, start_index));
  _mm512_storeu_si512(base_ptr + base + 32, _mm512_add_epi32(t2, start_index));
  _mm512_storeu_si512(base_ptr + base + 48, _mm512_add_epi32(t3, start_index));
  
  base += _popcnt64(bits);
}

If you try to use this approach unconditionally, you will write 256 bytes of data for each 64-bit word you decode. In practice, if your word contains mostly just zeroes, you will be writing a lot of zeroes.

Branching is bad for performance, but only when it is hard to predict. However, it should be rather easy for the processor to predict whether we have fewer than 16 bits set in the provided word, or fewer than 32 bits, and so forth. So some level of branching is adequate. The following function should do:

void vbmi2_decoder_cvtepu8_branchy(uint32_t *base_ptr, uint32_t &base,
                                           uint32_t idx, uint64_t bits) {
  if(bits == 0) { return; }

  __m512i indexes = _mm512_maskz_compress_epi8(bits, _mm512_set_epi32(
    0x3f3e3d3c, 0x3b3a3938, 0x37363534, 0x33323130,
    0x2f2e2d2c, 0x2b2a2928, 0x27262524, 0x23222120,
    0x1f1e1d1c, 0x1b1a1918, 0x17161514, 0x13121110,
    0x0f0e0d0c, 0x0b0a0908, 0x07060504, 0x03020100
  ));
  __m512i start_index = _mm512_set1_epi32(idx);
  
  int count = _popcnt64(bits);
  __m512i t0 = _mm512_cvtepu8_epi32(_mm512_castsi512_si128(indexes));
  _mm512_storeu_si512(base_ptr + base, _mm512_add_epi32(t0, start_index));
  
  if(count > 16) {   
    __m512i t1 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 1));
    _mm512_storeu_si512(base_ptr + base + 16, _mm512_add_epi32(t1, start_index));
    if(count > 32) {   
      __m512i t2 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 2));
      _mm512_storeu_si512(base_ptr + base + 32, _mm512_add_epi32(t2, start_index));
      if(count > 48) {   
        __m512i t3 = _mm512_cvtepu8_epi32(_mm512_extracti32x4_epi32(indexes, 3));
        _mm512_storeu_si512(base_ptr + base + 48, _mm512_add_epi32(t3, start_index));
      }
    }
  }
  base += count;
}

The results will vary depending on the input data, but I already have a realistic case with moderate density (about 10% of the bits are set) that I am reusing. Using a Tiger Lake processor and GCC 9, I get the following timings per set value on a sizeable input:

                          nanoseconds/value
basic                     0.95
unrolled (simdjson)       0.74
AVX-512 (previous post)   0.57
AVX-512 (new)             0.29

My code is available.

That is rather remarkable performance, especially considering that we do not need any large table or sophisticated algorithm. All we need are fancy AVX-512 instructions.

Fast bitset decoding using Intel AVX-512

In software, we often use ‘bitsets’: arrays of bits that represent sets of small integers. It is a concise and fast data structure. Sometimes you want to go from the bitset (e.g., 0b110011) to the integers (e.g., 0, 1, 4, 5 in this instance). We consider the case of ‘average’ density (e.g., more than a handful of bits set per 64-bit word).

You could check the value of each bit, but a better option is to use the fact that processors have fast instructions to compute the number of “trailing zeros”. Given 0b10001100100, this instruction would give you 2. This gives you the first index. Then you need to clear the least significant set bit using code such as word & (word - 1).

  while (word != 0) {
    result[i] = trailingzeroes(word);
    word = word & (word - 1);
    i++;
  }
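
As a self-contained sketch (my own, assuming trailingzeroes maps to a compiler builtin such as __builtin_ctzll), the loop might look like this:

#include <stdint.h>
#include <stddef.h>

// Writes the positions of the set bits of 'word' to 'result' and
// returns how many positions were written.
size_t decode_bitset(uint64_t word, uint32_t *result) {
  size_t i = 0;
  while (word != 0) {
    result[i] = (uint32_t)__builtin_ctzll(word); // index of the lowest set bit
    word = word & (word - 1);                    // clear the lowest set bit
    i++;
  }
  return i;
}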

The problem with this code is that the number of iterations might be hard to predict, so your processor may often mispredict the number of branches. A misprediction is expensive on modern processors. You can do better by unrolling this loop; I describe how in an earlier blog post.

Intel’s latest processors have new instruction sets (AVX-512) that are quite powerful. In this instance, they allow us to do the decoding without any branch and with few instructions. The key is the vpcompressd instruction and its corresponding C/C++ Intel intrinsic (_mm512_mask_compressstoreu_epi32). Given up to 16 integers, it selects only the ones corresponding to set bits in a mask. Thus, given the array 0,1,2,3…,15 and the bitset 0b111010, you would generate the output 1,3,4,5. The instruction does not tell you how many values are written out, but you can just count the number of set bits, and conveniently, we have a fast instruction for that, available through the _popcnt64 function. So the following code sequence would process a 16-bit mask and write the result out to a pointer (base_ptr).

  __m512i base_index = _mm512_setr_epi32(0,1,2,3,4,5,
    6,7,8,9,10,11,12,13,14,15);

  _mm512_mask_compressstoreu_epi32(base_ptr, 
    mask, base_index);

  base_ptr += _popcnt64(mask);

The full function which processes 64-bit masks is somewhat longer, but it is essentially just 4 copies of the simple sequence.

void avx512_decoder(uint32_t *base_ptr, uint32_t &base,
    uint32_t idx, uint64_t bits) {
  __m512i start_index = _mm512_set1_epi32(idx);
  __m512i base_index = _mm512_setr_epi32(0,1,2,3,4,5,
    6,7,8,9,10,11,12,13,14,15);
  base_index = _mm512_add_epi32(base_index, start_index);
  uint16_t mask;
  mask = bits & 0xFFFF;
  _mm512_mask_compressstoreu_epi32(base_ptr + base, 
    mask, base_index);
  base += _popcnt64(mask);
  const __m512i constant16 = _mm512_set1_epi32(16);
  base_index = _mm512_add_epi32(base_index, constant16);
  mask = (bits>>16) & 0xFFFF;
  _mm512_mask_compressstoreu_epi32(base_ptr + base, 
     mask, base_index);
  base += _popcnt64(mask);
  base_index = _mm512_add_epi32(base_index, constant16);
  mask = (bits>>32) & 0xFFFF;
  _mm512_mask_compressstoreu_epi32(base_ptr + base, 
    mask, base_index);
  base += _popcnt64(mask);
  base_index = _mm512_add_epi32(base_index, constant16);
  mask = bits>>48;
  _mm512_mask_compressstoreu_epi32(base_ptr + base, 
    mask, base_index);
  base += _popcnt64(mask);
}

There is a downside to using AVX-512: for a short time, the processor might reduce its frequency when wide (512-bit) registers are used. You can still use the same instructions on shorter registers (e.g., use _mm256_mask_compressstoreu_epi32 instead of _mm512_mask_compressstoreu_epi32), but in this instance, it doubles the number of instructions. A sketch of such a variant follows.
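For illustration, here is a possible sketch of the 256-bit variant (an assumption on my part, not code from this post): it relies on the same compress-store idea, but consumes the 64-bit word in 8-bit slices, so it needs twice as many compress stores, consistent with the doubling mentioned above. It requires AVX-512F with AVX-512VL.

#include <immintrin.h>
#include <stdint.h>

void avx512vl_decoder(uint32_t *base_ptr, uint32_t &base,
    uint32_t idx, uint64_t bits) {
  __m256i base_index = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
  base_index = _mm256_add_epi32(base_index, _mm256_set1_epi32(idx));
  const __m256i constant8 = _mm256_set1_epi32(8);
  for (int k = 0; k < 8; k++) {
    __mmask8 mask = (__mmask8)(bits >> (8 * k)); // next 8 bits of the word
    _mm256_mask_compressstoreu_epi32(base_ptr + base, mask, base_index);
    base += _popcnt64(mask);
    base_index = _mm256_add_epi32(base_index, constant8);
  }
}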

On a Skylake-X processor with GCC, my benchmark reveals that the new AVX-512 approach is superior even with frequency throttling. Compared to the basic approach above, the AVX-512 approach uses 45% fewer cycles and 33% less time. We report the number of instructions, cycles and nanoseconds per value set in the bitset. The AVX-512 approach generates no branch mispredictions.

instructions/value cycles/value nanoseconds/value
basic 9.3 4.4 1.5
unrolled (simdjson) 9.9 3.6 1.2
AVX-512 6.2 2.4 1.0

My code is available.

The AVX-512 routine has record-breaking speed. It is also possible that the routine could be improved.

Removing characters from strings faster with AVX-512

In software, it is a common problem to want to remove specific characters from a string. To make the problem precise, let us consider the removal of all ASCII control characters and spaces. In practice, it means the removal of all byte values smaller than or equal to 32. A straightforward scalar version follows.
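For reference, a simple scalar version might look as follows (a sketch of my own, not necessarily the ‘despace32’ function benchmarked below): it copies every byte whose value exceeds 32 and returns the new length.

#include <stddef.h>
#include <stdint.h>

size_t scalar_despace(char *bytes, size_t howmany) {
  size_t pos = 0;
  for (size_t i = 0; i < howmany; i++) {
    uint8_t c = (uint8_t)bytes[i];
    if (c > 32) { // keep everything but control characters and spaces
      bytes[pos++] = bytes[i];
    }
  }
  return pos; // new length of the string
}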

I covered a related problem before: the removal of all spaces from strings. At the time, I concluded that the fastest approach might be to use SIMD instructions coupled with a large lookup table. A SIMD instruction can operate on many words at once: most commodity processors have instructions able to operate on 16 bytes at a time. Thus, using a single instruction, you can compare 16 consecutive bytes and identify the location of all spaces, for example. Once that is done, you must somehow remove the unwanted characters. Most instruction sets do not have instructions for that purpose; however, x64 processors have an instruction that can move bytes around as long as you have a precomputed shuffle mask (pshufb). ARM NEON has similar instructions as well. Thus you proceed in the following manner:

  1. Identify all unwanted characters in a block (e.g., 16 bytes).
  2. Lookup a shuffle mask in a large table.
  3. Remove the unwanted bytes by shuffling the remaining bytes with the mask.

Such an approach is fast, but it requires potentially large tables. Indeed, if you load 16 bytes at a time, you need a table with 65536 shuffle masks. Storing such large tables is not very practical.

Recent Intel processors have handy new instructions that do exactly what we want: they prune out unwanted bytes (vpcompressb). It requires a recent processor with AVX-512 VBMI2 such as Ice Lake, Rocket Lake, Alder Lake, or Tiger Lake processors. Intel makes it difficult to figure out which features are available on which processor, so you need to do some research to find out if your favorite Intel processor supports the desired instructions. AMD processors do not support VBMI2.

On top of the new instructions, AVX-512 also allows you to process the data in larger blocks (64 bytes). Using Intel intrinsics, the code is almost readable. I create a register containing only the space byte, and I then iterate over my data, each time loading 64 bytes. I compare them with the space: I only want to keep values that are larger (as byte values) than the space. I then call the compress instruction, which takes out the unwanted bytes. I read at regular intervals (every 64 bytes) but I write a variable number of bytes, so I advance the write pointer by the number of set bits in my mask: I count those using a fast instruction (popcnt).

  __m512i spaces = _mm512_set1_epi8(' ');
  size_t i = 0;
  for (; i + 63 < howmany; i += 64) {
    __m512i x = _mm512_loadu_si512(bytes + i);
    // unsigned comparison: keep every byte value greater than 32 (the space),
    // including byte values above 0x7f
    __mmask64 notwhite = _mm512_cmpgt_epu8_mask(x, spaces);
    _mm512_mask_compressstoreu_epi8(bytes + pos, notwhite, x);
    pos += _popcnt64(notwhite);
  }
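The loop above only handles complete 64-byte blocks. One way to handle the remaining tail (fewer than 64 bytes) is with a masked load; the following is a sketch under my own assumptions, not necessarily what the despacer library does:

  if (i < howmany) {
    // howmany - i is between 1 and 63 at this point
    __mmask64 valid = ((uint64_t)1 << (howmany - i)) - 1;
    __m512i x = _mm512_maskz_loadu_epi8(valid, bytes + i);
    __mmask64 notwhite = _mm512_cmpgt_epu8_mask(x, spaces) & valid;
    _mm512_mask_compressstoreu_epi8(bytes + pos, notwhite, x);
    pos += _popcnt64(notwhite);
  }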

I have updated the despacer library and its benchmark. With a Tiger Lake processor (3.3 GHz) and GCC 9 (Linux), I get the following results:

function speed
conventional (despace32) 0.4 GB/s
SIMD with large lookup (sse42_despace_branchless) 2.0 GB/s
AVX-512 (vbmi2_despace) 8.5 GB/s

Your results will differ. Nevertheless, we find that AVX-512 is highly useful for this task and the corresponding function surpasses all the alternatives. It is not merely the raw speed; it is also the fact that we do not require a lookup table and that the code does not rely on branch prediction: there are no hard-to-predict branches that might harm your speed in practice.

The result should not surprise us since, for the first time, we almost have direct hardware support for the operation (“pruning unwanted bytes”). The downside is that few processors support the desired instruction set. And it is not clear whether AMD will ever support these fancy instructions.

I should conclude with Linus Torvalds’ take regarding AVX-512:

I hope AVX-512 dies a painful death, and that Intel starts fixing real problems instead of trying to create magic instructions to then create benchmarks that they can look good on

I cannot predict what will happen to Intel or AVX-512, but if the past is any indication, specialized and powerful instructions have a bright future.

An overview of version control in programming

In practice, computer code is constantly being transformed. At the beginning of a project, the computer code often takes the form of sketches that are gradually refined. Later, the code can be optimized or corrected, sometimes for many years.

Soon enough, programmers realized that they needed not only to store files, but also to keep track of the different versions of a given file. It is no accident that we are all familiar with the fact that software is often associated with versions. It is necessary to distinguish the different versions of the computer code in order to keep track of updates.

We might think that after developing a new version of a piece of software, the previous versions could be discarded. However, it is practical to keep a copy of each version of the computer code for several reasons:

  1. A change in the code that we thought was appropriate may cause problems: we need to be able to go back quickly.
  2. Sometimes different versions of the computer code are used at the same time and it is not possible for all users to switch to the latest version. If an error is found in a previous version of the computer code, it may be necessary for the programmer to go back and correct the error in an earlier version of the computer code without changing the current code. In this scenario, the evolution of the software is not strictly linear. It is therefore possible to release version 1.0, followed by version 2.0, and then release version 1.1.
  3. It is sometimes useful to be able to go back in time to study the evolution of the code in order to understand the motivation behind a section of code. For example, a section of code may have been added without much comment to quickly fix a new bug. The attentive programmer will be able to better understand the code by going back and reading the changes in context.
  4. Computer code is often modified by different programmers working at the same time. In such a social context, it is often useful to be able to quickly determine who made what change and when. For example, if a problem is caused by a segment of code, we may want to question the programmer who last worked on that segment.

Programmers quickly realized that they needed version control systems. The basic functions that a version control system provides are rollback and a history of changes made. Over time, the concept of version control has spread. There are even several variants intended for the general public, such as Dropbox, where various files, not only computer code, are stored.

The history of software version control tools dates back to the 1970s (Rochkind, 1975). In 1972, Rochkind developed the SCCS (Source Code Control System) at Bell Laboratories. This system made it possible to create, update and track changes in a software project. SCCS remained a reference tool from the late 1970s into the 1980s. One of the constraints of SCCS is that it does not allow collaborative work: only one person can modify a given file at a given time.

In the early 1980s, Tichy proposed the RCS (Revision Control System), which innovated with respect to SCCS by storing only the differences between the different versions of a file in backward order, starting from the latest file. In contrast, SCCS stored differences in forward order starting from the first version. For typical use where we access the latest version, RCS is faster.

In programming, we typically store computer code within text files. Text files most often use ASCII or Unicode (UTF-8 or UTF-16) encoding. Lines are separated by a sequence of special characters that identify the end of a line and the beginning of a new line. Two characters are often used for this purpose: “carriage return” (CR) and “line feed” (LF). In ASCII and UTF-8, these characters are represented with the byte having the value 13 and the byte having the value 10 respectively. In Windows, the sequence is composed of the CR character followed by the LF character, whereas in Linux and macOS, only the LF character is used. In most programming languages, we can represent these two characters with the escape sequences \r and \n respectively. So the string “a\nb\nc” has three lines in most programming languages under Linux or macOS: the lines “a”, “b” and “c”.

When a text file is edited by a programmer, usually only a small fraction of all lines are changed. Some lines may also be inserted or deleted. It is convenient to describe the differences as succinctly as possible by identifying the new lines, the deleted lines and the modified lines.

The calculation of differences between two text files is often done first by breaking the text files into lines. We then treat a text file as a list of lines. Given two versions of the same file, we want to associate as many lines in the first version as possible with an identical line in the second version. We also assume that the order of the lines is not reversed.

We can formalize this problem as finding the longest common subsequence. Given a list, a subsequence is obtained by removing some elements while keeping the remaining elements in order. For example, (a,b,d) is a subsequence of the list (a,b,c,d,e). Given two lists, we can find a common subsequence, e.g., (a,b,d) is a subsequence of both the list (a,b,c,d,e) and the list (z,a,b,d). The longest common subsequence between two lists of text lines represents the lines that have not been changed between the two versions of a text file. It would be expensive to solve this problem by brute force. Fortunately, we can compute the longest common subsequence by dynamic programming. Indeed, we can make the following observations.

  1. If we have two strings whose longest common subsequence has length k, and we append the same character to both strings, then the new strings have a longest common subsequence of length k+1.
  2. If we have two strings ending in distinct characters (for example, “abc” and “abd”), then their longest common subsequence is the longest common subsequence obtained after removing the last character from one of the two strings. In other words, to determine the length of the longest common subsequence of two strings, we can take the maximum of the length obtained after removing the last character of the first string while keeping the second unchanged, and the length obtained after removing the last character of the second string while keeping the first unchanged.
    These two observations are sufficient to compute the length of the longest common subsequence efficiently. It is sufficient to start with prefixes containing only the first character and to add the following characters progressively, computing the length of the longest common subsequence between every pair of prefixes. We can then reverse this process to build the longest common subsequence starting from the end: if the two strings end with the same character, that character is part of a longest common subsequence; otherwise, we remove the last character from one of the two strings, choosing the string so as to preserve the length of the longest common subsequence.

The following function illustrates a possible solution in Go. Given two arrays of strings (the lines of the two files), the function returns the longest common subsequence. If the first array has length m and the second length n, then the algorithm runs in O(m*n) time.

// max is needed for Go versions prior to 1.21, which added a builtin max.
func max(a, b uint) uint {
	if a > b {
		return a
	}
	return b
}

func longest_subsequence(file1, file2 []string) []string {
	m, n := len(file1), len(file2)
	// P[i*(n+1)+j] holds the length of the longest common subsequence
	// of the first i lines of file1 and the first j lines of file2.
	P := make([]uint, (m+1)*(n+1))
	for i := 1; i <= m; i++ {
		for j := 1; j <= n; j++ {
			if file1[i-1] == file2[j-1] {
				P[i*(n+1)+j] = 1 + P[(i-1)*(n+1)+(j-1)]
			} else {
				P[i*(n+1)+j] = max(P[i*(n+1)+(j-1)], P[(i-1)*(n+1)+j])
			}
		}
	}
	longest := P[m*(n+1)+n]
	i, j := m, n
	subsequence := make([]string, longest)
	for k := longest; k > 0; {
		if P[i*(n+1)+j] == P[i*(n+1)+(j-1)] {
			j-- // the last line of the second prefix is not needed
		} else if P[i*(n+1)+j] == P[(i-1)*(n+1)+j] {
			i-- // the last line of the first prefix is not needed
		} else if P[i*(n+1)+j] == 1+P[(i-1)*(n+1)+(j-1)] {
			// the two prefixes end with the same line: it is part of the subsequence
			subsequence[k-1] = file1[i-1]
			k--; i--; j--
		}
	}
	return subsequence
}

Once the subsequence has been calculated, we can quickly calculate a description of the difference between the two text files. Simply move forward in each of the text files, line by line, stopping as soon as you reach a position corresponding to an element of the longest sub-sequence. The lines that do not correspond to the subsequence in the first file are considered as having been deleted, while the lines that do not correspond to the subsequence in the second file are considered as having been added. The following function illustrates a possible solution.

func difference(file1, file2 []string) []string {
    subsequence := longest_subsequence(file1, file2)
    i, j, k := 0, 0, 0
    answer := make([]string, 0)
    // walk both files in parallel while there are common lines left
    for i < len(file1) && k < len(file2) && j < len(subsequence) {
        if file2[k] == subsequence[j] && file1[i] == subsequence[j] {
            answer = append(answer, "'"+file2[k]+"'\n")
            i++; j++; k++
        } else {
            if file1[i] != subsequence[j] {
                answer = append(answer, "deleted: '"+file1[i]+"'\n")
                i++
            }
            if file2[k] != subsequence[j] {
                answer = append(answer, "added: '"+file2[k]+"'\n")
                k++
            }
        }
    }
    // any remaining lines were deleted from file1 or added in file2
    for ; i < len(file1); i++ {
        answer = append(answer, "deleted: '"+file1[i]+"'\n")
    }
    for ; k < len(file2); k++ {
        answer = append(answer, "added: '"+file2[k]+"'\n")
    }
    return answer
}


The function we propose as an illustration for computing the longest subsequence uses O(m*n) memory elements. It is possible to reduce the memory usage of this function and simplify it (Hirschberg, 1975). Several other improvements are possible in practice (Miller and Myers, 1985). We can then represent the changes between the two files in a concise way.

Suggested reading: the Wikipedia article on Diff

Like SCCS, RCS does not allow multiple programmers to work on the same file at the same time. The need to own a file to the exclusion of all other programmers while working on it may have seemed a reasonable constraint at the time, but it can make the work of a team of programmers much more cumbersome.

In 1986, Grune developed the Concurrent Versions System (CVS). Unlike previous systems, CVS allows multiple programmers to work on the same file simultaneously. It also adopts a client-server model that allows a single repository to be hosted on a network, accessible by multiple programmers simultaneously. A programmer can work on a file locally, but as long as they have not transmitted their version to the server, it remains invisible to the other developers.

The remote server also serves as a de facto backup for the programmers. Even if all the programmers’ computers are destroyed, it is possible to start over with the code on the remote server.

In a version control system, there is usually a single latest version. All programmers make changes to this latest version. However, such a linear approach has its limits. An important innovation popularized by CVS is the concept of a branch. A branch makes it possible to organize sets of versions that evolve in parallel. In this model, the same file is virtually duplicated. There are then two versions of the file (or more than two) capable of evolving in parallel. By convention, there is usually one main branch that is used by default, accompanied by several secondary branches. Programmers can create new branches whenever they want. Branches can then be merged: if a branch A is divided into two branches (A and B) which are then modified, it is possible to bring all the modifications back into a single branch (merging A and B). The branch concept is useful in several contexts:

  1. Some software development is speculative. For example, a programmer may explore a new approach without being sure that it is viable. In such a case, it may be better to work in a separate branch and merge with the main branch only if successful.
  2. The main branch may be restricted to certain programmers for security reasons. In such a case, programmers with reduced access may be restricted to separate branches. A programmer with privileged access may then merge the secondary branch after a code inspection.
  3. A branch can be used to explore a particular bug and its fix.
  4. A branch can be used to update a previous version of the code. Such a version may be kept up to date because some users depend on that earlier version and want to receive certain fixes. In such a case, the secondary branch may never be integrated with the main branch.

One of the drawbacks of CVS is its poor performance when projects include many files and many versions. In 2000, Subversion (SVN) was proposed as an alternative to CVS that meets the same needs, but with better performance.

CVS and Subversion benefit from a client-server approach, which allows multiple programmers to work simultaneously with the same version repository. Yet programmers often want to be able to use several separate remote repositories.

To meet these needs, various “distributed version control systems” (DVCS) have been developed. The most popular one is probably the Git system developed by Torvalds (2005). Torvalds was trying to solve a problem of managing Linux source code. Git became the dominant version management tool. It has been adopted by Google, Microsoft, etc. It is free software.

In a distributed model, a programmer who has a local copy of the code can synchronize it with one remote repository or another. They can easily create a new copy of the remote repository on a new server. Such flexibility is considered essential in many complex projects such as the Linux operating system kernel.

Several companies offer Git-based services including GitHub. Founded in 2008, GitHub has tens of millions of users. In 2018, Microsoft acquired GitHub for $7.5 billion.

For CVS and Subversion, there is only one set of software versions. With a distributed approach, multiple sets can coexist on separate servers. The net result is that a software project can evolve differently, under the responsibility of different teams, with possible future reconciliation.

In this sense, Git is distributed. Although many users rely on GitHub (for example), your local copy can be attached to any remote repository, and it can even be attached to multiple remote repositories. The verb “clone” is used to describe the creation of a local copy of a Git project, since it is a complete copy of all files, changes, and branches.

If a copy of the project is hosted in another remote repository, it is called a fork. We often distinguish between branches and forks. A branch always belongs to the main project. A fork is originally a complete copy of the project, including all branches. It is possible for a fork to rejoin the main project, but it is not essential.

Given a publicly available Git repository, anyone can clone it, start working on it and contribute to it. We can create a new fork. From a fork, we can submit a pull request that invites the maintainers to integrate our changes. This allows a form of permissionless innovation. Indeed, it becomes possible to retrieve the code, modify it and propose a new version without ever having to interact directly with the authors.

Systems like CVS and Subversion could become inefficient and take several minutes to perform certain operations. Git, in contrast, is generally efficient and fast, even for huge projects. Git is robust and does not get “corrupted” easily. However, it is not recommended to use Git for huge files such as multimedia content: Git’s strength lies in text files. It should be noted that the implementation of Git has improved over time and includes sophisticated indexing techniques.

Git is often used on the command line. It is possible to use graphical clients. Services like GitHub make Git a little easier.

The basic logical unit of Git is the commit, which is a set of changes to one or more files. A commit includes a reference to at least one parent, except for the first commit, which has no parent. A single commit can be the parent of several children: several branches can be created from a commit, and each subsequent commit becomes a child of the initial commit. Furthermore, when several branches are merged, the resulting commit has several parents. In this sense, the commits form a “directed acyclic graph”.

With Git, we want to be able to refer to a commit in an easy way, using a unique identifier. That is, we want to have a short numeric value that corresponds to one commit and one commit only. We could assign each commit a version number (1.0, 2.0, etc.). Unfortunately, such an approach is difficult to reconcile with the fact that commits do not form a linear chain where a commit has one and only one parent. As an alternative, we use a hash function to compute the unique identifier. A hash function takes a message as input and calculates a numerical value (the hash value). There are several simple hash functions. For example, we can iterate over the bytes contained in a message from a starting value h, computing h = 31 * h + b where b is the byte value. For example, a message containing the bytes 3 and 4 would have a hash value of 31 * 3 + 4 = 97 if we start with h = 0. Such a simple approach is effective in some cases, but it allows malicious users to create collisions: it would be possible to create a fake commit that has the same hash value and thus create security holes. For this reason, Git uses more sophisticated hashing techniques (SHA-1, SHA-256) developed by cryptography specialists. Commits are identified using a hash value (for example, the hexadecimal value 921103db8259eb9de72f42db8b939895f5651489) which is calculated from the date and time, the comment left by the programmer, the user’s name, the parents and the nature of the change. In theory, two commits could have the same hash value, but this is an unlikely event given the hash functions used by Git. It’s not always practical to reference a hexadecimal code. To make things easier, Git allows you to identify a commit with a label (e.g., v1.0.0). The following command will do: git tag -a v1.0.0 -m "version 1.0.0".
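Before returning to tags, here is a small illustration of my own (the function name is mine) of the toy multiplicative hash described above, using the bytes 3 and 4 as in the example:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

// h = 31 * h + b over the bytes of the message, starting from h = 0
uint32_t simple_hash(const uint8_t *message, size_t length) {
  uint32_t h = 0;
  for (size_t i = 0; i < length; i++) {
    h = 31 * h + message[i];
  }
  return h;
}

int main() {
  uint8_t message[] = {3, 4};
  printf("%u\n", simple_hash(message, sizeof(message))); // prints 97 (31 * 3 + 4)
  return 0;
}

This is only an illustration: as noted above, Git relies on cryptographic hash functions rather than anything this simple.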

Though tags can be any string, tags often contain sequences of numbers indicating a version. There is no general agreement among programmers on how to assign version numbers. However, tags sometimes take the form of three numbers separated by dots: MAJOR.MINOR.PATCH (for example, 1.2.3). With each new version, 1 is added to at least one of the three numbers. The first number often starts at 1 while the next two start at 0.

  • The first number (MAJOR) must be increased when you make major changes to the code. The other two numbers (MINOR and PATCH) are often reset to zero. For example, you can go from version 1.2.3 to version 2.0.0.
  • The second number (MINOR) is increased for minor changes (for example, adding a function). When increasing the second number, the first number (MAJOR) is usually kept unchanged and the last number (PATCH) is reset to zero.
  • The last number (PATCH) is increased when fixing bugs. The other two numbers are not increased.
    There are more refined versions of this convention, such as “semantic versioning”.

With Git, the programmer can have a local copy of the commit graph. They can add new commits. In a subsequent step, the programmer must “push” their changes to a remote repository so that the changes become visible to other programmers. The other programmers can fetch the changes using a “pull”.

Git has advanced collaborative features. For example, the git blame command lets you know who last modified a given piece of code.

Conclusion

Version control in computing is a sophisticated approach that has benefited from many years of work. It is possible to store multiple versions of the same file at low cost and navigate from one version to another quickly.

If you develop code without using a version control tool like Git or the equivalent, you are bypassing proven practices. It’s likely that if you want to work on complex projects with multiple programmers, your productivity will be much lower without version control.

Floats have 15-digit accuracy in their normal range

In programming languages like JavaScript or Python, numbers are typically represented using 64-bit IEEE number types (binary64). For these numbers, we have 15 digits of accuracy. It means that you can pick a 15-digit decimal number, such as 1.23456789012345e100, and there is a floating-point number whose 15 most significant digits match it exactly. In this particular case, it is the number 6355009312518497 * 2^280. Having 15 digits of accuracy is excellent: few engineering processes can ever exceed this accuracy.
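To illustrate, here is a small C++ check of my own (using the same binary64 type that JavaScript and Python rely on) showing that a 15-digit decimal string survives a round trip through a 64-bit float:

#include <stdio.h>
#include <stdlib.h>

int main() {
  const char *input = "1.23456789012345e100";
  double x = strtod(input, NULL); // parse to the nearest binary64 value
  printf("%.15g\n", x);           // prints 1.23456789012345e+100: the 15 digits survive
  return 0;
}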

This 15-digit accuracy fails for numbers that are outside the valid range. For example, the number 1e500 is too large and cannot be directly represented using standard 64-bit floating-point numbers. Similarly, 1e-500 is too small and it can only be represented as zero.

The range of 64-bit floating-point numbers might be defined as going from 4.94e-324 to 1.8e308 and from -1.8e308 to -4.94e-324, together with exactly 0. However, this range includes subnormal numbers where the relative accuracy can be small. For example, the number 5.00000000000000e-324 is best represented as 4.94065645841247e-324, meaning that we have zero-digit accuracy.

For the 15-digit accuracy rule to work, you must remain in the normal range, e.g., from 2.225e-308 to 1.8e308 and from -1.8e308 to -2.225e-308. There are other good reasons to remain in the normal range, such as the poor performance and low accuracy of computations in the subnormal range.

To summarize, standard floating-point numbers have excellent accuracy (at least 15 digits) as long as you remain in their normal range, which goes from 2.225e-308 to 1.8e308 for positive numbers.

String representations are not unique: learn to normalize!

Most strings in software today are represented using the unicode standard. The unicode standard can represent most human-readable strings. Unicode works by representing each ‘character’ as a numerical value (called a code point) between 0 and 1 114 111.

The character é, for example, is typically represented as the numerical value 233 (or 0xe9 in hexadecimal). Thus in Python, JavaScript and many other programming languages, you get the following:

>>> "\u00e9"
'é'

Unfortunately, unicode does not ensure that there is a unique way to represent every visual character. For example, you can combine the letter ‘e’ (code point 0x65) with the ‘combining acute accent’ (code point 0x0301):

>>> "\u0065\u0301"
'é'

Unfortunately, in most programming languages, these strings will not be considered to be the same even though they look the same to us:

>>> "\u0065\u0301"=="\u00e9"
False

For obvious reasons, it can be a problem within a computer system. What if you are searching a database for a name with the character ‘é’ in it?

The standard solution is to normalize your strings. In effect, you transform them so that strings that are semantically equal are written with the same code points. In Python, you may do it as follows:

>>> import unicodedata
>>> unicodedata.normalize('NFC',"\u00e9") == unicodedata.normalize('NFC',"\u0065\u0301")
True

There are multiple ways to normalize your strings, and there are nuances.

In JavaScript and other programming languages, there are equivalent functions:

> "\u0065\u0301".normalize() == "\u00e9".normalize()
true

Though you should expect normalization to be efficient, it is unlikely to be computationally free. Thus you should not repeatedly normalize your strings, as I did in these examples. Rather, you should probably normalize the strings as they enter your system, so that each string is normalized only once.

Normalization alone does not solve all of your problems, evidently. There are multiple complicated issues with internationalization, but if you are at least aware of the normalization problem, many perplexing issues are easily explained.

Further reading: Internationalization for Turkish: Dotted and Dotless Letter “I”

Converting integers to decimal strings faster with AVX-512

In most systems, integers are stored using a fixed binary representation. It is common to store integers using 32-bit or 64-bit words. You sometimes need to convert them to strings. For example, the integer 12345 might need to be converted to the five characters ‘12345’.

In an earlier blog post, I presented the simpler problem of converting integers to fixed-digit strings, using exactly 16 characters with leading zeroes as needed. For example, the integer 12345 becomes the string ‘0000000000012345’.

For this problem, the most practical approach might be a tree-based version with a small table. The core idea is to start from the integer, compute an integer representing the 8 most significant decimal digits, and another integer representing the 8 least significant decimal digits. Then we repeat, dividing each eight-digit integer into two four-digit integers, and so forth, until we get to two-digit integers, at which point we use a small table to convert them to a decimal representation. The code in C++ might look as follows:

void to_string_tree_table(uint64_t x, char *out) {
  // lookup table of ASCII digit pairs: "00", "01", ..., "99"
  static const char table[200] = {
      0x30, 0x30, 0x30, 0x31, 0x30, 0x32, 0x30, 0x33, 0x30, 0x34, 0x30, 0x35,
      0x30, 0x36, 0x30, 0x37, 0x30, 0x38, 0x30, 0x39, 0x31, 0x30, 0x31, 0x31,
      0x31, 0x32, 0x31, 0x33, 0x31, 0x34, 0x31, 0x35, 0x31, 0x36, 0x31, 0x37,
      0x31, 0x38, 0x31, 0x39, 0x32, 0x30, 0x32, 0x31, 0x32, 0x32, 0x32, 0x33,
      0x32, 0x34, 0x32, 0x35, 0x32, 0x36, 0x32, 0x37, 0x32, 0x38, 0x32, 0x39,
      0x33, 0x30, 0x33, 0x31, 0x33, 0x32, 0x33, 0x33, 0x33, 0x34, 0x33, 0x35,
      0x33, 0x36, 0x33, 0x37, 0x33, 0x38, 0x33, 0x39, 0x34, 0x30, 0x34, 0x31,
      0x34, 0x32, 0x34, 0x33, 0x34, 0x34, 0x34, 0x35, 0x34, 0x36, 0x34, 0x37,
      0x34, 0x38, 0x34, 0x39, 0x35, 0x30, 0x35, 0x31, 0x35, 0x32, 0x35, 0x33,
      0x35, 0x34, 0x35, 0x35, 0x35, 0x36, 0x35, 0x37, 0x35, 0x38, 0x35, 0x39,
      0x36, 0x30, 0x36, 0x31, 0x36, 0x32, 0x36, 0x33, 0x36, 0x34, 0x36, 0x35,
      0x36, 0x36, 0x36, 0x37, 0x36, 0x38, 0x36, 0x39, 0x37, 0x30, 0x37, 0x31,
      0x37, 0x32, 0x37, 0x33, 0x37, 0x34, 0x37, 0x35, 0x37, 0x36, 0x37, 0x37,
      0x37, 0x38, 0x37, 0x39, 0x38, 0x30, 0x38, 0x31, 0x38, 0x32, 0x38, 0x33,
      0x38, 0x34, 0x38, 0x35, 0x38, 0x36, 0x38, 0x37, 0x38, 0x38, 0x38, 0x39,
      0x39, 0x30, 0x39, 0x31, 0x39, 0x32, 0x39, 0x33, 0x39, 0x34, 0x39, 0x35,
      0x39, 0x36, 0x39, 0x37, 0x39, 0x38, 0x39, 0x39,
  };
  uint64_t top = x / 100000000;
  uint64_t bottom = x % 100000000;
  uint64_t toptop = top / 10000;
  uint64_t topbottom = top % 10000;
  uint64_t bottomtop = bottom / 10000;
  uint64_t bottombottom = bottom % 10000;
  uint64_t toptoptop = toptop / 100;
  uint64_t toptopbottom = toptop % 100;
  uint64_t topbottomtop = topbottom / 100;
  uint64_t topbottombottom = topbottom % 100;
  uint64_t bottomtoptop = bottomtop / 100;
  uint64_t bottomtopbottom = bottomtop % 100;
  uint64_t bottombottomtop = bottombottom / 100;
  uint64_t bottombottombottom = bottombottom % 100;
  //
  memcpy(out, &table[2 * toptoptop], 2);
  memcpy(out + 2, &table[2 * toptopbottom], 2);
  memcpy(out + 4, &table[2 * topbottomtop], 2);
  memcpy(out + 6, &table[2 * topbottombottom], 2);
  memcpy(out + 8, &table[2 * bottomtoptop], 2);
  memcpy(out + 10, &table[2 * bottomtopbottom], 2);
  memcpy(out + 12, &table[2 * bottombottomtop], 2);
  memcpy(out + 14, &table[2 * bottombottombottom], 2);
}

It compiles down to dozens of instructions.

Could you do better without using a much larger table?

It turns out that you can do much better if you have a recent Intel processor with the appropriate AVX-512 instructions (IFMA, VBMI), as demonstrated by an Internet user called InstLatX64.

We rely on the observation that you can compute the quotient and the remainder of a division directly using a series of multiplications and shifts (Lemire et al. 2019).
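As a scalar illustration of this observation (my own example, not the exact constants used in the routine below): for 32-bit inputs, the quotient by 10 can be computed with one multiplication and one shift, and the remainder follows.

#include <stdint.h>
#include <stdio.h>

int main() {
  uint32_t n = 12345;
  // 0xCCCCCCCD is ceil(2^35 / 10): multiplying and shifting right by 35
  // computes n / 10 for any 32-bit n
  uint64_t quotient = ((uint64_t)n * UINT64_C(0xCCCCCCCD)) >> 35;
  uint32_t remainder = n - (uint32_t)(quotient * 10); // n % 10
  printf("%u = 10 * %u + %u\n", n, (uint32_t)quotient, remainder);
  return 0;
}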

The AVX-512 code is a bit technical, but remarkably, it does not require a table. And it generates several times fewer instructions. For the sake of simplicity, I merely provide an implementation using Intel intrinsics. You are not expected to follow the code in detail, but you should notice that it is rather short.

void to_string_avx512ifma(uint64_t n, char *out) {
  uint64_t n_15_08  = n / 100000000;
  uint64_t n_07_00  = n % 100000000;
  __m512i bcstq_h   = _mm512_set1_epi64(n_15_08);
  __m512i bcstq_l   = _mm512_set1_epi64(n_07_00);
  __m512i zmmzero   = _mm512_castsi128_si512(_mm_cvtsi64_si128(0x1A1A400));
  __m512i zmmTen    = _mm512_set1_epi64(10);
  __m512i asciiZero = _mm512_set1_epi64('0');

  __m512i ifma_const	= _mm512_setr_epi64(0x00000000002af31dc, 0x0000000001ad7f29b, 
    0x0000000010c6f7a0c, 0x00000000a7c5ac472, 0x000000068db8bac72, 0x0000004189374bc6b,
    0x0000028f5c28f5c29, 0x0000199999999999a);
  __m512i permb_const	= _mm512_castsi128_si512(_mm_set_epi8(0x78, 0x70, 0x68, 0x60, 0x58,
    0x50, 0x48, 0x40, 0x38, 0x30, 0x28, 0x20, 0x18, 0x10, 0x08, 0x00));
  __m512i lowbits_h	= _mm512_madd52lo_epu64(zmmzero, bcstq_h, ifma_const);
  __m512i lowbits_l	= _mm512_madd52lo_epu64(zmmzero, bcstq_l, ifma_const);
  __m512i highbits_h	= _mm512_madd52hi_epu64(asciiZero, zmmTen, lowbits_h);
  __m512i highbits_l	= _mm512_madd52hi_epu64(asciiZero, zmmTen, lowbits_l);
  __m512i perm          = _mm512_permutex2var_epi8(highbits_h, permb_const, highbits_l);
  __m128i digits_15_0	= _mm512_castsi512_si128(perm);
  _mm_storeu_si128((__m128i *)out, digits_15_0);
}

Remarkably, the AVX-512 version is 3.5 times faster than the table-based approach:

function time per conversion
table 8.8 ns
AVX-512 2.5 ns

I use GCC 9 and an Intel Tiger Lake processor  (3.30GHz). My benchmarking code is available.

A downside of this nifty approach is that it is (obviously) non-portable. Few processors support these extensions so far, and they are currently limited to Intel: no AMD or ARM processor can do the same right now. However, the gain might be large enough to be worth the effort of deploying it in some instances.

 

Writing out large arrays in Go: binary.Write is inefficient for large arrays

Programmers often need to write data structures to disk or to networks. The data structure then needs to be interpreted as a sequence of bytes. Regarding integer values, most computer systems adopt a “little endian” encoding whereby an 8-byte integer is written out least significant byte first. In the Go programming language, you can write an array of integers to a buffer as follows:

var data []uint64
var buf *bytes.Buffer = new(bytes.Buffer)

...

err := binary.Write(buf, binary.LittleEndian, data)

Until recently, I assumed that the binary.Write function did not allocate memory. Unfortunately, it does. The function converts the input array into a new, temporary byte array.

Instead, you can create a small buffer just big enough to hold an 8-byte integer and write that small buffer repeatedly:

var item = make([]byte, 8)
for _, x := range data {
    binary.LittleEndian.PutUint64(item, x)
    buf.Write(item)
}

Sadly, this might have poor performance on disks or networks where each write/read has a high overhead. To avoid this problem, you can use Go’s buffered writer and write the integers one by one. Internally, Go will allocate a small buffer.

writer := bufio.NewWriter(buf)
var item = make([]byte, 8)
for _, x := range data {
	binary.LittleEndian.PutUint64(item, x)
	writer.Write(item)
}
writer.Flush()

I wrote a small benchmark that writes an array of 100M integers to memory.

function memory usage time
binary.Write 1.5 GB 1.2 s
one-by-one 0 0.87 s
buffered one-by-one 4 kB 1.2 s

(Timings will vary depending on your hardware and testing procedure. I used Go 1.16.)

The buffered one-by-one approach is not beneficial with respect to speed in this instance, but it would be more helpful in other cases. In my benchmark, the simple one-by-one approach is the fastest and uses the least memory. For small inputs, binary.Write would be faster. The ideal function might have a fast path for small arrays and more careful handling of larger inputs.

Enforcement by software

At my university, one of our internal software systems allows a professor to submit a revision to a course. The professor might change the content or the objectives of the course.

In a university, professors have extensive freedom regarding course content. As long as you reasonably meet the course objectives, you can do whatever you like. You can pick the textbook you prefer, write your own and so forth.

However, the staff that built our course revision system decided to make it so that every single change should go through all layers of approvals. So if I want to change the title of an assignment, according to this tool, I need the department to approve.


When I first encountered this new tool, I immediately started to work around it. And because I am department chair, I brought my colleagues along for the ride. So we ‘pretend’ to get approval, submitting fake documents. The good thing in a bureaucracy is that most people are too bored to check the fine print.

Surprisingly, it appears that no other department has been routing around the damage that is this new tool. I should point out that I am not doing anything illegal or against the rules. I am a good soldier. I just route around the software system. But it makes people uneasy.

And there lies the scary point. People are easily manipulated by computing.

People seem to think that if the software requires some document, then surely the rules require the document in question. That is, human beings believe that the software must be an accurate embodiment of the law.

In some sense, software does the policing. It enforces the rules. But like the actual police, software can go far beyond the law… and most people won’t notice.

An actual policeman can be intimidating. However, it is a human being. If they ask something that does not make sense, you are likely to question them. You are also maybe more likely to think that a policeman could be mistaken. Software is like a deaf policeman. And people want software to be correct.

Suppose you ran a university and you wanted all professors to include a section on religion in all their courses. You could not easily achieve such a result by the means of law. Changing the university regulations to add such a requirement would be difficult at a secular institution. However, if you simply make it that all professors must fill out a section on religion when registering a course, then professors would probably do it without question.

Of course, you can achieve the same result with bureaucracy. You just change the forms and the rules. But it takes much effort. Changing software is comparatively easier. There is no need to document the change very much. There is no need to train the staff.

I think that there is great danger in some of the recent ‘digital ID’ initiatives that various governments are pushing. Suppose, for example, that your driver’s license is on your mobile phone. It seems reasonable, at first, for the government to be able to activate it and deactivate it remotely. You no longer need to go to a government office to get your new driver’s license. However, it now makes it possible for a civil servant to decide that you cannot drive your car on Tuesdays. They do not need a new law, they do not need your consent, they can just switch a flag inside a server.

You may assume that people would complain loudly, and they may. However, they are much less likely to complain than if a policeman came to their door on Tuesdays to take away their driver’s license. We have a bias as human beings to accept software enforcement without question.

It can be used for good. For example, the right software can probably help you lose weight. However, software can enable arbitrary enforcement. For crazy people like myself, it will fail. Sadly, not everyone is as crazy as I am.