JavaScript and fast data structures: some initial experiments

Two of my favorite data structures are the bitset and the heap. The latter is typically used to implement a priority queue.

Both of these data structures come by default in Java. In JavaScript, there is a multitude of implementations, but few, if any, are focused on offering the best performance. That’s annoying because these data structures are routinely used to implement other fast algorithms. So I did what all programmers do, I started coding!

I first implemented a fast heap in JavaScript called FastPriorityQueue.js. As a programmer, I found that JavaScript was well suited to the task. My implementation feels clean.
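The core of such a heap fits in a handful of lines: store the tree in a plain array, sift up on insertion, sift down on removal. The sketch below is my own illustration of the idea, not the actual FastPriorityQueue.js code; the names (`MinHeap`, `add`, `poll`) are mine.

```javascript
// Minimal binary min-heap backed by a plain array (a sketch, not
// the FastPriorityQueue.js source). Parent of i is (i - 1) >> 1.
function MinHeap() {
  this.array = [];
}

// Insert a value: append it, then sift it up past larger parents.
MinHeap.prototype.add = function (val) {
  var i = this.array.length;
  this.array.push(val);
  while (i > 0) {
    var p = (i - 1) >> 1;
    if (this.array[p] <= val) break;
    this.array[i] = this.array[p]; // move larger parent down
    i = p;
  }
  this.array[i] = val;
};

// Remove and return the smallest value: take the root, then sift
// the last element down from the root to restore the heap order.
MinHeap.prototype.poll = function () {
  var ans = this.array[0];
  var last = this.array.pop();
  var size = this.array.length;
  if (size > 0) {
    var i = 0;
    while (true) {
      var l = 2 * i + 1;
      var r = l + 1;
      var smallest = i;
      var smallestVal = last;
      if (l < size && this.array[l] < smallestVal) {
        smallest = l;
        smallestVal = this.array[l];
      }
      if (r < size && this.array[r] < smallestVal) {
        smallest = r;
      }
      if (smallest === i) break;
      this.array[i] = this.array[smallest]; // pull smaller child up
      i = smallest;
    }
    this.array[i] = last;
  }
  return ans;
};
```

Both operations run in logarithmic time, and because the array is flat, the inner loops are friendly to a JIT compiler such as V8's.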

How does it compare with Java’s PriorityQueue? To get some idea, I wrote a silly Java benchmark. The result? My JavaScript version can execute my target function over 27,000 times per second on Google’s V8 engine whereas Java can barely do it 13,000 times. So my JavaScript smokes Java in this case. Why? I am not exactly sure, but I believe that Java’s PriorityQueue implementation is at fault. I am sure that a heap implementation in Java optimized for the benchmark would fare much better. But I should point out that my JavaScript implementation uses far fewer lines of code. So bravo for JavaScript!

I also wrote a fast bitset implementation in JavaScript. This was more difficult. JavaScript has no support for 64-bit integers as far as I can tell, though it supports arrays of 32-bit integers (Uint32Array). I made do with what JavaScript had and published the FastBitSet.js library. How does it compare against Java? One benchmark of interest is the number of times you can compute the union of two bitsets (generating a new bitset in the process). In Java, I can do it nearly 3 million times a second. The JavaScript library appears limited to 1.1 million times per second. That’s not bad at all… especially if you consider that JavaScript is ill-suited to implementing a bitset (i.e., no 64-bit integers). When I tried to optimize the JavaScript version, to see if I could get it closer to the Java version, I hit a wall. At least with Google’s V8 engine, creating new arrays of integers (Uint32Array) is surprisingly expensive, and the cost seems to go well beyond just allocating memory and doing basic initialization. You might think that there would be some way to quickly copy a Uint32Array, but it is much slower than I would expect.

To illustrate my point, if I replace my bitset union code…

answer.words = new Uint32Array(answer.count);
for (var k = 0; k < answer.count; ++k) {
   answer.words[k] = t[k] | o[k];
}

by just the allocation…

answer.words = new Uint32Array(answer.count);

… the speed goes from 1.1 million times per second to 1.5 million times per second. This means that I have no chance to win against Java. Roughly speaking, JavaScript seems to allocate arrays about an order of magnitude slower than it should. That’s not all bad news. With further tests, I have convinced myself that if we can just reuse arrays, and avoid creating them, then we can reduce the gap between JavaScript and Java: Java is only twice as fast when working in-place (without creating new bitsets). I expected such a factor of two because JavaScript works with 32-bit integers whereas Java works with 64-bit integers.
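Reusing a buffer is easy to sketch. Assuming the same representation as above (a `words` typed array and a word `count`; these names mirror the snippets in this post, not necessarily the library's API) and, for simplicity, two bitsets with the same word count, an in-place union avoids the allocation entirely:

```javascript
// In-place union: write t | o back into the first bitset's buffer,
// skipping the costly "new Uint32Array" allocation on every call.
// Sketch only; assumes both bitsets use the same word count.
function unionInPlace(a, b) {
  var t = a.words;
  var o = b.words;
  var n = a.count;
  for (var k = 0; k < n; ++k) {
    t[k] = t[k] | o[k];
  }
  return a;
}
```

The trade-off is that the caller must be willing to overwrite the first operand, but in tight loops that is often exactly what you want.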

What my experiments have suggested so far is that JavaScript’s single-threaded performance is quite close to Java’s. If Google’s V8 could gain support for 64-bit integers and faster array creation/copy, it would be smooth sailing.

Update: I ended up concluding that typed arrays (Uint32Array) should not be used. I switched to standard arrays for better all around performance.
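Whichever backing store you pick, the word-level bit arithmetic is identical: a bit's word index is its position divided by 32 and its offset is the remainder. A minimal sketch (the helper names are mine):

```javascript
// Bit access over 32-bit words works the same with a plain Array
// or a Uint32Array backing store (sketch; helper names are mine).
function setBit(words, i) {
  words[i >>> 5] |= 1 << (i & 31); // word = i / 32, bit = i % 32
}

function getBit(words, i) {
  return (words[i >>> 5] & (1 << (i & 31))) !== 0;
}
```

With a plain Array, the words must be initialized to 0 before use, since reading an unset slot yields `undefined`.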

Links to the JavaScript libraries:

12 thoughts on “JavaScript and fast data structures: some initial experiments”

  1. Regarding the benchmarks: Java is slower because you’re putting integers into a generic container. Java must box every single one of those integers with an object wrapper to put them into the container, then unbox them again when reading their values.

    There is still no optimization for this, at least until value types are properly supported in Java 10. For decent speed you must use non-generic containers with hard-coded numerical types, see e.g.

    JavaScript engines are smarter about integers: they use “tagged values” for object instances that directly represent 31-bit integer values if a flag bit is zero. See here for V8:

    Try rerunning your benchmarks with strings (always objects) or with large integers or non-integers, those should force boxing in Chrome.

  2. After looking at your JavaScript loop, try rewriting it like this and see how much your performance increases:

    answer.words = new Uint32Array(answer.count);
    var k = 0,
        nCount = answer.count;

    for (; k < nCount; k++) {
        answer.words[k] = t[k] | o[k];
    }

  3. @Daniel

    I was thinking that an entire bitset could be stored in a canvas. Then two canvases could be composited together to show, for example, the logical AND of all the bits. Then the bits could be read out of the canvas.

  4. @Addam

    As I wrote in my post, the bulk of the running time is spent executing the object allocation:

    answer.words = new Uint32Array(answer.count);

    I can remove all the computations that follow, and the speed won’t increase by more than 50%.

    So what I need is some way to optimize the Uint32Array allocation.

    I can get rid of it and replace it with a simple Array; the speed improves in this case, but it degrades in other ways. Also, I suspect that Uint32Array uses less space than Array. (Probably half as much.)

  5. Hi Daniel, why the conflation of JavaScript with V8? I’d be curious to see the results in other JavaScript engines, especially Microsoft’s Chakra and Firefox. Chakra is the fastest JS engine according to benchmarks spanning the last year or so (Edge being the browser, though Chakra in IE11 won benchmarks when it was new).

    Firefox and Edge are especially intriguing because of their full support for asm.js, the fast subset of JavaScript that is emitted by Emscripten. It would be fascinating if asm.js made a big difference for these workloads. It offers a richer set of types, including explicit integers, though not 64-bit. Only Firefox and Edge support asm.js and perform AOT compilation for it, though V8 is supposed to benefit from asm.js code to some extent.

    1. > why the conflation of JavaScript with V8?

      I have convenient scripts that run benchmarks automatically through Node on my Linux boxes. V8 happens to be the engine under Node. It is very convenient to be able to run perfectly reproducible benchmarks, and this is made a lot easier with a carefully configured server box (not a laptop!).

      I invite contributed benchmarks based on other engines.

      > Firefox and Edge are especially intriguing because of their full support for asm.js, the fast subset of JavaScript that is emitted by Emscripten. It would be fascinating if asm.js made a big difference for these workloads. It offers a richer set of types, including explicit integers, though not 64-bit.

      I have played with asm.js but not tested it thoroughly. I have not yet seen anything interesting. To be clear, I am impressed by Emscripten, but not by what I have seen with asm.js so far.

      I am inviting benchmarks.
