Programmers routinely work with lists of integers. We recently showed how to compress such lists of integers close to their entropy, while being able to decompress billions of integers per second.

So that anyone could use these techniques, we published the FastPFOR C++ library. We also published the JavaFastPFOR Java library, and there is even a corresponding Go library.

However, I also wanted to provide a low-level C library that advanced programmers could embed deep in their own software without the inconvenience of a bulky C++ research library.

So we wrote the SIMDComp library. It is a minimalist C library. On a recent PC, it decompresses over 4 billion integers per second: less than one CPU cycle per integer. We use a liberal open-source license, so it should be suitable for all your projects.

To test it out for yourself, grab a copy, and type “make example; ./example”.

7 Comments

  1. Excellent. Very easy to use! Thanks!

    Comment by Alecco — 19/5/2014 @ 14:13

  2. Has FastPFOR been used/evaluated in a real context such as Lucene text search?

    Comment by anonymous — 19/5/2014 @ 16:30

  3. @anonymous

    Lucene uses what is effectively the FastPFOR algorithm inspired by the JavaFastPFOR library. As for using the C++ library, I do not know if it is practical since Lucene is written in Java.

    Comment by Daniel Lemire — 20/5/2014 @ 8:10

  4. In my tests, SIMD bit packing offers no speed advantage over optimized scalar bit packing when used with large buffers (see simplebenchmark in FastPFor). This holds for most applications (e.g., inverted indexes). A realistic benchmark should compare SIMD/scalar bit packing only on large buffers.

    Comment by powturbo — 28/5/2014 @ 5:07

  5. @powturbo

    If you are going to take the data from RAM, bring it all the way to the L1 cache, load it into registers, then push it all the way back to RAM… you are bound by memory bandwidth: your CPU runs empty, and saving CPU cycles becomes irrelevant. To make things worse, you can pretty much forget about using more than one core, because your L3 cache is going to be overwhelmed by a single core.

    So? So you avoid decompressing whole arrays to RAM.

    We have demonstrated directly the benefit of SIMD bit packing in our latest paper (see http://arxiv.org/abs/1401.6399).

    Comment by Daniel Lemire — 28/5/2014 @ 8:16

  6. If you’re out of disk-space, is there a way to handle updates in a way that won’t require additional scratch space?

    Comment by Garen — 30/5/2014 @ 15:59

  7. @Garen

    This particular library does not handle disk storage at all (by design). However, there is no particular problem with updates in this library. In fact, it compresses super fast, so recompressing updated blocks should be quite cheap.

    Comment by Daniel Lemire — 30/5/2014 @ 16:43
