Maintaining a set of integers is a common problem in programming. It can also be implemented in many different ways.

Perhaps the most common implementation uses hashing (henceforth hashset): it provides optimal expected-time complexity. That is, we expect adding or removing an integer to take constant time (O(1)), and iterating through the elements to take time proportional to the cardinality of the set. For this purpose, Java provides the HashSet class.
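A toy illustration of these expected costs (not the benchmark code used below; the class and method names here are mine):

```java
import java.util.HashSet;

// Toy illustration of hashset costs: adds and removes are expected O(1),
// iteration is proportional to the set's cardinality, not the universe size.
class HashSetDemo {
    static int sumAfterOps(int n) {
        HashSet<Integer> set = new HashSet<>();
        for (int i = 0; i < n; i++) set.add(i); // each add: expected O(1)
        set.remove(n / 2);                      // expected O(1)
        int sum = 0;
        for (int x : set) sum += x;             // proportional to set size
        return sum;
    }
}
```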

An alternative implementation is the bitset: essentially, an array of boolean values. It ensures that adding and removing integers takes constant time. However, iterating through the elements can be suboptimal: if your universe contains *N* integers, enumerating the elements of the set may take time proportional to *N*, irrespective of how many integers the set actually contains.
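The idea can be sketched as a fixed-size array of 64-bit words; this is a minimal sketch of my own, not the StaticBitSet class benchmarked below:

```java
// A minimal fixed-size bitset backed by an array of 64-bit words.
// Adding/removing an integer touches one word: O(1).
class SimpleBitSet {
    private final long[] words;

    SimpleBitSet(int universeSize) {
        words = new long[(universeSize + 63) / 64];
    }

    void set(int i)    { words[i >>> 6] |= 1L << i; }    // add i
    void clear(int i)  { words[i >>> 6] &= ~(1L << i); } // remove i
    boolean get(int i) { return (words[i >>> 6] & (1L << i)) != 0; }

    // Scanning touches every word, so it costs time proportional to the
    // universe size N (divided by 64), even when the set is nearly empty.
    int cardinality() {
        int sum = 0;
        for (long w : words) sum += Long.bitCount(w);
        return sum;
    }
}
```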

So, suppose you expect to have 1000 integers in the range from 0 to *N*. Which data structure is best? The hashset or the bitset? Clearly, if *N* is sufficiently large, the hashset will be best. But how large must *N* be?

I decided to implement a quick test to determine the answer. Instead of using the standard Java BitSet, I decided to write my own bitset (henceforth StaticBitSet) that is faster in my tests. For the hashset, I compared both the standard HashSet and TIntHashSet and found that there was little difference in performance in my tests, so I report just the results with the standard HashSet (from the OpenJDK 7).

The following table reports the speed in millions of elements per second for adding, removing and iterating through 1000 elements in the range from 0 to *N*.

| N | bitset | hashset |
|---|---|---|
| 100,000 | 77 | 18 |
| 1,000,000 | 45 | 19 |
| 10,000,000 | 11 | 18 |

These numbers are consistent with the theory. The speed of the hashset data structure is relatively independent of *N*, whereas the performance of the bitset degrades as *N* increases. What might be surprising, however, is how large *N* needs to be before the bitset is beaten. The bitset only starts failing you (in this particular test) when the ratio of the size of the universe to the size of the set exceeds 1,000.

The bitset data structure is more generally applicable than you might think.

**Source**: My Java source code is available, as usual.

**Further reading**: In Sorting is fast and useful, I showed that binary search over a sorted array of integers can be a competitive way to test whether a value belongs to a set.
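That approach can be sketched with java.util.Arrays.binarySearch (a sketch of the general idea; the linked post's code may differ):

```java
import java.util.Arrays;

// Membership test over a sorted int array: O(log n) per query,
// with no per-element object overhead (unlike HashSet<Integer>).
class SortedArraySet {
    private final int[] sorted;

    SortedArraySet(int[] values) {
        sorted = values.clone();
        Arrays.sort(sorted);
    }

    boolean contains(int x) {
        // binarySearch returns a non-negative index iff x is present
        return Arrays.binarySearch(sorted, x) >= 0;
    }
}
```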

It might be because the bitset is more cache-friendly. I personally love the bitmagic C++ library (http://bmagic.sourceforge.net/) as it handles sparse sets really well.

Have you got perf figures for that StaticBitSet class, compared to a BitSet?

@Justin

They are in the repository, right there:

https://github.com/lemire/Code-used-on-Daniel-Lemire-s-blog/blob/master/2012/11/13/results.txt

You can also run the code and see for yourself what happens on your own machine. Results may vary.

BTW I am not saying that BitSet is bad. My StaticBitSet class just does better, it seems, on this particular test.

Daniel, this is entirely consistent with my experience. A well-implemented bit set is a nearly optimal data structure for modern CPU architectures. It is brute force but in a manner CPUs are highly optimized for. If you are saturating all the ALUs then the number of entry evaluations per clock cycle is very high. Hash sets have to overcome their cache line overhead and relatively low number of evaluations per clock cycle.

As you undoubtedly know, when bit sets become large/sparse enough to become inefficient, a very good alternative is compressed bit sets.

Of course. I’m just wondering since I have some extremely performance-sensitive code which uses java.util.BitSet — it’ll be nice to benchmark it there too, given those numbers….

nice results :)!

this seems to be a bit off-topic but 🙂

does somebody know of a sparse hashset implementation in Java which is more memory efficient than the THashSet?

I need some memory-efficient data structure for the case that there are areas of consecutive integer values … or how would you implement that?

to be more specific: do you think that one could combine bitset and hashset?

I mean a hashset which is backed by several (linked or whatever) bitsets?

Hi,

would be nice to see some binary search + sorted vector in the comparison, like in your linked blog post. Even more interesting would be a cache-aware or even (if they exist) cache-oblivious variant of "binary search + sorted vector". I guess a binary tree "flattened" to an array would behave nicely with regards to cache performance.

@Daniel

of course I’m aware of your nice projects 🙂

but I think compressed bitsets are not an option for me as I need random access.

@Justin

You are welcome to steal my StaticBitSet implementation and benchmark it for your purposes. It is available on github.

@Rogers @Ivan

Agreed. It would be interesting to throw in more data structures and more strategies. I will do so in the future.

@Daniel

Sorry for the confusion! I should have thought about my problem a bit more 🙂

I need a hashmap or a ‘compressed’ integer array which efficiently ‘maps’ ints to ints (or longs to longs)

@Peter

It is a reasonable request, but I don’t know of an existing solution.

You might want to look at compressed bitsets. They could meet your needs. See https://github.com/lemire/simplebitmapbenchmark for a comparative benchmark.

@Peter

Yes, I understand what you seek.

If you ever find a good solution, please email me.

I found the SparseArray/LongSparseArray of the android project. Still, a *full* re-allocation is necessary if the space is not sufficient, but the keys/values are stored very compactly. Access is done via binary search, so not O(1) …
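The SparseArray idea can be sketched as two parallel arrays with binary-search lookup; this is a sketch of the general technique, not Android's actual implementation:

```java
import java.util.Arrays;

// SparseArray-style int-to-int map: keys kept sorted, values in a
// parallel array, lookup by binary search (O(log n), not O(1)).
class IntIntSparseMap {
    private int[] keys = new int[4];
    private int[] values = new int[4];
    private int size = 0;

    void put(int key, int value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) { values[i] = value; return; } // overwrite existing key
        i = ~i; // insertion point
        if (size == keys.length) { // full: re-allocate (the cost noted above)
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    int get(int key, int defaultValue) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return i >= 0 ? values[i] : defaultValue;
    }
}
```

Consecutive key ranges compress well under this layout only in the sense that there is no per-entry object overhead; a true range encoding would need something more.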

Cool post! For those who are interested, we have a wide variety of integer set implementations in WALA:

http://wala.sourceforge.net

See the com.ibm.wala.util.intset.IntSet interface and its implementations. Efficient representation of integer sets is critical to the scalability of many program analyses.

Interesting post! http://www.censhare.com/en/aktuelles/censhare-labs/efficient-concurrent-long-set-and-map shows a different approach for sparse sets using a bit trie.