There are many ways in software to represent a set. The most common approach is to use a hash table. We define a “hash function” that takes an element as input and produces as output an integer that “looks random”. The element is then stored at the location indicated by the value of the hash function. To check whether a value is in the set, you compute its hash function and look at the appropriate memory location. Your elements will thus appear in “random” order.
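To make the idea concrete, here is a minimal toy sketch of one common scheme (open addressing with linear probing); the class name `TinyHashSet` is my own, and this is an illustration rather than how any real library implements it. The slot is derived from the hash code, so storage order depends on hash values, not insertion order.

```java
public class TinyHashSet {
    // Fixed capacity for illustration; no resizing or full-table handling.
    private final Object[] slots = new Object[8];

    // Map the hash code to a slot index; probe linearly on collision.
    void insert(Object x) {
        int i = Math.floorMod(x.hashCode(), slots.length);
        while (slots[i] != null && !slots[i].equals(x))
            i = (i + 1) % slots.length;
        slots[i] = x;
    }

    // Membership test follows the same probe sequence as insert.
    boolean contains(Object x) {
        int i = Math.floorMod(x.hashCode(), slots.length);
        while (slots[i] != null) {
            if (slots[i].equals(x)) return true;
            i = (i + 1) % slots.length;
        }
        return false;
    }
}
```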
This means that unless you do extra work, iterating through the elements in your hash set will be done in random order. Let me pull out my Swift console to demonstrate:
```
$ var x = Set<Int>()
$ x.insert(1)
$ x.insert(2)
$ x.insert(3)
$ for i in x { print(i) }
2
3
1
```
That is right: you insert the numbers 1, 2, 3 and the numbers 2, 3, 1 come out.
You get the same kind of result in Python:
```
>>> x = set()
>>> x.add(-1)
>>> x.add(-2)
>>> x.add(-3)
>>> x
set([-2, -1, -3])
```
That is, the hash set is visited starting with the elements having the smallest hash function value. The hash function is often designed to appear random, so the elements come out randomly.
This randomness can take programmers by surprise, so some programming languages like JavaScript and PHP preserve the “insertion order”. If you pull out your JavaScript console, you can see that the set keeps track of the order in which you inserted the elements and gives them back to you in that order:
```
> var x = new Set()
> x.add(-3)
Set { -3 }
> x.add(-2)
Set { -3, -2 }
> x.add(-1)
Set { -3, -2, -1 }
> x.add(3)
Set { -3, -2, -1, 3 }
> x.add(2)
Set { -3, -2, -1, 3, 2 }
> x.add(1)
Set { -3, -2, -1, 3, 2, 1 }
```
This can be implemented as a linked list working on top of the hash table.
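Here is a minimal sketch of that idea (a toy illustration of mine, not Java’s actual LinkedHashSet source): a HashMap provides fast membership tests, while a doubly linked list threaded through the entries records insertion order.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

class InsertionOrderedSet<E> {
    // Each node belongs to a doubly linked list in insertion order.
    private static final class Node<E> {
        final E value;
        Node<E> prev, next;
        Node(E value) { this.value = value; }
    }

    private final Map<E, Node<E>> index = new HashMap<>();
    private Node<E> head, tail;

    boolean add(E e) {
        if (index.containsKey(e)) return false;      // already present
        Node<E> node = new Node<>(e);
        if (tail == null) { head = node; }           // first element
        else { tail.next = node; node.prev = tail; }
        tail = node;
        index.put(e, node);                          // hash table for lookups
        return true;
    }

    boolean contains(E e) { return index.containsKey(e); }

    // Iteration bypasses the hash table and just follows the list.
    void forEachInOrder(Consumer<E> action) {
        for (Node<E> n = head; n != null; n = n.next) action.accept(n.value);
    }
}
```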
Java supports both approaches through HashSet and LinkedHashSet.
The LinkedHashSet will use more memory, but it gives back the elements in insertion order. The HashSet gives back the elements in an order determined in large part by the hash function. The LinkedHashSet may allow you to iterate over the elements faster because you are essentially bypassing the hash table entirely and just following the linked list. Linked lists are not great in the sense that each node being visited can incur a cache miss. However, Java’s HashSet is implemented using a fancy chaining approach, so you will be chasing pointers in memory and possibly also having cache misses.
So it would seem like LinkedHashSet is a good choice in Java if you are not memory bound.
To explore this problem, I took a set made of 1 million integers generated randomly. I insert them into both a HashSet and a LinkedHashSet. Then I sum the values. I run my benchmark on a Skylake processor with Java 8:
class | sum values
---|---
HashSet | 50 ops/s
LinkedHashSet | 150 ops/s
My numbers are clear: in my tests, it is three times faster to sum up the values in a LinkedHashSet. You can reproduce my benchmark.
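The experiment can be approximated with a plain timing loop; this is a hedged sketch of mine (the numbers above come from a proper JMH benchmark, and the class name `SetScan` is an assumption, not the benchmark’s actual code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Random;
import java.util.Set;

public class SetScan {
    // The measured operation: iterate over the set and sum the values.
    static long sum(Set<Integer> s) {
        long total = 0;
        for (int v : s) total += v;
        return total;
    }

    public static void main(String[] args) {
        Random r = new Random(42);
        Set<Integer> hashed = new HashSet<>();
        Set<Integer> linked = new LinkedHashSet<>();
        // Insert the same 1 million random integers into both sets.
        for (int i = 0; i < 1_000_000; i++) {
            int v = r.nextInt();
            hashed.add(v);
            linked.add(v);
        }
        // Time the scan over each set; the sums must agree since the
        // two sets hold exactly the same elements.
        for (Set<Integer> s : Arrays.asList(hashed, linked)) {
            long start = System.nanoTime();
            long total = sum(s);
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println(s.getClass().getSimpleName()
                    + ": sum = " + total + " (" + ms + " ms)");
        }
    }
}
```

A crude loop like this is subject to JIT warm-up and GC noise, which is why the published numbers use JMH instead.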
HashSet jumps over the memory randomly. Locality could be a big contributing factor. See https://shipilev.net/jvm-anatomy-park/11-moving-gc-locality/
It would be interesting to see how the benchmark results change if `System.gc()` is called right before the iteration.

> HashSet jumps over the memory randomly.

Can you elaborate?
When elements are inserted one after another in a HashSet, the newly created HashMap.Entry objects are allocated sequentially (in a TLAB), but end up being stored at random positions in the HashMap’s array. HashSet’s (= HashMap’s) iteration goes over the array sequentially, but that means jumping between HashMap.Entry objects allocated at random positions in the TLAB.
System.gc() may actually fix that; read Alexey Shipilev’s post for details.
LinkedHashMap links the entries in allocation order and iterates in the same order, which means it scans TLAB memory sequentially.
I modified my code so that I call the GC after creating the sets. It seems to have only a small effect. Calling the GC can even have a negative effect.
I tried recreating a new hash set from the previous hash set in the hope that the data would be allocated in the right order…
No luck:
See https://github.com/lemire/Code-used-on-Daniel-Lemire-s-blog/blob/master/2018/03/13/src/main/java/me/lemire/microbenchmarks/hash/OrderHash.java
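The “recreate” idea can be sketched as follows; this is my own hedged illustration (the class name `Recreate` is an assumption), and whether the copied entries really end up sequential in memory depends on the allocator and GC:

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class Recreate {
    public static void main(String[] args) {
        Random r = new Random(123);
        Set<Integer> original = new HashSet<>();
        for (int i = 0; i < 1_000_000; i++) original.add(r.nextInt());

        // The copy constructor iterates over `original` in its hash
        // order, so the new entries are allocated in exactly the order
        // in which the new set will later be iterated.
        Set<Integer> recreated = new HashSet<>(original);

        System.out.println(recreated.size() == original.size()); // prints "true"
    }
}
```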
Degradation is observed when a “recreated” set is iterated, as far as I can see. A possible reason might be that the GC scans arrays in decreasing order (I don’t know and don’t assert that!), so the GC might put objects in a not very favourable order again.
Anyway, it’s all just assumptions. Both of your benchmark results might be completely misleading, affected by some bugs or contributing factors that we didn’t consider yet. Only a study of perfasm with cycle as well as cache miss counts could help to answer those questions for sure.
That’s work for me by the way 😀
Fastutil has additional implementations.
In particular, a hash set that uses open addressing rather than chaining.
Can you include these in your benchmark? It would be interesting to see the differences even without the primitive specializations that make fastutil attractive.
Pull Requests invited!
JavaScript only preserves insertion order for non-numeric properties. The actual iteration order is:

1. Numeric properties in ascending order.
2. String properties in insertion order.
3. Symbols.
ES 3, however, described objects as “unordered”. A few years ago, Firefox used insertion order for all properties while V8-based browsers used the current behavior. It seems ES6 finally established the iteration rules, as the inconsistency was causing quite a few bugs. http://2ality.com/2015/10/property-traversal-order-es6.html
> JavaScript only preserves insertion order for non-numeric properties. (…) Numeric properties in ascending order.
Can you please type this in your favorite JavaScript console:
(Update: I had pasted Swift code initially.)
I executed the test on my PC and it shows the following results:
```
Benchmark                                    (gc)   Mode  Cnt     Score    Error  Units
collectionUtils.OrderHash.scanHashSet        false  thrpt    5    39,380 ±  1,357  ops/s
collectionUtils.OrderHash.scanHashSet         true  thrpt    5    38,749 ±  0,897  ops/s
collectionUtils.OrderHash.scanHashSet2       false  thrpt    5    55,521 ± 15,549  ops/s
collectionUtils.OrderHash.scanHashSet2        true  thrpt    5    64,839 ±  3,105  ops/s
collectionUtils.OrderHash.scanLinkedHashSet  false  thrpt    5   141,776 ± 23,246  ops/s
collectionUtils.OrderHash.scanLinkedHashSet   true  thrpt    5   143,821 ±  1,604  ops/s
```
Why is scanHashSet2 almost twice as fast as scanHashSet? Does it depend on the way the set was filled?