In my previous post, I reviewed a new fast random number generator called wyhash. I commented that I expected it to do well on x64 processors (Intel and AMD), but not so well on ARM processors.
Let us review wyhash again:
```c
uint64_t wyhash64_x;

uint64_t wyhash64() {
  wyhash64_x += 0x60bee2bee120fc15;
  __uint128_t tmp;
  tmp = (__uint128_t)wyhash64_x * 0xa3b195354a39b70d;
  uint64_t m1 = (tmp >> 64) ^ tmp;
  tmp = (__uint128_t)m1 * 0x1b03738712fad5c9;
  uint64_t m2 = (tmp >> 64) ^ tmp;
  return m2;
}
```
It uses only two multiplications (plus a few cheap operations like additions and XORs), but these are full multiplications producing a 128-bit output.
Let us compare it with a similar but conventional generator (splitmix) developed by Steele et al. and included in the standard Java library:
```c
uint64_t splitmix64_x;

uint64_t splitmix64(void) {
  splitmix64_x += 0x9E3779B97F4A7C15;
  uint64_t z = splitmix64_x;
  z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9;
  z = (z ^ (z >> 27)) * 0x94D049BB133111EB;
  return z ^ (z >> 31);
}
```
We still have two multiplications, but many more operations. So you would expect splitmix to be slower. And it is, on my typical x64 processor.
Let me reuse my benchmark where I simply sum up 524288 random integers and record how long it takes…
|          | Skylake x64 | Skylark ARM |
|----------|-------------|-------------|
| wyhash   | 0.5 ms      | 1.4 ms      |
| splitmix | 0.6 ms      | 0.9 ms      |
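Here is a minimal sketch of this kind of timing loop (shown with splitmix64; my exact benchmark harness differs in details):

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

uint64_t splitmix64_x;

uint64_t splitmix64(void) {
  splitmix64_x += 0x9E3779B97F4A7C15;
  uint64_t z = splitmix64_x;
  z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9;
  z = (z ^ (z >> 27)) * 0x94D049BB133111EB;
  return z ^ (z >> 31);
}

int main(void) {
  const size_t N = 524288; // number of 64-bit integers to generate
  uint64_t sum = 0;        // summing keeps the compiler from discarding the calls
  clock_t start = clock();
  for (size_t i = 0; i < N; i++) {
    sum += splitmix64();
  }
  clock_t end = clock();
  printf("%f ms bogus:%llu\n",
         1000.0 * (double)(end - start) / CLOCKS_PER_SEC,
         (unsigned long long)sum);
  return 0;
}
```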
According to my tests, on the x64 processor, wyhash is faster than splitmix. When I switch to my ARM server, wyhash becomes slower.
The difference is that an x64 processor computes the full 128-bit product of two 64-bit numbers with a single multiplication instruction, whereas an ARM processor requires a separate and potentially expensive instruction (umulh) to get the most significant 64 bits of the product.
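In portable C, you get the high half by shifting a __uint128_t product; it is the compiler that maps this to one instruction on x64 or to a mul/umulh pair on 64-bit ARM. A minimal sketch (the helper name is mine, for illustration):

```c
#include <stdint.h>

// High 64 bits of a 64x64-bit product. On x64, a single mul
// yields both halves; on 64-bit ARM, compilers emit a separate
// umulh instruction to recover the high half.
uint64_t mul_hi64(uint64_t a, uint64_t b) {
  return (uint64_t)(((__uint128_t)a * b) >> 64);
}
```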
Of course, your results will vary depending on your exact processor and exact compiler.
Note: I generate about half a million integers (524288), so if you double my numbers in milliseconds, you get a rough estimate of the number of nanoseconds per 64-bit integer generated (e.g., 0.5 ms / 524288 ≈ 0.95 ns).
Update 1: W. Dijkstra correctly pointed out that wyhash could not possibly be several times faster than splitmix in a fair competition. I initially reported bad results with splitmix, but after disabling autovectorization (-fno-tree-vectorize), the results are closer. He also points out that results are vastly different on other ARM processors like Falkor and ThunderX2.
Update 2: One reading of this blog post is that I am trying to compare Intel vs. ARM and to declare one better than the other. That was never my intention. My main message is that the underlying hardware matters a great deal when trying to determine which code is fastest.
Update 3: My initial results made the ARM processor look bad. Switching to a more recent compiler (GNU GCC 8.3) resolved the issue.
How expensive is computing the high bits of a 64×64 product on this ARM server? I mean, there is a 20x relative difference in performance…
What type of ARM? “Skylarke ARM” doesn’t turn up many hits – mostly stuff about a nice farm that does weddings.
You can find more info on that specific ARM implementation here: https://en.wikichip.org/wiki/apm/microarchitectures/skylark
There was a typo in my post. It is Skylark… https://en.wikichip.org/wiki/apm/microarchitectures/skylark
I can give you access to the box.
I don’t have exact numbers for Skylark. On a Cortex A57 processor, to compute the most significant 64 bits of a 64-bit product, you must use the multiply-high instructions (umulh and smulh), but they require six cycles of latency and they prevent the execution of other multi-cycle instructions for an additional three cycles.
http://infocenter.arm.com/help/topic/com.arm.doc.uan0015b/Cortex_A57_Software_Optimization_Guide_external.pdf
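For the curious, here is an AArch64-only sketch that issues umulh directly through inline assembly (the function name is illustrative); compilers generate the same instruction from the portable __uint128_t shift shown earlier:

```c
#include <stdint.h>

// AArch64 only: compute the high 64 bits of a 64x64-bit product
// with the umulh instruction discussed above.
uint64_t mul_hi64_umulh(uint64_t a, uint64_t b) {
  uint64_t hi;
  __asm__("umulh %0, %1, %2" : "=r"(hi) : "r"(a), "r"(b));
  return hi;
}
```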
Daniel, if you have access to an M1, try the performance there, along with looking at the assembly.
Of course, there is the basic “M1 is fast” stuff; that’s not interesting.
What’s interesting is that the 128b multiply should be coded as a UMULH and a MUL instruction pair. Apple has a more or less generic facility to support instructions with multiple destination registers, which means that, in principle, these two multiplies could be fused, and thus executed faster than two successive independent multiply-type operations.
Does Apple in fact do this? Is 128b multiplication considered a common enough operation to special-case? Who knows? But they do, of course, special-case and fuse some of the other obvious crypto instruction pairs.
See https://lemire.me/blog/2021/03/17/apples-m1-processor-and-the-full-128-bit-integer-product/
Results from a Pine64 board with a Cortex-A53:
wyrng 0.013576 s
bogus:14643616649108139168
splitmix64 0.010964 s
bogus:18305447471597396837
And here are the numbers from my laptop, an Intel i5-4250U.
wyrng 0.000929 s
bogus:15649925860098344998
splitmix64 0.000842 s
bogus:15901732380406292985
I don’t benchmark on laptops, but here is what I get on my Haswell server (i7-4770):
Email me if you want access to it.
I have enough hardware to test with; next I want to try a 64-bit Atom. But my point here is that the performance of such things does not really depend on the instruction set (ARMv8 vs. amd64); it depends on the internal CPU architecture. The Cortex-A53 and the Apple A11 are both ARMv8 CPUs, but on the A11 wyrng is faster while on the A53 splitmix64 is faster.
I agree.
Another important point in such comparisons is the compiler. On my Cortex-A53, lehmer64 (2) is fastest with gcc, and lehmer64 (3) is fastest with clang. It looks like gcc generates a full 128×128-bit multiplication, while clang generates a 128×64-bit one.
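For context, the core of the basic lehmer64 generator looks roughly like this (a sketch; the numbered variants differ in details):

```c
#include <stdint.h>

// 128-bit state multiplied by a 64-bit constant: in principle only
// a 128x64-bit product is needed, but a compiler may emit a full
// 128x128-bit multiplication instead.
__uint128_t g_lehmer64_state;

uint64_t lehmer64(void) {
  g_lehmer64_state *= 0xda942042e4dd58b5ULL;
  return (uint64_t)(g_lehmer64_state >> 64);
}
```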
And on an iPhone X with an Apple A11, wyrng is faster:
wyrng 0.000563 s
bogus:12179112671541558566
splitmix64 0.000728 s
bogus:808196752756138662