Colm MacCárthaigh is organizing a programming competition with three prizes: $500, $300, $200. The objective? Produce the most readable, easy-to-follow, and well-tested implementation of the *nearly divisionless* random integer algorithm. The scientific reference is *Fast Random Integer Generation in an Interval* (ACM Transactions on Modeling and Computer Simulation, 2019). I have a few blog posts on the topic, such as Nearly Divisionless Random Integer Generation On Various Systems.

This algorithm has been added to the Swift standard library by Pavol Vaskovic. It has also been added to the Go standard library by Josh Snyder. And it is part of the Python library NumPy thanks to Bernardt Duvenhage and others.

I have a C++ repository on GitHub with relevant experiments as well as a Python extension.

Colm wants the implementation to be licensed under the Apache Software License 2.0. It could be in any programming language you like. The deadline is September 1st 2019. You can find Colm on Twitter.

A pity that the contest is not directed at PCG:

http://www.pcg-random.org

However, since the C code is so simple, I’m not sure that it can be further simplified.

My impression is that there is no rule against PCG.

As far as I can tell, PCG and the algorithm the contest is about are orthogonal or even complementary: the former is a way of creating uniform random bits, and the latter is Daniel's proposed way of taking random bits and producing a uniform integer within a given range.

So you can use PCG, or any other bit-producing RNG, to feed Daniel's algorithm.
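For readers following along, here is a minimal C++ sketch of the nearly divisionless algorithm under discussion, for a 32-bit range. The `rng` parameter is a placeholder for any generator of uniform 32-bit words (a PCG generator would do); the function name is mine, not from any submitted entry.

```cpp
#include <cstdint>

// Returns a uniform integer in [0, range), assuming rng() returns
// uniform 32-bit words. A single division (-range % range) is needed
// only on the rare slow path, hence "nearly divisionless".
uint32_t random_bounded(uint32_t range, uint32_t (*rng)()) {
    uint64_t product = (uint64_t)rng() * (uint64_t)range;
    uint32_t low = (uint32_t)product;              // low 32 bits of the product
    if (low < range) {                             // rare: may need rejection
        uint32_t threshold = (uint32_t)(-range) % range; // 2^32 mod range
        while (low < threshold) {                  // reject biased draws
            product = (uint64_t)rng() * (uint64_t)range;
            low = (uint32_t)product;
        }
    }
    return (uint32_t)(product >> 32);              // high 32 bits: the result
}
```

The fast path (one multiplication, one comparison) handles the overwhelming majority of calls; the division is only paid when the low bits fall below `range`.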

Where should the contribution be submitted?

I suggest reaching out to Colm on Twitter.

You’ll be glad to hear that julialang is also in the process of migrating to your algorithm (PR here), with proper attribution. I’m not rfourquet, so I don’t know whether you accept my nomination on his behalf.

On a different note, we recently adopted a strategy for unpacking bit-arrays similar to the one you promote on your blog (we arrived at the same solution independently of you, as countless others did before either of us). This is used for MATLAB-style logical indexing (PR here).

I was looking at this over the weekend, and I’m not sure if my ideas would make the code more or less readable.

The 64-bit multiplication and 32-bit extractions are really just fixed-point arithmetic, and I was looking at rewriting this with some kind of fixed-point library, but I didn’t find anything convenient.
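To illustrate the fixed-point reading of the multiplication step: the random 32-bit word `x` can be seen as the fraction `x / 2^32` in [0, 1), so the 64-bit product `x * range` is a Q32.32 fixed-point number whose high word is the scaled integer and whose low word is the fractional remainder. A small sketch (the function name is illustrative only):

```cpp
#include <cstdint>

// Interpret x as a Q0.32 fixed-point fraction in [0, 1) and scale it
// by range: the high 32 bits of the product are floor(range * x / 2^32),
// the low 32 bits are the fractional part used for rejection.
uint32_t fixed_point_scale(uint32_t x, uint32_t range) {
    uint64_t product = (uint64_t)x * (uint64_t)range;
    return (uint32_t)(product >> 32); // integer part of the fixed-point product
}
```

For example, `x = 0x80000000` represents the fraction 1/2, so scaling by 10 yields 5.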

I did come up with the idea of saving (-range % range) between calls with some kind of RangeGenerator object. That’s not much use in shuffling, but it may be in other contexts.
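A rough sketch of what I mean by a `RangeGenerator` object, caching the `-range % range` threshold so the division is paid once per range rather than once per call (names and interface are hypothetical):

```cpp
#include <cstdint>

// Hypothetical RangeGenerator: precomputes the rejection threshold
// 2^32 mod range once, so repeated draws from the same interval are
// fully division-free. Requires range > 0.
struct RangeGenerator {
    uint32_t range;
    uint32_t threshold; // 2^32 mod range, computed once in the constructor

    explicit RangeGenerator(uint32_t r)
        : range(r), threshold((uint32_t)(0u - r) % r) {}

    // Draw a uniform value in [0, range) from any 32-bit generator.
    template <typename Rng>
    uint32_t operator()(Rng&& rng) const {
        uint64_t product = (uint64_t)rng() * (uint64_t)range;
        uint32_t low = (uint32_t)product;
        while (low < threshold) {        // rejection loop, no division
            product = (uint64_t)rng() * (uint64_t)range;
            low = (uint32_t)product;
        }
        return (uint32_t)(product >> 32);
    }
};
```

As noted, this buys nothing for shuffling (where the range changes on every call), but it could pay off when drawing many values from one fixed interval.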

Another tricky part is compile-time vs. runtime optimization, since, e.g., the modulus could be computed at compile time in the case of a constant range. The runtime version should be lazily evaluated, but the compile-time version should not.
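One way to sketch the compile-time case in C++ is a template on the range, so `2^32 mod range` is a `constexpr` the compiler folds away and the lazy check on the fast path becomes unnecessary. This is an illustration of the idea, not code from any entry:

```cpp
#include <cstdint>

// When range is a compile-time constant, the rejection threshold is
// folded by the compiler, so no division happens at runtime and the
// lazy "low < range" prefilter of the runtime version is not needed.
template <uint32_t range>
uint32_t random_bounded_ct(uint32_t (*rng)()) {
    static_assert(range > 0, "range must be nonzero");
    constexpr uint32_t threshold = (0u - range) % range; // 2^32 mod range
    uint64_t product = (uint64_t)rng() * (uint64_t)range;
    uint32_t low = (uint32_t)product;
    while (low < threshold) {            // rejection loop, division-free
        product = (uint64_t)rng() * (uint64_t)range;
        low = (uint32_t)product;
    }
    return (uint32_t)(product >> 32);
}
```

Usage would look like `random_bounded_ct<52>(rng)` for, say, drawing a card from a deck.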