How many floating-point numbers are in the interval [0,1]?

Most commodity processors support single-precision IEEE 754 floating-point numbers. Though they are ubiquitous, they are often misunderstood.

One of my readers left a comment suggesting that picking an integer in [0,2^32) at random and dividing it by 2^32 was equivalent to picking a number at random in [0,1). I am not assuming that the reader in question made this inference, but it is a tempting one.

That’s certainly “approximately true”, but we are making an error when doing so. How much of an error?

Floating-point numbers are represented as a sign bit, a mantissa and an exponent, as follows:

  • There is a single sign bit. Because we only care about positive numbers, this bit is fixed and can be ignored.
  • The mantissa is straightforward: 23 bits. It is implicitly preceded by the number 1.
  • There are eight bits dedicated to the exponent. It is pretty much straightforward, as long as the exponents range from -126 to 127. Otherwise, you get funny things like infinity, not-a-number, denormal numbers… and zero. To represent zero, you need an exponent value of -127 and a zero mantissa. (A short decoding sketch follows this list.)
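
Here is a minimal sketch (illustrative code, not from the original post) that decodes the three fields of a float, assuming the usual IEEE 754 binary32 layout:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
      float f = 0.75f;
      uint32_t bits;
      memcpy(&bits, &f, sizeof(bits));           /* reinterpret the 32 bits */
      unsigned sign = bits >> 31;                /* 1 bit */
      unsigned exponent = (bits >> 23) & 0xFF;   /* 8 bits, biased by 127 */
      unsigned mantissa = bits & 0x7FFFFF;       /* 23 bits, implicit leading 1 */
      printf("sign=%u exponent=%d mantissa=0x%06x\n",
             sign, (int)exponent - 127, mantissa);
      /* prints: sign=0 exponent=-1 mantissa=0x400000, i.e. 1.5 x 2^-1 = 0.75 */
      return 0;
    }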

So how many “normal” non-zero numbers are there between 0 and 1? The negative exponents range from -1 all the way to -126. In each case, we have 2^23 distinct floating-point numbers because the mantissa is made of 23 bits. So we have 126 x 2^23 normal floating-point numbers in [0,1). If you don’t have a calculator handy, that’s 1,056,964,608. If we want to add the number 1, that’s 126 x 2^23 + 1 or 1,056,964,609.
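
As a quick sanity check on that count (an illustrative snippet, not from the original post), the normal floats in [2^-126, 1) occupy consecutive 32-bit patterns, so the count is just the difference between the bit pattern of 1.0f and that of the smallest normal float:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
      uint32_t smallest_normal = UINT32_C(0x00800000); /* bit pattern of 2^-126 */
      uint32_t one = UINT32_C(0x3F800000);             /* bit pattern of 1.0f   */
      printf("%u\n", (unsigned)(one - smallest_normal)); /* prints 1056964608 = 126 x 2^23 */
      return 0;
    }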

Most people would consider zero to be a “normal number”, so maybe you want to add it too. Let us make it 1,056,964,610.

There are 4,294,967,296 possible 32-bit words, so about a quarter of them are in the interval [0,1]. Isn’t that interesting? Of all the floating-point numbers your computer can represent, a quarter of them lie in [0,1]. By extension, half of the floating-point numbers are in the interval [-1,1].

So already we can see that we are likely in trouble. The number 2^32 is not divisible by 1,056,964,610, so we can’t take a 32-bit non-negative integer, divide it by 2^32 and hope that this will generate a number in [0,1] in an unbiased way.

How much of a bias is there? We have a unique way of generating the zero number. Meanwhile, there are 257 different ways to generate 0.5: any integer between 2,147,483,520 and 2,147,483,776 (inclusively) gives you 0.5 when divided by the floating-point number 2^32.
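
You can check that figure by brute force. The sketch below is illustrative only; it assumes the integer is first converted to a float and then divided by the float 2^32, which is how an expression such as x / 4294967296.0f evaluates in C:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
      unsigned count = 0;
      /* scan a window of integers around 2^31 = 2,147,483,648 */
      for (uint32_t x = 2147483000u; x < 2147484300u; x++) {
        if ((float)x / 4294967296.0f == 0.5f) count++;
      }
      printf("%u\n", count); /* prints 257 */
      return 0;
    }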

A ratio of 1 to 257 is a fairly sizeable bias. So chances are good that your standard library does not generate random numbers in [0,1] in this manner.

How could you get an unbiased map?

We can use the fact that the mantissa uses 23 bits. This means in particular that if you pick any integer in [0,2^24) and divide it by 2^24, then you can recover your original integer by multiplying the result again by 2^24. This works with 2^24 but not with 2^25 or any larger number.

So you can pick a random integer in [0,2^24), divide it by 2^24 and you will get a random number in [0,1) without bias… meaning that for every integer in [0,2^24), there is one and only one floating-point number in [0,1). Moreover, the distribution is uniform in the sense that the possible floating-point numbers are evenly spaced (the distance between them is a flat 2^-24).
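
Concretely, a common way to realize this map is to keep 24 of the random bits and scale by 2^-24. This is only a sketch (not taken from any particular library):

    #include <assert.h>
    #include <stdint.h>

    /* Map a random 32-bit word to one of 2^24 evenly spaced floats in [0,1). */
    static float to_unit_float(uint32_t r) {
      return (float)(r >> 8) * 0x1.0p-24f; /* keep the top 24 bits */
    }

    int main(void) {
      /* every 24-bit integer maps to a distinct float and round-trips exactly */
      for (uint32_t i = 0; i < (1u << 24); i++) {
        assert((uint32_t)(to_unit_float(i << 8) * 0x1.0p24f) == i);
      }
      return 0;
    }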

So even though single-precision floating-point numbers use 32-bit words, and even though your computer can represent about 2^30 distinct and normal floating-point numbers in [0,1), chances are good that your random generator only produces 2^24 distinct floating-point numbers in the interval [0,1).

For most purposes, that is quite fine. But it could trip you up. A common way to generate random integers in an interval [0,N) is to first generate a random floating-point number and then multiply the result by N. That is going to be fine as long as N is small compared to 2^24, but should N exceed 2^24, then you have a significant bias as you are unable to generate all integers in the interval [0,N).
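
To make the failure concrete, here is an illustrative sketch: with N = 2^25, every float of the form i/2^24, once multiplied by N, lands on an even integer, so no odd value in [0,N) is ever produced.

    #include <assert.h>
    #include <stdint.h>

    int main(void) {
      const uint32_t N = 1u << 25; /* larger than 2^24 */
      for (uint32_t i = 0; i < (1u << 24); i++) {
        float u = (float)i * 0x1.0p-24f;        /* one of the 2^24 possible floats */
        uint32_t v = (uint32_t)(u * (float)N);  /* candidate integer in [0,N)      */
        assert(v % 2 == 0);                     /* odd integers never appear       */
      }
      return 0;
    }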

I did my analysis using 32-bit words, but it is not hard to repeat it for 64-bit words and come to similar conclusions. Instead of generating 2^24 distinct floating-point numbers in the interval [0,1], you would generate 2^53.
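
For doubles, the analogous sketch keeps 53 bits and scales by 2^-53 (illustrative only; rand64() stands for a hypothetical source of 64 uniformly random bits):

    #include <stdint.h>

    /* Hypothetical source of 64 uniformly random bits. */
    extern uint64_t rand64(void);

    /* Map 64 random bits to one of 2^53 evenly spaced doubles in [0,1). */
    static double to_unit_double(void) {
      return (double)(rand64() >> 11) * 0x1.0p-53;
    }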

Further reading: The nextafter function and Uniform random floats: How to generate a double-precision floating-point number in [0, 1] uniformly at random given a uniform random source of bits.

Published by Daniel Lemire, a computer science professor at the University of Quebec (TELUQ).

25 thoughts on “How many floating-point numbers are in the interval [0,1]?”

  1. The POSIX drand48() function returns a floating point number in [0, 1) with 48 bits of entropy. This is typically done by taking the IEEE 754 double representation of 1.0, setting the first 48 bits of the mantissa to 48 random bits and then subtracting 1.0. This leads to a uniformly distributed floating point number in this range.
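
    A minimal sketch of the technique just described (illustrative only, not the actual drand48 source; random48() stands for a hypothetical source of 48 uniformly random bits):

    #include <stdint.h>

    /* Hypothetical source of 48 uniformly random bits. */
    extern uint64_t random48(void);

    /* Put 48 random bits into the top of the 52-bit mantissa of 1.0,
       then subtract 1.0 to land on a multiple of 2^-48 in [0,1). */
    static double sketch_drand48(void) {
      uint64_t r48 = random48() & ((UINT64_C(1) << 48) - 1);
      union { uint64_t i; double d; } u;
      u.i = (UINT64_C(0x3FF) << 52) | (r48 << 4); /* a double in [1,2) */
      return u.d - 1.0;
    }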

    To your claim about the 1-to-257 bias: Note that the range of valid floating point numbers around 0 is much denser than the range around 0.5, so even in a perfectly uniform distribution, hitting exactly 0.0 is much less probable than hitting 0.5, as the number of real numbers that round to 0.0 is much smaller than the number that round to 0.5.

      1. I believe the notion of a uniform distribution when applied to floating point numbers is a bit odd, which I think is what fuz is referring to. Since floating point numbers are denser around 0, if you generate a uniformly distributed real number and round that to the nearest representable floating point number, then by definition numbers around 0 will be more likely *if* your real number distribution was uniform and could generate an arbitrary real number (which dividing an integer by 2^24 can’t do).

        The alternative route is to generate a uniformly distributed representable floating point number – which seems to imply that the goal is to have each floating point number in [0,1] have the same probability – which results in a non-uniform distribution, of course, and can be achieved by generating approximately 30 bits of randomness and filling mantissa/exponent with them (or, to be more precise, a random number in 0..1,056,964,610).
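
        A sketch of one way to realize that mapping (an illustration, not part of the comment): the floats in question occupy consecutive 32-bit patterns from 0x00800000 (2^-126) up to 0x3F800000 (1.0f), plus the all-zero pattern for 0.0f, so a uniform index in [0, 1,056,964,610) translates directly into a float:

        #include <stdint.h>

        /* index is assumed uniform in [0, 1056964610): 0 maps to 0.0f and
           1..1056964609 map to the consecutive patterns 0x00800000..0x3F800000. */
        static float index_to_float(uint32_t index) {
          union { uint32_t i; float f; } u;
          u.i = (index == 0) ? 0 : 0x007FFFFF + index;
          return u.f;
        }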

        1. That’s an interesting distinction to make. In most applications I can imagine one would want the latter thing, where we’re trying to uniformly pick out numbers in a finite set that happens to be the set of floating point numbers in [0,1], without regard to whether that distribution approximates a uniform distribution over the real range that contains them. E.g. if we’re using these numbers for cryptographic purposes, any non-uniform probability in terms of which actual floating point numbers our process selects is going to result in an entropy loss and will generally weaken our process.

          I’m sure there are applications where you’d really want the former thing (a distribution over the floating point numbers in [0,1] that approximates a uniform distribution over the reals in that range) but my brain is tuned to information theory and I can’t think of one off the top of my head. I’d be interested in hearing a simple example if anyone has one.

      2. If choosing a value in [0-e, 0+e] is as likely as [0.5-e, 0.5+e] for reasonable values of e, you have a uniform distribution. Probability of hitting the exact values is affected by the fact that each “exact value” is an approximation for a different interval. With the real real numbers, the probability of hitting any exact value is exactly zero, so with floats the probability of hitting an exact value only reflects floats’ failure to be really reals.

    1. Any method which populates [1,2) and then subtracts 1 to get U[0,1) will inevitably lose some random bits – there are twice as many floats in [0,1) as in [1,2)

      1. That’s entirely false. Try

        #include <assert.h>
        #include <stdint.h>

        /* Put 23 random bits into the mantissa of 1.0f to get a float in [1,2),
           then subtract 1.0f; the subtraction is exact, so no bits are lost. */
        static inline double to_double(uint32_t x) {
          const union { uint32_t i; float f; } u = { .i = UINT32_C(0x3F8) << 20 | x };
          return u.f - 1.0f;
        }

        int main() {
          /* every 23-bit integer round-trips exactly */
          for (int i = 0; i < 1 << 23; i++) assert(to_double(i) * (1 << 23) == i);
        }

  2. “meaning that for every integer in [0,2^24), there is one and only one integer in [0,1).”

    Should that read “one and only one floating point number in [0,1)” instead?

  3. I used nextafterf() to count the number of floats between 0.0 and 1.0. The following program takes only a couple seconds to report that there are 1,065,353,216 such values, which is slightly larger than 1,056,964,608. I don’t know why there is a difference.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
      float x = 0.0f;
      int count = 0;

      /* step through every representable float from 0.0 toward 1.0 */
      while (x < 1.0f) {
        x = nextafterf(x, 1.0f);
        count += 1;
      }

      printf("count: %d\n", count);
      return 0;
    }

    1. My guess is that the extra floating point numbers that you detected are denormalized representations which don’t fit the patterns originally presupposed in the article.

      Floating point representations are more complex than they appear.

  4. One reason the count could be wrong is that the post only counts “normal numbers”, but there are extra values between the smallest positive non-zero normal value and zero — the so-called “denormal numbers”. The correct calculation should have been 127 x 2^23 (126 x 2^23 normals, 1 zero, and 2^23 − 1 denormals). Which matches your calculation exactly.

    1. The correct calculation should have been (…)

      I explicitly excluded denormal numbers. It is debatable whether they should be included.

      In any case, it does not change the count very much.

    1. That’s correct. The mean is 0.0119 as you can verify with the following program…

      #include <math.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      int main() {
        double total = 0;
        size_t count = 0;
        float val = 0;
        /* walk every 32-bit pattern and keep the normal floats in [0,1] */
        for(uint32_t x = 0; x < 0xFFFFFFFF; x++) {
          memcpy(&val, &x, sizeof(x));
          if(isnormal(val) && (val >= 0) && (val <= 1.0)) {
            total += val;
            count += 1;
          }
        }
        printf("%f %zu %f\n", total, count, total / count); /* sum, count, mean */
      }
      

      It is very far from 0.5. Of course, you can find a uniform sample of the numbers in [0,1)… but there are many more floats near 0…

  5. For a true uniform number (32 bit float) half the time the exponent should be 126, a quarter of the time 125, one eighth of the time 124, etc. You can force that. A lowest set bit or highest set bit on, say, a random 32 or 64 bit integer will give the right distribution to add/subtract from the exponent. Then just put random bits in the mantissa.
    I guess the error is “just putting random bits in the mantissa.”
    That would probably lead to bias. If you look at the methods in the code by Vigna (a famous person, by the way) there are 2 ways given to get floats.
    Anyway it depends on what you want random numbers for. For evolutionary algorithms it is better to make full use of all the precision available even at the cost of some slight bias. Also for rare event code a uniform random number quantized in 2^24 pieces would be useless. You would never get a number in the interval [1e-27, 1e-26], for example. That is a major bias in that application.

  6. Actually there are an unlimited number of floating point numbers if we get away from the implementation-dependent hardware straitjacket.
    Just one example: take the sequence 0.1, 0.01, …, 0.00001 and so on. Yes, that is an endless progression. Any questions?

  7. The approaches that use division (which is a very complex operation in itself) and convert random integers to floating point seem cumbersome to me. Is it possible to work something out from the bit representation?

    For example, something like this. Draw a random bit, if it equals one, then the exponent is 2^-1 which corresponds to the interval [1/2 .. 1). Otherwise draw another bit, if it equals one the exponent is 2^-2 which corresponds to the interval [1/4 .. 1/2) and so on. This gives us uniformity. Once we have chosen the interval, just fill the mantissa with 23 random bits. Such a generator can certainly produce all possible floating point numbers. It’s also easy to generate denormals this way, if necessary.

    Will something like this work?

      1. Hello! But it’s bit-by-bit only in concept. In reality that’d be very approximately something like:

        e = ctz(random(2^126))
        m = random(2^23)

        Where e is a bit representation of the exponent (so it should be in the [1, 126] interval), and ctz gives you the index of the least-significant non-zero bit.

        In practice that would probably be 2-4 calls to the base random generator and CTZ/CLZ instructions. Surely the division is much more expensive than that! I’m more worried about proving that this approach will give a “good” distribution.
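
        A rough C sketch of that idea (illustrative only; rand64() stands for a hypothetical source of 64 random bits, and __builtin_ctzll is the GCC/Clang count-trailing-zeros builtin):

        #include <stdint.h>

        /* Hypothetical source of 64 uniformly random bits. */
        extern uint64_t rand64(void);

        /* The number of trailing zero bits of a random word is geometrically
           distributed: that picks the binade. 23 more random bits fill the
           mantissa. Exponents below -65 are unreachable from a single 64-bit
           draw (they would occur with probability 2^-64). */
        static float random_float(void) {
          uint64_t r = rand64();
          unsigned tz = (r == 0) ? 64 : (unsigned)__builtin_ctzll(r);
          uint32_t exponent = 126 - tz;                        /* field for 2^(-1-tz) */
          uint32_t mantissa = (uint32_t)(rand64() & 0x7FFFFF); /* 23 random bits */
          union { uint32_t i; float f; } u = { .i = exponent << 23 | mantissa };
          return u.f;
        }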
