# Number of atoms in the universe versus floating-point values

It is a huge number, and it far exceeds the maximal value of a single-precision floating-point type in our current computers (which is about 10^38).

Yet the maximal value that we can represent using the common double-precision floating-point type is larger than 10^308. It is an unimaginably large number. There will never be any piece of engineering involving as many as 10^308 parts.

Using a double-precision floating-point value, we can easily represent the number of atoms in the universe. We could also represent the number of ways you can pick any three individual atoms at random in the universe.
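As a quick Python sketch (taking ~10^80 atoms, a common rough estimate, as an assumption): the number of ways to choose three atoms is about n³/6, and even that fits comfortably in a double:

```python
# rough estimate of the number of atoms in the universe (an assumption)
atoms = 1e80

# ways to choose 3 distinct atoms: n*(n-1)*(n-2)/6, about 1.7e239
triples = atoms * (atoms - 1.0) * (atoms - 2.0) / 6.0

print(triples)  # about 1.7e239, well below the double maximum of ~1.8e308
```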

If your software ever produces a number so large that it will not fit in a double-precision floating-point value, chances are good that you have a bug.
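A minimal sketch of that rule of thumb in Python: a result that escapes the double range shows up as infinity, which is usually a sign of a bug or a poor choice of units:

```python
import math

x = 1e308        # near the top of the double range (~1.8e308)
y = x * 10.0     # overflow: the result is +inf
assert math.isinf(y)

# a simple guard one might add in numeric code
def check_finite(v):
    if not math.isfinite(v):
        raise OverflowError("value escaped the double range; check your units")
```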

Further reading: Lloyd N. Trefethen, Numerical Analysis, Princeton Companion to Mathematics, 2008

### Daniel Lemire

A computer science professor at the University of Quebec (TELUQ).

## 7 thoughts on “Number of atoms in the universe versus floating-point values”

1. Albert says:

An implicit assumption is that the only thing one can do with numbers is count physical objects. (Actually, that would be two assumptions, but who’s counting?) This assumption directly contradicts my experience.
Besides, it is impossible to represent most integers in the range from 0 to 10^308 with float64.

1. Daniel Lemire says:

I did not write that binary64 was good enough for all purposes. That’s not what I believe.

2. Brian Kessler says:

I think it would be more accurate to say you can represent the magnitude of the number of atoms in the universe. A double only has 53 bits of precision, so you can’t use it to “count” that high, but you can represent the leading 53 bits (~17 decimal digits) of a number that large.
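The 53-bit limit is easy to check in Python, where a float is a binary64 double: past 2^53, consecutive integers are no longer distinguishable:

```python
n = 2.0 ** 53                # 9007199254740992.0

assert n + 1.0 == n          # counting stops being exact here
assert (n - 1.0) + 1.0 == n  # below 2^53, integer arithmetic is still exact
```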

But yes, if you exceed the range of a double, you likely have an issue with your calculation such as poor choice of units.

3. traski says:

*Observable universe

4. Marcos says:

Nobody will ever make a machine with 2^53 parts either, so for counting a double is enough.

The entire problem with floating point numbers is loss of precision during calculation. That one is a completely mathematical phenomenon, so pointing at physics is missing the point, and errors do accumulate in a superlinear fashion, so that large mantissa is way less useful than it looks.

There is a reason why quad-precision floats exist.

1. Daniel Lemire says:

Again: this post was not a defence of binary64 in general.

5. Christopher Chang says:

It’s worth noting that this rule of thumb is not true in the other direction: likelihood values between 0 and the smallest positive value representable by a double (~5 * 10^{-324}) frequently show up. This can sometimes be worked around by normalizing against the likelihood of a specific event, but library support for log-likelihoods is very valuable.
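For instance, in Python (where a float is a binary64 double), multiplying small likelihoods underflows to zero, while summing log-likelihoods stays well within range — a sketch of the workaround described above:

```python
import math

p = 1e-200                 # a small but representable likelihood
assert p * p == 0.0        # 1e-400 underflows: it is below ~5e-324

# working in log space avoids the underflow entirely
log_joint = math.log(p) + math.log(p)   # about -921, perfectly representable
assert math.isfinite(log_joint)
```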
