It is estimated that there are about 10^{80} atoms in the universe. The estimate for the total number of electrons is similar.

It is a huge number and it far exceeds the maximal value of a single-precision floating-point type in our current computers (which is about 10^{38}).

Yet the maximal value that we can represent using the common double-precision floating-point type is larger than 10^{308}. It is an unimaginably large number. There will never be any piece of engineering involving as many as 10^{308} parts.
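You can check these limits directly. A quick Python sketch (assuming CPython, where `float` is an IEEE-754 binary64; the single-precision maximum is reconstructed from its bit pattern since Python has no native `float32`):

```python
import struct
import sys

# Largest finite double-precision (binary64) value: about 1.8 * 10^308.
print(sys.float_info.max)  # 1.7976931348623157e+308

# Largest finite single-precision (binary32) value: about 3.4 * 10^38.
# Built from its bit pattern: maximal exponent (254), all-ones significand.
max_f32 = struct.unpack('f', struct.pack('I', 0x7F7FFFFF))[0]
print(max_f32)  # 3.4028234663852886e+38
```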

Using a double-precision floating-point value, we can easily represent the number of atoms in the universe. We could also represent the number of ways you can pick any three individual atoms at random in the universe.
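To see that this larger count still fits, here is a rough Python sketch: the number of ways to choose 3 atoms out of roughly 10^{80} is n(n-1)(n-2)/6, about 1.7 * 10^{239}, and every intermediate product stays well below the ~1.8 * 10^{308} ceiling:

```python
n = 1e80  # rough atom count, stored as a double

# n choose 3 = n*(n-1)*(n-2)/6; the largest intermediate is ~1e240,
# comfortably below the ~1.8e308 double-precision maximum.
triples = n * (n - 1.0) * (n - 2.0) / 6.0
print(triples)  # about 1.67e+239, a finite double
```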

If your software ever produces a number so large that it will not fit in a double-precision floating-point value, chances are good that you have a bug.

**Further reading**: Lloyd N. Trefethen, "Numerical Analysis," in *The Princeton Companion to Mathematics*, 2008.

Daniel Lemire, "Number of atoms in the universe versus floating-point values," in *Daniel Lemire's blog*, March 15, 2020.

An implicit assumption is that the only thing one can do with numbers is count physical objects. (Actually, that would be two assumptions, but who’s counting?) This assumption directly contradicts my experience.

Besides, it is impossible to represent most integers in the range between 0 and 10^{308} with float64.

I did not write that binary64 was good enough for all purposes. That’s not what I believe.

I think it would be more accurate to say you can represent the magnitude of the number of atoms in the universe. A double only has 53 bits of precision, so you can’t use it to “count” that high, but you can represent the leading 53 bits (~17 decimal digits) of a number that large.
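A quick Python illustration of this distinction (assuming an IEEE-754 binary64 `float`): counting breaks down past 2^53, even though much larger magnitudes remain representable.

```python
# binary64 has a 53-bit significand: every integer up to 2**53 is exact,
# but beyond that point consecutive integers start to collide.
print(float(2**53) == float(2**53 + 1))  # True: cannot "count" past 2**53
print(float(2**53 - 1) == float(2**53))  # False: still exact below it

# The magnitude 1e80 is representable, just not every integer near it.
print(1e80 + 1.0 == 1e80)  # True: adding 1 is lost to rounding
```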

But yes, if you exceed the range of a double, you likely have an issue with your calculation such as poor choice of units.

Nobody will ever make a machine with 2^53 parts either, so for counting, a double is enough.

The entire problem with floating-point numbers is loss of precision during calculation. That is a purely mathematical phenomenon, so pointing at physics misses the point, and errors can accumulate in a superlinear fashion, so that large significand is far less useful than it looks.
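A small Python sketch of this accumulation effect: repeatedly adding 0.1 (which is not exactly representable in binary64) lets rounding errors pile up, while `math.fsum` tracks and corrects them.

```python
import math

N = 1_000_000

# Naive accumulation: each addition rounds, and the errors build up.
naive = 0.0
for _ in range(N):
    naive += 0.1

# math.fsum compensates for rounding, leaving only the error inherent
# in 0.1's binary representation.
exact = math.fsum([0.1] * N)

print(naive)  # noticeably off from 100000.0
print(exact)  # much closer to 100000.0
```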

There is a reason why quad-precision floats exist.

Again: this post was not a defence of binary64 in general.

It’s worth noting that this rule of thumb is not true in the other direction: likelihood values between 0 and the smallest positive value representable by a double (~5 * 10^{-324}) frequently show up. This can sometimes be worked around by normalizing against the likelihood of a specific event, but library support for log-likelihoods is very valuable.
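For what it’s worth, here is a Python sketch of that underflow and the log-likelihood workaround, using a made-up example of 2000 observations each with likelihood 0.001:

```python
import math

# Multiplying 2000 likelihoods of 0.001 should give 1e-6000, but that is
# far below the smallest positive double (~5e-324), so the product
# silently underflows to zero.
p = 1.0
for _ in range(2000):
    p *= 0.001
print(p)  # 0.0

# Working in log space keeps the quantity representable: the
# log-likelihood is just a modest negative number.
log_p = sum(math.log(0.001) for _ in range(2000))
print(log_p)  # about -13815.5
```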