In programming, we often represent numbers using types that have specific ranges. For example, 64-bit signed integer types can represent all integers between -9223372036854775808 and 9223372036854775807, inclusively. All integers inside this range are valid, all integers outside are “out of range”. It is simple.

What about floating-point numbers? The nuance with floating-point numbers is that they cannot represent all numbers within a continuous range. For example, the real number 1/3 cannot be represented using binary floating-point numbers. So the convention is that given a textual representation, say “1.1e100”, we seek the closest approximation.

Still, are there ranges of numbers that you should not represent using floating-point numbers? That is, are there numbers that you should reject?

It seems that there are two different interpretations:

- My own interpretation is that floating-point types can represent all numbers from -infinity to infinity, inclusively. It means that ‘infinity’ or 1e9999 are indeed “in range”. For 64-bit IEEE floating-point numbers, this means that numbers smaller than 4.94e-324 but greater than 0 can be represented as 0, and that numbers greater than 1.8e308 should be infinity. To recap, all numbers are always in range.
- For 64-bit numbers, another interpretation is that only numbers in the ranges 4.94e-324 to 1.8e308 and -1.8e308 to -4.94e-324, together with exactly 0, are valid. Numbers that are too small (less than 4.94e-324 but greater than 0) or numbers that are larger than 1.8e308 are “out of range”. Common implementations of the strtod function or of the C++ equivalent follow this convention.

This matters because the C++ specification for the `from_chars` functions states that

> If the parsed value is not in the range representable by the type of value, value is unmodified and the member ec of the return value is equal to errc::result_out_of_range.

I am not sure programmers have a common understanding of this specification.

Floating-point numbers (and real numbers generally) are much more complicated than integers. The range of a float involves more than an upper and a lower bound: you also have to worry about the minimum difference between any two consecutive floating-point numbers.


I find it weird that 0.0 (…400 times 0…) 1 is out of range (not rounded to 0), but 1.0 (…400 times 0…) 1 is not out of range, and rounded instead to 1.

The round-trip behavior might also be weird for numbers very close to (but not exactly) 0: I’m wondering if it will always work as expected if a number is first converted to a string, and then parsed again.

gcc 11.2 can parse 2.22507e-308, but a bit below that is out of range. This doesn’t match the bound of 4.94e-324 that you gave.

It’s probably best to consider the 64-bit floating-point range to be from 1.1e-308 to 1.8e308, i.e. excluding denormal values. Including denormal numbers is problematic because of the loss of precision. Including numbers beyond that is pointless because they are not representable.

Surely, you want to include the two zeros as well as the negative numbers?

Sure, that would make sense.


How could infinity be inclusive? Just going from the definition, it is simply impossible.