In most programming languages, the value 0.1 + 0.2 differs from 0.3. Let us try it out in Node (JavaScript):

> 0.1 + 0.2 == 0.3
false

Yet 1 + 2 is equal to 3.

Why is that? Let us look at it a bit more closely. In most instances, your computer will represent numbers like 0.1 or 0.2 using the binary64 format. In this format, numbers are represented as a 53-bit mantissa (an integer between 2^{52} and 2^{53}) multiplied by a power of two.
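You can inspect this representation yourself. Here is a quick sketch in Python: `as_integer_ratio` reports the exact stored value as a fraction whose denominator is a power of two. (It reports 3602879701896397 over 2^{55}, which is the same value as 7205759403792794 times 2^{-56} after doubling both numerator and denominator.)

```python
import math

# A binary64 float is stored as significand * 2**exponent.
# Python can report the exact stored value as a ratio of integers:
num, den = (0.1).as_integer_ratio()
print(num, den)        # 3602879701896397 36028797018963968
print(den == 2**55)    # True: the denominator is a power of two
print(math.log2(den))  # 55.0
```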

When you type 0.1 or 0.2, the computer does not represent 0.1 or 0.2 exactly.

Instead, it tries to find the closest representable value. For the number 0.1, the best match is 7205759403792794 times 2^{-56}. It is just slightly larger than 0.1: about 0.10000000000000000555. The computer could have used 7205759403792793 times 2^{-56} (about 0.099999999999999991) instead, but it is a slightly worse approximation.

For 0.2, the computer will use 7205759403792794 times 2^{-55}, or about 0.2000000000000000111. Again, this is just slightly larger than 0.2.

What about 0.3? The computer will use 5404319552844595 times 2^{-54}, or approximately 0.29999999999999998889776975, just under 0.3.

When the computer adds 0.1 and 0.2, it no longer has any idea what the original numbers were. It only has 0.10000000000000000555 and 0.2000000000000000111. When it adds them together, it seeks the best approximation to the sum of these two numbers. Unsurprisingly, it finds that a value just above 0.3 is the best match: 5404319552844596 times 2^{-54}, or approximately 0.30000000000000004440.
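Printing extra digits makes these hidden approximations visible. A short Python sketch:

```python
# Asking for 20 decimal digits exposes the approximations described above.
print(f"{0.1:.20f}")        # 0.10000000000000000555
print(f"{0.2:.20f}")        # 0.20000000000000001110
print(f"{0.3:.20f}")        # 0.29999999999999998890
print(f"{0.1 + 0.2:.20f}")  # 0.30000000000000004441
print(0.1 + 0.2 == 0.3)     # False
```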

And that is why 0.1 + 0.2 is not equal to 0.3 in software. When you combine different sequences of approximations, even if the exact values would be equal, there is no reason to expect that the approximations will match.

If you are working a lot with decimals, you can rely on another number type, the decimal. It is much slower, but it does not have this exact problem since it is designed specifically for decimal values (here in Python):

>>> Decimal(1)/Decimal(10) + Decimal(2)/Decimal(10)
Decimal('0.3')

However, decimals have other problems:

>>> Decimal(1)/Decimal(3)*Decimal(3) == Decimal(1)
False
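The reason is that decimals still round: division is carried out to the context precision (28 significant digits by default in Python's decimal module), so 1/3 is truncated and multiplying it back by 3 does not recover 1. A small sketch:

```python
from decimal import Decimal, getcontext

print(getcontext().prec)         # 28 significant digits by default
print(Decimal(1) / Decimal(3))   # 0.3333333333333333333333333333
print(Decimal(1) / Decimal(3) * Decimal(3))  # 0.9999999999999999999999999999
```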

What is going on? Why can't computers handle numbers the way human beings do?

Computers can do computations the way human beings do. For example, WolframAlpha has none of the problems above. Effectively, it gives the impression that it processes values a bit like human beings do. But it is slow.
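One way a computer can get exact answers, short of full symbolic processing, is exact rational arithmetic. A minimal sketch using Python's fractions module, where no rounding ever occurs:

```python
from fractions import Fraction

# Exact rational arithmetic: every value is a ratio of integers,
# so both of the earlier "surprises" disappear.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
print(Fraction(1, 3) * 3 == 1)                               # True
```

The cost is that numerators and denominators can grow without bound as a computation proceeds, which is part of why this approach is slow.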

You may think that, computers being so fast, there is really no reason to be inconvenienced for the sake of speed. That may well be true, but many software projects that start out believing performance is irrelevant end up being asked to optimize later. And it can be really difficult to engineer speed back into a system that sacrificed performance at every step.

Speed matters.

AMD recently released its latest processors (zen3). They are expected to be 20% faster than their previous family of processors (zen2). This 20% performance boost is viewed as a remarkable achievement. Going only 20% faster is worth billions of dollars to AMD.

-I’m very fast in maths!

-Ok, what’s 56×38?

-450!

-That’s not right…

-I said I was fast, not precise!

I actually wonder what WolframAlpha does internally, and whether it works as neatly as you describe. For instance, if you input (0.1^(1/1000))^1000 without pressing enter, it shows an imprecise approximation (frankly, this could be a JavaScript hack!), which would indicate it is at least not fully symbolic (for instance, it doesn't replace 0.1 with 1/10, which would show up on other computations). Final results (after pressing enter) are impressively correct, though.

WolframAlpha is based on Mathematica (or what they like to call "Wolfram Language"), which has traditionally taken a slightly different approach to numbers: it has exact values (effectively built up from integers, predefined constants and symbolic solutions built from these), approximate numbers with tracked precision, and machine-precision numbers.

When you enter something like 0.1 + 0.2 in Mathematica, these numbers are machine-precision reals – effectively the binary64 type. 0.1 + 0.2 == 0.3 returns True in Mathematica, but this is not because it performs symbolic or decimal-representation arithmetic; rather, Mathematica ignores a couple of the least significant bits of the mantissa when comparing, since it knows rounding errors are going to creep in – choosing different semantics (with different tradeoffs). (One can also evaluate 0.1 + 0.2 // InputForm in Mathematica and see that rounding errors indeed creep in on this computation.)
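This kind of forgiving comparison can be roughly mimicked (this is not Mathematica's actual algorithm) with a relative-tolerance test; in Python, math.isclose is a simple stand-in:

```python
import math

# A tolerant comparison accepts values that differ only in the
# least significant bits, unlike the exact == comparison.
print(math.isclose(0.1 + 0.2, 0.3))  # True
print(0.1 + 0.2 == 0.3)              # False
```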

I suspect WolframAlpha has some sort of heuristics to remove binary floating point kinks from the layperson user experience. What these heuristics precisely are is not immediately obvious to me. It definitely doesn’t straight away replace 0.1 with 1/10…

Scaled integers or rationals? In my view these are good approaches for this type of problem. Rational data types were a nice surprise when I started to use Haskell.
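The scaled-integer idea is simple enough to sketch in a few lines: pick a fixed scale (say, cents for money) and do all arithmetic on plain integers, which is exact.

```python
# Scaled integers: store money as an integer number of cents.
price_a = 10  # $0.10
price_b = 20  # $0.20
total = price_a + price_b

print(total == 30)  # True: integer arithmetic is exact
print(f"${total // 100}.{total % 100:02d}")  # formatted back to dollars
```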

The Android Calculator uses an internal number representation that is pretty much indistinguishable from real numbers for a human user.

See this article by Hans Boehm:

https://dl.acm.org/doi/abs/10.1145/3385412.3386037

Adrian Colyer’s blog had a writeup on some recent work of his: https://blog.acolyer.org/2020/10/02/toward-an-api-for-the-real-numbers/

COBOL allows one to define a variable as containing an arbitrary number of whole and decimal digits, like all modern languages that dare to represent "business logic" should.
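A fixed-decimal discipline in that spirit can be approximated in Python by quantizing after each operation (the mapping to a COBOL picture clause like PIC 9(5)V99 is my loose analogy, not an exact equivalent):

```python
from decimal import Decimal, ROUND_HALF_UP

# Fixed two decimal places, enforced by rounding after construction.
CENTS = Decimal("0.01")

def fixed(x):
    """Return x as a Decimal rounded to exactly two decimal places."""
    return Decimal(x).quantize(CENTS, rounding=ROUND_HALF_UP)

print(fixed("0.1") + fixed("0.2") == fixed("0.3"))  # True
```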