Why is 0.1 + 0.2 not equal to 0.3?

In most programming languages, the value 0.1 + 0.2 differs from 0.3. Let us try it out in Node (JavaScript):

> 0.1 + 0.2 == 0.3
false

Yet 1 + 2 is equal to 3.

Why is that? Let us look at it a bit more closely. In most instances, your computer will represent numbers like 0.1 or 0.2 using the binary64 format. In this format, numbers are represented using a 53-bit mantissa (an integer between 2^52 and 2^53) multiplied by a power of two.
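
We can see this decomposition from Python, using the standard math.frexp function to split a float into its fraction and exponent (a small sketch; the variable names are mine):

```python
import math

# binary64 stores 0.1 as a 53-bit integer significand times a power of two.
# math.frexp splits a float into m in [0.5, 1) and e such that x == m * 2**e.
x = 0.1
m, e = math.frexp(x)
significand = int(m * 2**53)        # the 53-bit integer mantissa
assert 2**52 <= significand < 2**53
assert x == significand * 2**(e - 53)
print(significand, e - 53)          # → 7205759403792794 -56
```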

When you type 0.1 or 0.2, the computer does not represent 0.1 or 0.2 exactly.

Instead, it tries to find the closest representable value. For the number 0.1, the best match is 7205759403792794 times 2^-56. It is just slightly larger than 0.1: about 0.10000000000000000555. The computer could have used 7205759403792793 times 2^-56 (about 0.099999999999999991) instead, but that is a slightly worse approximation.

For 0.2, the computer will use 7205759403792794 times 2^-55, or about 0.2000000000000000111. Again, this is just slightly larger than 0.2.

What about 0.3? The computer will use 5404319552844595 times 2^-54, or approximately 0.29999999999999998889776975, so just under 0.3.
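
These exact representations are easy to verify in Python, where the equality is exact because both sides denote the very same binary64 value:

```python
# The exact fractions quoted above, checked directly:
assert 0.1 == 7205759403792794 * 2**-56
assert 0.2 == 7205759403792794 * 2**-55
assert 0.3 == 5404319552844595 * 2**-54
# as_integer_ratio() returns the same fraction in lowest terms:
assert (0.1).as_integer_ratio() == (3602879701896397, 2**55)
```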

When the computer adds 0.1 and 0.2, it no longer has any idea what the original numbers were. It only has 0.10000000000000000555 and 0.2000000000000000111. When it adds them together, it seeks the best approximation to the sum of these two numbers. Unsurprisingly, it finds that a value just above 0.3 is the best match: 5404319552844596 times 2^-54, or approximately 0.30000000000000004440.
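
In other words, the sum lands one unit in the last place above the best approximation of 0.3:

```python
# The computed sum is the binary64 neighbor just above 0.3's representation.
s = 0.1 + 0.2
assert s == 5404319552844596 * 2**-54
assert 0.3 == 5404319552844595 * 2**-54   # one unit in the last place below
assert s != 0.3
print(f"{s:.20f}")   # → 0.30000000000000004441
```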

And that is why 0.1 + 0.2 is not equal to 0.3 in software. When you chain different sequences of approximations, there is no reason to expect your approximations to match, even when the exact values would be equal.

If you are working a lot with decimal values, you can rely on another number type, the decimal (in Python, the decimal module). It is much slower, but it does not have this exact problem since it is designed specifically for decimal values:

>>> Decimal(1)/Decimal(10) + Decimal(2)/Decimal(10)
Decimal('0.3')

However, decimals have other problems:

>>> Decimal(1)/Decimal(3)*Decimal(3) == Decimal(1)
False
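
The culprit is precision: Python's Decimal defaults to 28 significant digits, so 1/3 gets rounded before it is multiplied back:

```python
from decimal import Decimal, getcontext

getcontext().prec = 28                  # the default: 28 significant digits
third = Decimal(1) / Decimal(3)         # 0.3333333333333333333333333333
assert third * Decimal(3) == Decimal("0.9999999999999999999999999999")
assert third * Decimal(3) != Decimal(1)
```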

What is going on? Why can’t computers handle numbers the way human beings do?

Computers can do computations the way human beings do. For example, WolframAlpha has none of the problems above. Effectively, it gives the impression that it processes values a bit like human beings do. But it is slow.
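
One way to get a taste of exact arithmetic in an ordinary language is Python's fractions module, which stores numbers as exact ratios of integers (a sketch of the idea, not a description of how WolframAlpha works internally):

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding ever happens.
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
assert Fraction(1, 3) * 3 == 1
# The price: numerators and denominators can grow without bound,
# which is one reason this approach is slower than binary64.
```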

You may think that, computers being so fast, there is really no reason to be inconvenienced for the sake of speed. And that may well be true, but many software projects that start out believing that performance is irrelevant end up being asked to optimize later. And it can be really difficult to engineer speed back into a system that sacrificed performance at every step.

Speed matters.

AMD recently released its latest processors (zen3). They are expected to be 20% faster than their previous family of processors (zen2). This 20% performance boost is viewed as a remarkable achievement. Going only 20% faster is worth billions of dollars to AMD.

Published by

Daniel Lemire

A computer science professor at the University of Quebec (TELUQ).

8 thoughts on “Why is 0.1 + 0.2 not equal to 0.3?”

  1. -I’m very fast in maths!
    -Ok, what’s 56×38?
    -That’s not right…
    -I said I was fast, not precise!

  2. Computers can do computations the way human beings do. For example, WolframAlpha has none of the problems above because it uses symbolic computations.

    I actually wonder what WolframAlpha does internally, and if it works as neatly as you describe. For instance, if you input (0.1^(1/1000))^1000 without pressing enter, it shows an imprecise approximation (frankly, this could be a Javascript hack!), which would indicate it is at least not fully symbolic (for instance, it doesn’t replace 0.1 with 1/10, which would show up on other computations). Final results (after pressing enter) are impressively correct, though.

    WolframAlpha is based on Mathematica (or what they like to call “Wolfram Language”) which has traditionally taken a little different approach on numbers: it has exact values (effectively built up from integers, predefined constants and symbolic solutions built from these), approximate numbers with tracked precision, and machine-precision numbers.

    When you enter something like 0.1 + 0.2 in Mathematica, these numbers are machine-precision reals – effectively binary64 type. 0.1 + 0.2 == 0.3 returns True in Mathematica, but this is not because it would perform symbolic or decimal presentation arithmetic, but because Mathematica ignores couple least significant bits of the mantissa as it knows rounding errors are going to creep in, choosing different semantics (with different tradeoffs). (One can also evaluate 0.1 + 0.2 // InputForm in Mathematica and see that rounding errors indeed creep in on this computation.)

    I suspect WolframAlpha has some sort of heuristics to remove binary floating point kinks from the layperson user experience. What these heuristics precisely are is not immediately obvious to me. It definitely doesn’t straight away replace 0.1 with 1/10…

  3. Scaled integers or rationals? In my view these are good approaches for this type of problem. Rational data types were a nice surprise when I started to use Haskell.

  4. COBOL allows one to define a variable as containing any arbitrary number of whole and decimal digits, like all modern languages that dare to represent “business logic” should.

  5. Thanks for your article. I’m still wondering why, for some reason, Java is not consistent when computing floats and doubles.

    for instance :
    0.1d + 0.2d is not equal to 0.3d (as you explained in your article).

    But 0.1f + 0.2f (the same operation using float, with a mantissa of 24 bits) IS equal to 0.3. Following the same logic, it shouldn’t be: 0.1 + 0.2 should be equal to 0.30000004.

    0.3 is internally represented as 0.3000000119
    0.1 is internally represented as 0.10000000149
    0.2 is internally represented as 0.20000000298
    so, 0.1 + 0.2 as 0.30000000447
    and the closest representable matching value should be 0.3000000417, not 0.3000000119…

    Any ideas why this is inconsistent ?

    1. As 32-bit numbers, 0.1f is represented as 26843546*2**-28, so slightly over 0.1 (about 0.10000000149).

      0.2f is represented as 26843546*2**-27, so slightly over 0.2 (about 0.20000000298).

      0.3f is represented as 20132660*2**-26, so slightly over 0.3 (about 0.3000000119).

      If you were to assume that the sum is lossless, you would indeed expect about 0.30000000447034836, but when computing the sum, the processor rounds up to about 0.3000000119.

      Doing the computation manually, we get that the mantissa of 0.1f + 0.2f should be round(((26843546*2)+26843546)/4.0) = round(20132659.5) = 20132660 under round-to-even.
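
This reply's arithmetic can be checked in Python by rounding through a 32-bit float with the struct module (a sketch; numpy.float32 would serve equally well):

```python
import struct

def f32(x):
    # Round a Python double to the nearest binary32 value and back.
    return struct.unpack("<f", struct.pack("<f", x))[0]

# 32-bit addition: round each operand to float32, add, round the sum.
s = f32(f32(0.1) + f32(0.2))
assert s == f32(0.3)           # in float32, 0.1f + 0.2f does equal 0.3f
assert 0.1 + 0.2 != 0.3        # while in binary64 it does not
```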
