
Floating-point numbers are represented in computer hardware as base-2 (binary) fractions. For example, the decimal fraction:

```
0.125
```

has the value 1/10 + 2/100 + 5/1000 and, in the same way, the binary fraction:

```
0.001
```

has the value 0/2 + 0/4 + 1/8. These two fractions have the same value, the only difference is that the first is a decimal fraction, the second is a binary fraction.

Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. In general, therefore, the decimal floating-point numbers you enter are only approximated by the binary fractions actually stored in the machine.
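You can inspect the exact fraction stored for any float with `float.as_integer_ratio()`; a short standard-library sketch, not part of the original answer:

```python
# 0.125 has a power-of-two denominator (1/8), so it is stored exactly...
print((0.125).as_integer_ratio())  # (1, 8)

# ...whereas 0.1 is silently replaced by the nearest representable
# binary fraction, whose denominator is 2 ** 55.
print((0.1).as_integer_ratio())    # (3602879701896397, 36028797018963968)
```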

The problem is easier to grasp in base 10. Take, for example, the fraction 1/3. You can approximate it as a decimal fraction:

```
0.3
```

or better,

```
0.33
```

or better,

```
0.333
```

and so on. No matter how many digits you write, the result is never exactly 1/3, but it is an ever closer approximation of 1/3.

Likewise, no matter how many base-2 digits you use, the decimal value 0.1 cannot be represented exactly as a binary fraction. In base 2, 1/10 is the infinitely repeating fraction:

```
0.0001100110011001100110011001100110011001100110011 ...
```

Stop at any finite amount of bits, and you’ll get an approximation.

For Python, on a typical machine, 53 bits are used for the precision of a float, so the value stored when you enter the decimal 0.1 is the binary fraction:

```
0.00011001100110011001100110011001100110011001100110011010
```

which is close, but not exactly equal, to 1/10.

It’s easy to forget that the stored value is an approximation of the original decimal fraction, due to the way floats are displayed in the interpreter. Python only displays a decimal approximation of the value stored in binary. If Python were to output the true decimal value of the binary approximation stored for 0.1, it would output:

```
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
```

This is a lot more decimal places than most people would expect, so Python displays a rounded value to improve readability:

```
>>> 0.1
0.1
```

It is important to understand that in reality this is an illusion: the stored value is not exactly 1/10; it is simply that the stored value is rounded on display. This becomes evident as soon as you perform arithmetic with these values:

```
>>> 0.1 + 0.2
0.30000000000000004
```

This behavior is inherent to the very nature of the machine’s floating-point representation: it is not a bug in Python, nor is it a bug in your code. You can observe the same type of behavior in all other languages that use hardware support for calculating floating point numbers (although some languages do not make the difference visible by default, or not in all display modes).
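Because of this behavior, testing floats for equality after arithmetic is fragile; one common remedy, offered here as a standard-library aside rather than part of the original answer, is `math.isclose()`:

```python
import math

# Direct equality fails because of the representation error...
print(0.1 + 0.2 == 0.3)              # False

# ...but a tolerance-based comparison gives the intended answer.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```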

Another surprise follows from this one. If, for example, you try to round the value 2.675 to two decimal places, you get:

```
>>> round(2.675, 2)
2.67
```

The documentation for the round() built-in says that it rounds to the nearest value, with ties rounded away from zero. Since the decimal fraction 2.675 is exactly halfway between 2.67 and 2.68, you might expect the result to be (a binary approximation of) 2.68. It is not, however, because when the decimal fraction 2.675 is converted to a float, it is stored as an approximation whose exact value is:

```
2.67499999999999982236431605997495353221893310546875
```

Since this approximation is slightly closer to 2.67 than to 2.68, it gets rounded down.
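You can verify which neighbor is closer using the fractions module, which recovers the exact binary value stored for a float; a standard-library sketch:

```python
from fractions import Fraction

# Fraction(float) recovers the exact binary value stored for the literal 2.675.
exact = Fraction(2.675)

# Compare the exact distances to the two decimal neighbors.
below = exact - Fraction(267, 100)   # distance down to 2.67
above = Fraction(268, 100) - exact   # distance up to 2.68
print(below < above)  # True: the stored value really is closer to 2.67
```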

If you are in a situation where the rounding of decimal halfway cases matters, you should use the decimal module. Incidentally, the decimal module also provides a convenient way to “see” the exact value stored for any float:

```
>>> from decimal import Decimal
>>> Decimal(2.675)
Decimal('2.67499999999999982236431605997495353221893310546875')
```
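Beyond inspecting stored values, the decimal module can carry out the arithmetic itself; constructing Decimal from strings sidesteps the binary approximation entirely (a brief standard-library sketch):

```python
from decimal import Decimal

# String inputs are taken at face value, so no binary rounding occurs...
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True

# ...and halfway cases round the way decimal arithmetic dictates
# (the default context rounds ties to the even neighbor).
print(Decimal('2.675').quantize(Decimal('0.01')))  # 2.68
```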

Another consequence of the fact that 0.1 is not stored exactly as 1/10 is that summing ten values of 0.1 does not give exactly 1.0 either:

```
>>> total = 0.0
>>> for i in range(10):
...     total += 0.1
...
>>> total
0.9999999999999999
```

The arithmetic of binary floating point numbers holds many such surprises. The problem with “0.1” is explained in detail below, in the section “Representation errors”. See The Perils of Floating Point for a more complete list of such surprises.
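For summation in particular, the standard library provides `math.fsum()`, which compensates for the intermediate rounding errors; a sketch, not mentioned in the original answer:

```python
import math

# Naive summation accumulates a fresh rounding error at every step...
print(sum([0.1] * 10))        # 0.9999999999999999

# ...while math.fsum tracks the lost low-order bits and returns
# the correctly rounded result.
print(math.fsum([0.1] * 10))  # 1.0
```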

It is true that there is no simple answer; still, do not be overly wary of floating-point numbers! In Python, errors in floating-point operations come from the underlying hardware, and on most machines they amount to no more than 1 part in 2 ** 53 per operation. That is more precision than most tasks need, but you should keep in mind that these are not decimal operations, and every operation on floating-point numbers may introduce a new rounding error.

Although pathological cases exist, for most common use cases you will get the expected result by simply rounding the display to the number of decimal places you want. For fine control over how floats are displayed, see String Formatting Syntax for the formatting specifications of the str.format() method.

This part of the answer explains the “0.1” example in detail and shows how you can perform an exact analysis of such cases on your own. We assume that you are familiar with the binary representation of floating-point numbers. The term representation error means that most decimal fractions cannot be represented exactly in binary. This is the main reason why Python (or Perl, C, C++, Java, Fortran, and many others) usually does not display the exact decimal result:

```
>>> 0.1 + 0.2
0.30000000000000004
```

Why? 1/10 and 2/10 are not representable exactly as binary fractions. However, practically all machines today (as of July 2010) follow the IEEE-754 standard for floating-point arithmetic, and most platforms use IEEE-754 double precision to represent Python floats. IEEE-754 double precision carries 53 bits of precision, so on input the computer tries to convert 0.1 to the nearest fraction of the form J / 2 ** N, with J an integer of exactly 53 bits. Rewriting:

```
1 / 10 ~= J / (2 ** N)
```

as:

```
J ~= 2 ** N / 10
```

and remembering that J has exactly 53 bits (so >= 2 ** 52 but < 2 ** 53), the best possible value for N is 56:

```
>>> 2 ** 52
4503599627370496
>>> 2 ** 53
9007199254740992
>>> 2 ** 56 // 10
7205759403792793
```

So 56 is the only possible value for N which leaves exactly 53 bits for J. The best possible value for J is therefore this quotient, rounded:

```
>>> q, r = divmod(2 ** 56, 10)
>>> r
6
```

Since the remainder is more than half of 10, the best approximation is obtained by rounding up:

```
>>> q + 1
7205759403792794
```

Therefore the best possible approximation of 1/10 in IEEE-754 double precision is this quotient over 2 ** 56, that is:

```
7205759403792794/72057594037927936
```

Note that since the rounding was done upward, the result is actually slightly greater than 1/10; if we hadn’t rounded up, the quotient would have been slightly less than 1/10. But in no case is it exactly 1/10!
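This derivation can be checked directly with the fractions module, assuming nothing beyond the standard library:

```python
from fractions import Fraction

# The float 0.1 is exactly the rounded quotient over 2 ** 56 derived above.
print(Fraction(0.1) == Fraction(7205759403792794, 2 ** 56))  # True

# And, since we rounded up, it is slightly greater than the true 1/10.
print(Fraction(0.1) > Fraction(1, 10))  # True
```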

So the computer never “sees” 1/10: what it sees is the exact fraction given above, the best IEEE-754 double-precision approximation it can make:

```
>>> 0.1 * 2 ** 56
7205759403792794.0
```

If we multiply this fraction by 10 ** 30, we can observe its 30 most significant decimal digits:

```
>>> 7205759403792794 * 10 ** 30 // 2 ** 56
100000000000000005551115123125L
```

meaning that the exact value stored in the computer is approximately equal to the decimal value 0.100000000000000005551115123125. In versions prior to Python 2.7 and Python 3.1, Python rounded these values to 17 significant decimal digits, displaying “0.10000000000000001”. In current versions, Python displays the shortest decimal string that converts back to exactly the same binary value, simply showing “0.1”.
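The round-trip property behind the modern display can be demonstrated directly; a small sketch:

```python
# repr() yields the shortest decimal string that converts back to
# exactly the same binary value (Python >= 2.7 / 3.1).
s = repr(0.1)
print(s)                # 0.1
print(float(s) == 0.1)  # True: the short string round-trips exactly
```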
