C++ / C#
adygha, 2022-01-06 22:37:58

Why does 0.2f + 0.3f == 0.5f?

I already know about the problem of storing fractional numbers in binary form in computer memory, i.e. it is clear to me why the comparison 0.1 + 0.2 == 0.3 does not hold.

But I don't understand why the comparison 0.2f + 0.3f == 0.5f turns out to be true.
I ran the following code:

#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    cout << setprecision(64)
        << "0.3 = " << 0.3 << "\n"
        << "0.2 = " << 0.2 << "\n"
        << "0.2 + 0.3 = " << 0.2 + 0.3 << "\n"
        << "0.3f = " << 0.3f << "\n"
        << "0.2f = " << 0.2f << "\n"
        << "0.2f + 0.3f = " << 0.2f + 0.3f << "\n";
}


I get the following output:

0.3 = 0.299999999999999988897769753748434595763683319091796875
0.2 = 0.200000000000000011102230246251565404236316680908203125
0.2 + 0.3 = 0.5
0.3f = 0.300000011920928955078125
0.2f = 0.20000000298023223876953125
0.2f + 0.3f = 0.5


That is, why 0.2 + 0.3 with type double prints 0.5 is clear to me: if you add up the stored values shown above, the sum really is exactly 0.5.
But why is 0.2f + 0.3f also output as 0.5 and not as 0.50000001490116119384765625?
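
One way to confirm that the float sum really is bit-for-bit equal to 0.5f is to compare the raw bit patterns (a minimal sketch, assuming IEEE 754 single-precision floats and C++20 for std::bit_cast):

#include <bit>
#include <cstdint>
#include <iostream>

int main() {
    // Reinterpret the float bit patterns as integers to compare exact representations.
    std::uint32_t sum  = std::bit_cast<std::uint32_t>(0.2f + 0.3f);
    std::uint32_t half = std::bit_cast<std::uint32_t>(0.5f);
    std::cout << std::hex << sum << " " << half << "\n";    // 3f000000 3f000000
    std::cout << (sum == half ? "identical" : "different") << "\n";
}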


2 answers
Rsa97, 2022-01-06
@adygha

0.2 = 1.10011001100110011001101_2 * 2^-3
0.3 = 1.00110011001100110011010_2 * 2^-2
Bring both to the larger exponent, keeping the same number of binary digits:
0.2 = 0.11001100110011001100110_2 * 2^-2
Adding them, we get
  0.11001100110011001100110_2 * 2^-2
+ 1.00110011001100110011010_2 * 2^-2
= 10.00000000000000000000000_2 * 2^-2
= 1.00000000000000000000000_2 * 2^-1 = 0.1_2 = 0.5
The low bit of 0.2f's significand is dropped during the shift, and the rounding errors of the two constants then cancel exactly, so the float sum is exactly 0.5. Even if the dropped bit were kept, the exact sum 0.50000001490116119384765625 exceeds 0.5 by only a quarter of an ulp, so round-to-nearest would still give exactly 0.5.
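
To check this on a real machine, hexfloat output (C++11) shows the significands directly. A small sketch; the comments show the values an IEEE 754 implementation produces, though the exact digit formatting can vary between standard libraries:

#include <iostream>

int main() {
    std::cout << std::hexfloat
              << 0.2f << "\n"             // 0x1.99999ap-3
              << 0.3f << "\n"             // 0x1.333334p-2
              << (0.2f + 0.3f) << "\n";   // 0x1p-1, i.e. exactly 0.5
}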

vanyamba-electronics, 2022-01-10
@vanyamba-electronics

It is worth recalling here that 100%-exact numbers do not exist in nature: any measured value is a rounding to some required precision. The computer, however, is built on binary logic, and therefore works best with integers.
So when you let the user operate on floating-point numbers, you should always give them a way to set the required calculation precision, and then perform all further operations on those numbers according to that requirement.
Because even if to you, as a programmer, there is no difference between 0.1f and 0.9999999999999f, to any user that difference is quite obvious.
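
As a sketch of that idea, a comparison helper can take the required precision as a parameter. The name nearlyEqual and the absolute-tolerance approach here are just one illustration, not the only way to do it:

#include <cmath>
#include <iostream>

// Compare two values up to a caller-supplied absolute tolerance 'eps':
// the precision the user of the calculation actually cares about.
bool nearlyEqual(double a, double b, double eps) {
    return std::fabs(a - b) <= eps;
}

int main() {
    std::cout << std::boolalpha
              << (0.1 + 0.2 == 0.3) << "\n"                  // false: exact comparison fails
              << nearlyEqual(0.1 + 0.2, 0.3, 1e-9) << "\n";  // true: equal within tolerance
}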
