C# – Why is the result different for this problem?

Tags: c++, floating-accuracy, floating-point

I came across the following arithmetic problem.

The result is different from what normal maths gives. Why is that?

double d1 = 1.000001;
double d2 = 0.000001;
Console.WriteLine((d1 - d2) == 1.0);

Best Solution

I presume you found the question on Jon Skeet's Brainteasers page? The answers are listed and explained here on the same website.

For reference, here is the answer copied from that page.


3) Silly arithmetic

Computers are meant to be good at arithmetic, aren't they? Why does this print "False"?

double d1 = 1.000001;
double d2 = 0.000001;
Console.WriteLine((d1-d2)==1.0);

Answer: All the values here are stored as binary floating point. While 1.0 can be stored exactly, 1.000001 is actually stored as 1.0000009999999999177333620536956004798412322998046875, and 0.000001 is actually stored as 0.000000999999999999999954748111825886258685613938723690807819366455078125. The difference between them isn't exactly 1.0, and in fact the difference can't be stored exactly either.
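The exact values quoted above are easy to verify yourself. This sketch is not part of the original answer: it uses Python rather than C#, but Python's `float` is the same IEEE 754 binary64 type as C#'s `double`, so the arithmetic behaves identically, and `decimal.Decimal(float)` prints the exact decimal expansion of the stored binary value:

```python
from decimal import Decimal
import math

d1 = 1.000001
d2 = 0.000001

# Decimal(float) shows the exact value the binary double actually stores.
print(Decimal(d1))  # 1.0000009999999999177333620536956004798412322998046875
print(Decimal(d2))  # slightly less than 0.000001, as quoted in the answer

# The subtraction rounds to the nearest representable double,
# which is just below 1.0, so the equality fails.
print((d1 - d2) == 1.0)  # False
print(Decimal(d1 - d2))  # exact value of the computed difference

# The usual remedy: compare with a tolerance instead of exact equality.
print(math.isclose(d1 - d2, 1.0))  # True
```

The last line shows the standard workaround for this class of problem: never compare floating-point results with `==`; compare against a tolerance instead (in C#, e.g. `Math.Abs((d1 - d2) - 1.0) < 1e-9`).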