Binary floating point math is like this. In most programming languages, it is based on the IEEE 754 standard. The crux of the problem is that numbers are represented in this format as a whole number times a power of two; rational numbers (such as `0.1`, which is `1/10`) whose denominator is not a power of two cannot be exactly represented.

For `0.1` in the standard `binary64` format, the representation can be written exactly as `0.1000000000000000055511151231257827021181583404541015625` in decimal, or as `0x1.999999999999ap-4` in C99 hexfloat notation.

In contrast, the rational number `0.1`, which is `1/10`, can be written exactly as `0.1` in decimal, or as `0x1.99999999999999...p-4` in an analogue of C99 hexfloat notation, where the `...` represents an unending sequence of 9's.

The constants `0.2` and `0.3` in your program will also be approximations to their true values. It happens that the closest `double` to `0.2` is larger than the rational number `0.2`, but that the closest `double` to `0.3` is smaller than the rational number `0.3`. The sum of `0.1` and `0.2` winds up being larger than the rational number `0.3` and hence disagrees with the constant in your code.

A fairly comprehensive treatment of floating-point arithmetic issues is *What Every Computer Scientist Should Know About Floating-Point Arithmetic*. For an easier-to-digest explanation, see floating-point-gui.de.

**Side Note: All positional (base-N) number systems share this problem with precision**

Plain old decimal (base 10) numbers have the same issues, which is why numbers like 1/3 end up as 0.333333333...

You've just stumbled on a number (3/10) that happens to be easy to represent in the decimal system but doesn't fit the binary system. It goes both ways (to some small degree) as well: 1/16 is an ugly number in decimal (0.0625), but in binary it looks as neat as a ten-thousandth does in decimal (0.0001).** If we were in the habit of using a base-2 number system in our daily lives, you'd even look at that number and instinctively understand you could arrive there by halving something, halving it again, and again and again.

** Of course, that's not exactly how floating-point numbers are stored in memory (they use a form of scientific notation). However, it does illustrate the point that binary floating-point precision errors tend to crop up because the "real world" numbers we are usually interested in working with are so often powers of ten - but only because we use a decimal number system day-to-day. This is also why we'll say things like 71% instead of "5 out of every 7" (71% is an approximation, since 5/7 can't be represented exactly with any decimal number).

So no: binary floating point numbers are not broken, they just happen to be as imperfect as every other base-N number system :)

**Side Side Note: Working with Floats in Programming**

In practice, this problem of precision means you need to use rounding functions to round your floating point numbers off to however many decimal places you're interested in before you display them.

You also need to replace equality tests with comparisons that allow some amount of tolerance, which means:

Do *not* do `if (x == y) { ... }`. Instead do `if (abs(x - y) < myToleranceValue) { ... }`, where `abs` is the absolute value. `myToleranceValue` needs to be chosen for your particular application, and it will have a lot to do with how much "wiggle room" you are prepared to allow and with the largest number you may be comparing (due to loss-of-precision issues). Beware of "epsilon"-style constants in your language of choice. These are *not* to be used as tolerance values.

In GNU libm, the implementation of `sin` is system-dependent. Therefore you can find the implementation, for each platform, somewhere in the appropriate subdirectory of sysdeps.

One directory includes an implementation in C, contributed by IBM. Since October 2011, this is the code that actually runs when you call `sin()` on a typical x86-64 Linux system. It is apparently faster than the `fsin` assembly instruction. Source code: sysdeps/ieee754/dbl-64/s_sin.c; look for `__sin (double x)`.

This code is very complex. No one software algorithm is as fast as possible and also accurate over the whole range of *x* values, so the library implements several different algorithms, and its first job is to look at *x* and decide which algorithm to use.

When *x* is very *very* close to 0, `sin(x) == x` is the right answer.

A bit further out, `sin(x)` uses the familiar Taylor series. However, this is only accurate near 0, so...

When the angle is more than about 7°, a different algorithm is used, computing Taylor-series approximations for both sin(x) and cos(x), then using values from a precomputed table to refine the approximation.

When |*x*| > 2, none of the above algorithms would work, so the code starts by computing some value closer to 0 that can be fed to `sin` or `cos` instead.

There's yet another branch to deal with *x* being a NaN or infinity.

This code uses some numerical hacks I've never seen before, though for all I know they might be well-known among floating-point experts. Sometimes a few lines of code would take several paragraphs to explain. For example, these two lines

```
double t = (x * hpinv + toint);
double xn = t - toint;
```

are used (sometimes) in reducing *x* to a value close to 0 that differs from *x* by a multiple of π/2, specifically `xn` × π/2. The way this is done without division or branching is rather clever. But there's no comment at all!

Older 32-bit versions of GCC/glibc used the `fsin` instruction, which is surprisingly inaccurate for some inputs. There's a fascinating blog post illustrating this with just 2 lines of code.

fdlibm's implementation of `sin` in pure C is much simpler than glibc's and is nicely commented. Source code: fdlibm/s_sin.c and fdlibm/k_sin.c.

## Best Solution

How unequal are they? If they are sufficiently close, this might just be the traditional problem that testing floating-point values for equality is near impossible.

Also, are they of the same type? My guess is that most gaming calculations are done with floats, whereas Math.PI is a double.

EDIT: MathHelper does indeed use floats.