What is the difference between `decimal`, `float` and `double` in .NET?

When would someone use one of these?

# .net – Difference between decimal, float and double in .NET



## Best Solution

`float` and `double` are floating *binary* point types. In other words, they represent a number as a binary significand scaled by a power of two: the binary digits and the location of the binary point are both encoded within the value.
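To make the binary-representation point concrete, here's a minimal C# sketch (the class and variable names are purely illustrative) showing that 0.1 has no exact `double` representation, so repeated addition drifts:

```csharp
using System;

class BinaryFloatDemo
{
    static void Main()
    {
        // 0.1 cannot be represented exactly as a binary fraction,
        // so each addition carries a tiny rounding error.
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
        {
            sum += 0.1;
        }

        Console.WriteLine(sum == 1.0);        // False
        Console.WriteLine(1.0 - sum);         // a tiny nonzero residue
    }
}
```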

`decimal` is a floating *decimal* point type. In other words, it represents a number as a decimal significand scaled by a power of ten: again, the digits and the location of the decimal point are both encoded within the value – that's what makes `decimal` still a floating point type instead of a fixed point type.

The important thing to note is that humans are used to representing non-integers in decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
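A short C# sketch (illustrative names) of both halves of that claim: 0.1 is exact in `decimal`, but 1/3 still isn't, because `decimal` is a floating point type with a base-10 exponent:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // 0.1 is exactly representable in base 10, so the sum is exact.
        decimal sum = 0m;
        for (int i = 0; i < 10; i++)
        {
            sum += 0.1m;
        }
        Console.WriteLine(sum == 1m);          // True

        // 1/3 is a repeating decimal, so it is truncated to the
        // precision decimal can hold, and the round trip is inexact.
        decimal third = 1m / 3m;
        Console.WriteLine(third * 3m == 1m);   // False
    }
}
```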

As for what to use when:

- For values which are "naturally exact decimals", it's good to use `decimal`. This is usually suitable for any concept invented by humans: financial values are the most obvious example, but there are others too – consider the score given to divers or ice skaters, for example.
- For values which are more artefacts of nature and can't really be measured *exactly* anyway, `float`/`double` are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain "decimal accuracy". Floating binary point types are also much faster to work with than decimals.