Thursday, 9 December 2010

Decimal vs Double in .Net

Due to its greater accuracy, the decimal type is often preferred (and recommended by MSDN) over the double type when dealing with financial calculations.

Put in layman's terms, the reason the decimal type is more accurate is that decimals are encoded in base 10 (the number system humans use), as opposed to base 2 (the number system computers use). Some people like to explain the accuracy in terms of bits and bytes, e.g. decimal is 16 bytes vs. double's 8 bytes, but I find such explanations unintuitive.

The base-2 number system cannot exactly represent all base-10 fractions. Therefore, you sometimes get weird results when you perform arithmetic operations on double types. For example,

8.954 - 7.612 returns 1.3420000000000005

Whereas if you use the decimal type, you get the correct result:

8.954m-7.612m returns 1.342

This applies to comparison as well:

1.34200000000000005 > 1.342 returns false

1.34200000000000005m > 1.342m returns true
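You can verify all four results with a small console program (a sketch; ToString("R") forces the round-trip representation of the double, since default formatting may hide the extra digits):

using System;

class DoubleVsDecimal
{
    static void Main()
    {
        double d = 8.954 - 7.612;
        decimal m = 8.954m - 7.612m;

        Console.WriteLine(d.ToString("R"));  // 1.3420000000000005
        Console.WriteLine(m);                // 1.342

        // Both double literals round to the very same 64-bit value:
        Console.WriteLine(1.34200000000000005 > 1.342);    // False

        // Decimal keeps all the digits, so the comparison is meaningful:
        Console.WriteLine(1.34200000000000005m > 1.342m);  // True
    }
}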

However, the extra accuracy comes at a cost:

1). Decimal takes twice as much memory as Double (16 bytes vs. 8 bytes).

2). Decimal calculations are many times slower than their double counterparts, since decimal arithmetic is done in software while double maps directly onto hardware floating-point instructions. My test of one million calculations confirms this point (see the sketch below).
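Here is a minimal benchmark along the lines of that test (a sketch; the loop body and iteration count are arbitrary, and absolute timings will vary by machine, but the decimal loop should come out markedly slower):

using System;
using System.Diagnostics;

class SpeedTest
{
    static void Main()
    {
        const int N = 1000000;

        var sw = Stopwatch.StartNew();
        double d = 1.0;
        for (int i = 0; i < N; i++)
            d = d * 1.0000001 + 0.0000001;   // one million multiply-adds on double
        sw.Stop();
        Console.WriteLine("double:  " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        decimal m = 1.0m;
        for (int i = 0; i < N; i++)
            m = m * 1.0000001m + 0.0000001m; // the same work on decimal
        sw.Stop();
        Console.WriteLine("decimal: " + sw.ElapsedMilliseconds + " ms");
    }
}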

These two costs are probably not a huge problem, since they can be alleviated by high-end hardware, which is increasingly accessible. However, when using decimals, there are a number of things you need to be aware of:

1). Double is the default type of fractional number literals in .Net, and many built-in functions that return numeric values return them as double, e.g. Math.Pow(). The implication is that if you declare your variable as decimal, then whenever you combine it with a fractional numeric literal or with a function returning double, you have to explicitly convert the non-decimal party to decimal, because there is no implicit conversion between double and decimal in C#.

For example:

decimal pow = Math.Pow(2, 4);

The above line will give you a compile error if you don’t explicitly convert.

You have to do this every time your decimal variable meets a fractional literal (suffix it with 'm' to make it a decimal) or a .Net function returning double (cast the result explicitly).
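Both fixes in one sketch (the price/total names are made up for illustration):

// Cast the double result to decimal explicitly:
decimal pow = (decimal)Math.Pow(2, 4);

// Suffix fractional literals with 'm' when mixing them with decimals:
decimal price = 19.99m;
decimal total = price * 1.2m;  // price * 1.2 would not compile: 1.2 is a double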


2). Since Double is favoured by .Net, it comes with a handy feature: double calculations do not throw exceptions. Instead, they return one of three special values when something goes wrong:

- Double.NaN
- Double.NegativeInfinity
- Double.PositiveInfinity

This feature is not available in the Decimal type. You may argue: what's the big deal? Can't we just catch and handle these exceptions? Yes you can, but not without a lot more code, and not without breaking the logic flow (imagine you are in the middle of a big loop calculating the returns of 1000 portfolios).
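A minimal sketch of the contrast, using division by zero (the zero has to be a variable, because dividing by a constant zero decimal is rejected at compile time):

using System;

class SpecialValuesDemo
{
    static void Main()
    {
        double dZero = 0.0;
        Console.WriteLine(1.0 / dZero);    // Infinity - no exception
        Console.WriteLine(Math.Sqrt(-1));  // NaN - no exception

        decimal mZero = 0m;
        try
        {
            Console.WriteLine(1m / mZero); // throws at run time
        }
        catch (DivideByZeroException)
        {
            Console.WriteLine("decimal division by zero threw an exception");
        }
    }
}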

3). Despite being treated conceptually as a primitive type, decimal is technically not a primitive type in .Net. For example:

3.4m.GetType().IsPrimitive returns false

3.4.GetType().IsPrimitive returns true


This is an important point to keep in mind when you use reflection, e.g. to auto-map value objects or data transfer objects.
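For instance, a reflection-based mapper that copies only "simple" properties might rely on a check like this sketch (IsSimple is a made-up helper) and silently skip decimal properties unless it special-cases them:

using System;

class ReflectionPitfall
{
    // Naive check: misses decimal, because IsPrimitive is false for it.
    static bool IsSimple(Type t)
    {
        return t.IsPrimitive || t == typeof(string);
        // Fix: return t.IsPrimitive || t == typeof(string) || t == typeof(decimal);
    }

    static void Main()
    {
        Console.WriteLine(IsSimple(typeof(double)));   // True
        Console.WriteLine(IsSimple(typeof(decimal)));  // False - would be skipped
    }
}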

4). Decimal is more precise than double, but double has a much bigger range (another reason there is no implicit conversion between them). Not only in max/min values:

decimal.MaxValue => 79228162514264337593543950335

double.MaxValue => 1.7976931348623157E+308


but also in the number of decimal places that can be represented:

decimal can represent at most 28 decimal places, while double can go as small as roughly 5E-324, more than ten times as many. This is especially important when you need to parse unusual numbers returned from a database or a 3rd-party application. For example:

string str = "0.00000000000000000000000000005";

decimal.Parse(str) returns 0

double.Parse(str) returns 5E-29 (the nearest representable double)
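A quick sketch of this parsing behaviour:

using System;

class ParseDemo
{
    static void Main()
    {
        string str = "0.00000000000000000000000000005";  // 5E-29: needs 29 decimal places

        Console.WriteLine(decimal.Parse(str));  // the tiny value rounds away to zero
        Console.WriteLine(double.Parse(str));   // 5E-29
    }
}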


As a financial software provider, our company has a "decimal only" policy in our code. Even though we use decimals every day, tricky bugs still occasionally creep in due to a developer's lack of awareness of points 3 and 4 above. Knowing the subtle differences between double and decimal will help you choose which one to use, and help you pin down otherwise hard-to-find bugs.