Difference between decimal, float and double in .NET?





What is the difference between decimal, float and double in .NET?

When would someone use one of these?


Answers

The Decimal structure is strictly geared to financial calculations requiring accuracy, which are relatively intolerant of rounding. Decimals are not adequate for scientific applications, however, for several reasons:

  • A certain loss of precision is acceptable in many scientific calculations because of the practical limits of the physical problem or artifact being measured. Loss of precision is not acceptable in finance.
  • Decimal is much (much) slower than float and double for most operations, primarily because floating point operations are done in binary, whereas Decimal arithmetic is done in base 10 (i.e. floats and doubles are handled by FPU hardware such as the x87 or SSE units, whereas decimals are calculated in software).
  • Decimal has an unacceptably smaller value range than double, despite the fact that it supports more digits of precision. Therefore, Decimal can't be used to represent many scientific values (the sketch after this list makes the range difference concrete).
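
To make that range difference concrete, here is a minimal C# sketch; the printed values come straight from the types' MaxValue constants, and the class name is just for illustration:

using System;

class RangeDemo
{
    static void Main()
    {
        // decimal offers more significant digits, but far less range than double
        Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335 (~7.9e28)
        Console.WriteLine(double.MaxValue);   // ~1.7976931348623157E+308

        // A large physical quantity fits in a double but not in a decimal:
        double big = 1e200;                   // fine
        Console.WriteLine(big);
        // decimal tooBig = 1e200m;           // would not even compile: the constant
                                              // is outside decimal's range
    }
}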

This has been an interesting thread for me, because just today we hit a nasty little bug in which decimal appeared to have less precision than a float.

In our C# code, we are reading numeric values from an Excel spreadsheet, converting them into a decimal, then sending this decimal to a service to be saved into a SQL Server database.

Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
    decimal value = 0;
    Decimal.TryParse(cellValue.ToString(), out value);
}

Now, for almost all of our Excel values, this worked beautifully. But for some very small Excel values, decimal.TryParse lost the value completely. One such example is

  • cellValue = 0.00006317592

  • Decimal.TryParse(cellValue.ToString(), out value); // returns false and leaves value at 0

The solution, bizarrely, was to convert the Excel values into a double first, and then into a decimal:

Microsoft.Office.Interop.Excel.Range cell = …
object cellValue = cell.Value2;
if (cellValue != null)
{
    double valueDouble = 0;
    double.TryParse(cellValue.ToString(), out valueDouble);
    decimal value = (decimal) valueDouble;
    …
}

Even though double has fewer significant digits than a decimal, this ensured small numbers would still be recognised. The likely explanation is that very small doubles are rendered in scientific notation (e.g. "6.317592E-05") when converted to a string, and the single-argument Decimal.TryParse overload does not accept exponent notation, so it fails and leaves the output at zero, whereas double.TryParse does accept it.

Odd at first sight, but explicable.
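
If that explanation is right, the fix is less about double versus decimal and more about number styles. Here is a standalone sketch (not the original Excel code; names are illustrative) that appears to reproduce and avoid the behaviour:

using System;
using System.Globalization;

class TryParseDemo
{
    static void Main()
    {
        // Excel's Value2 hands back a double; very small doubles turn into
        // scientific notation when converted to a string.
        double cellValue = 0.00006317592;
        string text = cellValue.ToString(CultureInfo.InvariantCulture);  // "6.317592E-05"

        // The single-argument overload uses NumberStyles.Number, which does not
        // allow an exponent, so the parse fails and 'plain' stays 0.
        decimal plain;
        Console.WriteLine(decimal.TryParse(text, out plain));            // False

        // Explicitly allowing exponent notation makes the parse succeed.
        decimal value;
        bool ok = decimal.TryParse(text, NumberStyles.Float,
                                   CultureInfo.InvariantCulture, out value);
        Console.WriteLine($"{ok}: {value}");                             // True: 0.00006317592
    }
}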


float and double are floating binary point types. In other words, they represent a number like this:

10001.10010110011

The binary number and the location of the binary point are both encoded within the value.

decimal is a floating decimal point type. In other words, it represents a number like this:

12345.65789

Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.

The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
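
A small C# illustration of both halves of that point (exact output formatting may vary slightly between .NET versions; the class name is just for illustration):

using System;

class RepresentationDemo
{
    static void Main()
    {
        // 0.1 has no exact binary representation; "G17" shows what is stored.
        double d = 0.1;
        Console.WriteLine(d.ToString("G17"));   // 0.10000000000000001

        // Summing the approximation ten times does not give exactly 1.0.
        double sum = 0;
        for (int i = 0; i < 10; i++) sum += d;
        Console.WriteLine(sum == 1.0);          // False

        // decimal stores 0.1 exactly, so the same sum is exact...
        decimal m = 0.1m, msum = 0;
        for (int i = 0; i < 10; i++) msum += m;
        Console.WriteLine(msum == 1.0m);        // True

        // ...but a floating *decimal* point still can't represent 1/3 exactly.
        Console.WriteLine(1m / 3m);             // 0.3333333333333333333333333333
    }
}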

As for what to use when:

  • For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.

  • For values which are more artefacts of nature and can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.


+---------+----------------+---------+----------+-----------------------------------------------+
| C#      | .NET Framework | Signed? | Bytes    | Possible Values                               |
| Type    | (System) type  |         | Occupied |                                               |
+---------+----------------+---------+----------+-----------------------------------------------+
| sbyte   | System.SByte   | Yes     | 1        | -128 to 127                                   |
| short   | System.Int16   | Yes     | 2        | -32768 to 32767                               |
| int     | System.Int32   | Yes     | 4        | -2147483648 to 2147483647                     |
| long    | System.Int64   | Yes     | 8        | -9223372036854775808 to 9223372036854775807   |
| byte    | System.Byte    | No      | 1        | 0 to 255                                      |
| ushort  | System.UInt16  | No      | 2        | 0 to 65535                                    |
| uint    | System.UInt32  | No      | 4        | 0 to 4294967295                               |
| ulong   | System.UInt64  | No      | 8        | 0 to 18446744073709551615                     |
| float   | System.Single  | Yes     | 4        | Approximately ±1.5 x 10^-45 to ±3.4 x 10^38   |
|         |                |         |          |  with 7 significant figures                   |
| double  | System.Double  | Yes     | 8        | Approximately ±5.0 x 10^-324 to ±1.7 x 10^308 |
|         |                |         |          |  with 15 or 16 significant figures            |
| decimal | System.Decimal | Yes     | 16       | Approximately ±1.0 x 10^-28 to ±7.9 x 10^28   |
|         |                |         |          |  with 28 or 29 significant figures            |
| char    | System.Char    | N/A     | 2        | Any UTF-16 code unit (16 bits)                |
| bool    | System.Boolean | N/A     | 1        | true or false                                 |
+---------+----------------+---------+----------+-----------------------------------------------+

For more information, see:
http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/921a8ffc-9829-4145-bdc9-a96c1ec174a5


The problem with all these types is that a certain imprecision remains, and it can show up even with small decimal numbers, as in the following example:

Dim fMean As Double = 1.18
Dim fDelta As Double = 0.08
Dim fLimit As Double = 1.1
Dim bLower As Boolean

If fMean - fDelta < fLimit Then
    bLower = True
Else
    bLower = False
End If

Question: what value does the bLower variable contain?

Answer: bLower contains True!

If I replace Double with Decimal, bLower contains False, which is the correct answer.

With Double, the problem is that fMean - fDelta evaluates to roughly 1.0999999999999999, which is lower than 1.1.

Caution: similar problems can occur for other numbers too, because Decimal also has finite precision; it avoids this particular surprise only because it stores decimal digits exactly.
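
For anyone following along in C# rather than VB.NET, here is an equivalent sketch of the same comparison (same constants, same outcome; the class name is mine):

using System;

class ToleranceDemo
{
    static void Main()
    {
        double fMean = 1.18, fDelta = 0.08, fLimit = 1.1;
        // In binary floating point, 1.18 - 0.08 comes out just below 1.1,
        // so the test is true.
        Console.WriteLine(fMean - fDelta < fLimit);   // True

        decimal dMean = 1.18m, dDelta = 0.08m, dLimit = 1.1m;
        // In decimal the subtraction is exact (1.10), so it is not below 1.1.
        Console.WriteLine(dMean - dDelta < dLimit);   // False
    }
}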

In fact, Double, Float and Decimal correspond to the BINARY (COMP) numeric type in COBOL.

It is regrettable that the other numeric types implemented in COBOL don't exist in .NET. For those who don't know COBOL, the following numeric types exist there:

BINARY or COMP (like float, double or decimal)
PACKED-DECIMAL or COMP-3 (2 digits per byte)
ZONED-DECIMAL (1 digit per byte)

Integers, as was mentioned, are whole numbers. They can't store anything with a fractional part, like .7, .42, or .007. If you need to store numbers that are not whole numbers, you need a different type of variable. You can use the double type or the float type. You set these types of variables up in exactly the same way: instead of using the word int, you type double or float. Like this:

float myFloat;
double myDouble;

(float is short for "floating point", and just means a number with a fractional part on the end.)

The difference between the two is in the size of the numbers they can hold. A float gives you up to about 7 significant digits; a double gives you up to about 15-16. To be more precise, here are the official ranges:

float:  1.5 × 10^-45  to 3.4 × 10^38  
double: 5.0 × 10^-324 to 1.7 × 10^308

float is a 32-bit number, and double is a 64-bit number.

Double click your new button to get at the code. Add the following three lines to your button code:

double myDouble;
myDouble = 0.007;
MessageBox.Show(myDouble.ToString());

Halt your program and return to the coding window. Change this line:

myDouble = 0.007;

to this:

myDouble = 12345678.1234567;

Run your program and click your double button. The message box correctly displays the number. Add another digit on the end, though, and C# will again round up or down. The moral is: if you want accuracy, be careful of rounding!
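
To see the digit limits without the message box, here is a small console sketch (output formatting may differ slightly between .NET versions; the class name is illustrative):

using System;

class PrecisionDemo
{
    static void Main()
    {
        // float keeps roughly 7 significant digits...
        float f = 12345678.1234567f;
        Console.WriteLine(f.ToString("G9"));    // 12345678 - the fraction is gone

        // ...while double keeps roughly 15-16, so this value survives intact.
        double d = 12345678.1234567;
        Console.WriteLine(d.ToString("G17"));   // close to 12345678.1234567
    }
}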


I won't reiterate the tons of good (and some bad) information already covered in other answers and comments, but I will answer your follow-up question with a tip:

When would someone use one of these?

Use decimal for counted values

Use float/double for measured values

Some examples:

  • money (do we count money or measure money?)

  • distance (do we count distance or measure distance? *)

  • scores (do we count scores or measure scores?)

We always count money and should never measure it. We usually measure distance. We often count scores.

* In some cases, what I would call nominal distance, we may indeed want to 'count' distance. For example, maybe we are dealing with country signs that show distances to cities, and we know that those distances never have more than one decimal digit (xxx.x km).


  1. A double or float can be divided by zero without an exception, at both compile time and run time: the result is infinity or NaN.
  2. A decimal cannot be divided by zero. Division by a constant zero fails at compile time, and division by a zero value at run time throws a DivideByZeroException (see the sketch below).
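
To make both behaviours concrete, here is a small illustrative sketch (class and variable names are mine, not from the original answer; the commented-out line shows what would not compile):

using System;

class DivideByZeroDemo
{
    static void Main()
    {
        double d = 1.0, dZero = 0.0;
        Console.WriteLine(d / dZero);        // infinity, not an exception
        Console.WriteLine(0.0 / 0.0);        // NaN

        decimal m = 1.0m, mZero = 0m;
        // decimal bad = 1.0m / 0;           // compile-time error: division by constant zero
        try
        {
            Console.WriteLine(m / mZero);    // throws at run time
        }
        catch (DivideByZeroException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}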

The Decimal, Double, and Float variable types differ in the way they store their values. Precision is the main difference: float is a single-precision (32-bit) floating point type, double is a double-precision (64-bit) floating point type, and decimal is a 128-bit floating point type.

Float - 32 bit (7 digits)

Double - 64 bit (15-16 digits)

Decimal - 128 bit (28-29 significant digits)
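
Those sizes can be checked from C# itself; sizeof is a compile-time constant for the built-in numeric types, so this minimal sketch just prints them:

using System;

class SizeDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(float));    // 4
        Console.WriteLine(sizeof(double));   // 8
        Console.WriteLine(sizeof(decimal));  // 16
    }
}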



float   ~ ±1.5 x 10^-45  to ±3.4 x 10^38    (7 significant figures)
double  ~ ±5.0 x 10^-324 to ±1.7 x 10^308   (15 or 16 significant figures)
decimal ~ ±1.0 x 10^-28  to ±7.9 x 10^28    (28 or 29 significant figures)


In simple words:

  1. The Decimal, Double, and Float variable types differ in the way they store their values.
  2. Precision is the main difference (note that it is not the only difference): float is a single-precision (32-bit) floating point type, double is a double-precision (64-bit) floating point type, and decimal is a 128-bit floating point type.
  3. The summary table:

Type       Bits    Digits of precision    Approximate range
----------------------------------------------------------------------------
float      32      7                      ±1.5 x 10^-45  to ±3.4 x 10^38
double     64      15-16                  ±5.0 x 10^-324 to ±1.7 x 10^308
decimal    128     28-29                  ±1.0 x 10^-28  to ±7.9 x 10^28
----------------------------------------------------------------------------
You can read more in the documentation for float, double, and decimal.

float has about 7 digits of precision

double has about 15 digits of precision

decimal has about 28 digits of precision

If you need better accuracy, use double instead of float. On modern CPUs both data types have almost the same performance. The only benefit of using float is that it takes up less space, which matters in practice only if you have a great many of them.

I found this interesting: What Every Computer Scientist Should Know About Floating-Point Arithmetic.


The main difference between these types is the precision.

float is a 32-bit number, double is a 64-bit number and decimal is a 128-bit number.


This question is complicated.

Suppose we have a function, roundTo2DP(num), that takes a float as an argument and returns a value rounded to 2 decimal places. What should each of these expressions evaluate to?

  • roundTo2DP(0.014999999999999999)
  • roundTo2DP(0.0150000000000000001)
  • roundTo2DP(0.015)

The 'obvious' answer is that the first example should round to 0.01 (because it's closer to 0.01 than to 0.02) while the other two should round to 0.02 (because 0.0150000000000000001 is closer to 0.02 than to 0.01, and because 0.015 is exactly halfway between them and there is a mathematical convention that such numbers get rounded up).

The catch, which you may have guessed, is that roundTo2DP cannot possibly be implemented to give those obvious answers, because all three numbers passed to it are the same number. IEEE 754 binary floating point numbers (the kind used by JavaScript) can't exactly represent most non-integer numbers, and so all three numeric literals above get rounded to a nearby valid floating point number. This number, as it happens, is exactly

0.01499999999999999944488848768742172978818416595458984375

which is closer to 0.01 than to 0.02.

You can see that all three numbers are the same at your browser console, Node shell, or other JavaScript interpreter. Just compare them:

> 0.014999999999999999 === 0.0150000000000000001
true

So when I write m = 0.0150000000000000001, the exact value of m that I end up with is closer to 0.01 than it is to 0.02. And yet, if I convert m to a String...

> var m = 0.0150000000000000001;
> console.log(String(m));
0.015
> var m = 0.014999999999999999;
> console.log(String(m));
0.015

... I get 0.015, which should round to 0.02, and which is noticeably not the 56-decimal-place number I earlier said that all of these numbers were exactly equal to. So what dark magic is this?

The answer can be found in the ECMAScript specification, in section 7.1.12.1: ToString applied to the Number type. Here the rules for converting some Number m to a String are laid down. The key part is point 5, in which an integer s is generated whose digits will be used in the String representation of m:

let n, k, and s be integers such that k ≥ 1, 10^(k−1) ≤ s < 10^k, the Number value for s × 10^(n−k) is m, and k is as small as possible. Note that k is the number of digits in the decimal representation of s, that s is not divisible by 10, and that the least significant digit of s is not necessarily uniquely determined by these criteria.

The key part here is the requirement that "k is as small as possible". What that requirement amounts to is a requirement that, given a Number m, the value of String(m) must have the least possible number of digits while still satisfying the requirement that Number(String(m)) === m. Since we already know that 0.015 === 0.0150000000000000001, it's now clear why String(0.0150000000000000001) === '0.015' must be true.

Of course, none of this discussion has directly answered what roundTo2DP(m) should return. If m's exact value is 0.01499999999999999944488848768742172978818416595458984375, but its String representation is '0.015', then what is the correct answer - mathematically, practically, philosophically, or whatever - when we round it to two decimal places?

There is no single correct answer to this. It depends upon your use case. You probably want to respect the String representation and round upwards when:

  • The value being represented is inherently discrete, e.g. an amount of currency in a 3-decimal-place currency like dinars. In this case, the true value of a Number like 0.015 is 0.015, and the 0.0149999999... representation that it gets in binary floating point is a rounding error. (Of course, many will argue, reasonably, that you should use a decimal library for handling such values and never represent them as binary floating point Numbers in the first place.)
  • The value was typed by a user. In this case, again, the exact decimal number entered is more 'true' than the nearest binary floating point representation.

On the other hand, you probably want to respect the binary floating point value and round downwards when your value is from an inherently continuous scale - for instance, if it's a reading from a sensor.

These two approaches require different code. To respect the String representation of the Number, we can (with quite a bit of reasonably subtle code) implement our own rounding that acts directly on the String representation, digit by digit, using the same algorithm you would've used in school when you were taught how to round numbers. Below is an example which respects the OP's requirement of representing the number to 2 decimal places "only when necessary" by stripping trailing zeroes after the decimal point; you may, of course, need to tweak it to your precise needs.

/**
 * Converts num to a decimal string (if it isn't one already) and then rounds it
 * to at most dp decimal places.
 *
 * For explanation of why you'd want to perform rounding operations on a String
 * rather than a Number, see https://stackoverflow.com/a/38676273/1709587
 *
 * @param {(number|string)} num
 * @param {number} dp
 * @return {string}
 */
function roundStringNumberWithoutTrailingZeroes (num, dp) {
    if (arguments.length != 2) throw new Error("2 arguments required");

    num = String(num);
    if (num.indexOf('e+') != -1) {
        // Can't round numbers this large because their string representation
        // contains an exponent, like 9.99e+37
        throw new Error("num too large");
    }
    if (num.indexOf('.') == -1) {
        // Nothing to do
        return num;
    }

    var parts = num.split('.'),
        beforePoint = parts[0],
        afterPoint = parts[1],
        shouldRoundUp = afterPoint[dp] >= 5,
        finalNumber;

    afterPoint = afterPoint.slice(0, dp);
    if (!shouldRoundUp) {
        finalNumber = beforePoint + '.' + afterPoint;
    } else if (/^9+$/.test(afterPoint)) {
        // If we need to round up a number like 1.9999, increment the integer
        // before the decimal point and discard the fractional part. Returning
        // here (as a string) keeps the trailing-zero strip below from mangling
        // a result like 20 into 2.
        return String(Number(beforePoint) + 1);
    } else {
        // Starting from the last digit, increment digits until we find one
        // that is not 9, then stop
        var i = dp-1;
        while (true) {
            if (afterPoint[i] == '9') {
                afterPoint = afterPoint.substr(0, i) +
                             '0' +
                             afterPoint.substr(i+1);
                i--;
            } else {
                afterPoint = afterPoint.substr(0, i) +
                             (Number(afterPoint[i]) + 1) +
                             afterPoint.substr(i+1);
                break;
            }
        }

        finalNumber = beforePoint + '.' + afterPoint;
    }

    // Remove trailing zeroes (and any dangling decimal point, e.g. "2.00" -> "2")
    // from the fractional part before returning
    return finalNumber.replace(/0+$/, '').replace(/\.$/, '');
}

Example usage:

> roundStringNumberWithoutTrailingZeroes(1.6, 2)
'1.6'
> roundStringNumberWithoutTrailingZeroes(10000, 2)
'10000'
> roundStringNumberWithoutTrailingZeroes(0.015, 2)
'0.02'
> roundStringNumberWithoutTrailingZeroes('0.015000', 2)
'0.02'
> roundStringNumberWithoutTrailingZeroes(1, 1)
'1'
> roundStringNumberWithoutTrailingZeroes('0.015', 2)
'0.02'
> roundStringNumberWithoutTrailingZeroes(0.01499999999999999944488848768742172978818416595458984375, 2)
'0.02'
> roundStringNumberWithoutTrailingZeroes('0.01499999999999999944488848768742172978818416595458984375', 2)
'0.01'

The function above is probably what you want to use to avoid users ever witnessing numbers that they have entered being rounded wrongly.

(As an alternative, you could also try the round10 library which provides a similarly-behaving function with a wildly different implementation.)

But what if you have the second kind of Number - a value taken from a continuous scale, where there's no reason to think that approximate decimal representations with fewer decimal places are more accurate than those with more? In that case, we don't want to respect the String representation, because that representation (as explained in the spec) is already sort-of-rounded; we don't want to make the mistake of saying "0.014999999...375 rounds up to 0.015, which rounds up to 0.02, so 0.014999999...375 rounds up to 0.02".

Here we can simply use the built-in toFixed method. Note that by calling Number() on the String returned by toFixed, we get a Number whose String representation has no trailing zeroes (thanks to the way JavaScript computes the String representation of a Number, discussed earlier in this answer).

/**
 * Takes a float and rounds it to at most dp decimal places. For example
 *
 *     roundFloatNumberWithoutTrailingZeroes(1.2345, 3)
 *
 * returns 1.234
 *
 * Note that since this treats the value passed to it as a floating point
 * number, it will have counterintuitive results in some cases. For instance,
 * 
 *     roundFloatNumberWithoutTrailingZeroes(0.015, 2)
 *
 * gives 0.01 where 0.02 might be expected. For an explanation of why, see
 * https://stackoverflow.com/a/38676273/1709587. You may want to consider using the
 * roundStringNumberWithoutTrailingZeroes function there instead.
 *
 * @param {number} num
 * @param {number} dp
 * @return {number}
 */
function roundFloatNumberWithoutTrailingZeroes (num, dp) {
    var numToFixedDp = Number(num).toFixed(dp);
    return Number(numToFixedDp);
}



