
Why the vehement objections to decimal floating-point?

Posted Apr 20, 2009 1:01 UTC (Mon) by mgb (guest, #3226)
In reply to: Why the vehement objections to decimal floating-point? by stevenj
Parent article: What's coming in glibc 2.10

64-bit integers are adequate to deal with most banking and financial calculations today. Currencies are generally denominated in cents or mills, although the Zimbabwean dollar might more appropriately be denominated in billions.
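
Concretely, the integer-cents approach amounts to something like this (a Ruby sketch; the prices, quantity, and truncating tax rounding are all invented for illustration):

price_cents    = 19_99                        # $19.99 held as 1999 cents
quantity       = 3
subtotal_cents = price_cents * quantity       # 5997 -- exact integer arithmetic
tax_cents      = (subtotal_cents * 8) / 100   # 479 (truncating division; a real
                                              #  system would pick its rounding rule)
puts "total: $%.2f" % ((subtotal_cents + tax_cents) / 100.0)   # => total: $64.76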

128-bit floats can be used to store very large integers today and pure 128-bit integers are waiting in the wings.

Decimal floats are symptomatic of poor design.



Why the vehement objections to decimal floating-point?

Posted Apr 20, 2009 8:25 UTC (Mon) by epa (subscriber, #39769) [Link] (4 responses)

> 64-bit integers are adequate to deal with most banking and financial calculations today.
Not all numbers used in finance are integers. Consider exchange rates and interest rates, for a start. If you were particularly perverse you could decide to use 64-bit ints for everything, with some way of encoding the number of decimal places (or binary places), but in that case you have effectively reinvented a floating point math library.
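
Roughly speaking, such a scaled-integer scheme is just a significand plus a decimal exponent. A sketch (the ScaledDecimal struct and all the numbers below are invented for illustration):

ScaledDecimal = Struct.new(:units, :places)    # value = units * 10**(-places)

rate   = ScaledDecimal.new(1_3472, 4)          # an exchange rate of 1.3472
amount = ScaledDecimal.new(250_00, 2)          # $250.00

# Multiplying multiplies the integer parts and adds the scales -- which is
# exactly what a decimal floating-point unit does with significand and exponent.
product = ScaledDecimal.new(rate.units * amount.units,
                            rate.places + amount.places)
puts "#{product.units} * 10**-#{product.places}"   # => 336800000 * 10**-6, i.e. $336.80
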
> Decimal floats are symptomatic of poor design.
Not at all. They are often the best match to what the user and the rest of the world require. It is accepted that 1/3 gives the recurring decimal .333..., but no accountant wants their computer system to introduce rounding errors, no matter how minute, when calculating 1/5 (which is the recurring 0.00110011... in binary). Or do you mean that *floating*-point decimal is a bad idea, and it's better to use fixed point with a certain fixed number of digits of precision? There is certainly a case for that.
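
For instance (Ruby; BigDecimal here stands in for a decimal representation generally, not IEEE decimal floats specifically, and the particular values are just illustrative):

require 'bigdecimal'

# Binary doubles can't hold 1/5 exactly, so what gets stored is only the
# nearest representable double:
puts "%.20f" % 0.2                 # => 0.20000000000000001110
puts(0.1 + 0.2 == 0.3)             # => false

# A decimal representation holds these values exactly:
puts(BigDecimal("0.1") + BigDecimal("0.2") == BigDecimal("0.3"))   # => true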

Why the vehement objections to decimal floating-point?

Posted Apr 20, 2009 16:31 UTC (Mon) by stevenj (guest, #421) [Link] (3 responses)

A lot of people here are proposing that decimal fixed point is just as good as, or better than, decimal floats.

I'm a little skeptical of this, based on my experience with scientific computation: there are many, many circumstances in which both the input and the output of a computation appear to be in a range suitable for fixed-point representation, but the intermediate calculations have vastly greater rounding errors in fixed point than in floating point. And fixed-point error analysis in the presence of rounding and overflow is a nightmare compared to floating point.
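
To make that concrete with a made-up example (Ruby's BigDecimal, rounding every intermediate to two decimals to stand in for fixed point, and to 16 significant digits to stand in for a decimal float):

require 'bigdecimal'

# Split $100.00 three ways and recombine.  With two-decimal fixed point every
# intermediate must be rounded to cents:
third_fixed = (BigDecimal("100.00") / 3).round(2)    # 33.33
recombined  = third_fixed * 3
puts recombined.to_s("F")                            # => 99.99 -- a cent has gone missing

# With a 16-digit floating significand the intermediate keeps its precision,
# and only the final result gets rounded to cents:
third_float = BigDecimal("100.00").div(BigDecimal("3"), 16)   # 33.33333333333333
recombined  = (third_float * 3).round(2)
puts recombined.to_s("F")                            # => 100.0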

Decimal floating point gives you the best of both worlds. If the result of each calculation is exactly representable, it will give you the exact result. (Please don't raise the old myth that floating-point calculations add some mysterious random noise to each calculation!) There is no rounding when decimal inputs are entered, so human input is preserved exactly. And if the result is not exactly representable, its rounding characteristics will be much, much better than fixed point. (And don't try to claim that financial calculations never have to round.)

Note that the IEEE double-precision (64-bit) decimal-float format has a 16 decimal-digit significand (and there is also a quad-precision decimal float with a 34 decimal-digit significand). I would take this over 64-bit fixed point any day: only nine bits of this are sacrificed in IEEE to give you a floating decimal point and fixed relative precision over a wide dynamic range.
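
As a rough sketch of what 16 decimal digits buys you (Ruby's BigDecimal rounded to 16 significant digits is only a stand-in for decimal64 here -- it is arbitrary precision and ignores decimal64's exponent range of roughly 10**-383 .. 10**384, and the balance and rate are invented):

require 'bigdecimal'

DIGITS  = 16
balance = BigDecimal("12345678.90")       # 10 significant digits -- stored exactly
rate    = BigDecimal("1.0475")
result  = balance.mult(rate, DIGITS)      # result rounded to 16 significant digits
puts result.to_s("F")                     # => 12932098.64775 -- only 13 digits, so exact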

Why the vehement objections to decimal floating-point?

Posted Apr 20, 2009 16:34 UTC (Mon) by stevenj (guest, #421) [Link] (2 responses)

(By "result of each calculation is exactly representable" I am of course including intermediate calculations. Note that this is equally true in fixed-point and integer arithmetic.)

Why the vehement objections to decimal floating-point?

Posted Apr 25, 2009 12:29 UTC (Sat) by dmag (guest, #17775) [Link] (1 responses)

Fixed point won't lose information on simple calculations, but there is a possibility that some intermediate results will saturate your representation: for example, if you square a number, add 1, and take the square root. For large numbers, the square isn't likely to be representable.
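
An illustration with invented numbers (Ruby integers silently promote to bignums, so the comparison against 2**63 - 1 below only simulates what a genuine 64-bit field would run into):

INT64_MAX = 2**63 - 1
x = 4_000_000_000                   # fits comfortably in 64 bits
puts(x * x > INT64_MAX)             # => true: the square needs more than 63 bits,
                                    #    so a fixed 64-bit field would saturate or wrap

puts Math.sqrt(x.to_f ** 2 + 1)     # => 4000000000.0 -- the floating-point
                                    #    intermediate is merely rounded, not blown up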

Floating point has the opposite problem. The intermediate calculations won't blow up, but you can lose precision even in simple cases.

Most people don't have a correct mental model of floating point. Floating point has a reputation for being 'lossy' because it can lose information in non-obvious ways:

$ irb
>> 0.1 * 0.1 - 0.01
=> 1.73472347597681e-18

Sometimes the answer is to store in fixed point, but calculate in floating point (and do appropriate rounding during conversion back to fixed).
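
Something like this sketch, say, with an invented principal and rate and an illustrative rounding rule:

principal_cents = 1_000_00                          # stored as integer cents: $1000.00
rate            = 0.0375                            # the rate lives in floating point

interest        = (principal_cents / 100.0) * rate  # ~37.5, computed as a float
interest_cents  = (interest * 100).round            # explicit rounding back to fixed: 3750

puts "interest: $%.2f" % (interest_cents / 100.0)   # => interest: $37.50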

Why the vehement objections to decimal floating-point?

Posted Apr 18, 2011 22:37 UTC (Mon) by stevenj (guest, #421) [Link]

Your example makes no sense; the result would be computed exactly in decimal floating point.

More generally, in essentially any case where decimal fixed point with N digits would produce exact results, decimal floating point with an N-digit significand would also produce exact results. The only sacrifice in going from fixed to (decimal) floating point is that you lose a few bits of precision to store the exponent, and in exchange you get vastly bigger dynamic range and much more sensible roundoff characteristics.
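
For example, re-running that irb computation in a decimal representation (Ruby's BigDecimal here, which is arbitrary-precision decimal rather than IEEE decimal64, though the values fit easily within 16 digits, so decimal64 would behave the same way):

require 'bigdecimal'

a    = BigDecimal("0.1")
diff = a * a - BigDecimal("0.01")
puts diff.to_s("F")                 # => 0.0 -- exact, unlike the binary-float irb session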

You're certainly right that many people don't have a correct mental model of floating point, however.

