Posted May 2, 2006 7:36 UTC (Tue) by stevenj
In reply to: Careful with the omissions...
Parent article: Portability and Pitfalls of C-Types (developerWorks)
You're absolutely right that these issues are tricky, and that double precision is not a panacea. However, Kahan's points are twofold:
First, error analysis can be very nonobvious, and because of this it's dangerous to ask programmers to decide for themselves whether single precision is sufficient. As Kahan puts it, "Except in extremely uncommon situations, extra-precise arithmetic generally attenuates risks due to roundoff at far less cost than the price of a competent error-analyst."
Second, another misconception that Kahan warns about, and one that is quite common in my experience, is the myth that "Arithmetic should be barely more precise than the data and the desired result." Here, the developerWorks author suggested that the precision be determined by the "range you care about", which in my opinion is dangerously misleading at best: it could easily be read as saying that the precision is given by the desired accuracy of the result (or perhaps by the accuracy of the input, which is usually just as wrong), when in fact you have to worry about the intermediate calculations as well.
I agree that for huge datasets there is a strong incentive to store the data in single precision. Even then, it is often a good idea to perform in-cache intermediate calculations in double precision.
I've read the Goldberg article, but I'm inclined to find Kahan's 80-page presentation (despite being a series of slides rather than an article) more thought-provoking, because it goes directly after common misconceptions and explodes them.