
# Nitpicking

## Nitpicking

Posted Dec 6, 2012 8:42 UTC (Thu) by renox (subscriber, #23785)
In reply to: Nitpicking by khim
Parent article: GNU Guile 2.0.7 released

> It's not all that interesting because once you start going in this direction you can never stop. Quite often intervals are not enough

It's the opposite, IMHO. Most of the time the precision you get with floating point is good enough, which is why nobody cares about intervals. But without intervals, when you (very rarely) do have a precision issue, you don't notice it, which can be very problematic. Take your own example: if you compute 1/x on x = 0.1±0.2 with floating point, you get the result 10 and you're (mistakenly) happy; with intervals you can see that there is an issue.
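A minimal sketch of the 1/x example (Python for illustration; `inv` is a hypothetical helper, not a Guile or standard-library function):

```python
def inv(lo, hi):
    """Reciprocal of the interval [lo, hi]; refuses intervals straddling zero."""
    if lo <= 0.0 <= hi:
        raise ValueError("interval contains zero: 1/x is unbounded")
    # 1/x is monotonically decreasing on either side of zero,
    # so the bounds simply swap.
    return (1.0 / hi, 1.0 / lo)

# Plain floating point: the nominal value x = 0.1 silently gives 10.0.
print(1.0 / 0.1)              # 10.0 -- and you're (mistakenly) happy

# With the stated uncertainty, x = 0.1 +/- 0.2 is the interval [-0.1, 0.3]:
try:
    print(inv(-0.1, 0.3))
except ValueError as err:
    print(err)                # the interval version makes the issue visible
```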

Of course, discussions about interval arithmetic do mention its potential issues, where your computation is precise enough but the computed interval is too wide (because you wrote x*x instead of x^2, for example), but that doesn't mean this happens often..
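The x*x vs x^2 point is the classic "dependency problem": a naive interval product treats its two operands as independent even when they are the same variable. A small sketch (hypothetical helpers, Python for illustration):

```python
def mul(a, b):
    """Naive interval product [a] * [b], treating the operands as independent."""
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def square(a):
    """Interval square, which knows both factors are the same variable."""
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

x = (-1.0, 2.0)
print(mul(x, x))    # (-2.0, 4.0) -- spuriously wide: x*x "forgets" that
                    # both factors are the same number
print(square(x))    # (0.0, 4.0)  -- the true range of x^2 on [-1, 2]
```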

> guile gives you rationals (which are suitable for most purposes)

Unless you want to use "rare" things such as square roots, logarithms, exponentials, cos/sin, etc.?
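The point is that rationals are closed under +, -, *, / but not under these operations. Python's `Fraction` (standing in here for guile's exact rationals) shows the boundary:

```python
from fractions import Fraction
import math

a = Fraction(1, 3)
print(a + a + a)            # 1 -- exact: rationals are closed under + - * /
print(a * Fraction(3, 7))   # 1/7 -- still exact

# But sqrt(2) is irrational: no rational can represent it, so you fall
# back to an inexact float the moment these "rare" operations appear.
r = math.sqrt(Fraction(2))
print(type(r).__name__)     # float
print(r * r == 2)           # False -- exactness is gone
```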

As for CPU speed, I'm not sure that for the common operations (+, -, *, /) floating-point intervals are much slower than rationals (on x86s, which have powerful CPUs)..

## Nitpicking

Posted Dec 8, 2012 17:10 UTC (Sat) by khim (subscriber, #9252)

> As for CPU speed, I'm not sure that for the common operations (+, -, *, /) floating-point intervals are much slower than rationals (on x86s, which have powerful CPUs)..

It's not about speed. It's about exact calculations vs inexact calculations (not coincidentally, "exact" is a property of a number in scheme, and thus in guile). Arbitrary-precision rationals are the top of the pyramid for exact calculations. As for inexact ones... they will always be inexact, and you can build whatever you want on top of what the CPU (and guile) offers.
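The exact/inexact split can be seen in Python terms too (with `Fraction` again playing the role of guile's exact rationals): exact values never drift, inexact ones do.

```python
from fractions import Fraction

# Exact: arbitrary-precision rationals accumulate no rounding error.
exact = sum([Fraction(1, 10)] * 10)
print(exact == 1)            # True

# Inexact: binary floating point cannot represent 1/10 exactly,
# so error creeps in and the sum misses 1.0.
inexact = sum([0.1] * 10)
print(inexact == 1.0)        # False
```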

The fact that recent AMD and Intel CPUs added support for half-precision (16-bit!) floating-point numbers should tell you something about this whole business. Exact calculations may be correct or not - and guile gives you everything you need for them. Inexact calculations are always approximations, and there are a bazillion ways to do an approximation, so guile's approach of giving you the FPU-provided primitives and nothing else looks sensible. You may extend the numeric tower using GOOPS in any direction you want.

> Unless you want to use "rare" things such as square roots, logarithms, exponentials, cos/sin, etc.?

Indeed. All these things are extremely rare. What you need relatively often is a rounded-up (or sometimes rounded-down) square root, a rounded-up (or sometimes rounded-down) logarithm, and so on. Floating point is usually enough for that.
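Python's `math.isqrt` is an example of exactly this kind of rounded-down square root, computed exactly on integers with no floating point involved (a sketch; the float comparison assumes a correctly-rounded IEEE `sqrt`, which is what mainstream platforms provide):

```python
import math

# Exact, rounded-down integer square root: floor(sqrt(n)) with no float error.
n = 10**30 - 1
r = math.isqrt(n)
print(r)                     # 999999999999999
assert r * r <= n < (r + 1) * (r + 1)

# The float version is usually close enough, as said above, but here n
# rounds up to the nearest double, so the result lands one too high --
# which matters if you need the rounding direction guaranteed.
print(int(math.sqrt(n)))     # typically 1000000000000000
```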

> It's the opposite, IMHO. Most of the time the precision you get with floating point is good enough, which is why nobody cares about intervals. But without intervals, when you (very rarely) do have a precision issue, you don't notice it, which can be very problematic. Take your own example: if you compute 1/x on x = 0.1±0.2 with floating point, you get the result 10 and you're (mistakenly) happy; with intervals you can see that there is an issue.

And what can you do with your numbers at that point? The answer: usually nothing. You may be surprised, but users prefer programs which produce a useful result 99% of the time and some nonsense 1% of the time to programs which produce a useful result 90% of the time and refuse to even run the other 10% - even if the latter never produce an incorrect result.

> Of course, discussions about interval arithmetic do mention its potential issues, where your computation is precise enough but the computed interval is too wide (because you wrote x*x instead of x^2, for example), but that doesn't mean this happens often..

This is the wrong measure. Most of the time, interval arithmetic and plain old IEEE arithmetic give you more or less the same answer. But in the cases where interval arithmetic says there is no useful result, more often than not it's wrong. Why? Because the numbers you receive from real life don't adhere to the intervals idea! When you get something like 0.1±0.2 as an input, it usually means the value is probably 0.1±0.05 - but if the stars were all aligned incorrectly when the measurement took place and every possible source of error materialized at once, it may become 0.3 or -0.1. IEEE will give you x=10, and most of the time that'll be good enough (because you don't really run your systems at close to 100% of capacity, and the spare margin is often quite large), so you'll be able to cope with 20 or 30, too. Intervals will give you the answer "wrong input data" and you'll be forced to revamp the whole model.

Interval arithmetic is not used everywhere and is not supported by FPUs for a reason.

P.S. It's funny that you mentioned `cos`/`sin` in your argument. Do you know how `sin` and `cos` are implemented in the FPU and what precision they actually offer? The answer is obvious: you don't. And if you naively expect to get the result with the full precision a double supports, you are sorely mistaken. This means that if you ever want to use interval arithmetic with `sin` or `cos` (and other transcendental functions), you need to reimplement them from scratch in a much, much, MUCH less efficient manner. That may be useful in some cases, but it is not something you want in a general-purpose extension language by default.