Julia v1.10: Performance, a new parser, and more
Posted Jan 17, 2024 11:15 UTC (Wed) by joib (subscriber, #8541)
In reply to: Julia v1.10: Performance, a new parser, and more by ianmcc
Parent article: Julia v1.10: Performance, a new parser, and more
But of course an implementation can only calculate with whatever value is given to it. And yes, tan(x) is almost certainly going to be more accurate than calculating tanpi(x/pi). So the *pi() functions are only useful if the argument you want to calculate with still needs to be multiplied by a multiple of pi (which, per se, isn't entirely uncommon).
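As a minimal illustration of that situation (added here for illustration, not part of the comment): the rounding of π itself is what the *pi() functions let you sidestep. Float64's π is already off by roughly 1.2e-16, so any angle formed as pi*x inherits at least that relative error before tan() is ever called, whereas tanpi(x) works from the unrounded x.

# Float64(π) is about 1.22e-16 smaller than the mathematical π, so an angle
# formed as π * x is already perturbed before tan() sees it; tanpi(x) never
# forms that product in Float64.
println(Float64(big(π) - Float64(π)))   # ≈ 1.22e-16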
Posted Jan 17, 2024 12:56 UTC (Wed) by ianmcc (subscriber, #88379)
In fact the error in tanpi is bigger than it could be, since 1/4 + 2e10 is exactly represented as a Float64 and the exact result of tanpi(1/4 + 2e10) is 1.0, but Julia instead gives 0.9999999999999999. So, for example, if the result is used as input for another function, that error can blow up.
If you need to calculate tan of numbers ~ 1e10 using double precision, and those numbers are calculated from some other function, then you ought to "not care" about errors in the result of order 1e-5. If you want the result to be more accurate, then you need your input value to be specified to more digits than are available in double precision.
In order to rely on the special property that tanpi(1/4 + 2e10) produces 1.0 (+/- 1 ulp), the input number can't have been subject to any previous rounding. I've never seen a real-world example where that is the case (except trivial examples where the argument is a constant).
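A quick check (added for illustration, not part of the comment) that 1/4 + 2e10 really is stored exactly as a Float64; the big"" literal below is parsed at 256-bit precision, where 20000000000.25 is exact, so the comparison is a genuine test:

x = 1/4 + 2e10
println(x == big"20000000000.25")   # true: x holds exactly 20000000000.25
println(x - 2e10 == 1/4)            # true: the 1/4 was not rounded away when forming x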
Posted Jan 17, 2024 14:31 UTC (Wed) by joib (subscriber, #8541)
And yes, a particularly large argument to a trigonometric function probably implies that the problem should be formulated in some other way. But again, the implementation of a trigonometric function can't know that, and the best it can do is to assume the argument is exact and calculate an answer that is as accurate as possible under that assumption.
And yes, the x87 was notoriously bad at reducing large arguments. Thankfully those days are behind us, and most libcs, AFAIK, do argument reduction roughly per the famous(?) paper by Ng.
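For readers unfamiliar with how that works, here is a minimal sketch (added here as an illustration, not Ng's algorithm) of the simpler Cody-Waite flavour of argument reduction: carry π/2 as a short "high" part plus a "low" correction, so that the multiple of π/2 can be subtracted almost exactly. Ng's paper deals with the much harder case of huge arguments, where far more bits of π are needed.

# Split π/2 into a high part with a short (21-bit) significand, so that
# k * PIO2_HI is exact for any reasonably sized integer k, plus a low
# correction term carrying the next 53 bits.
const HALFPI  = Float64(big(π) / 2)
const PIO2_HI = reinterpret(Float64, reinterpret(UInt64, HALFPI) & 0xffff_ffff_0000_0000)
const PIO2_LO = Float64(big(π) / 2 - big(PIO2_HI))

# Reduce x to r ≈ x - k*π/2 with k the nearest integer; only adequate while
# k * PIO2_LO stays tiny, i.e. for moderate arguments.
function reduce_pio2(x::Float64)
    k = round(x / HALFPI)
    r = (x - k * PIO2_HI) - k * PIO2_LO   # the two subtractions lose almost nothing
    return k, r
end
# e.g. reduce_pio2(1000.0) returns k = 637.0 and a remainder of magnitude below π/4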
Posted Jan 17, 2024 14:41 UTC (Wed) by farnz (subscriber, #17727)
As a trivial example of when the *pi functions are useful, consider a case where your measurement is not in radians; for example, you have a measurement of 200° or 97 gon. The conversion from degrees to radians is to multiply by π and divide by 180, while the conversion from gon to radians is to multiply by π and divide by 200. In both cases, the *pi() functions help because it's incredibly cheap to reduce the argument to the range 0 to 2, whereas it's a lot more challenging to reduce it to 0 to 2π without loss of accuracy.
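In Julia that might look like the sketch below (the helper names are invented for this illustration and are not Base functions): the reduction modulo a full turn is exact because the divisor is an exact integer, and the only inexact step left is a single division.

# Hypothetical helpers, not part of Julia Base: tangent of an angle given in
# degrees or gon, going through tanpi so that π never has to be multiplied in.
tan_deg(d) = tanpi(rem(d, 360) / 180)   # rem by 360 is exact; /180 is one rounding
tan_gon(g) = tanpi(rem(g, 400) / 200)   # likewise, 400 gon per full turn
println(tan_deg(200.0))                 # tan of 200°
println(tan_gon(97.0))                  # tan of 97 gon

(For degrees specifically, Julia also ships sind, cosd, and tand in Base.)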
Posted Jan 18, 2024 9:55 UTC (Thu) by ianmcc (subscriber, #88379)
That isn't true: in IEEE arithmetic the remainder function is exact, and very fast. In fact I don't think there is any performance advantage to taking the remainder modulo 2 compared with modulo 2π.
Posted Jan 18, 2024 10:22 UTC (Thu) by farnz (subscriber, #17727)
The IEEE arithmetic remainder function is not able to give an exact remainder when dividing by 2π; it can only give an exact remainder when the divisor's IEEE 754 representation is exact, and there is no exact, finite representation of 2π (in any binary floating-point system, not just in IEEE 754). You will thus, per IEEE 754's own rules, get an inexact remainder when you take the remainder after division by 2π, since you did not supply an exact representation of 2π to the remainder function.
In contrast, the real number 2 can be represented exactly in IEEE 754 floating point, and thus the remainder function will give you an exact result. This is, FWIW, the actual rationale given by the P754 working group for the inclusion of sinPi, cosPi, and atanPi in table 9.1; the only reason that IEEE 754 has traditionally omitted tanPi, acosPi, and asinPi is that these functions have problems where there are two or more valid output values for certain inputs, and there are arguments for each of the possible valid output values.
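A quick sketch of the gap being described, using Julia with BigFloat and Base's rem2pi as references (added here for illustration, not part of the comment):

twopi = 2 * Float64(π)           # the closest Float64 to 2π, about 2.4e-16 too small
x = 1.0e10
println(rem(x, twopi))           # an exact remainder, but with respect to the rounded period
println(rem2pi(x, RoundDown))    # Base's reduction with respect to the true 2π
# the two differ by roughly (x / 2π) * 2.4e-16, i.e. about 4e-7 for x = 1e10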
Well, in ordinary cases (i.e. when the argument is of order 1), tanpi(x) and tan(pi*x) will have the same accuracy:
println(tanpi(1/4))
println(tan(π/4))
Program stdout
0.9999999999999999
0.9999999999999999
The problem with trig functions with very large arguments is that the calculation is ill-conditioned, and roundoff error in the argument produces large errors in the result. That it is even possible to implement the trig functions so that the result has only +/- 1 ulp of error for any input is non-obvious (and wasn't the case in the original 287 coprocessor). And while it is nice to have, in practice it isn't all that useful, since you usually have at least 1 ulp of error on the input as well, and there is no way to avoid that 1 ulp of input error being magnified into a large error in the output.
print(tanpi(tanpi(1/4 + 2e10) * 2e10 + 1/4))
Program stdout
0.999976031837428
Yet in principle all of the numbers here are exactly representable in Float64, and the exact result is 1.0.
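A way to see the ill-conditioning directly (a sketch using BigFloat as the error-free reference; not part of the original comment): perturb the argument by a single ulp and compare the exact tangents of the two neighbouring Float64 values.

x  = 1.0e10
xp = nextfloat(x)                        # x plus one ulp (about 1.9e-6 at this magnitude)
t, tp = tan(big(x)), tan(big(xp))        # reference values, free of implementation error
println(Float64(abs(tp - t) / abs(t)))   # relative change in tan caused by 1 ulp of input
# Even with perfect arithmetic, that single input ulp costs far more than the
# ~1e-16 relative accuracy the implementation itself can promise.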
