
An interview with Larry Wall (LinuxVoice)

Posted Jul 18, 2015 21:12 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
In reply to: An interview with Larry Wall (LinuxVoice) by raiph
Parent article: An interview with Larry Wall (LinuxVoice)

> Sure, but that's irrelevant. I was commenting on things like 0.1, which isn't a floating point number, and 1/10, which isn't either. (Or even 1/3, which isn't even decimal.) These are all rational numbers that can easily be stored in binary storage with no loss of precision by storing two integers instead of one floating point number.
The problem is in the details. If you just want to store a rational number, then it's perfectly OK.

However, once you start doing moderately complex math with true rationals, the denominators grow really fast. Sooner or later you have to start normalizing (rounding) them, which causes precision loss.

That's why flexible rationals are not really popular, despite their seductive allure. People just stick to f64 if they want speed, since f64's peculiarities are well known and taught in every CS course. Or they stick to decimal types if they need to mimic the way humans do arithmetic (primarily for financial work).
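
A quick sketch of the denominator growth in Python, with fractions.Fraction standing in for a "true rational" type (the specific denominators are arbitrary, just chosen so that little cancels):

    from fractions import Fraction

    # Summing fractions whose denominators share few common factors:
    # the reduced denominator of the running total keeps multiplying up.
    total = Fraction(0)
    for d in range(101, 131):
        total += Fraction(1, d)

    print(len(str(total.denominator)))   # dozens of digits after 30 additions
    print(float(total))                  # the f64 view stays a fixed 64 bits

The exact result is still correct, but every intermediate value gets bigger and slower to work with, which is exactly why you eventually have to round.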



An interview with Larry Wall (LinuxVoice)

Posted Jul 18, 2015 22:35 UTC (Sat) by juliank (guest, #45896) [Link] (5 responses)

> Sooner or later you have to start normalizing them, causing precision losses.

Why does normalization cause precision loss?

An interview with Larry Wall (LinuxVoice)

Posted Jul 19, 2015 4:49 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

Once your denominator grows too large (because the numerator and denominator are mutually prime, so nothing cancels), you have to throw away some of that excess precision.

Additionally, as soon as you need transcendental functions (or even a simple square root!), rationals become useless again.
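
To make that concrete, a small Python sketch (fractions.Fraction plays the role of an exact rational type; limit_denominator() is just one possible "normalization" strategy):

    from fractions import Fraction
    import math

    x = Fraction(1, 3) + Fraction(1, 7) + Fraction(1, 11)   # exactly 131/231
    # Keeping the denominator bounded means rounding to a nearby fraction,
    # and the difference is precisely the precision we agreed to give up:
    rounded = x.limit_denominator(100)
    print(x, rounded, x - rounded)

    # An irrational result cannot be represented as a rational at all;
    # here the square root silently comes back as a plain float:
    print(math.sqrt(Fraction(2)))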

So there's only a very narrow niche for them, and they are not used widely even though lots of languages have libraries supporting them.

An interview with Larry Wall (LinuxVoice)

Posted Jul 19, 2015 6:07 UTC (Sun) by mchapman (subscriber, #66589) [Link] (3 responses)

> Once your denominator grows too large (because the numerator and denominator are mutually prime, so nothing cancels), you have to throw away some of that excess precision.

It seems to me that the people who are going to be doing calculations involving a large number of coprime denominators are precisely the same people who will be quite happy to keep using floating-point numbers anyway. So long as it's clear where the transition from rational values to floating-point values lies (e.g. through a warning, or more explicitly through a lexically scoped pragma), I see no reason why we can't satisfy all parties here.
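
Something along those lines is easy to prototype. Purely as an illustration (the threshold, the function and its behaviour are all invented here, not anything Perl 6 actually does), a Python sketch of the "warn at the transition" idea:

    import warnings
    from fractions import Fraction

    MAX_DENOMINATOR = 2**64          # arbitrary cut-off, for illustration only

    def rational_mul(a: Fraction, b: Fraction):
        """Multiply exactly, but degrade to floating point -- loudly --
        once the reduced denominator gets unmanageably large."""
        result = a * b
        if result.denominator > MAX_DENOMINATOR:
            warnings.warn("rational overflow: continuing in floating point")
            return float(result)
        return result

A lexically scoped pragma would just move the same decision from a library call into the language itself, so the transition stays visible either way.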

An interview with Larry Wall (LinuxVoice)

Posted Jul 19, 2015 11:20 UTC (Sun) by khim (subscriber, #9252) [Link] (2 responses)

You can't satisfy all parties, because using fractions where floating point numbers are requested satisfies no one. It's like UTF-8 vs UTF-16. With UTF-8 you need to learn to live with variable character width early and thus have a chance to produce a working program. With UTF-16 your program passes all tests with flying colors (because they all use the BMP), yet fails for real users.

Similarly with "fractions vs floating point numbers": if your language introduces floating point numbers early on (as in: 0.1 is not precisely 0.1) and distinguishes them from fractions (think Guile: 7/2 will stay 7/2 there, it'll never automagically become 3.5), then you have a chance. If your language "does basic math right", you will happily rely on that until you need a sine or a square root. Then you'll keep relying on the fact that your language does basic math right, only to find out too late that sin²x + cos²x is not always 1.
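
The "0.1 is not precisely 0.1" part is easy to demonstrate; here in Python rather than Guile or Perl 6, with fractions.Fraction standing in for exact fractions (the value 0.5 below is arbitrary, and the size of the final error is not guaranteed):

    from fractions import Fraction
    import math

    # Binary floating point shows its nature immediately:
    print(0.1 + 0.2 == 0.3)                                      # False
    # Exact fractions get the same "basic math" right:
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

    # But sin() and sqrt() leave the rationals, so identities like
    # sin²x + cos²x == 1 only hold approximately from then on:
    x = 0.5
    print(math.sin(x)**2 + math.cos(x)**2 - 1.0)   # may be a few ULPs off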

An interview with Larry Wall (LinuxVoice)

Posted Jul 19, 2015 12:03 UTC (Sun) by mchapman (subscriber, #66589) [Link]

> You can't satisfy all parties, because using fractions where floating point numbers are requested satisfies no one.

I don't think of 0.1 as "requesting" any kind of numeric type. It's just a real number.

In Perl 6 that literal does happen to create a Rat value. But that sounds reasonable to me -- all decimal real numbers *are* rational. 0.1 and <1/10> are just two different textual representations for the same value.

So what if you really want to "request" a floating-point value for some reason? Easy: simply use that Rat as a floating-point value. You could assign it to a Num scalar, or you could use it with some operator or method that only works with Nums. It's easy enough to do the conversion. It is, after all, exactly the same conversion most other programming languages do when they interpret the decimal form in the first place.
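
The Python equivalent of that argument, for what it's worth (Fraction here is only an analogy for Perl 6's Rat, not its implementation):

    from fractions import Fraction

    # The decimal numeral "0.1" denotes an exact rational value:
    exact = Fraction("0.1")
    print(exact)                 # 1/10

    # Converting that rational to a binary float afterwards...
    late = float(exact)
    # ...gives exactly the same bits as parsing the literal as a float up front:
    early = 0.1
    print(late == early)         # True
    print(Fraction(early))       # the nearest double -- a much uglier fraction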

To be honest, I would not be surprised if Perl implementations skipped the Rat value completely when the decimal literal appears somewhere it will immediately be treated as a floating-point value. If the Rat temporary isn't externally visible, constant folding can turn it into a constant Num instead.

An interview with Larry Wall (LinuxVoice)

Posted Jul 20, 2015 8:39 UTC (Mon) by jezuch (subscriber, #52988) [Link]

> basic math

Any programmer who thinks that computers implement "basic math" is simply incompetent... unless you're doing purely symbolic calculations and/or have infinite-length registers.

[Yes, I did fall into that trap once or twice when I was at university. But then I learned about proper numerical algorithms, numerical (in)stability, etc., and I'm no longer incompetent ;)]

