Wow.
Posted Jul 30, 2010 12:01 UTC (Fri) by dskoll (subscriber, #1630)
In reply to: Wow. by niner
Parent article: The first Rakudo Star release
Well the perl5 binary did have 17 (!!) years of time for optimization.
But in those 17 years, the perl5 binary has grown, not gotten smaller. (I'm not sure about speed. I suspect important parts such as the regex engine have been made faster over time.)
Posted Jul 30, 2010 12:13 UTC (Fri) by niner (subscriber, #26151)
Seriously: what exactly is the point of your statistics showing the surprising result that an early adopters' pre-release is not as optimized as the predecessor that has been in production use for over 15 years? Another surprise: it also has more bugs! Who'd have thought?
Posted Jul 30, 2010 14:49 UTC (Fri) by dskoll (subscriber, #1630)
Seriously: what exactly is the point of your statistics showing the surprising result that an early adopters' pre-release is not as optimized as the predecessor that has been in production use for over 15 years?
Come on! Eliminating 95% of memory usage and reducing startup time by a factor of 500 is merely a matter of optimization?
I recently completely rewrote a C program we were using in-house to use less memory. I reduced the memory to 1/3rd of the original, and that was with a large amount of effort and creative hacks. There is simply no way to "optimize away" the amount of bloat needed to make Perl 6 competitive with Perl 5.
After ten years of development, if the best that can be achieved is a feature-incomplete, buggy, bloated alpha... that's a sign that the project has gone off the rails.
Posted Jul 30, 2010 15:06 UTC (Fri) by niner (subscriber, #26151)
That's what you get by reducing the comparison of complex things like programming languages to one single metric.
Posted Jul 30, 2010 15:34 UTC (Fri) by dskoll (subscriber, #1630)
Competitive on what grounds?
Performance and memory consumption.
On saving a couple of megabytes of memory?
Umm... a "couple of megabytes"? I'm talking about 93MB. Our commercial product uses many (sometimes hundreds) of concurrent Perl processes. Even though it's not embedded, 93MB extra overhead per process would kill us. (In perl 5, even though we fork after loading all our modules, it seems that the reference-counting GC method makes copy-on-write inefficient; most memory pages end up getting touched and we see poor memory sharing among processes.)
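To make the pattern concrete, here is a minimal sketch of what I mean by "fork after loading all our modules". The module name and worker routine are hypothetical placeholders, not our actual code; the point is only that the children are supposed to share the parent's pages copy-on-write, and that Perl's reference counting tends to dirty those pages anyway.

    #!/usr/bin/perl
    # Rough sketch of the preload-then-fork pattern described above.
    # "Our::Heavy::Module" and do_work() are hypothetical placeholders.
    use strict;
    use warnings;
    # use Our::Heavy::Module;   # load the expensive modules once, before forking

    my @kids;
    for my $worker (1 .. 4) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: its pages start out shared copy-on-write with the parent,
            # but merely reading Perl data updates reference counts stored
            # inside the values themselves, which dirties those pages and
            # defeats much of the sharing.
            do_work();
            exit 0;
        }
        push @kids, $pid;
    }
    waitpid($_, 0) for @kids;

    sub do_work { }   # stub so the sketch runs as-is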
Me and many others just couldn't care less.
Yes, and that is obvious by the state of Perl 6: Buggy, feature-incomplete and bloated after a decade of development.
The notion that programmer time is far more valuable than performance or memory consumption is usually true, but it is not true when it comes to programming languages. The designers of programming languages must try to make them efficient, because the time they spend doing that is multiplied thousands or millions of times when people use the language. Something that makes Perl a bit faster or smaller has enormous dividends, far more than one particular developer making his or her particular Perl project smaller or faster.
So if your feelings about speed and memory ("couldn't care less") are indicative of the general attitude among Perl 6 developers, then I say it again: This is a doomed project. It may be unpopular to say that in the Perl community, but I bet most objective outsiders would agree with my assessment.
So one could easily turn that argument around: by requiring the use of countless fragile modules to still not reach the same level of maintainability and robustness features that Perl 6 has, Perl 5 is just not competitive.
Except that's all in theory. A robust implementation of Perl 6 doesn't exist yet, and we have no idea whether or not the average Perl 6 project will be any more maintainable than the average Perl 5 project.
Posted Jul 30, 2010 16:29 UTC (Fri) by chromatic (guest, #26207)
Patrick and I know where time and memory go in Rakudo Star, and as such, we're confident that we can find major optimizations. For example, with some effort we can halve the amount of memory used for objects, which reduces both the memory footprint and the amount of time spent in the garbage collector. Other such optimizations exist; don't assume that we care nothing for them.
(Oh, and Rakudo's been in development since November 2007--to correct your "decade" number.)
Posted Jul 30, 2010 17:38 UTC (Fri) by dskoll (subscriber, #1630)
You are correct in assuming I haven't looked deeply at the code for Rakudo Star. I just downloaded it to "kick the tires".
I wish you luck in your optimization efforts, though I remain doubtful you'll succeed.
Rakudo's been in development since November 2007--to correct your "decade" number.
My "decade" reference refers to Perl 6 itself; the first design notes for Perl 6 came out in 2000, I believe. (Another sign of a project in trouble, IMO, is abandonment of a major line of development [PUGS] in favor of a completely different line.)
Posted Aug 18, 2010 5:21 UTC (Wed) by lindahl (guest, #15266)
Posted Aug 5, 2010 15:42 UTC (Thu) by nix (subscriber, #2304)
An example: I sped up the userspace daemon used by the Entropy Key hardware by more than a third (minimal figure) or more than 95% (maximal figure) last night. It took about an hour. This is not because the ekeyd authors were fools or lazy: it's because they had other things to do than optimization, so the first person to optimize it could get major speed improvements with very little effort and a few algorithmic changes to a couple of hotspots. (If the structure of the code makes such changes unnecessarily hard, that *is* an indictment of the original developers. This certainly wasn't true of the ekeyd and I doubt it's true of Rakudo.)
I would *expect* developer prereleases of software to be just like this more often than not.
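For what it's worth, here is a generic illustration of the kind of algorithmic hotspot fix I mean. It is written in Perl rather than C, is not the actual ekeyd change, and all the names and sizes are made up; only the shape of the change matters: a linear scan performed inside a loop is replaced by a one-time hash build.

    #!/usr/bin/perl
    # Toy example of an algorithmic hotspot fix; data and sizes are invented.
    use strict;
    use warnings;

    my @known   = map { "key$_" } 1 .. 50_000;
    my @queries = map { "key" . int rand 100_000 } 1 .. 50_000;

    # Slow version: a linear scan of @known for every query, O(n*m) overall.
    # my @hits = grep { my $q = $_; grep { $_ eq $q } @known } @queries;

    # Fast version: build a lookup hash once, then each query is O(1).
    my %known = map { $_ => 1 } @known;
    my @hits  = grep { $known{$_} } @queries;

    printf "%d of %d queries matched\n", scalar @hits, scalar @queries;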
