
Quotes of the week

The software design moral: Everything is shit and will attempt to kill you when you're not looking.
-- Matthew Garrett

I don't believe "auto-destroy my music collection" is a sane default.
-- Alan Cox

BTW, the current influx of higher-complexity filesystems certainly worries me a little.
-- Christoph Hellwig

Can you post the patch, so that we can see if we can find some silly error that we can ridicule you over?
-- Linus Torvalds (Thanks to Jeff Schroeder)

There's a lot of stuff here, as can be seen by the final diffstat number:
779 files changed, 472695 insertions(+), 26479 deletions(-)
and yes, it's all crap :)
-- Greg Kroah-Hartman

I will just note wryly that it used to be that I could compile 0.9x kernels on a 40 MHz 386 machine in 10 minutes. Some 15 years later, it still takes roughly the same amount of time to compile a kernel, even though computers have gotten vastly faster since then. Something seems wrong with that....
-- Ted Ts'o


LOL

Posted Jan 8, 2009 3:34 UTC (Thu) by khim (subscriber, #9252) [Link] (10 responses)

I will just note wryly that it used to be that I could compile 0.9x kernels on a 40 MHz 386 machine in 10 minutes. Some 15 years later, it still takes roughly the same amount of time to compile a kernel, even though computers have gotten vastly faster since then. Something seems wrong with that....
Actually it feels exactly right to me: computers are doing the same perceived work in the same time. It just means software and hardware are in sync: new features are not added faster than the hardware can cope, yet new features are added so the hardware does not sit idle...

LOL

Posted Jan 8, 2009 12:12 UTC (Thu) by ahoogerhuis (guest, #4041) [Link]

I'm sure he's just equating the Linux kernel to the bloat of Vista; it runs about as well as I remember Windows 2.11 running on a 286.... I'll report for a life in exile now. ;)

-A

LOL

Posted Jan 8, 2009 12:45 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]

I think it means all of us wannabes will build our own little compile farms out of Eee PC machines once they hit the $250 price point.

LOL

Posted Jan 9, 2009 1:50 UTC (Fri) by giraffedata (guest, #1954) [Link] (4 responses)

yet new features are added so hardware does not sit idle...

I don't think it happens that way. Hardware makers don't make faster hardware without a pre-existing need for it.

What has happened is that hardware has advanced to meet the needs of the new features, which people want more than a faster compile.

It reminds me of an observation a traffic engineer once made. He found that the optimum speed, from a user utility point of view, on a California freeway was 35 miles per hour. (People claimed they hated driving that slow, but continued getting on the freeway until the speed went below 35). He noted that when he added a new lane to the freeway, the speed remained 35. More people used the freeway.

More California freeway humor (off-topic)

Posted Jan 9, 2009 3:15 UTC (Fri) by pr1268 (guest, #24648) [Link]

It reminds me of an observation a traffic engineer once made. He found that the optimum speed, from a user utility point of view, on a California freeway was 35 miles per hour. (People claimed they hated driving that slow, but continued getting on the freeway until the speed went below 35). He noted that when he added a new lane to the freeway, the speed remained 35. More people used the freeway.

Quoting the late Johnny Carson with respect to highway speed laws: "55 miles per hour? We Californians are changing tires at 55!"

LOL

Posted Jan 9, 2009 16:26 UTC (Fri) by rgmoore (✭ supporter ✭, #75) [Link] (2 responses)

What has happened is that hardware has advanced to meet the needs of the new features, which people want more than a faster compile.

I think that it works both ways. Some applications (scientific computing, rendering, and high-end games are good examples) have an insatiable demand for computing power. Those applications are always going to give processor manufacturers a market for improved performance, and they're going to do their best to fill it. The economics of processor design means that there's a trickle-down effect, and those high-performance designs will eventually work their way into cheaper and cheaper computers for the rest of the market.

But it's also important to remember that one of the markets for high end computers is software developers who want fast compile times. The developers then wind up targeting their own high-end systems when they design their software. They add cool features that take advantage of their faster machines, and they focus more on development speed than efficiency under the assumption of increasing power. That puts users on the perpetual hardware upgrade cycle.

He found that the optimum speed, from a user utility point of view, on a California freeway was 35 miles per hour. (People claimed they hated driving that slow, but continued getting on the freeway until the speed went below 35).

I think he was misinterpreting his findings. That looks to me like a classic substitution effect. People compare how long it will take them to get to their destination using different modes of transportation. A freeway is a better alternative as long as it's faster than the surface streets, on which traffic apparently moved at about 35 mph. It's not that drivers really think 35 mph is OK and are lying to the people who ask them about it. It's just that they need to get where they're going, and there isn't an available alternative that will get them there any faster.

traffic engineering, effects of increasing capacity

Posted Jan 9, 2009 19:18 UTC (Fri) by giraffedata (guest, #1954) [Link] (1 responses)

I think he was misinterpreting his findings. That looks to me like a classic substitution effect.

I must have explained it poorly, because that's just what he said. Except that he knows drivers also substitute trips at less convenient times, or forgo the trip entirely, rather than take a 30 mph freeway trip.

And his only point about the disparity between people claiming to hate driving 35 mph and what actually happens is that while they hate driving 35 mph, they like it enough to do it, which is all that matters. There are apparently enough people whose cutoff point of hating the freeway enough not to use it is right about 35 that adding new lanes doesn't significantly increase the speed.

I see the same thing in computers. Users accept performance that, to me, is maddeningly slow, so as computers get faster, application developers put in more features and keep things down at that speed.

Actually, I just remembered I quoted the wrong number. 35 mph is what he said yields the greatest freeway capacity, based on the following distance at which drivers feel comfortable at various speeds. I can't remember what the figure was for when people stop getting on the freeway. Probably 25. So as more people start using a 70 mph freeway, it absorbs the traffic and slows down steadily until it gets to 35, then crashes to 25 and the number of people getting on stabilizes there.

traffic engineering, effects of increasing capacity

Posted Jan 9, 2009 20:12 UTC (Fri) by dlang (guest, #313) [Link]

Speeds don't increase from 35 when they add more lanes because they haven't added enough lanes. If they keep adding lanes, they will eventually get to the point where the capacity is high enough to sustain higher speeds.

From a freeway builder's point of view it's most efficient to only spend enough money to get freeway speeds up to 35 mph, as that results in the most cars per $$ spent, but that's not what the users of the freeway want.

LOL

Posted Jan 15, 2009 12:15 UTC (Thu) by forthy (guest, #1525) [Link] (2 responses)

I agree that something's wrong here. Compare e.g. my Forth system. 20 years ago it took about one minute to compile on my Atari ST (8 MHz 68k); that was after I had managed to speed the compiler up by a factor of 10 (because 10 minutes was unbearably slow). Now it takes 0.3 seconds on a 2 GHz Athlon64, producing about 500k. That's a factor of 200 in compile time. The total size of the binary has expanded by a factor of 4, and there really are more features in it than there were 20 years ago. Compiling just the part I used to have on the Atari accounts for about a fourth of that total time (roughly 0.075 seconds), so the overall ratio looks reasonable (a factor of about 800 between the Atari and the Athlon64).

One reason the Linux kernel takes longer to build today is that GCC has become slower over time. The GCC maintainers tell me that this is because it optimizes better. For my own C programs, I still get the best results from 2.95.x (the last GCC that compiled reasonably "fast", which is already dog slow compared to a Forth compiler). So maybe it's not something wrong with the Linux kernel, but with GCC. After all, the size of the Linux kernel and its feature set (in terms of the amount of supported hardware) have increased far more since 0.9x than my Forth system has: it was already fully featured 20 years ago, just lacking things nobody would have thought of back then (like UTF-8 support or an X-based GUI).
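
A minimal sketch, assuming gcc is on the PATH and using "example.c" purely as a hypothetical stand-in for any self-contained C source file, of how one might see how much of the build time the optimizer accounts for: time the same file at different -O levels.

# Time the same C source under several GCC optimization levels.
# "example.c" is a placeholder name; any self-contained C file will do.
import subprocess
import time

SOURCE = "example.c"

for opt in ("-O0", "-O1", "-O2", "-O3"):
    start = time.perf_counter()
    # Compile to an object file only; "example.o" is just a scratch output.
    subprocess.run(["gcc", opt, "-c", SOURCE, "-o", "example.o"], check=True)
    print(f"gcc {opt}: {time.perf_counter() - start:.2f} s")

On a large translation unit, the gap between -O0 and -O2 or -O3 gives a rough feel for how much of the slowdown described above comes from the optimizer rather than from parsing.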

LOL

Posted Jan 15, 2009 18:39 UTC (Thu) by dfsmith (guest, #20302) [Link] (1 responses)

It does make me wonder how much time is spent parsing #ifdef'd sections of the myriad header files the kernel uses now.

LOL

Posted Jan 16, 2009 18:04 UTC (Fri) by jch (guest, #51929) [Link]

Very little. Parsing itself takes a small fraction of the time, and #ifdef'd-away sections are discarded by the preprocessor very early and never actually parsed.

What takes time is the optimiser, which takes ages in recent GCC releases.
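
A minimal sketch, assuming only that gcc is available on the PATH, of how one might check this: run just the preprocessor on a file whose disabled block is not even valid C and confirm that the block never reaches the parser.

# Show that an #ifdef'd-away block is discarded by the preprocessor
# before the parser ever sees it. Assumes gcc is on the PATH.
import os
import subprocess
import tempfile

c_source = """\
#ifdef NEVER_DEFINED
this line is not even valid C, but the parser never sees it
#endif
int main(void) { return 0; }
"""

with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
    f.write(c_source)
    path = f.name

try:
    # Run only the preprocessor (-E); the disabled block should be gone.
    out = subprocess.run(["gcc", "-E", path],
                         capture_output=True, text=True, check=True).stdout
    print("disabled block survived preprocessing?",
          "not even valid C" in out)  # expected: False
finally:
    os.unlink(path)

Timing gcc -E against a full optimized compile of the same file tells the same story: preprocessing and parsing are a small slice of the total.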

