
Kernel build performance

Posted Nov 25, 2009 10:27 UTC (Wed) by epa (subscriber, #39769)
In reply to: Kernel build performance by mingo
Parent article: The 2009 Linux and free software timeline - Q1

It's bigger, but not all of the new code is built. Otherwise the kernel image, which took a few hundred kilobytes back in the old days, would now take 300 times as much space.

Still, I remember leaving the computer running all night to build a new 1.2.x kernel, even though it was a 486. I think disk thrashing is the biggest factor.



Kernel build performance

Posted Nov 25, 2009 10:36 UTC (Wed) by mingo (guest, #31122) [Link] (5 responses)

Not all new code is built - but the bit you do build did get comparatively larger - and GCC got comparatively slower.

It roughly evens out for the configs I'm using - my kernel build times have stayed more or less constant for the last 10 years. When I go back to older kernels and build them again, they build quite a bit faster than they did on the older hardware. (No surprise there.)

My point is that it's all pretty natural and the build times are acceptable. The kernel grows, GCC gets smarter and slower, and hardware gets faster - on a long-term trend basis these factors manage to cancel each other out. (Which is pure chance, but that's what's happening.)

Kernel build performance

Posted Nov 25, 2009 11:21 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (3 responses)

It's not /pure/ chance, it's partly because people adjust their behaviour in order for things to come out as expected. When you're a bit late you run for the train, and when you're early you stop to look at the clouds in the sky.

To drag in the inevitable car analogy: we know that making cars safer has a muted effect, because drivers compensate somewhat for (some kinds of) safety improvements by driving less carefully. Wider lanes result in faster driving. Better brakes result in more tail-gating. There is some hidden 'right' amount of danger that we're instinctively comfortable with, and it's calibrated way above the level we would accept if it were presented as a conscious decision.

If your new PC and compiler seem lightning fast, you may be tempted to put off that idea you had to speed up build times, because it doesn't seem to matter. On the other hand, when compiling your program seems to always take too long, you may spend the time thinking of ideas to make compilation faster, reduce code size, etc.

I've read somewhere the idea that the same thing applies to Moore's law. When engineers see that the new product is right in line with Moore's prediction, they relax, take some holiday, goof off at work. But when it's 10% short, they fear the competition will overtake them, their bosses insist they work overtime, they worry about the problem at home, and so on. So in practice Moore's law may work just because we think it works; if we'd expected a doubling every three years, that could have worked fine too.

Kernel build performance

Posted Nov 25, 2009 12:08 UTC (Wed) by mingo (guest, #31122) [Link] (2 responses)

> It's not /pure/ chance, it's partly because people adjust their behaviour in order for things to come out as expected. When you're a bit late you run for the train, and when you're early you stop to look at the clouds in the sky.

Yeah, it's not pure chance, but note that none of the factors I cited are really 'macro behavioral' items. People don't speed up or slow down the kernel build via a single act - their micro-changes have _way_ too little effect on it as a whole. It literally takes a thousand changes for anything like this to show up in any wall-clock measurement.

(Sometimes there's feedback along the lines of 'hey, you made the kernel build slower' - but these aren't efforts that stabilize it; they just affect the basic parameters, and the combination (the end result) is random.)

But I'll certainly agree that people wouldn't accept 30+ minute kernel build times - nor would they stop changes that take the build from 10 seconds to 20 seconds (halving build performance without anyone really complaining). So there's a certain psychology-driven behavior that keeps it somewhat within a given "band" of performance - but the fact that kernel build times have been pretty stable over the past decade is pure chance, I think.

But I think I digress :-)

Kernel build performance

Posted Nov 25, 2009 14:13 UTC (Wed) by nix (subscriber, #2304) [Link] (1 responses)

> people wouldn't accept 30+ minute kernel build times

Your enormous hardware budget is showing :) Until I upgraded this year, I had never owned, or even seen, a machine that could build a kernel in less time than that. Multi-hour build times are not unknown, though admittedly you need 1999-vintage hardware for that, which is pushing obsolescence even among those of us with near-nil hardware budgets.

Kernel build performance

Posted Nov 25, 2009 15:41 UTC (Wed) by mingo (guest, #31122) [Link]

> Your enormous hardware budget is showing :)

You are making assumptions :-)

I regularly build the kernel on stock laptops and desktops (which are typically one to three generations behind the state of the art), and the build times are well below 30 minutes (usually under 10 minutes).

The oldest system on which I still build the kernel is an 833 MHz P3 laptop with 512 MB of RAM, where a typical kernel takes 45 minutes to build. (But that's ~5 generations old, and kernel developers/testers rarely use such old systems.)

I never build distro kernels, though - I always use (and always have used) .config files tailored to the specific hardware. You can certainly waste a lot of time by building generic kernels.
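
For illustration only - this is not necessarily the workflow described above, just a minimal sketch of one way to get a hardware-tailored .config using the kernel's own make targets (localmodconfig only exists in trees from around 2.6.32 onward, and the -j value is an arbitrary example):

    # From a kernel source tree, with the hardware of interest in use:
    # shrink the configuration down to the modules currently loaded.
    make localmodconfig

    # Parallel build, timed; pick -j to roughly match the number of CPUs.
    time make -j4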

Kernel build performance

Posted Nov 25, 2009 12:23 UTC (Wed) by nix (subscriber, #2304) [Link]

I don't know. For me, kernel build times stayed roughly constant (on those machines I upgraded) until I went multicore. Then they *plunged*. The first machine I ran Linux on, in 1997, took half an hour to build a kernel, and that stayed true (nowadays a 900 MHz PIII takes a bit over an hour). My current single-socket Nehalem desktop takes two minutes, and that's over NFS: do it on the NFS server and it takes 57 seconds. And that's *with* debugging information!

I think that could be considered a speedup. :)

Kernel build performance

Posted Dec 5, 2009 4:06 UTC (Sat) by adobriyan (subscriber, #30858) [Link]

> I think disk thrashing is the biggest factor.

The kernel build is CPU-bound unless you do something like not using make -j even on a UP machine, or turning CONFIG_DEBUG_INFO on (where so-called thrashing becomes much more noticeable).
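
For illustration, a minimal sketch of the two knobs mentioned above, assuming a source tree recent enough to ship the scripts/config helper; the -j value is again just an example:

    # Parallel build: even on a uniprocessor, extra jobs keep the CPU busy
    # while other jobs are waiting on disk I/O.
    make -j2

    # CONFIG_DEBUG_INFO inflates the object files considerably, which is when
    # memory pressure and disk thrashing start to dominate; disable it if the
    # debugging information isn't needed, then refresh the configuration.
    scripts/config --disable DEBUG_INFO
    make oldconfig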

