
Linux 2.5.30 released

Posted Aug 2, 2002 18:04 UTC (Fri) by proski (subscriber, #104)
In reply to: Linux 2.5.30 released by squash
Parent article: Linux 2.5.30 released

This makes me think that there should be an automatic test suite that every kernel must pass before it is released. Basically, the kernel should compile in the maximal configuration and boot both under Bochs and as user-mode Linux. If the test doesn't pass, the kernel is not released, no matter how experimental it is.

Compile errors waste developers' time, even the time of those who know how to fix them.
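
For illustration only, the gate being proposed here could take roughly the following shape (a minimal sketch: allyesconfig and the ARCH=um user-mode build are real kernel make targets, but the script itself, the tree path, the job count, and the boot check are assumptions, not anything that exists):

    #!/usr/bin/env python3
    # Sketch of the proposed release gate: build a maximal configuration,
    # then smoke-boot the result as user-mode Linux.  The tree path, job
    # count and boot handling are illustrative assumptions only.
    import subprocess, sys

    TREE = "/usr/src/linux"      # assumed checkout of the release candidate

    def run(*cmd):
        return subprocess.run(list(cmd), cwd=TREE).returncode == 0

    # 1. The maximal configuration must compile.
    if not (run("make", "allyesconfig") and run("make", "-j4")):
        sys.exit("FAIL: maximal configuration does not build")

    # 2. The same tree must also build and boot as a user-mode Linux kernel.
    #    (A real check would also need a root filesystem, e.g. a ubd0= image.)
    if not (run("make", "ARCH=um", "defconfig") and run("make", "ARCH=um", "-j4")):
        sys.exit("FAIL: user-mode Linux build broke")
    if not run("./linux", "init=/bin/true"):
        sys.exit("FAIL: user-mode kernel did not boot cleanly")

    print("PASS: release candidate survives the (very rough) smoke test")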



Compile Tests Considered Harmful

Posted Aug 2, 2002 20:23 UTC (Fri) by Peter (guest, #1127)

This makes me think that there should be an automatic test suite that every kernel must pass before it is released.

This is brought up every time a released kernel fails to compile for someone. It probably won't happen, because nobody ever seems eager to be the one to set up such a system. And it isn't really even practical, because:

  • there are too many kernel configurations - it is often not as simple as "compile everything in", "compile everything as modules", or "leave everything possible out". Certain problem cases come up often, like CONFIG_PROC_FS=n or CONFIG_DEVFS_FS=y, but it would be computationally infeasible to test every configuration (see the back-of-the-envelope sketch after this list).
  • why stop at i386? Why not cross-compile for all ~15 architectures? (Of course, you can't cross-compile some of these, but that is a kernel build problem and is being addressed - see the asm-offsets.h patches.)
  • for that matter, why stop with all the architectures? There are lots of sub-machine-types under arm, all of which use drastically different config options. mips, powerpc, and ia64 have variant sub-architectures as well. Even i386 has the SGI Visual Workstation, and the IBM NUMA-Q, and possibly more in the future.
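
To put a number on "computationally infeasible": even a deliberately small subset of independent tristate options explodes far beyond anything a build farm could cover (the 300 below is an assumed, conservative figure; the real kernel has far more config symbols):

    # Each tristate option can be y, m or n, so combinations grow as 3**N.
    options = 300                     # assumed small subset of config symbols
    print(f"{3 ** options:.2e} possible configurations")   # roughly 1.4e+143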

The biggest reason this won't happen, aside from nobody being willing to do the work, is that it would impose an unnecessary synchronisation point on Linus. He releases development (2.5.x) kernels not so everyone can go out and run them, but so other developers can synch up to his tree and develop against up-to-date sources. This allows people to send Linus patches that apply without tweaks, reducing his workload. Some patches are works-in-progress, like Ingo Molnar's IRQ rework - it affected potentially all drivers in the kernel (in practice, many were not affected, but they could have been). What do you think Linus should have done - waited until he and Ingo and a few others with the IRQ patch fixed every single instance of cli() and other such functions? Instead, he released a kernel known not to compile for many configurations so that the whole world could fix the remaining drivers.

Compile errors waste developers' time, even the time of those who know how to fix them.

It's much worse to only get a kernel once a month, and then scramble to adapt your development work to all the new changes. Kernel releases in the development series should be seen as synch points to make the whole process more transparent, not as releases per se.

In stable kernels, compile tests are very important - that's why we have a dozen pre-releases of 2.4.19 pending. In the 2.2.15-or-so days, Andrzej Krzystofowicz (sp?) did lots of "make randconfig" testing, unearthing large piles of corner cases that didn't compile. (As noted above, mostly it was certain common culprits like CONFIG_PROC_FS=n.) If you want to step up and do the same for 2.4.19, feel free - I'm sure it would be appreciated. But insisting that this sort of thing should hold up the release process, especially in the 2.5 world, is not going to fly.
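
For anyone tempted to take up that offer, the sweep being described is roughly the following (a sketch under assumptions: "make randconfig" is the real target mentioned above, but the tree path, run count and failure log are made up, and a 2.4-era tree would also need the old "make dep" step before building):

    #!/usr/bin/env python3
    # Build a series of random configurations and keep the ones that fail,
    # in the spirit of the "make randconfig" testing described above.
    import shutil, subprocess

    TREE = "/usr/src/linux"           # assumed location of the kernel tree
    RUNS = 50                         # how many random configurations to try

    failures = []
    for i in range(RUNS):
        subprocess.run(["make", "randconfig"], cwd=TREE, check=True)
        build = subprocess.run(["make", "-j4"], cwd=TREE)
        if build.returncode != 0:
            # Keep the generated .config so the breakage can be reproduced.
            shutil.copy(f"{TREE}/.config", f"failed-config.{i}")
            failures.append(i)

    print(f"{len(failures)} of {RUNS} random configurations failed to build")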

