
Is the kernel development process broken?


Posted Mar 10, 2005 6:59 UTC (Thu) by khim (subscriber, #9252)
In reply to: Is the kernel development process broken? by mrshiny
Parent article: Is the kernel development process broken?

I've made these comments before in other discussions, but what this amounts to is that some of the kernel developers seem unwilling to develop the kernel the way most people develop software: work on something until it's "finished", then fix it up until it's "stable".

Bingo! You've got a prize. Just a clarification: "some of the kernel developers" is something like 95% of them. Most development work on the kernel is done not by "big kernel hackers" who can coordinate their work with kernel releases, but by independent developers (or not-so-independent developers, if we are talking about people who are paid for kernel work). And they care about exactly one feature: their own. They want it included in the kernel when it's ready, not a year later when it has become obsolete. And yes, a year or so is what's needed to "properly stabilize" a kernel.

With 2.4 we had the mainline kernel (forever obsolete, of no real interest to 90% of users) and vendor kernels (heavily patched and not as stable, but with USB support where appropriate, with timely SATA support, with XFS, and so on). With 2.6 we have exactly the opposite, and this is a Good Thing(tm), since it's easier for everyone when vendor kernels are more stable and carry a reduced number of features than the other way around: very small chance of API fragmentation and the like.

I'd like to see someone step forward to maintain 2.6, and let the rest of the developers go off with 2.7. I don't understand why this hasn't happened really... 2.6 is really just like what a 2.7 might be. If people are happy with the current 2.6, they'll be just as happy with the 2.7 kernel. Those of us who want stuff to work can stick to 2.6.

Looks like you missed the whole point of the article. Development kernels do not receive testing! 99% of users will be stuck with 2.6 until 2.8 is out, and then we'll see a huge number of complaints: "oh, my Dell keyboard does not work" and "aaa! my CD is not detected anymore". 2.6 is only stable because it's declared stable. This is not a joke: a kernel can only be made stable when it's widely tested, and it can only be widely tested when it's declared stable. Thus there is no realistic way to transform an unstable kernel into a stable one! The 2.4.x kernels were disasters for most small x, for exactly that reason. I do not see why we need a repeat performance with 2.8.x...



Is the kernel development process broken?

Posted Mar 10, 2005 15:40 UTC (Thu) by mrshiny (subscriber, #4266)

I'm not sure where you get the statistic that the 2.4 kernel was not interesting to 90% of the users. I'd be very surprised if that were true.

Furthermore, I don't believe that the 2.6 kernel makes life easier for vendors. Preventing API fragmentation is a good goal, but it can be met by having shorter development cycles. By shorter development cycles I don't mean 2.6.x to 2.6.x+1. Each 2.6 kernel is "unstable", as in untested. You claim I didn't read the article, but you got the point backwards: all of the 2.6 kernels are untested and thus are exactly like 2.5 kernels, except that with BitKeeper we can better manage the changes and make sure that each release is somewhat working. But nothing about the kernel dev process changes the fact that users want working kernels, not experimental kernels.

As to your comment that the 2.8.0 kernel would be unstable because nobody tested 2.7.99, you're right. That's exactly what I'd expect, except for this: the kernel developers should declare a feature freeze, and then a code freeze, before releasing 2.7.99. 2.7.99 should contain fixes for bugs found in previous 2.7 kernels. 2.7.99 can be announced to the world, to distro-makers, to app developers, to anyone. And then, once bugs stop being found, it can be released as 2.8.0. This is the QA process that many software companies follow. Open-source projects follow it too; this is exactly what KDE does. Will this catch all bugs? Of course not. But if there were a real effort to make a release candidate that was actually suitable for release, people might test it. Especially if it offers features that the supposedly obsolete 2.6 does not.

How can we offer QA, and real release candidates, and stable kernels, while still preventing obsolescence and API fragmentation? Have shorter release cycles. The Gnome project releases every 6 months. I'm sure the kernel team could come up with something reasonable along these lines.

Is the kernel development process broken?

Posted Mar 10, 2005 20:47 UTC (Thu) by oak (guest, #2786)

The kernel is one *huge* project. Gnome is a lot of "trivial" to medium-sized
projects, each of which sits on top of a stable API (Gtk has been API-stable
since 2.0; after that only new APIs have been added, no old ones have been
changed).

Just add up the lines of code of all the Gnome projects (ignoring
configure files) and compare that to the kernel code. Then look at how
many lines of that code change in a 6-month period in both projects...

Is the kernel development process broken?

Posted Mar 10, 2005 21:15 UTC (Thu) by mrshiny (subscriber, #4266)

So what? Does that give the kernel developers an excuse to release broken software? The technical challenges behind Gnome or KDE may be much smaller than those behind the kernel, but then, I expect that the calibre of developer on the kernel is much higher. And I expect that their level of quality control should be higher, since their product is actually more important.

As for the problem of stable APIs, I'm a believer in stable APIs. The kernel developers like to complain that stable APIs make their jobs harder, but I think the reality is that they make certain jobs harder and other jobs easier. For example, it's easier to make any change you want if the API is unstable, because you don't have to worry about compatibility. On the other hand, it's hard to work on drivers that need to work on multiple versions of the kernel, since there's an unlimited number of API revisions. It's harder to make a new module of any kind that needs to run on multiple kernel versions.

Anyway, the hardware API that the kernel runs on changes very slowly. The kernel has hardly any dependencies, except on itself. So any problems that are due to API flux are created by the kernel developers themselves. However, they've claimed that a flexible API is how they want to work, so, fine, that's how they work.

I think the kernel process could be improved by separating the drivers into a separate project. That way we could have new drivers for old kernels, instead of having to install new kernels just to get new drivers. If we just accomplished that separation, then I bet most users would be totally happy since they could pick whatever 2.6.x works for them, and use the 2.6-series drivers to their heart's content. Most people don't need new kernels, they only need new drivers. But I won't elaborate on this since Greg KH has already flamed me about this very topic. Let's just say that I disagree with the kernel developers about their development practices.

Is the kernel development process broken?

Posted Mar 11, 2005 14:19 UTC (Fri) by jschrod (subscriber, #1646)

Why are the 2.4 kernels not of interest to users? Like, working APM instead of broken ACPI brain damage?

I would happily use it, were it not for the discontinued support of SUSE 8.1 and my thus-forced upgrade.

Joachim


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds