time it takes to get a project into the upstream kernel
Posted Jul 25, 2007 14:26 UTC (Wed) by mingo (guest, #31122)
In reply to: Interview with Con Kolivas (APC) by freggy
Parent article: Interview with Con Kolivas (APC)
> Con Kolivas has been maintaining swap prefetching for more than a year.

Btw., while I support the upstream integration of the swap-prefetch code (I ran Con's testapp, reported the numbers, reported regressions, tested fixes, reviewed the code, gave my Ack, etc.), I'd like to point out that the above characterisation is quite unfair to the many other kernel developers who currently have one feature or another queued for upstream integration.
Many features had to wait several years and go through many iterations before they were merged. Some are still not merged today. Some were rejected and abandoned.
Let me give you three examples of major pieces of code I wrote that never went upstream: the 4G/4G VM feature, exec-shield, and Tux. I wrote the 4G/4G feature more than 4 years ago, and it's quite a bit of code:
60 files changed, 1942 insertions(+), 706 deletions(-)
Compare this to swap-prefetch:
23 files changed, 857 insertions(+), 6 deletions(-)
So it's in the same ballpark in terms of complexity.
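(As an aside: summaries in this form come from the diffstat tool, or from "git diff --stat"; below is a minimal Python sketch of the same counting, assuming a unified diff on stdin, purely for illustration.)

import sys

def diffstat(diff_lines):
    # Minimal diffstat: count changed files and added/removed lines
    # in a unified diff.  The '+++'/'---' headers name files and must
    # not be counted as changes.
    files = insertions = deletions = 0
    for line in diff_lines:
        if line.startswith('+++ '):
            files += 1
        elif line.startswith('--- '):
            continue
        elif line.startswith('+'):
            insertions += 1
        elif line.startswith('-'):
            deletions += 1
    return files, insertions, deletions

if __name__ == '__main__':
    f, i, d = diffstat(sys.stdin)
    print("%d files changed, %d insertions(+), %d deletions(-)" % (f, i, d))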
4G/4G was a major effort on my part, but the VM maintainers (and Linus) rejected it (fundamentally) for a number of reasons. In hindsight, they were more correct than wrong, but I sure was upset about it.
I wrote exec-shield more than 4 years ago too, and it too was rejected by the VM maintainers (for reasons I still don't agree with :-). Bits of it went upstream, but a fair chunk didn't. Was I upset about the decision? Sure I was; that is natural when someone spends a lot of time on a project.
But there are other examples as well: the KDB patchset was first posted around 1998, 9 years ago. It was rejected numerous times and it is still not upstream. The scalable pagecache patches have been in the works for a long time as well, and they are still not upstream, even though they were written by one of the very VM maintainers (Nick Piggin) who are currently not (yet) convinced about swap-prefetch. There are countless other examples. (In fact we have rejected code from Linus too: more than once he sent out an idea-patch that was rejected, and someone then wrote something better.)
In Linux we reject _lots_ of code, and that's the only way to create a quality kernel. It's a bit like evolutionary selection: breathtakingly wasteful and incredibly efficient at the same time.
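To make the analogy concrete, here is a toy sketch (Python; the "fitness" target is made up and purely illustrative, not anyone's actual process): nearly every candidate is generated only to be discarded, yet the surviving line improves steadily.

import random

def fitness(x):
    # Toy objective: closeness to an arbitrary target value.
    return -abs(x - 42.0)

def mutate(x):
    # Blind variation: most mutations make the candidate worse.
    return x + random.gauss(0.0, 1.0)

best = 0.0
for generation in range(1000):
    candidates = [mutate(best) for _ in range(100)]
    champion = max(candidates, key=fitness)
    if fitness(champion) > fitness(best):
        best = champion  # selection: keep only what improves

# 100,000 candidates were tried and nearly all were rejected,
# yet 'best' ends up very close to the target: wasteful and
# effective at the same time.
print(best)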
Posted Jul 25, 2007 15:41 UTC (Wed) by msmeissn (subscriber, #13641)
The non-soft-NX parts look mergeable, at least to my eyes.
Posted Jul 26, 2007 15:31 UTC (Thu) by rmstar (guest, #3672)
> In Linux we reject _lots_ of code, and that's the only way to create a quality kernel. It's a bit like evolutionary selection: breathtakingly wasteful and incredibly efficient at the same time.
In general, evolutionary algorithms have never had a serious breakthrough, because turning on your brain (as long as one is available) tends to produce much better results. The bottom line is: evolutionary selection is just wasteful, period. The fact that something interesting resulted after billions of years of mindless tinkering doesn't mean the process is "efficient".
Back to the Linux kernel: it seems to me that the fundamental human problem behind kernel development is that stuff has to get "approved" and merged in. If there were an easy way to keep changes separate, one that didn't imply an intense maintenance effort, none of this would happen. We would have dozens of schedulers and VMs, the best would be used most, progress would be very fast, and there would be fewer fights and less frustration.
The fact that good, motivated people who have a positive impact are leaving in frustration is not good at all. Please stop rationalizing it.
Posted Jul 27, 2007 23:42 UTC (Fri) by maney (subscriber, #12630)
Granted that "efficient" may not be the best possible word, but look at what you just said. Evolution got us from complete mindlessness to sapience - brains from nothing. "Efficient" is damning it with faint praise...
> If there were an easy way to keep changes separate, one that didn't imply an intense maintenance effort, none of this would happen.
And if pigs had wings, they would fly. (Is that too blunt? If so, I think it is nonetheless exactly true: not requiring considerable effort is tantamount to asking for the rate of change to be turned down. There could be some good reasons for that, but I don't think that making it easier for external patches to limp along without ever progressing toward inclusion (or rejection) is remotely one of them.)
Posted Aug 2, 2007 7:54 UTC (Thu) by anandsr21 (guest, #28562)
You should try again with exec-shield.
> The fact that something interesting resulted after billions of years of mindless tinkering doesn't mean the process is "efficient".
If you want to reach point A from point B when there is no known path, then evolution is the fastest way to find one. There is nothing better: it will try every remotely possible path and discard the bad ones, and it will also find the most efficient path. I think evolution is the best way to go for Open Source software development; anything else is wasting time. Resources, on the other hand, are meant to be wasted, since you couldn't control them anyway.