Back in August, there was a big fight over whether the user-space "native Linux KVM tool" should be merged into the mainline kernel repository. One development cycle later, we've had the same fight with many of the same arguments and roughly the same result. Sequels are rarely as good as the original; that applies to flame wars as well as to more creative works. But there is a core issue here that has relevance well beyond the kernel community: does the separation of projects help the Linux community more than it hurts it?
The proponents of merging the tool into the kernel make a number of points. Having the projects in the same repository makes development that crosses the boundary between the two easier; in particular, it helps in the creation of APIs that will stand the test of time. The project's overall standards help to keep the quality of the tools high and the release cycle predictable. Reuse of code between user-space and kernel projects gets easier. All told, they say, having the "perf" tool in the kernel tree has greatly helped its development; see this message from Ingo Molnar for a detailed description of the perceived advantages of this mode of development. Artificial separation of projects, instead, is said to have high costs; Ingo went so far as to claim that Linux lost the desktop market as the result of an ill-advised separation of projects.
Opponents, instead, say that putting the kernel and the tools in the same tree makes it easier to create API regressions for out-of-tree tools. The reason that perf has a relatively good record on this front, Ted Ts'o said, has more to do with the competence of the developers involved than its presence in the kernel tree. Adding user-space tools bloats the kernel source distribution, puts competing out-of-tree projects at a disadvantage, and, Ted said, creates a number of difficulties for distributors.
The one concrete end result of the discussion was that the pull request for the KVM tool was passed over by Linus who, feeling that he had enough stuff for this development cycle already, did not want to wander into this particular disagreement. It is not hard to imagine that he will get another chance in a future development cycle; it does not seem that any minds have been changed by the discussion so far.
In the middle of this discussion, it was asked whether it would make sense to bring other projects into the kernel - GNOME, for example. It was pointed out that BSD-based systems tend to be developed in this mode - an existence proof that operating system development can work that way. Ted responded, in the message linked above, that this kind of whole-system integration carries costs of its own.
One could note that BSD does have one safety valve: forking the entire system. That has happened a number of times in the history of BSD; pointing this out, though, only serves to reinforce Ted's point.
Distributors play a crucial role in the Linux ecosystem; they function as the middleman between most development projects and their users. Most of us, most of the time, do not obtain the software we run directly from those who wrote it; it comes, instead, nicely packaged from our distributor. As they ponder each package, distributors (the successful ones, at least) will be keeping their users' needs in mind. If the package has obnoxious anti-social features or security problems, the distributors will either fix it or leave the package out altogether. The recent Calibre mess is a prime example; aware distributors had already eliminated the worst problems before they were generally known.
Distributors make it possible to change the source of your operating system without having to stop running Linux. Anybody who has been working with Linux long enough has almost certainly switched distributions at least once during that time; the process is not without its disruptions, but the amount of pain is usually surprisingly low. The lack of lock-in in the Linux world has improved life for users and, at the same time, given distributors an incentive to improve the Linux experience for everybody.
The role of the distributors is made possible by the boundaries between the projects. If the entire system were integrated into a single source tree, there would be little space for the distributors to do their own integration work. The lack of independent *BSD distributions makes this point clear. That suggests that too much integration at the project level might not be a good thing for Linux.
So one could argue that bringing GNOME into the kernel source tree is probably a bad idea for this reason alone; Linux as a whole may be better served by having the kernel and the desktop environments be separate components that can be combined (or not) at will. That makes it clear (if it wasn't before - your editor can be slow at times, please bear with him) that there is a line to be drawn somewhere; bringing some projects into the kernel source tree may be harmful for Linux even without considering the effects on the kernel itself. But separating the kernel from some user-space projects may have costs that are just as high. There is no consensus, currently, on what those costs are or where the line should be drawn.
All of this implies that the debate over the inclusion of the KVM tool has an importance that goes beyond the fate of that one project. Does (as some allege) the integration between perf and the kernel impede the development of alternatives and hurt the performance tooling ecosystem as a whole? Would the integration of the KVM tool put QEMU at the mercy of a fast-changing, regression-prone API over which its developers have no control? Are we better served by a fence between the kernel and user space that is as well defined at the project level as it is at the API level? Or, on the other hand, does keeping the KVM tool out of the kernel repository slow its growth and hurt the capability and usability of Linux tooling as a whole? And, importantly, what does the reasoning that leads to an answer to these questions tell us about which other projects should - or should not - find a home in the kernel tree?
These issues arise at a number of levels; some distributors, for example, are increasingly taking control of parts of the system through tightly-controlled in-house projects. Android is an extreme example of this approach, but it can be found in more traditional distributions as well. There are clear advantages to doing things that way, but it is worth asking whether that behavior is good for Linux in the long term and just where the line should be drawn. The fences between our projects may have played an important role in both the successes and failures of Linux; decisions on whether to strengthen them or tear them down need some serious thought.
Copyright © 2011, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds