benefits of out-of-tree development
Posted Jun 8, 2006 20:26 UTC (Thu) by wilck (guest, #29844)
In reply to: old code by cventers
Parent article: Quote of the week
I am assuming that _someone_ is working on the driver, e.g. on SourceForge, and will react when API functions change (e.g. by tracking the LWN kernel API changes page).
If the kernel developers themselves don't care about the driver, this person will have to adapt it anyway: even if API changes are applied to the in-kernel copy, they are applied without testing, so the driver will probably end up broken.
http://lwn.net/Articles/186427 mentions that even core developers like Jeff Garzik and Alan Cox aren't satisfied with the process of integrating drivers into the kernel. The problem is worse for less well-known developers. Trying to shove the code upstream generates a substantial amount of extra friction which can be avoided by developing out-of-tree.
If the driver has only a few users, what's the benefit of having it in-tree, after all? Why should every Linux user have to deal with the code?
For many drivers, development doesn't primarily mean adaptation to kernel API changes, but improvement of the driver and its features (e.g. hardware support). Pushing new code of that second type upstream can be a tough exercise. I happen to know that a few drivers that are considered by many to be developed in-tree (take megaraid, for example) are actually developed out-of-tree, pushing certain code drops into mainline every few months.
I know it's a minority opinion on LWN, but I think that "all drivers must be developed in-tree" is a strongly exaggerated statement.
benefits of out-of-tree development

Posted Jun 9, 2006 5:16 UTC (Fri)
by cventers (guest, #31465)
[Link] (6 responses)

Heh, I'm definitely not in that minority. It's pretty simple, I think -- how often do you think these two-user drivers are going to have someone _outside_ the kernel tracking changes? API changes are likely to result in compile failures before behavioral failures. And given that the kernel _is_ tested with allyesconfig, these API changes have to be extended to cover all in-tree code.

Moreover, one of the biggest advantages of having everything in core is that distributions (and hell, even users) don't have to deal with nearly as much crap to have a working system. It's really excellent when you can worry about _one_ kernel package and hotplug all sorts of devices without worrying about messing with drivers.

The friction is frankly worth the benefits of life in-tree. How do you really expect to be able to manage many out-of-tree drivers without a reasonably stable API? Breakage would be constant. It would require the cooperation of lots of people on a single change, rather than one man with a quilt full of patches. In-tree drivers give kernel developers the freedom to consistently refine the code.

benefits of out-of-tree development

Posted Jun 9, 2006 13:22 UTC (Fri)
by wilck (guest, #29844)
[Link] (5 responses)

> Given that the kernel _is_ tested with allyesconfig ...

That'll catch mostly trivial problems, which an external maintainer will be able to fix as well (if he can figure them out; see my last argument).

> distributions don't have to deal with nearly as much crap ...

It is nice in theory, yes. In practice, it often doesn't work that way - external drivers are needed nonetheless. And it requires the kernel to be bloated with drivers for every device that can reasonably be expected to be supported - an approach that doesn't scale. Most "stable" distributions need to backport drivers to older kernel versions and thus don't benefit directly from in-tree development.

> How do you really expect to be able to manage many out-of-tree drivers
> without a reasonably stable API?

I don't. I say that the kernel needs a reasonably stable API. I don't mean "frozen", but changing in a sane way that external people can track without going crazy.

benefits of out-of-tree development

Posted Jun 9, 2006 15:51 UTC (Fri)
by cventers (guest, #31465)
[Link] (4 responses)

> That'll catch mostly trivial problems, which an external maintainer
> will be able to fix as well (if he can figure them out; see my last
> argument).

Ah, but once again, you're presuming this external maintainer _exists_. I'm saying that when a driver only has two users, he probably doesn't, or if he does, he's not likely interested enough to follow the churn of every release.

And I disagree that it'll catch "mostly trivial problems". While it's possible to introduce breakage in a way that build testing wouldn't detect, the vast majority of breakage in fact comes from API changes. You probably don't notice these API changes because developers prefer to keep everything in tree, where it's possible to fix all the users at once (hence, a two-user driver with no maintainer gets a temporary maintainer -- the person making the API change, whose change won't be accepted unless the kernel builds).

> It is nice in theory, yes. In practice, it often doesn't work that way
> - external drivers are needed nonetheless. And it requires the kernel
> to be bloated with drivers for every device that can reasonably be
> expected to be supported - an approach that doesn't scale. Most
> "stable" distributions need to backport drivers to older kernel
> versions and thus don't benefit directly from in-tree development.

In theory? This is not theory; this is the reality of how the kernel _is done today_. We have in-tree drivers with two users. The kernel maintainers want _more_ in-tree drivers, even if hardly anyone uses them.

The approach isn't totally perfect, because kernel source tarballs are sort of heavy these days. But as common as broadband is, that's not much of a problem. And the running kernel is bloated with nothing - thanks to the extensive coverage of Kconfig and the modular kernel architecture, pieces can be loaded and unloaded at will.

And yes, I'm aware distributions backport drivers. Developing drivers out of tree wouldn't change that fact; rather, it would actually give distributors _more_ work.

You have to carry your argument out a little bit to see its consequences. What if there were 100 out-of-tree drivers? That's 100 more projects distributors have to watch for security vulnerabilities, fixes, etc. That's 100 sources they have to obtain and wrangle. And if a maintainer gets hit by a bus, they now have to _forward-port_ drivers for their users to prevent breaking working configurations. None of these problems exists when everything is in-tree.

> I don't. I say that the kernel needs a reasonably stable API. I don't
> mean "frozen", but changing in a sane way that external people can
> track without going crazy.

Read Documentation/stable-api-nonsense.txt. The 2.6 development process is _fantastically_ efficient compared to all other efforts I'm aware of. It gives some people the willies, but it does what matters most - getting the job done.

Without the current model, the rapid rate of change and evolution in the kernel would not be possible at all.

I'm reminded of a recent LKML thread, "OpenGL-based frame-buffer concepts", discussing new ways to work with vgacon, fb and drm. The possibility exists to build a new, better graphics system - but the problem is getting those _out-of-tree_ ATI and NVIDIA drivers to follow.

benefits of out-of-tree development

Posted Jun 12, 2006 9:38 UTC (Mon)
by wilck (guest, #29844)
[Link] (3 responses)

> Ah, but once again, you're presuming this external maintainer _exists_.

Yes, I do. I think that every piece of code is badly off without an active maintainer, whether or not it's in-kernel. Common open-source wisdom says that a useful piece of code will find a new maintainer sooner or later anyway.

> This is not theory; this is the reality of how the kernel _is done today_.

I meant "in theory" with respect to Linux system operation. In most cases I am aware of, installation of a system with recent hardware still requires external drivers, NVidia/ATI not being counted.

> You have to carry your argument out a little bit to see its consequences.

So should you. I find it pretty obvious that a distributed development model scales better than a one-big-chunk model.

The "hit by a bus" argument bears no relation to in-kernel or out-of-kernel. Note that I'm not talking about closed-source stuff.

> Read Documentation/stable-api-nonsense.txt.

I did, many times. I don't agree with it. It's fine for the ABI part, but not for the API part. The API needs to be predictable. That doesn't mean standstill, just a bit of respect for those developers who don't belong to the kernel community.

> Without the current model, the rapid rate of change and evolution in the
> kernel would not be possible at all.

"Rate of change" is not a value per se. The one important criterion for the usefulness of a development model is whether it benefits users. For that to happen, other (non-kernel) communities must take time to adapt to the new features and use them, and users must learn how to profit from them. I am not sure whether the current model was primarily designed with user benefit in mind, or rather kernel developer fun (which would be understandable).

With respect to the frame buffer change, we don't disagree. I'm all for new models which improve the user experience, and I am certain that the community shouldn't wait for ATI/NVidia. Just make the API change predictable and smooth (keep the old API for one 2.6.x cycle, say), and if it's a good model I'm pretty sure they'll follow suit.

Meanwhile, before taking this discussion further, we should perhaps clarify what we mean. Is it "in-kernel development" if someone has his own git tree with the latest version of his driver? Or does "in-kernel" require him to push his changes to his subsystem maintainer, say, once a month? Or more often? What about people who prefer not to use git?

benefits of out-of-tree development

Posted Jun 12, 2006 18:51 UTC (Mon)
by cventers (guest, #31465)
[Link] (2 responses)

> Yes, I do. I think that every piece of code is badly off without an
> active maintainer, whether or not it's in-kernel. Common open-source
> wisdom says that a useful piece of code will find a new maintainer
> sooner or later anyway.

I just don't get it - which of the two users of these drivers we've been discussing do you suppose will be troubled to maintain it across kernel API changes? You can call it badly off if you like, but it's better for the code to be in kernel, where it will _always_ at least build, than out of kernel, where it will almost surely go stale.

The wisdom you speak of really applies to things that are useful to lots of people. If there are only two users, and the code is out of tree, it is very likely to die, and die quickly.

> I meant "in theory" with respect to Linux system operation. In most
> cases I am aware of, installation of a system with recent hardware
> still requires external drivers, NVidia/ATI not being counted.

Toss NVIDIA and ATI for the obvious licensing issues, and the only thing I see that is "common" is wireless. This is largely because wireless development was done by many parties, in parallel, __outside__ of the kernel. So you end up with multiple competing stacks and implementations, and it was only recently, when John Linville volunteered to take maintainership of kernel Wi-Fi support, that we've started to see any improvements here.

If this wireless development had been done __in kernel__, as all driver development should be, the situation would be _MUCH_ better today, since all of Linux's wireless support would be available from mainline (save for the ndiswrapper hack).

> So should you. I find it pretty obvious that a distributed development
> model scales better than a one-big-chunk model.

But that's just it -- the Linux kernel model _is_ distributed where it matters (anyone can work on anything). The irony is that the only way to make this work is either a stable API (which sucks, more on this later) or pushing as many things in-core as possible (as Greg KH points out clearly in Documentation/stable-api-nonsense.txt).

> I did, many times. I don't agree with it. It's fine for the ABI part,
> but not for the API part. The API needs to be predictable. That doesn't
> mean standstill, just a bit of respect for those developers who don't
> belong to the kernel community.

Sorry, this just isn't the way Linux kernel development is done. This issue has been beaten to death over and over on LKML and in other channels. And quite frankly, the people who are most qualified to make this call are the people who work on the kernel all day, every day. These people almost (completely?) universally agree that a stable API is a bad thing. That's why the kernel ships with a file called "Documentation/stable-api-nonsense.txt" -- expressly for the purpose of pointing this out to people, to try to make this dead horse go the hell away.

> "Rate of change" is not a value per se. The one important criterion for
> the usefulness of a development model is whether it benefits users. For
> that to happen, other (non-kernel) communities must take time to adapt
> to the new features and use them, and users must learn how to profit
> from them. I am not sure whether the current model was primarily
> designed with user benefit in mind, or rather kernel developer fun
> (which would be understandable).

Rate of change is not a tangible end-user value in and of itself. What it means is more about /flexibility/ - if I want to make a sweeping change to the kernel, what are the costs? Having everything in-kernel means that I can propose a new API and, following good review, replace the API and all of its users in one big sweep. Imagine how difficult it would be to get big patch sets like Ingo's lock validator merged if they were going to scream about all kinds of third-party code that users depended on?

Lastly, my interpretation of in-kernel development means that primary distribution of the source code happens as part of the kernel tarball. That means that everyone working on Linux has the same code the users do, which further means that any API changes they decide to adopt can be rapidly applied to the whole kernel. (Not to mention, build regressions are going to be _immediately_ obvious rather than becoming a huge surprise after a major release.)

There isn't a big distinction among the possibilities you are considering for "in-kernel" -- pretty much everyone has a git tree that they develop on and push changes upstream.

If someone doesn't want to use git, they're free to use patch and send e-mail. Git need not be a part of this arrangement.

benefits of out-of-tree development

Posted Jun 13, 2006 9:58 UTC (Tue)
by wilck (guest, #29844)
[Link] (1 responses)

> If there are only two users...

I was assuming the number "2" had been used symbolically in this discussion, not literally. Frankly, with literally 2 users, code wouldn't go into the kernel unless one of the users was Linus, would it?

> If this wireless development had been done __in kernel__

Certainly. You are talking about a subsystem/general framework here. _Of course_ it must be developed in the kernel.

> That's why the kernel ships with a file called
> "Documentation/stable-api-nonsense.txt" -- expressly for the purpose of
> pointing this out to people, to try to make this dead horse go the hell
> away.

I have deep respect for the kernel hackers, and Greg KH specifically, but that document misses their usual standards. Between the lines, I am just reading "we don't want any restrictions on creativity, and have been digging around for arguments that all else is bad". I understand very well that this is in the developers' interest (and hell, it's their project), but I doubt that it is actually optimal for the community as a whole.

> Lastly, my interpretation of in-kernel development means that primary
> distribution of the source code happens as part of the kernel tarball.

That leaves me puzzled. How many drivers are out there whose latest version is part of the tarball? Wouldn't that apply only to "vintage hardware" drivers, which are not under active development themselves?

benefits of out-of-tree development

Posted Jun 18, 2006 0:31 UTC (Sun)
by efexis (guest, #26355)
[Link]

> this is in the developers' interest (and hell, it's their project),
> but I doubt that it is actually optimal for the community as a whole

Of course it is; a happy developer is a productive developer :-D

And I have to say, if I wanna test (or begin actually using) some obscure piece of hardware (in my case, this is usually when I'm adding rare/random/old/spare network cards to my router box), I /like/ the fact that the drivers are all there, and that I don't have to hunt around for a driver online somewhere that probably won't compile with recent kernel versions, even if it does mean I'm not using a tenth of the code in the kernel tarball.

I bet the percentage of in-kernel drivers that *at least* compile is significantly higher than that of the out-of-tree tarballs you'll find lying around the net for hardware that never made it into the tree, and most of the stuff that doesn't work is probably down to API changes that haven't been kept up with, more than anything else. In-kernel is definitely the way to go.