benefits of out-of-tree development

Posted Jun 8, 2006 20:26 UTC (Thu) by wilck (guest, #29844)
In reply to: old code by cventers
Parent article: Quote of the week

I am assuming that _someone_ is working on the driver, e.g. on SourceForge, and will react when API functions change (e.g. by tracking the LWN kernel API changes page).

If the kernel developers themselves don't care about the driver, this person will have to do the adaptations anyway: even if API changes are applied to it in-kernel, they are made without testing, so the driver will probably be broken.

http://lwn.net/Articles/186427 mentions that even core developers like Jeff Garzik and Alan Cox aren't satisfied with the process of integrating drivers into the kernel. The problem is worse for less well-known developers. Trying to shove the code upstream generates a substantial amount of extra friction which can be avoided by developing out-of-tree.

If the driver has only a few users, what's the benefit of having it in-tree, after all? Why should every Linux user have to deal with the code?

For many drivers, development doesn't primarily mean adaptation to kernel API changes, but improvement of the driver and its features (e.g. hardware support). Pushing new code of that second type upstream can be a tough exercise. I happen to know that a few drivers that are considered by many to be developed in-tree (take megaraid, for example) are actually developed out-of-tree, pushing certain code drops into mainline every few months.

I know it's a minority opinion on LWN, but I think that "all drivers must be developed in-tree" is a strongly exaggerated statement.



benefits of out-of-tree development

Posted Jun 9, 2006 5:16 UTC (Fri) by cventers (guest, #31465)

Heh, I'm definitely not in that minority. It's pretty simple, I think --
how often do you think these two-user drivers are going to have someone
_outside_ the kernel tracking changes? API changes are likely to result in
compile failures before failures to behave. And given that the kernel _is_
tested with allyes, these API changes will have to be extended to support
all in-tree code.
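
To make that concrete, here is a hedged sketch (every name in it is invented, not from any real kernel release) of why an API change surfaces as a build failure rather than a runtime one:

    /* All names are invented; this is not a real kernel interface.
     *
     * Release N declared the helper as:
     *
     *     struct foo_buf *foo_alloc_buf(struct foo_dev *dev);
     */

    /* Release N+1: the helper grows an allocation-flags argument. */
    struct foo_buf;
    struct foo_dev;
    struct foo_buf *foo_alloc_buf(struct foo_dev *dev, unsigned int flags);

    /*
     * Every in-tree caller is converted in the same patch -- otherwise an
     * allyesconfig build fails and the change is not accepted:
     *
     *     buf = foo_alloc_buf(dev, FOO_ATOMIC);
     *
     * An out-of-tree caller still using the old single-argument form
     * simply stops compiling, so the breakage is caught at build time,
     * long before it could show up as silent runtime misbehaviour.
     */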

Moreover, one of the biggest advantages of having everything in core is
that distributions (and hell, even users) don't have to deal with nearly
as much crap to have a working system. It's really excellent when you can
worry about _one_ kernel package and hotplug all sorts of devices in
without worrying about messing with drivers.

The friction is frankly a fair price for the benefits of life in-tree.
How do you really expect to manage many out of tree drivers without a
reasonably stable API? Breakage would be constant. It would require the
cooperation of lots of people on a single change, rather than one man with
a quilt full of patches. In-tree drivers give kernel developers the
freedom to consistently refine the code.

benefits of out-of-tree development

Posted Jun 9, 2006 13:22 UTC (Fri) by wilck (guest, #29844)

Given that the kernel _is_ tested with allyes ...

That'll catch mostly trivial problems, which an external maintainer will be able to fix as well (if he can figure it out, see last argument).

distributions don't have to deal with nearly as much crap ...

It is nice, in theory, yes. In practice, it often doesn't work that way - external drivers are needed nonetheless. And it requires the kernel to be bloated with drivers for every device that can reasonably be expected to be supported - an approach that doesn't scale. Most "stable" distributions need to backport drivers to older kernel versions and thus don't benefit directly from in-tree development.

How do you really expect to be able to manage many out of tree drivers without a reasonably stable API?

I don't. I say that the kernel needs a reasonably stable API. I don't mean "frozen", but changing in a sane way that external people can track without going crazy.

benefits of out-of-tree development

Posted Jun 9, 2006 15:51 UTC (Fri) by cventers (guest, #31465)

> That'll catch mostly trivial problems, which an external maintainer
> will be able to fix as well (if he can figure it out, see last
> argument).

Ah, but once again, you're presuming this external maintainer _exists_.
I'm saying that when a driver only has two users, he probably doesn't, or
if he does, he's not likely interested enough to follow the chaos of
every release.

And I disagree that it'll catch "mostly trivial problems". While it's
possible to introduce breakage in a way that build testing wouldn't
detect, the vast majority of breakage in fact comes from API changes. You
probably don't notice these API changes because developers prefer to keep
everything in tree where it's possible to fix all the users at once
(hence, a two-user driver with no maintainer gets a temporary maintainer
-- the guy making the API change who won't get his API change accepted
unless the kernel builds).

> It is nice, in theory, yes. In practice, it often doesn't work that way
> - external drivers are needed nonetheless. And it requires the kernel
> to be bloated with drivers for every device that can reasonably
> be expected to be supported - an approach that doesn't scale. Most
> "stable" distributions need to backport drivers to older kernel
> versions and thus don't benefit directly from in-tree development.

In theory? This is not theory, this is the reality of how the kernel _is
done today_. We have in-tree drivers with two users. The kernel
maintainers want _more_ in-tree drivers, even if hardly anyone uses them.

The approach isn't totally perfect, because kernel source tarballs are
sort of heavy these days. But as common as broadband is, it's not much of
a problem. And the running kernel is bloated with nothing - thanks to the
extensive support and coverage of Kconfig and the modular kernel
architecture, pieces can be loaded and unloaded at will.
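
For illustration, the "loaded and unloaded at will" part rests on the standard module entry/exit hooks; a minimal sketch (the module name "example" is made up) looks like this:

    /* Minimal loadable-module sketch; "example" is a made-up name. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    static int __init example_init(void)
    {
            printk(KERN_INFO "example: loaded\n");
            return 0;
    }

    static void __exit example_exit(void)
    {
            printk(KERN_INFO "example: unloaded\n");
    }

    module_init(example_init);
    module_exit(example_exit);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal module skeleton");

Built as 'M' in Kconfig, such a piece costs the running kernel nothing until it is actually loaded, and it goes away again when it is removed.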

And yes, I'm aware distributions backport drivers. Developing drivers out
of tree wouldn't change that fact; rather, it would actually give
distributors _more_ work.

You have to carry your argument out a little bit to see its consequences.
What if there were 100 out of tree drivers? That's 100 more projects
distributors have to watch for security vulnerabilities, fixes, etc.
That's 100 sources they have to obtain and wrangle. And if a maintainer
gets hit by a bus, they now have to _forward port_ drivers for their
users to prevent breaking working configurations. None of these problems
exist when everything is in-tree.

> I don't. I say that the kernel needs a reasonably stable API. I don't
> mean "frozen", but changing in a sane way that external people can
> track without going crazy.

Read Documentation/stable-api-nonsense.txt. The 2.6 development process
is _fantastically_ efficient compared to all other efforts I'm aware of.
It gives some people the willies, but it does what matters most - getting
the job done.

Without the current model, the rapid rate of change and evolution in the
kernel would not be possible at all.

I'm reminded of a recent LKML thread "OpenGL-based frame-buffer concepts"
discussing new ways to work with vgacon, fb and drm. The possibility
exists to build a new, better graphics system - but the problem is
getting those _out of tree_ ATI and NVIDIA drivers to follow.

benefits of out-of-tree development

Posted Jun 12, 2006 9:38 UTC (Mon) by wilck (guest, #29844)

Ah, but once again, you're presuming this external maintainer _exists_.

Yes I do. I think that every piece of code is bad off without an active maintainer, whether or not it's in-kernel. Common Open-Source wisdom says that a useful piece of code will find a new maintainer sooner or later, anyway.

This is not theory, this is the reality of how the kernel _is done today_

I meant "in theory" wrt Linux system operation. In most cases I am aware of, installation of a system with recent hardware still requires external drivers, NVidia/ATI not being counted.

You have to carry your argument out a little bit to see its consequences.

So should you. I find it pretty obvious that a distributed development model scales better than a one-big-chunk model.

The "hit by a bus" argument bears no relation to in-kernel or out-of-kernel. Note that I'm not talking about closed-source stuff.

Read Documentation/stable-api-nonsense.txt.

I did, many times. I don't agree with it. It's fine for the ABI part, but not for the API part. The API needs to be predictable. That doesn't mean a standstill, just a bit of respect for those developers who don't belong to the kernel community.

Without the current model, the rapid rate of change and evolution in the kernel would not be possible at all.

"Rate of change" is not a value per se. The one important criterion for the usefulnes of a development model is whether it benefits users. For that to happen, other (non-kernel) communities must take time to adapt to the new features and use them, and users must learn how to take profit from them. I am not sure if the current model was primarily designed with user benefit that in mind, or rather kernel developer fun (which would be understandable).

Wrt the frame buffer change, we don't disagree. I'm all for new models which improve user experience, and I am certain that the community shouldn't wait for ATI/NVidia. Just make the API change predictable and smooth (keep the old API for one 2.6.x cycle, say) and if it's a good model I'm pretty sure they'll follow suit.
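
For what it's worth, a hedged sketch of what such a transition could look like (all names here are invented, and __deprecated is the attribute macro from linux/compiler.h): keep the old entry point for one cycle as a deprecated inline wrapper around its replacement, so external callers keep building but get a compile-time warning telling them to move on.

    /* Hypothetical header excerpt; neither function exists in a real kernel. */
    struct foo_dev;
    struct foo_mode;

    /* New interface. */
    int foo_set_mode_ext(struct foo_dev *dev, const struct foo_mode *mode,
                         unsigned int flags);

    /* Old interface, kept for one 2.6.x cycle as a deprecated wrapper so
     * out-of-tree callers see a warning instead of an immediate break. */
    static inline int __deprecated foo_set_mode(struct foo_dev *dev,
                                                const struct foo_mode *mode)
    {
            return foo_set_mode_ext(dev, mode, 0);
    }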

Meanwhile, before taking this discussion further, we should perhaps clarify what we mean. Is it "in-kernel development" if someone has his own git tree with the latest version of his driver? Or does "in-kernel" require him to push his changes to his subsystem maintainer, say, once a month? Or more often? What about people who prefer not to use git?

benefits of out-of-tree development

Posted Jun 12, 2006 18:51 UTC (Mon) by cventers (guest, #31465)

> Yes I do. I think that every piece of code is bad off without an active
> maintainer, whether or not it's in-kernel. Common Open-Source wisdom
> says that a useful piece of code will find a new maintainer sooner or
> later, anyway.

I just don't get it - which of the two users of these drivers we've been
discussing do you suppose will take the trouble to maintain it across kernel
API changes? You can call it bad off if you like, but it's better for the
code to be in kernel where it will _always_ at least build than out of
kernel where it will almost surely go stale.

The wisdom you speak of really applies to things that are useful to lots
of people. If there are only two users, and the code is out of tree, it
is very likely to die and die quickly.

> I meant "in theory" wrt Linux system operation. In most cases I am
> aware of, installation of a system with recent hardware still requires
> external drivers, NVidia/ATI not being counted.

Toss NVIDIA and ATI for the obvious licensing issues, and the only thing
I see that is 'common' is wireless. This is largely because wireless
development was done by many parties, in parallel, __outside__ of the
kernel. So you end up with multiple competing stacks and implementations,
and it was only recently, when John Linville volunteered to take over
maintainership of kernel Wifi support, that we've started to see any
improvements here.

If this wireless development had been done __in kernel__, as all driver
development should be, the situation would be _MUCH_ better today since
all of Linux's wireless support would be available from mainline (save
for the ndiswrapper hack).

> So should you. I find it pretty obvious that a distributed development
> model scales better than a one-big-chunk model.

But that's just it -- the Linux kernel model _is_ distributed where it
matters (anyone can work on anything). The irony is that the only way to
make this work is either a stable API (which sucks, more on this later)
or pushing as many things in-core as possible (as Greg KH points out
clearly in Documentation/stable-api-nonsense.txt).

> I did, many times. I don't agree with it. It's fine for the ABI part,
> but not for API part. The API needs to be predictable. That doesn't
> mean standstill, just a bit of respect for those developers who don't
> belong to the kernel community.

Sorry, this just isn't the way Linux kernel development is done. This
issue has been beaten to death over and over on LKML and in other
channels. And quite frankly, the people that are most qualified to make
this call are the people who work on the kernel all day, every day. These
people almost (completely?) universally agree that a stable API is a bad
thing. That's why the kernel ships with a file called
"Documentation/stable-api-nonsense.txt" -- expressly for the purpose of
pointing this out to people, to try to make this dead horse go the hell
away.

> "Rate of change" is not a value per se. The one important criterion for
> the usefulnes of a development model is whether it benefits users. For
> that to happen, other (non-kernel) communities must take time to adapt
> to the new features and use them, and users must learn how to take
> profit from them. I am not sure if the current model was primarily
> designed with user benefit that in mind, or rather kernel developer fun
> (which would be understandable).

Rate of change is not a tangible end-user value in and of itself. What
it means is more about /flexibility/ - if I want to make a sweeping
change to the kernel, what are the costs? Having everything in-kernel
means that I can propose a new API, and following good review, replace
the API and all of the users in one big sweep. Imagine how difficult it
would be to get big patchsets like Ingo's lock validator merged if it
were going to scream about all kinds of third-party code that users
depended on.

Lastly, my interpretation of in-kernel development means that primary
distribution of the source code happens as part of the kernel tarball.
That means that everyone working on Linux has the same code the users do,
which further means that any API changes they decide to adopt can be
rapidly applied to the whole kernel. (Not to mention, build regressions
are going to be _immediately_ obvious rather than becoming a huge
surprise after a major release).

There isn't a big distinction between the possibilities you're
considering for 'in-kernel' -- pretty much everyone has a git tree that
they develop on and push changes upstream from.

If someone doesn't want to use git, they're free to use patch and send
e-mail. Git need not be a part of this arrangement.

benefits of out-of-tree development

Posted Jun 13, 2006 9:58 UTC (Tue) by wilck (guest, #29844)

> If there are only two users...

I was assuming the number "2" had been used symbolically in this discussion, not literally. Frankly, with literally two users, the code wouldn't go into the kernel unless one of the users was Linus, would it?

> If this wireless development had been done __in kernel__

Certainly. You are talking about a subsystem/general framework here. _Of course_ it must be developed in the kernel.

> That's why the kernel ships with a file called
> "Documentation/stable-api-nonsense.txt" -- expressly for the purpose of
> pointing this out to people, to try to make this dead horse go the hell
> away.

I have deep respect for the kernel hackers, and GregKH specifically, but that document falls short of their usual standards. Between the lines, I am just reading "we don't want any restrictions on our creativity, and we have been digging around for arguments that everything else is bad". I understand very well that this is in the developers' interest (and hell, it's their project), but I doubt that it is actually optimal for the community as a whole.

> Lastly, my interpretation of in-kernel development means that primary
> distribution of the source code happens as part of the kernel tarball.

That leaves me puzzled. How many drivers are out there whose latest version is part of the tarball? Wouldn't that apply only to "vintage hardware" drivers, which are not under active development themselves?

benefits of out-of-tree development

Posted Jun 18, 2006 0:31 UTC (Sun) by efexis (guest, #26355)

> this is in the developers' interest (and hell, it's their project),
> but I doubt that it is actually optimal for the community as a whole

Of course it is; a happy developer is a productive developer :-D

And I have to say, if I wanna test (or begin actually using) some obscure piece of hardware (in my case, this is usually when I'm adding rare/random/old/spare network cards to my router box), I /like/ the fact that drivers are all there, and that I don't have to hunt around for a driver online somewhere that probably won't compile with recent kernel versions, even if it does mean I'm not using a 10th of the code in the kernel tarball.

I bet the % of in-kernel drivers that *at least* compile is significantly higher than for the out-of-tree tarballs you'll find lying around the net for hardware that never made it into the tree, and most of the stuff that doesn't work is probably more down to API changes that haven't been kept up with than anything else. In-kernel is definitely the way to go.

