
GNU + Linux = broken development model

Posted Jul 29, 2009 21:34 UTC (Wed) by adicarlo (subscriber, #8463)
In reply to: GNU + Linux = broken development model by mheily
Parent article: A tempest in a tty pot

>> Tighter coupling between kernel and userland is better? You gotta be kidding me. That means you gotta run the kernel your distributor provides you. Want that new driver for that wifi card or USB dongle? Too bad.

> Ever heard of loadable kernel modules? They let you run experimental code that isn't shipped with the distributor's kernel. Of course, it helps to have a stable kernel API, but the Linux devs have made their distaste for interface stability perfectly clear.

Sure, I've heard of them. Good luck loading ath5k.ko from 2.6.30 into your vendor-supplied 2.6.26 kernel, though.
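
To be concrete about what a loadable module actually is: it is compiled against one specific kernel's headers and records that kernel's version magic, which is exactly why a module built for 2.6.30 won't load into a 2.6.26 kernel. A minimal sketch follows; the module name is made up, and the build line in the comment is the usual out-of-tree kbuild invocation.

    /*
     * hello_mod.c - illustrative out-of-tree module (hypothetical name).
     *
     * Built against one specific kernel's headers, typically with:
     *   make -C /lib/modules/`uname -r`/build M=`pwd` modules
     * The resulting hello_mod.ko records that kernel's "vermagic" (and
     * symbol versions); loading it into a different kernel is normally
     * refused.
     */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal illustrative module");

    static int __init hello_init(void)
    {
            printk(KERN_INFO "hello_mod: loaded\n");
            return 0;
    }

    static void __exit hello_exit(void)
    {
            printk(KERN_INFO "hello_mod: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);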



GNU + Linux = broken development model

Posted Jul 29, 2009 22:43 UTC (Wed) by jond (subscriber, #37669) [Link]

To be fair to the parent (with whom I also disagree), that particular problem only exists due to the lack of a stable kernel API.

GNU + Linux = broken development model

Posted Jul 29, 2009 23:52 UTC (Wed) by JoeBuck (guest, #2330) [Link]

And the main reason Microsoft sucks rocks, despite all of the brilliant people who work there, is stable ABIs. For everything that they've ever done. No matter how stupid it was. In many cases, even things that they never documented can never be broken, because critical apps (including Microsoft apps) depend on this undocumented behavior. They are crushed under the weight of all that legacy.

GNU + Linux = broken development model

Posted Jul 30, 2009 1:28 UTC (Thu) by mikov (subscriber, #33179) [Link]

I am sorry but that is simply wrong.

I have written and maintained several large and very non-standard Windows kernel drivers on my own through the years. They worked with minimal effort (if any) across NT4, W2K and WXP (I switched to Linux development before Vista was out). This stability was an immense help to me.

I would never have had enough time to do it if I had had to track incompatible kernel changes every few months. Plus, of course, the changes are never clearly documented, and the only way to find them is by observing broken compilation or, worse, broken behavior. Or by reading LWN, of course.

This is madness! It takes a full-time job just to track the kernel, let alone have time to do other constructive work. This is not a sustainable development model, and I think that reality has proven me right. Nobody can afford to waste resources like that on Linux and ... nobody is.

No software vendor (Red Hat, Novell, etc.) tests _all_ or even _most_ of the drivers included in the kernel for every new release. Most smaller hardware vendors, in turn, can't afford to employ kernel developers just for this, so they don't test their own products either.

Compare this to Windows, where a company can hire a one-time consultant to write a driver and can be reasonably sure that the driver will continue to work for the foreseeable future without _any_ maintenance.

The argument for including stuff in the mainline kernel is of course completely ridiculous:

- One, it would be extremely hard, probably impossible, to include a driver for custom hardware which is not widely available.

- Two, including it in the kernel doesn't guarantee anything more than successful compilation in future versions. As I said, nobody tests the drivers before a kernel release.

In the end it is much easier to write a new Linux kernel driver, but I suspect that in the long term it is more expensive to maintain than a Windows one.

GNU + Linux = broken development model

Posted Jul 30, 2009 2:10 UTC (Thu) by foom (subscriber, #14868) [Link]

> - One, it would be extremely hard, probably impossible, to include a driver for custom hardware which is not widely available.

Apparently that's not actually true: there are such drivers in the kernel, and the kernel devs seem happy to include them. (I too find this surprising and a bit strange...)

> - Two, including it in the kernel doesn't guarantee anything more than successful compilation in future versions. As I said, nobody tests the drivers before a kernel release.

The same thing is true of Windows drivers -- they promise not to break the ABI, but Microsoft isn't going to test your little driver for custom hardware with their new OS... sure, it might still load, but then maybe it'll immediately crash. There's no guarantee against that!

GNU + Linux = broken development model

Posted Jul 30, 2009 2:53 UTC (Thu) by mikov (subscriber, #33179) [Link]

> The same thing is true of Windows drivers -- they promise not to break the ABI, but Microsoft isn't going to test your little driver for custom hardware with their new OS... sure, it might still load, but then maybe it'll immediately crash. There's no guarantee against that!

No, it is not the same thing by far. While Microsoft will not test my driver (unless I have submitted it for WHQL, which BTW is much easier for a business than submitting to the Linux kernel), the amount of time I would have to spend making sure it works is minuscule by comparison. Plus:

  • Microsoft at least tests all the drivers that are shipped with the OS. This is a huge difference with any Linux distribution.
  • The above alone guarantees a very high level of API compatibility.
  • They don't ship new kernels with frequency anywhere near Linux.

Even if a 100% reliable stable API is not realistic, aiming for one is an immense help for developers and businesses alike, especially for smaller businesses, where the need for an extra highly paid employee makes a huge difference.

In my opinion it would be sufficient if Linux maintained a source-level API with explicit versioning and explicit documentation for each change. Change the API at every release if you like, but increment the version and maintain formal documentation of the changes (and I don't mean a git log).
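
For what it's worth, out-of-tree drivers already do a crude version of this by hand, keying on LINUX_VERSION_CODE. A sketch of the usual pattern (the mydrv_ wrapper name is invented; the change it papers over is, if I remember the release correctly, the sk_buff accessor rework around 2.6.22):

    /*
     * compat.h - driver-local shim (hypothetical). The driver itself only
     * ever calls mydrv_ip_hdr(), so kernel differences stay in this file.
     */
    #ifndef MYDRV_COMPAT_H
    #define MYDRV_COMPAT_H

    #include <linux/version.h>
    #include <linux/skbuff.h>
    #include <linux/ip.h>

    static inline struct iphdr *mydrv_ip_hdr(struct sk_buff *skb)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 22)
            return ip_hdr(skb);      /* accessor in newer kernels */
    #else
            return skb->nh.iph;      /* older struct sk_buff layout */
    #endif
    }

    #endif /* MYDRV_COMPAT_H */

The point being argued here is that nothing announces such a change except a failed build; explicit versioning and documentation would turn the guesswork into a lookup.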

Sigh... Alas, that will never happen. I am convinced that it is one of the big reasons why we will never see mass adoption of Linux on the desktop. Specialized huge volume devices with fixed hardware like phones, netbooks, game consoles - perhaps, but a PC - not until this changes.

GNU + Linux = broken development model

Posted Jul 30, 2009 4:24 UTC (Thu) by rahvin (subscriber, #16953) [Link]

> They don't ship new kernels with frequency anywhere near Linux.

Why not admit that this is what you are complaining about, not the lack of an API? The fast-moving kernel is what makes it difficult to track, not the lack of an API as you state. The simple solution is to do what many businesses do and develop for the kernel releases shipped with distributions. RHEL, Debian, Ubuntu, etc. all ship with specific kernels; there is often little need to track the newest kernel for out-of-tree drivers.

GNU + Linux = broken development model

Posted Jul 30, 2009 5:04 UTC (Thu) by mikov (subscriber, #33179) [Link]

Sigh. You are missing the point.

The same Windows kernel driver can work from NT 3.51, released in 1995, to at least Windows XP, and for many classes of drivers Vista and Windows 7. During that time the Windows kernel has undergone numerous fundamental improvements (plug and play, power management, etc), and yet the core kernel _concepts_ have been kept stable. Can you point me to a Linux distribution that maintains a kernel for 15 years?

The frequency of kernel releases would be immaterial if the API was stable or the changes formally tracked and documented. The latter is actually preferable because it wouldn't stop the innovation.

Oh, yes, the commercial distributions are trying to maintain a stable kernel for some time (although far from 15 years) - of course they would; this is what the vast majority of businesses and users really want and need regardless of what the stupid document "stable_api_nonsense.txt" says. Alas, it is actually causing further incompatibilities and problems between distributions - everybody has a different version, with different patches, etc. It is a nightmare.

Sometimes I cynically suspect that this is on purpose, or otherwise expensive support contracts wouldn't be that necessary.

GNU + Linux = broken development model

Posted Jul 30, 2009 8:12 UTC (Thu) by farnz (subscriber, #17727) [Link]

> The same Windows kernel driver can work from NT 3.51, released in 1995, to at least Windows XP, and for many classes of drivers Vista and Windows 7. During that time the Windows kernel has undergone numerous fundamental improvements (plug and play, power management, etc), and yet the core kernel _concepts_ have been kept stable. Can you point me to a Linux distribution that maintains a kernel for 15 years?

I'm sorry, but practical experience tells me you're wrong. I had a driver for a SCSI card that was fine in NT 4, fine in 2000, did not even load on XP; when I followed instructions to force load the driver, it immediately bluescreened. I've since thrown the card out, because the chip wasn't supported in Linux, either.

Oh, and the card had different drivers for NT 3.51; those didn't work with 2000 or XP (I had no access to NT 4 to check if the NT 3.51 driver worked there).

So, throwing it back at you, can you show me a driver written for NT 3.51 that does something involving DMA (e.g. a SCSI driver, an IDE driver, a TV card driver) that still works fine in XP? If not, what about one written for NT 4?

GNU + Linux = broken development model

Posted Jul 30, 2009 16:44 UTC (Thu) by mikov (subscriber, #33179) [Link]

> I'm sorry, but practical experience tells me you're wrong. I had a driver for a SCSI card that was fine in NT 4, fine in 2000, did not even load on XP; when I followed instructions to force load the driver, it immediately bluescreened. I've since thrown the card out, because the chip wasn't supported in Linux, either.

Well I am speaking from the developer's perspective, not as a user with a closed source driver. The changes between different Microsoft kernels are infrequent, mostly backwards compatible and are well documented. (Vista broke sound and graphics and who knows what else, but it is not like it was unexpected or undocumented).

So, a transition between Microsoft OS versions requires some amount of development work, but the stable API makes that work fairly predictable and routine, plus it doesn't happen frequently. At least that has been my first hand experience.

Of course, if you are a user with a closed source driver you are generally out of luck - but that's an issue separate from the stable API.

GNU + Linux = broken development model

Posted Jul 31, 2009 4:47 UTC (Fri) by ebiederm (subscriber, #35028) [Link]

In Linux, API stability is generally provided by the rule that if you change the API, you have to fix all of the drivers. If there are a lot of drivers, that is a lot of work.

So if an API changes under you (in Linux), it means either that you are using something esoteric that few other drivers use, so there was very little pain in changing it, or that you are using something for which someone felt the pain of changing it was worth the gain.

Since it is a lot of work to change an API with a lot of users, the churn rate should be reasonable, and it should not need to be a full-time job.

Where have you seen Linux kernel APIs churn so fast that keeping up was a full-time job?

And yes there are periodic rants about not changing the driver APIs too much as well.

GNU + Linux = broken development model

Posted Jul 31, 2009 5:30 UTC (Fri) by dlang (subscriber, #313) [Link]

Remember that the other poster is complaining about the effort to maintain an out-of-tree driver. Those don't get fixed when the API changes.

out of tree

Posted Jul 31, 2009 6:33 UTC (Fri) by alex (subscriber, #1355) [Link]

And therein lies the problem. Out of tree is not an efficient way to develop a Linux driver.

out of tree

Posted Jul 31, 2009 8:04 UTC (Fri) by mikov (subscriber, #33179) [Link]

That's the common opinion expressed in "Documentation/stable_api_nonsense.txt". Yet, some people disagree with it.

Having a driver in tree guarantees only that the driver compiles, nothing more. The responsibility for testing the driver still lies with the manufacturer, though the manufacturer has no direct control over what goes into the kernel. So, the manufacturer is forced to track which bugs are in which kernels (and which distributions), which patches have been accepted, when, why not, etc. As I mentioned elsewhere, this is a full-time job and not a fun one.

Having the driver in-tree actually increases the burden to the manufacturers because now they can't control what their customers are getting. Realistically they have to maintain both an in tree and out of tree version, or they just give up on Linux.

Plus, the whole idea that every single driver should be in tree is frankly dishonest. Is the kernel driver tree going to grow forever with every single device ever manufactured? What's the point? Who is going to really audit and test all these drivers for every single API change? Let's not kid ourselves, this is not happening and is not going to happen.

As a whole this centralized model is unsustainable. The maintenance of the drivers should be left to the hardware manufacturers, who have customers and financial motivation to produce quality drivers. These manufacturers shouldn't have to track every possible kernel release as long as they comply with the stable API. (Plus, of course, the drivers being GPL is incredibly useful for the quality).

The end users in turn shouldn't have to jump through hoops to get new hardware working if a driver exists.

out of tree

Posted Jul 31, 2009 11:38 UTC (Fri) by ebiederm (subscriber, #35028) [Link]

No direct control? Device driver patches are accepted readily, so if you aren't getting your patches accepted something very strange is going on.

If a driver has a weird set of bugs on a lot of different platforms it sounds fragile and no amount of stable APIs will help.

As for manufacturers doing a better job with out-of-tree drivers: quite often hardware manufacturers build a device for a short while, sell it, and quickly lose interest in it. That makes a community model, where all the people who care (not just the hardware manufacturer) can participate, a better thing.

With our real world examples I can't see where you are coming from.

out of tree

Posted Jul 31, 2009 15:06 UTC (Fri) by mikov (subscriber, #33179) [Link]

> No direct control? Device driver patches are accepted readily, so if you aren't getting your patches accepted something very strange is going on.

Let's say I am a manufacturer and I receive a bug report from a customer. How am I supposed to fix it? Submit a patch to LKML and tell the customer to wait for a new kernel version? That is absurd. So, I have to maintain an out-of-tree driver in any case. I want my customers to use the same driver in all kernel versions.

In general, linking a particular driver version to a particular kernel version is just stupid. If I have a single misbehaving driver, I have to update my entire kernel, pulling in many more unnecessary changes. That is a support and maintenance nightmare.

> As for manufacturers doing a better job with out-of-tree drivers: quite often hardware manufacturers build a device for a short while, sell it, and quickly lose interest in it. That makes a community model, where all the people who care (not just the hardware manufacturer) can participate, a better thing.

All drivers must still be GPL, of course, so people can participate if they want. But above all, it must be easy for the original manufacturer to participate, because that is the most efficient way to do it. Currently, it is very expensive to do that, and that is at least part of the reason why many hardware manufacturers aren't doing it at all.

out of tree

Posted Jul 31, 2009 15:17 UTC (Fri) by mjg59 (subscriber, #23239) [Link]

On the other side of the issue, we've seen that vendor drivers often provide custom interfaces, engage in awkward hacks and generally provide a lower level of integration with existing kernel functionality than in-tree drivers. Users generally prefer in-tree drivers because they get a more consistent experience.

out of tree

Posted Jul 31, 2009 16:28 UTC (Fri) by johill (subscriber, #25196) [Link]

It's not that hard to backport. We do it almost automatically: http://wireless.kernel.org/en/users/Download

out of tree

Posted Aug 1, 2009 10:04 UTC (Sat) by nix (subscriber, #2304) [Link]

> Let's say I am a manufacturer and I receive a bug report from a customer. How am I supposed to fix it? Submit a patch to LKML and tell the customer to wait for a new kernel version? That is absurd. So, I have to maintain an out-of-tree driver in any case. I want my customers to use the same driver in all kernel versions.

What? You maintain a git repo, the same way anyone else doing kernel development does, and feed one branch to Linus and the other to your customers. This is really not even slightly rocket science, and git has plenty of tools to help keep the branches in sync.

out of tree

Posted Aug 1, 2009 16:29 UTC (Sat) by mikov (subscriber, #33179) [Link]

Oh, come on, the issue here is not the source control tool. Git is not magically going to port your driver between versions. (BTW, I am a git fanatic, but that's another subject.)

A successful compilation is not sufficient. Every release has to be exhaustively tested. Maintaining several slightly different versions of the same driver, plus tracking the yet different version which may or may not have been accepted in the mainline, is a lot of work. Especially if you have direct customers.

You absolutely need to maintain an out-of-tree driver. There are no two ways about it.

Look, this is not theoretical. This is my real experience and it has been that out of tree drivers are much more convenient for the users. If I need support for a piece of hardware, I download the module source, compile it and use it. No kernel upgrades, no backports, no crap. If I am going to install an out-of-tree driver, I actually prefer the mainline not to have one.

So, anything that makes it easier to maintain out of tree drivers would be good in my book. Of course everything must be GPL, and it would be nice if there was a centralized repository for these drivers so you don't have to hunt for them.

out of tree

Posted Aug 2, 2009 1:39 UTC (Sun) by foom (subscriber, #14868) [Link]

> This is my real experience and it has been that out of tree drivers are much more convenient for the users.

As a user: yuck. Not only are out of tree drivers more irritating to use/install/upgrade than in-tree ones, my experience has been that they're also always buggier.

And I don't want to have to download the module source, compile it, and use it: I want the device to just work when I plug it in, without even thinking about it.

On the other hand, I do like it, when the driver is brand new, if an out-of-tree version is made available for use with older kernel versions. Certainly it's helpful to provide an out-of-tree version for the first year or two until everyone's upgraded to a kernel with the driver included already.

out of tree

Posted Aug 2, 2009 14:03 UTC (Sun) by alex (subscriber, #1355) [Link]

It would be nice if the distros could compile out-of-tree snapshots in a nice user friendly way. But for this to work they have to be GPL source code and not arbitrary binary blobs.

Certainly for me as a developer, the current wireless-testing tree/compat layer is an example of doing it right. I can run wireless with my distro kernel but still have the latest and greatest without upgrading the kernel if I want. I wonder if more out-of-tree drivers could follow the wireless model.

out of tree

Posted Aug 2, 2009 21:05 UTC (Sun) by johnflux (guest, #58833) [Link]

It's good to get other opinions here, and it's certainly interesting to hear arguments for out-of-tree drivers.

But one very important question comes to mind:

What happens when you no longer support that hardware? What happens if your company goes bankrupt, reduces in size, etc, and can no longer support the out-of-tree driver?

You then end up in a situation where users cannot ever upgrade their kernel because your out-of-tree driver is no longer being upgraded.

I worked for a company that has an out-of-tree driver. We ended up only officially supporting quite an old kernel. This screwed over customers that needed a newer kernel (typically for newer power management features)

out of tree

Posted Aug 3, 2009 4:47 UTC (Mon) by mikov (subscriber, #33179) [Link]

> What happens when you no longer support that hardware? What happens if your company goes bankrupt, reduces in size, etc, and can no longer support the out-of-tree driver? You then end up in a situation where users cannot ever upgrade their kernel because your out-of-tree driver is no longer being upgraded.

Well, since the out-of-tree drivers still must be GPL, if the hardware manufacturer doesn't want to support it, anybody else can do it. Nothing changes in that respect. A stable API gives us the best of both worlds.

> I worked for a company that has an out-of-tree driver. We ended up only officially supporting quite an old kernel. This screwed over customers that needed a newer kernel (typically for newer power management features)

I completely agree that, as things are currently, an out-of-tree driver is a lot of work to maintain. That's precisely why I am arguing for a stable API. Then ideally an out-of-tree driver would work with any kernel. If there happen to be driver API changes (realistically those can't be avoided) they will be formally versioned and documented so porting will be routine.

out of tree

Posted Aug 3, 2009 6:38 UTC (Mon) by johnflux (guest, #58833) [Link]

Out-of-tree drivers don't have to be GPL (well, that's up for debate, but there are certainly proprietary drivers). And even if it's GPLed, if it's out of tree then there's a good chance that it doesn't meet the coding standards for inclusion into the kernel - for example a company typically might have a single source for Windows and Linux, with hundreds of #ifdef's to switch between them. There is also often a custom build system, revision control system, etc.

So this means that once the company goes bankrupt, some volunteer has to take code that he's probably never looked at before and undertake what could be a large task to clean up and adapt the driver for inclusion into the kernel.

out of tree

Posted Aug 3, 2009 17:48 UTC (Mon) by mikov (subscriber, #33179) [Link]

> Out-of-tree drivers don't have to be GPL (well, that's up for debate, but there are certainly proprietary drivers).

That is not really true. I am not aware of anything that allows a Linux kernel driver to be non-GPL. The only gray area is if the driver has been ported from another OS, and this is also questionable. I have seen more than one clear statement from Linus Torvalds on this subject.

There is no question that people violate the Linux GPL all the time. It is ethically disgusting (although I actually understand very well why they do it).

But in any case, a stable source API has no relevance to that problem.

> And even if it's GPLed, if it's out of tree then there's a good chance that it doesn't meet the coding standards for inclusion into the kernel - for example a company typically might have a single source for Windows and Linux, with hundreds of #ifdef's to switch between them. There is also often a custom build system, revision control system, etc.

And that is bad because? The current standards are needed because the kernel developers have to be able to "maintain" (more like coax into compiling) a mountain of drivers they don't care about. If the drivers are maintained by interested 3rd parties, it doesn't matter in the least, and those 3rd parties should choose whatever standards of their own they prefer.

> So this means that once the company goes bankrupt, some volunteer has to take code that he's probably never looked at before and undertake what could be a large task to clean up and adapt the driver for inclusion into the kernel.

I am sorry but this is a logical fallacy. This volunteer is free to develop a driver of his own using his superior coding standards from day one. Plus, surely having an existing working driver is much better than starting from scratch.

Look, I am well aware that the idea for a stable API is not popular and for sure I am not the best advocate for it. I actually don't think that a stable API will happen. But I also know what I am seeing in my professional life and it doesn't look good.

Also (sorry to repeat myself), it is clear to me that with the current situation Linux will never be really successful on the desktop. The lack of a stable API is just one of several reasons for that, but it isn't a small one.

out of tree

Posted Aug 3, 2009 20:08 UTC (Mon) by johnflux (guest, #58833) [Link]

> I am sorry but this is a logical fallacy. This volunteer is free to develop a driver of his own using his superior coding standards from day one. Plus, surely having an existing working driver is much better than starting from scratch.

There's no logical fallacy there. I stated that an out-of-tree driver requires a lot of work to get into the kernel if it has to be done once the company has gone bankrupt, compared to if the driver was in the tree in the first place. But you're stating that an out-of-tree driver is easier than no driver at all, which is true, but not really relevant to an in-tree vs. out-of-tree discussion.

I do get where you're coming from, but I do think that in the long run, Linux will have the bigger advantage of having all the drivers in tree.

The other day, for example, I needed to get a webcam working on an ARM device. In Linux it was easy because all the drivers were just there. I plugged in the webcam and it just worked. That sort of thing is simply not possible in Windows.

Likewise I dread re-installing Windows because the drivers for my wireless card are difficult to get. The company that made the drivers has since gone and their website is now down.

out of tree

Posted Aug 3, 2009 22:14 UTC (Mon) by mikov (subscriber, #33179) [Link]

> There's no logical fallacy there. I stated that an out-of-tree driver requires a lot of work to get into the kernel if it has to be done once the company has gone bankrupt, compared to if the driver was in the tree in the first place. But you're stating that an out-of-tree driver is easier than no driver at all, which is true, but not really relevant to an in-tree vs. out-of-tree discussion.

> I do get where you're coming from, but I do think that in the long run, Linux will have the bigger advantage of having all the drivers in tree.

Well, you are implying that it is somehow "bad" to have an existing working GPL driver, which just happens to be out of tree. That is the fallacy. In fact it cannot possibly negatively affect the desired outcome, which is to have a working driver.

I actually think that there should be no drivers in tree. That is an unmanageable situation - keeping all possible drivers in one monolithic source tree and upgrading them at every kernel release. It should be obvious that this simply cannot scale. You cannot expect a finite number of developers to maintain a potentially infinite number of drivers forever.

> The other day, for example, I needed to get a webcam working on an ARM device. In Linux it was easy because all the drivers were just there. I plugged in the webcam and it just worked. That sort of thing is simply not possible in Windows.

That is a funny example, because I have exactly the opposite experience with an embedded project. It is not possible to buy a working camera for it because its kernel can't practically be upgraded. And even with a modern kernel, I can't find a single webcam at Fry's that works on my Linux desktop.

AFAIK, there is a very high turnover rate with webcam chips, so it simply does not make economic sense to maintain a driver for hardware whose life is 6 months. Now, if only there was a stable driver API ... :-)

> Likewise I dread re-installing Windows because the drivers for my wireless card are difficult to get. The company that made the drivers has since gone and their website is now down.

I don't get it. Why would you have to re-install Windows for a single driver?

Let's for a second forget about the labels Linux and Windows and look at it as impartial scientists performing an experiment. We have "OS A" which owns 99% of a market, and "OS B" which has 0.5% and doesn't seem to be growing even though it is much cheaper (!?!). What reasons could we have, as impartial scientists, to believe that "OS B" is somehow superior?

Personally I obviously think that Linux is superior in many respects, or I wouldn't be here. But I am sure that it is despite the lack of a stable API, not because of it.

out of tree

Posted Aug 3, 2009 22:19 UTC (Mon) by dlang (subscriber, #313) [Link]

What makes you think that the pool of developers to maintain the drivers in the kernel is finite while the pool of drivers is not?

The number of people who contribute to the kernel continues to grow.

You believe that the unstable API is a drawback; many others believe that it is a strength.

The people that you need to convince are not here; you need to go elsewhere to convince them.

out of tree

Posted Aug 3, 2009 23:55 UTC (Mon) by mikov (subscriber, #33179) [Link]

> The people that you need to convince are not here; you need to go elsewhere to convince them.

Do you mean this in the sense "shut up and go away"?

I don't want to convince anybody. I think that a stable API is a lost cause, like Linux on the desktop, or Linux gaming. It is time to wake up and smell the flowers. The desktop PC is almost gone.

I am curious about intelligent objections against a stable API, but I am not crazy to go get flamed on LKML. (If you do a search, there have been others, brave or stupid enough to post about a stable API there. LOL. Poor suckers. They deserved what was coming to them.)

The people who would need convincing are the big companies who pay for Linux kernel development. It is more than a bit naive to think that volunteers in their free time can influence Linux at this point of its development. But I have a pretty good theory why RedHat, Novell, etc, aren't interested in a stable API.

out of tree

Posted Aug 4, 2009 0:38 UTC (Tue) by foom (subscriber, #14868) [Link]

I'm pretty sure I don't use any 3rd party drivers on my Mac laptop, and I certainly haven't missed them or thought MacOS was not ready-for-the-desktop because of it...

out of tree

Posted Aug 4, 2009 4:47 UTC (Tue) by mikov (subscriber, #33179) [Link]

Yeah right. Which parts of your Mac did you buy from a different hardware vendor?

out of tree

Posted Aug 3, 2009 22:29 UTC (Mon) by johnflux (guest, #58833) [Link]

> Well, you are implying that it is somehow "bad" to have an existing working GPL driver, which just happens to be out of tree.

_Compared_ to having it in tree. That's all I've ever said.

> That is the fallacy. In fact it cannot possibly negatively affect the desired outcome, which is to have a working driver.

Compared to not having it at all, sure.

> That is a funny example because I have exactly the opposite experience with an embedded project. It is not possible to buy a working camera for it because its kernel can't practically be upgraded.

Why can't it be upgraded? Because of an out-of-tree driver? :-D

> And even with a modern kernel, I can't find a single webcam from Fry's to work on my Linux desktop.

I find that unlikely :-) I've bought 4 webcams randomly, 2 built into laptops, and all just worked in linux.

> I don't get it. Why would you have to re-install Windows for a single driver?

Nah, I meant that if I re-install, for an unrelated reason, then I have trouble getting the drivers.

out of tree

Posted Aug 3, 2009 23:38 UTC (Mon) by mikov (subscriber, #33179) [Link]

> Why can't it be upgraded? Because of an out-of-tree driver? :-D

Laugh all you want :-) But it is a real problem that should not exist... Ignoring it doesn't make it any less important for the developers and users of this product.

> I find that unlikely :-) I've bought 4 webcams randomly, 2 built into laptops, and all just worked in linux.

Well, I am not making it up. BTW, I am running Kubuntu LTS and Debian Stable on my desktops. Perhaps you see my problem? ;-)

> Nah, I meant that if I re-install, for an unrelated reason, then I have trouble getting the drivers.

Well, that is the problem of closed source, no argument from me there.

BTW, you can still save the drivers. You have to copy the INF file and the files referenced from it and you can reinstall them on a new system without any tricks. Some things in Windows really are that easy :-)

out of tree

Posted Aug 4, 2009 8:06 UTC (Tue) by farnz (subscriber, #17727) [Link]

The thing is, and this is from direct personal experience, companies maintaining out of tree drivers don't put their driver through the Linux code review process; as a result, they ignore standard Linux interfaces in favour of inventing their own.

Add to that the fact that they lose interest in maintaining their driver at all when they've moved onto the new shiny (in the case of one vendor I've dealt with, the driver stops being maintained as soon as they have a new chip out, even though they're still selling the old chip), and the fact that I don't have the time to do more than bugfix drivers that have already been ported to new APIs, and I find myself thinking that all the advantages you claim for out of tree drivers are disadvantages from my perspective.

I want one place to go to for drivers. Not twenty trees, but the one tree on kernel.org. I have found it easier to backport drivers from 2.6.30 to 2.6.23 than to integrate vendor drivers, so I would prefer in-tree drivers. I have worked with vendor drivers aiming to move into the tree (before GregKH's staging area), and that was quite enough pain.

I'm also afraid that your admission that you're stuck on an old kernel due to an out-of-tree driver from your board vendor reminds me of a subset of nVidia graphics card owners, who complain bitterly (regardless of technical reasons) when X.org releases a new version of the X server that isn't backwards compatible with the ancient version of the nVidia drivers that supports their older hardware; I do feel that you should be exerting pressure on your vendor to make this happen, either by getting their drivers into the kernel, or getting them to step up and do the work of maintaining a stable in-kernel API. It's not going to come for free, so why shouldn't the people who want such an API do the work of maintaining it?

out of tree

Posted Aug 4, 2009 20:51 UTC (Tue) by mikov (subscriber, #33179) [Link]

> The thing is, and this is from direct personal experience, companies
> maintaining out of tree drivers don't put their driver through the Linux
> code review process; as a result, they ignore standard Linux interfaces
> in favour of inventing their own.

I have no opinion about this because I haven't personally experienced it, but I believe you. But why isn't this a problem in Windows?

> Add to that the fact that they lose interest in maintaining their driver
> at all when they've moved onto the new shiny (in the case of one vendor
> I've dealt with, the driver stops being maintained as soon as they have
> a new chip out, even though they're still selling the old chip), and the
> fact that I don't have the time to do more than bugfix drivers that have
> already been ported to new APIs, and I find myself thinking that all the
> advantages you claim for out of tree drivers are disadvantages from my
> perspective.

Agreed, but what is your point? You are describing exactly the problems caused by a lack of stable API. Vendors do not want to maintain drivers for hardware which they no longer sell, because it is a never ending cycle of completely pointless but expensive work (from their standpoint). That would not be the case if there was a stable API.

First, it would be very cheap for them to continue the maintenance, so they would likely do it. Second, even if they didn't, the driver should just continue to work. And lastly, a volunteer can pick it up at any moment.

> I want one place to go to for drivers. Not twenty trees, but the one
> tree on kernel.org. I have found it easier to backport drivers from
> 2.6.30 to 2.6.23 than to integrate vendor drivers, so I would prefer in-
> aiming to move into the tree (before GregKH's staging area), and that
> was quite enough pain.

Well, that is you, but you are the exception. 99.9% of Linux users and most of the developers can't backport a driver. While I personally also could do it, there is more to it than that - for example, I don't really have time to support such a backported driver in a business setting.

There is definite value in a central repository for drivers. The problem with the current approach is that the release cycles of all drivers are tied together. Ideally I would like to see a central repository where vendors are free to upload their drivers whenever they want.

> I'm also afraid that your admission that you're stuck on an old kernel
> due to an out-of-tree driver from your board vendor [...]

I have never said that. I mentioned a problem where newer webcam drivers are only available for new kernels. This has nothing to do with in-tree vs. out-of-tree drivers; it is precisely a matter of the lack of a stable API.

It seems that many people who responded negatively to my comments here are confusing a couple of related but separate issues:

* Stable API. It is equally beneficial to in-tree and out-of-tree drivers, because in either case it decreases the maintenance burden. In fact, contrary to the common wisdom, a stable API gives more freedom to core developers to make changes, because they can keep them isolated behind the facade of the stable API (see the sketch after this list).

* In-tree vs out-of-tree drivers. It is true that a stable API decreases the incentive to submit drivers to the kernel because the "free maintenance" red herring is gone. But if people perceive value in having a central point for driver development, there is nothing stopping them.

* Closed source drivers. This is completely unrelated to a stable source level API. They are prohibited by the kernel license and suffer from their well known problems.
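
To make the "facade" point concrete, here is a purely hypothetical sketch of what a stable, versioned driver-facing header could look like. None of these names exist in the real kernel; it only illustrates the shape of the idea: drivers code against fixed signatures and a documented API version, while everything behind them remains free to change.

    /*
     * foo_stable_api.h - hypothetical stable facade for a subsystem.
     * Drivers include only this header; the implementation behind it
     * can be rewritten at will as long as these signatures hold.
     */
    #ifndef FOO_STABLE_API_H
    #define FOO_STABLE_API_H

    /* Bumped, and formally documented, whenever the facade changes. */
    #define FOO_API_VERSION 3

    struct foo_device;                 /* opaque to drivers */

    struct foo_device *foo_device_alloc(const char *name);
    int  foo_device_register(struct foo_device *dev);
    void foo_device_unregister(struct foo_device *dev);

    #endif /* FOO_STABLE_API_H */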

> reminds me of a
> subset of nVidia graphics card owners, who complain bitterly (regardless
> of technical reasons) when X.org releases a new version of the X server
> that isn't backwards compatible with the ancient version of the nVidia
> drivers that supports their older hardware;

I would like to see an open source Nvidia driver as much as the next guy, but I have to disagree with you. How did Microsoft manage to keep their graphics driver API the same from NT 3.1 to Windows XP? To think that it is a bad thing is to ignore reality.

> I do feel that you should be
> exerting pressure on your vendor to make this happen, either by getting
> their drivers into the kernel, or getting them to step up and do the
> work of maintaining a stable in-kernel API. It's not going to come for
> free, so why shouldn't the people who want such an API do the work of
> maintaining it?

Agreed in theory. But you are forgetting that these vendors don't need to support Linux at all. It is less than 1% of their market, but it takes more work to support than Windows. So, why the hell would they do it at all?

So, they don't have to do anything. It is not their problem. If the Linux community wants to bring these vendors fully on board, it should make it easy for them. Easier than Windows. The major factor for "easiness" is stable APIs.

out of tree

Posted Aug 5, 2009 10:32 UTC (Wed) by farnz (subscriber, #17727) [Link]

> > The thing is, and this is from direct personal experience, companies maintaining out of tree drivers don't put their driver through the Linux code review process; as a result, they ignore standard Linux interfaces in favour of inventing their own.
>
> I have no opinion about this because I haven't personally experienced it, but I believe you. But why isn't this a problem in Windows?

It is a problem in Windows, too; for example, very few graphics card drivers support Windows native screen rotation in XP - you need to use a vendor utility instead. The difference is that Windows users are used to this, and don't complain as loudly about it. It's becoming less of an issue as things like USB standardize more and more device classes, and manufacturers use the standard classes, because that means that the device is Plug And Play under currently shipping Windows systems (Vista, future Windows 7 installs).

> > Add to that the fact that they lose interest in maintaining their driver at all when they've moved onto the new shiny (in the case of one vendor I've dealt with, the driver stops being maintained as soon as they have a new chip out, even though they're still selling the old chip), and the fact that I don't have the time to do more than bugfix drivers that have already been ported to new APIs, and I find myself thinking that all the advantages you claim for out of tree drivers are disadvantages from my perspective.
>
> Agreed, but what is your point? You are describing exactly the problems caused by a lack of stable API. Vendors do not want to maintain drivers for hardware which they no longer sell, because it is a never ending cycle of completely pointless but expensive work (from their standpoint). That would not be the case if there was a stable API.

Oops - my bad. I forgot to mention that the case in question was migrating from NT4 to XP in an embedded product. We could still buy the chips in question, but the NT4 driver didn't work in XP, even after we recompiled it against the new DDK (don't know why - I thought there was a stable API?). We ended up having to buy new ASI capture chips, and scrapping the older hardware.

> > reminds me of a subset of nVidia graphics card owners, who complain bitterly (regardless of technical reasons) when X.org releases a new version of the X server that isn't backwards compatible with the ancient version of the nVidia drivers that supports their older hardware;
>
> I would like to see an open source Nvidia driver as much as the next guy, but I have to disagree with you. How did Microsoft manage to keep their graphics driver API the same from NT 3.1 to Windows XP? To think that it is a bad thing is to ignore reality.

They didn't keep the graphics driver API the same from NT 3.1 to Windows XP. Newer OSes cannot run graphics cards using older drivers, pure and simple. No different to Linux, in that respect.

And note that vendors do need to bring the Linux community on board - my employer alone buys a significant number of TV capture cards a month, on condition that they are supported under Linux. We pay a slight premium to buy Hauppauge cards because of the Linux support; other vendors are £5/card cheaper, but cost huge amounts of money in expensive developer time keeping the card going.

out of tree

Posted Aug 5, 2009 18:05 UTC (Wed) by mikov (subscriber, #33179) [Link]

> It is a problem in Windows, too; for example, very few graphics card drivers support Windows native screen rotation in XP - you need to use a vendor utility instead.

I don't want to defend Windows, but IIRC that is technically wrong. Unless I am mistaken, XP does not support native screen rotation. It was introduced in Vista.

I do agree with your general point that you can't rely on the vendors to "do the right thing". Vendors do suck. I have seen really, really horrible, disgusting things working with closed source drivers. But this is an impossible battle. You can't require all developers for your OS to be perfect. People will do whatever they want to do. The best you can do is give them tools encouraging them in the right direction. Plus, hopefully the GPL will shame them into better code.

Look at Google Android. They are basically doing whatever they want with the kernel and not bothering to submit anything back.

> They didn't keep the graphics driver API the same from NT 3.1 to Windows XP. Newer OSes cannot run graphics cards using older drivers, pure and simple. No different to Linux, in that respect.

I am sorry but that is not quite true either. In a previous life I used to do Windows kernel development, and graphics drivers specifically. I have ported drivers from NT4 to W2K and WXP. The API remained backwards compatible and even binary compatible.

In NT4 they did a fundamental architectural change by moving the drivers into the kernel, but the source level API remained compatible with NT 3.51.

> And note that vendors do need to bring the Linux community on board - my employer alone buys a significant number of TV capture cards a month, on condition that they are supported under Linux. We pay a slight premium to buy Hauppauge cards because of the Linux support; other vendors are £5/card cheaper, but cost huge amounts of money in expensive developer time keeping the card going.

I don't quite follow. Can you please clarify a bit? Why do the other cards need huge amounts of money? Also, you are saying that Hauppauge is more expensive because it has to support Linux. So, how is that a good thing?

out of tree

Posted Aug 5, 2009 18:15 UTC (Wed) by johnflux (guest, #58833) [Link]

> I don't quite follow. Can you please clarify a bit? Why do the other cards need huge amounts of money? Also, you are saying that Hauppauge is more expensive because it has to support Linux. So, how is that a good thing?

I think his point was that his company was willing to pay extra just for linux support. So Hauppauge was able to charge more money, and still get his custom. Whereas the companies that didn't support Linux didn't get his business.

From a business point of view, if the money that a company loses because of their lack of linux support is greater than the cost of opening the specs or writing the drivers, then the company should write Linux drivers. The poster was providing anecdotal evidence to indicate that they believe that this is the case for some (most?) companies.

On a personal note, I think companies perceive the cost of opening their drivers to be far higher than it really is.

out of tree

Posted Aug 5, 2009 18:45 UTC (Wed) by farnz (subscriber, #17727) [Link]

Hauppauge cards tend to work out of the box with a recent enough kernel; when they don't, Hauppauge employees are happy to help you make the cards work. Other vendors' cards require developer time to make work, and attempts to get the vendor to help tend to be fruitless (they have their own out-of-tree driver that doesn't work well, due to reinvention of interfaces, and they don't understand why you won't simply rewrite your userspace to match their driver).

Developer time is hugely expensive, as we don't have much to spare; given the choice between a small amount of extra money per card (to the vendor), or developer time, we'd rather pay. If you're a vendor, looking to expand, this is a market, and it has money to spend on you.

out of tree

Posted Aug 6, 2009 6:28 UTC (Thu) by lysse (guest, #3190) [Link]

> this is from direct personal experience

So why are you generalising from it?

out of tree

Posted Aug 6, 2009 8:10 UTC (Thu) by farnz (subscriber, #17727) [Link]

I'm generalising from experience of 15 to 20 vendors, of whom only a couple had their drivers upstreamed. The remainder simply did not understand why (to use some examples) I didn't want to use their vendor-specific DVB-T tuning interface instead of Linux's DVB API, or their vendor-specific video capture interface instead of V4L2.

However, I recognise that my experience may be atypical; I therefore mention that I base this on personal experience, so that if I've just had bad luck with vendors, other people can point out vendors who get it right.

out of tree

Posted Aug 7, 2009 22:31 UTC (Fri) by Los__D (guest, #15263) [Link]

This is not much different on Windows.

Stupid devices which need stupid programs to do what a perfectly capable standard Windows configuration tool should be doing seem to be more the rule than the exception.

- The worst part is that Windows seems to support that ridiculous way of doing it; e.g., the Wi-Fi configuration tool has an option to transfer control to a 3rd party program.

(To be fair, this seems to be getting a bit less frequent lately).

out of tree

Posted Aug 10, 2009 16:43 UTC (Mon) by vadim (subscriber, #35271) [Link]

Let me give you my opinion as a user: as far as I'm concerned, if your hardware doesn't have a driver in the mainline kernel, it doesn't have one. And that means I don't buy it. And that's not a huge hindrance, since Linux has modules for everything these days. A device that only has an out-of-tree module is a rare exception and easily avoided.
> Having the driver in-tree actually increases the burden to the manufacturers because now they can't control what their customers are getting.

But I'm not that interested in you controlling what I'm getting. If I have a 3Com card, I need it to work. Who makes it work isn't particularly important. If it stops working, I don't go to 3com.com, I expect my distro to ship a fix. It's actually something I love about Linux - on Windows every driver comes with random crap attached. Webcams can't just be webcams, they have to ship some horrible interface as well, without which it's impossible to have full control of the device. I much prefer the Linux model: The webcam is an image capture device, nothing more. Every webcam works identically via a common API. If I remove one and plug in another, capabilities of the hardware aside, nothing changes. That's precisely how I like it.

GNU + Linux = broken development model

Posted Jul 31, 2009 21:54 UTC (Fri) by ebiederm (subscriber, #35028) [Link]

The key point is that despite stable_api_nonsense.txt and a lot of talk of an unstable driver ABI, the kernel driver API has a lot of inertia and resistance to change. Generally new APIs are introduced while still providing the old API. Eventually the old API goes away, but that rarely happens in a single kernel release.

The kernel ABI, because it depends on Kconfig options, is much less stable, and there are a lot of configurations to choose from in any given release.

In general, things don't change willy-nilly and kill your driver, even out of tree.

Thinking back on my experience maintaining code out of tree, sometimes for years at a time before it was merged: yes, there are changes, but they aren't too hard to keep up with.

From the other side, as one of the people who make changes to the APIs: a lot of work is done to make changes as painless as possible for driver developers, with both the old and the new APIs living side by side for a while.
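
A small sketch of that side-by-side pattern, with invented foo_* names (only the __deprecated annotation is a real kernel facility): when an interface grows a more capable replacement, the old entry point often survives for a few releases as a thin wrapper, so existing drivers keep building, with a warning, while they migrate.

    #include <linux/compiler.h>     /* __deprecated */

    struct foo_device;              /* hypothetical subsystem object */

    /* New interface: callers can now pass flags. */
    int foo_register_device(struct foo_device *dev, unsigned long flags);

    /*
     * Old interface, kept during the transition so existing drivers still
     * compile; gcc emits a deprecation warning at every call site.
     */
    static inline int __deprecated foo_register(struct foo_device *dev)
    {
            return foo_register_device(dev, 0);
    }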

GNU + Linux = broken development model

Posted Aug 1, 2009 12:09 UTC (Sat) by xilun (guest, #50638) [Link]

You are changing what you say on the fly (ABI -> API), and your last post even contradicts itself (the API is stable, but sometimes it changes???).

GNU + Linux = broken development model

Posted Aug 1, 2009 16:16 UTC (Sat) by mikov (subscriber, #33179) [Link]

Oh, really? Why don't you quote where I said anything about an ABI, etc.?

I have never said anything about ABI, and I have consistently said that what is needed is a versioned and formally documented API.

But if you can't understand my point, why are you responding?

GNU + Linux = broken development model

Posted Jul 30, 2009 14:46 UTC (Thu) by Richard_DCS (guest, #56565) [Link]

This sounds like a nice theory, though I've seen lots of counterexamples, so I don't think it works.

Also, you miss the point that most users are making the transition from 9x to 32-bit XP and then on to 64-bit Windows 7 in the future. That path stops drivers working at every transition; Linux never had that problem.

Drivers work best when they are open source and in the mainline kernel where they can be fixed when needed.

GNU + Linux = broken development model

Posted Jul 30, 2009 16:52 UTC (Thu) by mikov (subscriber, #33179) [Link]

No, no, no. I have been talking about a source-level API, not an ABI! This is about developers, not users.

I don't expect the same binary to literally work between all Windows or Linux versions (although in many cases it does, but I don't view that as necessary or even useful).

Plus, I am not disputing the usefulness of open source at all. After all, that's a big part of why I am using Linux. I am just saying that a stable kernel API would make life easier for most developers and businesses.

We should take the good stuff from Windows (stable API), improve it (make it source-only instead of binary ABI) and apply it to Linux.

GNU + Linux = broken development model

Posted Aug 1, 2009 23:06 UTC (Sat) by Lumag (guest, #22579) [Link]

> We should take the good stuff from Windows (stable API), improve it (make it source-only instead of binary ABI) and apply it to Linux.

We?.. "Show us your code." I mean, if you want a stable API in some area, you can try working with the developers in that area by providing patches that ensure backwards compatibility, etc. Unless you do that, all this talk is nonsense. Sorry :)

GNU + Linux = broken development model

Posted Jul 30, 2009 19:04 UTC (Thu) by Tomislav (guest, #56442) [Link]

> Sigh. You are missing the point.

It seems to me that he's not missing it by much. Although I would prefer a stable API, it looks like this is a matter of priorities. If you prefer stability and longevity, you choose RHEL or Debian and stick with their kernels. RHEL 4 supported 2.6.9 for 4 years.

> Oh, yes, the commercial distributions are trying to maintain a stable
> kernel for some time (although far from 15 years) - of course they
> would; this is what the vast majority of businesses and users really
> want and need regardless of what the stupid document
> "stable_api_nonsense.txt" says. Alas, it is actually causing further
> incompatibilities and problems between distributions - everybody has
> a different version, with different patches, etc. It is a nightmare.

:shrugs: You pointed out (several times) that you are speaking from the developer's standpoint. But here you base your argument on a claim that _businesses_ and _users_ want a stable API. It's the ABI they (well, me at least) care about.

You can't have rapid development _and_ a stable API in a single product. If you think this is possible I would very much like to hear about The Way To Do This.

Another thing here that sticks in my eye, and I see that I'm not the only one, is the allusion that Microsoft offers support for its kernel for 15 years. IMO nobody there has touched the NT4 kernel for quite a while, and I would be greatly surprised if the NT5 family had any recent updates. They are only stable by virtue of being old.

GNU + Linux = broken development model

Posted Jul 30, 2009 19:41 UTC (Thu) by mikov (subscriber, #33179) [Link]

> It seems to me that he's not missing it by much. Although I would prefer a stable API, it looks like this is a matter of priorities. If you prefer stability and longevity, you choose RHEL or Debian and stick with their kernels. RHEL 4 supported 2.6.9 for 4 years.

As I pointed out, that doesn't work.

First, what are you supposed to do with this stable kernel if you want to use newer hardware?

Secondly, every vendor and distribution has a different "stable" kernel, which in turn is different from the vanilla kernel with the same version. So, what do you do if you are a hardware vendor? Even if you have included your driver in the mainline in 2.6.30, how does it help your customers, some of whom are using 2.6.18, some 2.6.9, some 2.6.26, etc.? Is anybody surprised that hardware vendors are not keen on supporting Linux?

Lastly, even if there was only one stable distribution in the world, 4 years is a far cry from 15.

> :shrugs: You pointed out (several times) that you are speaking from the developer's standpoint. But here you base your argument on a claim that _businesses_ and _users_ want a stable API. It's the ABI they (well, me at least) care about.

Clearly I mean businesses who develop and support software, and their customers who are affected by it because their reliability goes down and their costs go up. What I am getting at is that an enthusiastic hacker who is happy to recompile his own kernel daily (like most of us) can't always see the big picture.

> Another thing here that sticks in my eye, and I see that I'm not the only one, is the allusion that Microsoft offers support for its kernel for 15 years. IMO nobody there has touched the NT4 kernel for quite a while, and I would be greatly surprised if the NT5 family had any recent updates. They are only stable by virtue of being old.

Huh? Who is talking about support? The point of a stable API is precisely that - to continue using the same source with the newer OSes which are currently supported.

Then there is no point and the case is closed, right?

Posted Aug 5, 2009 19:40 UTC (Wed) by khim (subscriber, #9252) [Link]

> The point of a stable API is precisely that - to continue using the same source with the newer OSes which are currently supported.

Sure. And we have many OSes which promised and failed to deliver that: Windows, Solaris, etc. It just does not work. Drivers from NT don't work in Windows 2000, drivers from Windows 2000 crash Windows XP, drivers from Solaris 8 are unusable in Solaris 10, etc.

Right now Linux supports more hardware than Windows - sure, the "latest and greatest" gadgets are supported only by the "latest and greatest" release of Windows, but the sheer numbers are on Linux's side.

A stable OS (not a stable API, but a stable OS - I certainly remember cases where switching from NT4SP5 to NT4SP6 was impossible because the drivers had no support for that version) makes support for some hardware easier and for some hardware harder (recall NT4 and USB, for example, then do the same with Pixel Shaders 4), so I see no huge advantage either way.

What I do see is the wish to reduce the number of problems for manufacturers by increasing the number of problems for users and developers - and I cannot agree that this is a good idea.

If you want to make life easier, then implement a stable API where it naturally exists anyway: on the border between your hardware and my hardware!

Then there is no point and the case is closed, right?

Posted Aug 5, 2009 22:17 UTC (Wed) by mikov (subscriber, #33179) [Link]

> Sure. And we have many OSes which promised and failed to deliver that: Windows, Solaris, etc. It just does not work. Drivers from NT don't work in Windows 2000, drivers from Windows 2000 crash Windows XP, drivers from Solaris 8 are unusable in Solaris 10, etc.

Well, I am talking about a source level API, so I don't see your point really. Plus, I disagree with your example. The vast majority of drivers did work between Windows versions. Speaking both as a user and as an ex-Windows kernel developer.

> What I do see is the wish to reduce the number of problems for manufacturers by increasing the number of problems for users and developers - and I cannot agree that this is a good idea.

I am sorry, but you are dead wrong. Very ironically, just today (about half an hour ago) I encountered exactly the problem I am describing.

I was asked to troubleshoot a Debian Lenny image on a new batch of POS machines. Well, guess what, Lenny's kernel 2.6.26 does not support the network adapter (Intel 82567LM-3).

So, WTF am I supposed to do? All the "wisdom" against out-of-tree drivers and a stable API expressed in the numerous responses does not help me a bit! Should I perhaps install a newer kernel version? Huh? What about security updates? Backport a driver from 2.6.28? I don't have time to do that, and I am not a network driver expert.

Fortunately for me, Intel maintains an out-of-tree driver at http://e1000.sf.net/ . An out-of-tree driver saved the day! As usual, and as expected.

Unless somebody can offer me a better solution than this out-of-tree driver, I don't see how the arguments carry any weight.

Then there is no point and the case is closed, right?

Posted Aug 7, 2009 14:34 UTC (Fri) by deleteme (guest, #49633) [Link]

But the e1000 driver is in the tree as well, right? As has been stated before in this thread, though, it's nice to have the source directly from the vendor as well.

Then there is no point and the case is closed, right?

Posted Aug 9, 2009 11:46 UTC (Sun) by khim (subscriber, #9252) [Link]

> Well, I am talking about a source level API, so I don't see your point really.

What makes an API so much different from an ABI? In my experience, breakage occurs not at the formal ABI/API level, but at the level of semantics: the old driver presumes that something is done in some specific way, and it does not work in exactly that way after a small fix in the kernel (I've personally seen cases where a driver stopped working across NT4SP5=>NT4SP6 and XPSP1=>XPSP2 upgrades) - at that point the only solution is a person who can maintain the driver... and if you have such a person, why do you need a stable ABI or API?

> Plus, I disagree with your example.

How can you disagree with truth?

> The vast majority of drivers did work between Windows versions.

The vast majority of drivers work with Linux out of the box. Speaking as a Linux user.

> I was asked to troubleshoot a Debian Lenny image on a new batch of POS machines. Well, guess what, Lenny's kernel 2.6.26 does not support the network adapter (Intel 82567LM-3).

How is it different from my example above - where Windows XP SP2 broke drivers for Pinnacle DV500?

> So, WTF am I supposed to do?

You know the answer - you just refuse to resign yourself to it.

> Should I perhaps install a newer kernel version?

Yes. Either that, or ask your vendor for an older motherboard.

> What about security updates?

What about them? The latest version of the kernel always has the latest security updates applied - it's up to your distribution to package them.

Remember that a bug in a driver can make your system vulnerable (case in point - and conveniently it's about e1000 as well), so there are only two ways to live:
1. Use the drivers from your distribution kernel (and forgo the latest and greatest hardware), or
2. Use the latest and greatest version of the kernel (and live with possible bugs).

Neither approach needs a stable API or ABI...

P.S. Information from the Windows camp agrees with me: a substantial percentage of the problems behind the (in)famous Windows (un)stability is caused by drivers.

Then there is no point and the case is closed, right?

Posted Aug 12, 2009 4:12 UTC (Wed) by obi (guest, #5784) [Link]

My first reaction would have been to install Debian's 2.6.30 kernel backport - that way I get all the bugfixes and security updates too.

GNU + Linux = broken development model

Posted Jul 30, 2009 13:13 UTC (Thu) by cortana (subscriber, #24596) [Link]

To be fair, the WHQL testing is worthless. The WHQL-passed driver for my RT61-based wlan interface causes 64-bit Vista and Windows 7 systems to crash whenever it deals with a non-trivial amount of traffic. This is extremely easy to reproduce: just running the Steam server browser, Left 4 Dead or Bittorrent is enough to trigger the crash. But the drivers were rubber-stamped, no problem...

GNU + Linux = broken development model

Posted Jul 30, 2009 17:54 UTC (Thu) by BenHutchings (subscriber, #37955) [Link]

WHQL testing involves running Microsoft's test suite on your own system. If I remember correctly, they require that you use an SMP system with x86-64 and more than 4GB of RAM, which have historically exposed a number of driver bugs, and they test various corner cases in power management. But it's still possible to have the driver cheat to pass the tests, to run the test suite repeatedly until it passes, or even to modify the test suite so that it falsely reports passes.

GNU + Linux = broken development model

Posted Aug 1, 2009 12:05 UTC (Sat) by xilun (guest, #50638) [Link]

> Microsoft at least tests all the drivers that are shipped with the OS.

Depends on what you call testing. I once tried the SB16 driver available on Windows Update, which was utterly broken with my SB16. The worst part was that downgrading to the old driver (which worked perfectly - my mistake for wanting to try a new one for no real reason) was a nightmare.

GNU + Linux = broken development model

Posted Jul 30, 2009 4:06 UTC (Thu) by elanthis (guest, #6227) [Link]

Microsoft breaks ABIs and userland apps all the fucking time. Why do you think so many people complain about Vista? It doesn't run a lot of apps that worked perfectly in XP (even if you install/run them in administrator mode or in XP compatibility mode). Hell, do any of you actually remember when XP came out? People bitched about it just as much, because it broke a ton of apps that worked just fine in 95/98. Windows 7 can't run some apps that Vista handled just fine.

Windows maintains stable ABIs *where it can* because Real Users kind of prefer not having half their desktop stack break every 6 months while proclaiming the endless breakage as "true innovation!"

GNU + Linux = broken development model

Posted Jul 30, 2009 17:03 UTC (Thu) by mikov (subscriber, #33179) [Link]

No, I am sorry, this is beside the point.

For one, Microsoft doesn't release a new kernel every 6 months (far from it), so the breakage doesn't occur every 6 months. I don't want to sound like I am defending them, but there are parts of their strategy that are truly superior to the Linux model, in my opinion. We cannot simply say "everything that Microsoft does is bad". Monopoly is part of their success, but a large part of it is also that they did some things right. Having a stable API is definitely one of them.

That pains me, because I am a Linux developer and user for both practical and ideological reasons (I am willing to suffer a little pain in the cases where Linux is inferior).

Plus, I am referring to a source-level API, and not so much to keeping it completely static as to approaching the changes formally, with versioning and proper documentation.

Not having a stable API makes life a little easier for the core kernel developers, but much harder for everybody else. Plus, in the broader picture, I am sure that it is an insurmountable barrier to "Linux" on the desktop.

Heed my words: Linux will never succeed on the desktop without a stable driver API. I wish it wasn't so.

GNU + Linux = broken development model

Posted Jul 30, 2009 17:43 UTC (Thu) by foom (subscriber, #14868) [Link]

Linux *does* have stable driver APIs:
- libusb for USB in userspace.
- FUSE for filesystems in userspace.
- UIO for driving arbitrary hardware from userspace (a small sketch follows below).

Perhaps more?
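
To make the UIO item above concrete, here is a minimal sketch of the userspace-driver pattern it enables, assuming the hardware is already bound to a UIO driver and shows up as /dev/uio0 (the device path and the 4096-byte mapping size are illustrative placeholders, not details from this thread). The register window is mmap()ed, and interrupts arrive through a blocking read() of the interrupt counter.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Open the UIO character device (illustrative path). */
    int fd = open("/dev/uio0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/uio0");
        return 1;
    }

    /* Map memory region 0 of the device; region N is selected by an
     * offset of N * page size.  The real size is published in
     * /sys/class/uio/uio0/maps/map0/size; 4096 is only a placeholder. */
    size_t len = 4096;
    volatile uint32_t *regs = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0 * getpagesize());
    if (regs == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* A blocking read() returns the interrupt count; this is how UIO
     * delivers interrupts to userspace. */
    uint32_t irq_count;
    if (read(fd, &irq_count, sizeof(irq_count)) == (ssize_t)sizeof(irq_count))
        printf("interrupt #%u, register 0 = 0x%x\n",
               (unsigned)irq_count, (unsigned)regs[0]);

    munmap((void *)regs, len);
    close(fd);
    return 0;
}

Nothing in this program touches in-kernel structures - the contract is a character device, sysfs and mmap() - which is exactly why such userspace interfaces stay stable across kernel releases.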

GNU + Linux = broken development model

Posted Aug 1, 2009 23:11 UTC (Sat) by Lumag (guest, #22579) [Link]

IIRC the stuff around netdev was backwards-compatible for ages (not talking about NAPI :)). We experienced only one major change to it - the switch to net_device_ops (roughly sketched below).
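
For readers who have not followed that change: the switch to net_device_ops (merged around 2.6.29) moved the open/stop/xmit callbacks out of struct net_device into a separate ops structure. Below is a rough, hypothetical skeleton of the new style against a reasonably current kernel; the "demo" names are invented for illustration, and a pre-conversion driver would instead have assigned dev->open, dev->stop and dev->hard_start_xmit directly.

#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>

static int demo_open(struct net_device *dev)
{
        netif_start_queue(dev);
        return 0;
}

static int demo_stop(struct net_device *dev)
{
        netif_stop_queue(dev);
        return 0;
}

static netdev_tx_t demo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        /* A real driver would hand the skb to hardware here; this demo
         * simply drops it. */
        dev_kfree_skb_any(skb);
        return NETDEV_TX_OK;
}

/* New style: the callbacks live in a net_device_ops structure... */
static const struct net_device_ops demo_netdev_ops = {
        .ndo_open       = demo_open,
        .ndo_stop       = demo_stop,
        .ndo_start_xmit = demo_start_xmit,
};

static struct net_device *demo_dev;

static int __init demo_init(void)
{
        int err;

        demo_dev = alloc_etherdev(0);
        if (!demo_dev)
                return -ENOMEM;

        /* ...whereas pre-2.6.29 drivers set dev->open, dev->stop and
         * dev->hard_start_xmit on struct net_device itself. */
        demo_dev->netdev_ops = &demo_netdev_ops;

        err = register_netdev(demo_dev);
        if (err)
                free_netdev(demo_dev);
        return err;
}

static void __exit demo_exit(void)
{
        unregister_netdev(demo_dev);
        free_netdev(demo_dev);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");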

GNU + Linux = broken development model

Posted Aug 2, 2009 8:43 UTC (Sun) by johill (subscriber, #25196) [Link]

Multiqueue was another big change, but didn't matter much unless you used it.

GNU + Linux = broken development model

Posted Jul 30, 2009 20:10 UTC (Thu) by rvfh (subscriber, #31018) [Link]

I just wanted to say that, Linux-lover as I am, I agree with you, mikov. It seems that people are a bit too eager to break interfaces (even Linus has complained about it, though that was about the /sys filesystem IIRC).

Breaking the API when it solves a problem is the Linux way, so that improvements can be made, but one should strive not to do it if it can be avoided. And one would expect these changes to be fairly obvious and documented (a proper #warning can help a great deal), or even for an in-kernel tool (Coccinelle can do this, AIUI) to be provided that updates your code for you! OK, I know, "easy, tiger."

GNU + Linux = broken development model

Posted Aug 4, 2009 16:02 UTC (Tue) by nye (guest, #51576) [Link]

I've not personally come across a single application which works in XP but not Vista. The primary problem with Vista is that it's noticeably slower.

Comparing 95/98 with XP/Vista is fairly absurd. They're not different versions of the same OS; they're completely different OSes which both have very nearly compatible implementations of some important APIs, most notably win32. In fact, come to think of it, I can't name a single win32 application that works in Windows 95 and not in Vista. The only applications that have broken along the way either interact with the OS at a lower level, or are 16-bit, or are actually DOS applications. That would be a bit like expecting Linux to provide perfect binary compatibility with Minix.

The only substantiated bitching I've heard has been about drivers, but you were talking about userland apps; I contend that the number of broken userland apps is vanishingly small.

GNU + Linux = broken development model

Posted Aug 4, 2009 17:43 UTC (Tue) by mcmanus (subscriber, #4569) [Link]

> I've not personally come across a single application which works in XP but not Vista.

Here's a good one: http://msdn.microsoft.com/en-us/library/aa376635%28VS.85%29.aspx

Packet Filtering Functions

[These functions are available for use in the operating systems listed in the Requirements section for each function. In Windows Server "Longhorn", these functions return ERROR_CALL_NOT_SUPPORTED. The MprConfigInterfaceTransportSetInfo function and the Windows Filtering Platform API Management Functions provide similar functionality.]

GNU + Linux = broken development model

Posted Jul 30, 2009 11:43 UTC (Thu) by mb (subscriber, #50428) [Link]

> To be fair to the parent (who I also disagree with) that particular problem only exists due to the lack of a stable kernel API.

No, that is completely wrong. The required API was not available (because it had not yet been developed) in 2.6.26.
Having a stable API/ABI would not have helped with this at all.

