
Avoiding the OS abstraction trap

August 12, 2011

This article was contributed by Dan J. Williams

It is an innocent idea. After all, "all problems in computer science can be solved by another level of indirection." However, when the problem is developing a device driver for acceptance into the current mainline Linux kernel, OS abstraction (using a level of indirection to hide a kernel's internal API) is taking things a level too far. Seasoned Linux kernel developers will have already cringed at the premise of this article. But they are not my intended readers; instead, this text is aimed at those who find themselves in a similar position to the original authors of the isci driver: a team new to the process of getting a large driver accepted into the mainline, and tasked with enabling several environments at once. The isci driver fell into the OS abstraction trap. These are the lessons learned, and the attitudes your author developed about this trap, while leading the effort to rework the driver for upstream acceptance.

As mentioned above, one would be hard pressed to find an experienced Linux kernel developer willing to accept OS abstraction as a general approach to driver design. So, a simple rule of thumb for those wanting to avoid the pain of reworking large amounts of code is: do not go it alone. Arrange for a developer with at least 100 upstream commits to be permanently assigned to the development team, and seek the advice of a developer with at least 500 commits early in the design phase. After the fact, it was one such developer, Arjan van de Ven, who set the expectation for the magnitude of the rework effort. When your author was toying with ideas of Coccinelle and other automated ways to unwind the driver's abstractions, Arjan presciently noted (paraphrasing): "...it isn't about the specific abstractions, it's about the wider assumptions that lead to the abstractions."

The fundamental problem with OS abstraction techniques is that they actively defeat the purpose of having an open driver in the first place. As a community we want drivers upstream so that we can refactor common code into generic infrastructure and drive uniformity across drivers of the same class. OS abstraction, in contrast, implies the development of driver-specific translations of common kernel constructs, "lowest common denominator" programming to avoid constructs that do not have a clear analogue in all environments, and overlooking the subtleties of interfaces that appear similar but have important semantic differences.
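
As a purely hypothetical illustration of the pattern (the names here are invented, not taken from the isci driver), such a layer tends to accumulate wrappers like:

    /* A hypothetical OS-abstraction wrapper of the kind described
     * above; the names are invented for illustration and are not
     * taken from the isci driver. */
    #include <linux/spinlock.h>

    struct sci_lock {
            spinlock_t lock;                /* the Linux backend */
    };

    static inline void sci_lock_acquire(struct sci_lock *l)
    {
            spin_lock(&l->lock);
    }

    static inline void sci_lock_release(struct sci_lock *l)
    {
            spin_unlock(&l->lock);
    }

A wrapper like this flattens a whole family of Linux primitives (spin_lock_irqsave(), spin_lock_bh(), and so on) into a single lowest-common-denominator call; the semantic differences between those primitives are exactly what gets overlooked.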

So what were the larger problematic assumptions that led to the rework effort? It comes down to the following list of assumptions that, given the recurrence of OS-abstracted drivers, are likely shared by other developers new to the Linux mainline acceptance process.

  1. The programming interface of the Linux kernel is a static contract to third-party developers. The documentation is up to date, and the recourse for upper-layer bugs is to add workarounds to the driver.

  2. The "community" is external to the development team. Many conversations about Linux kernel requirements reference the "community" as an anonymous body of developers and norms external to the driver's development process.

  3. An OS abstraction layer can cleanly handle the differences between operating systems.

In the case of the isci driver these assumptions resulted in a nearly half-year effort to rework the driver as measured from the first public release until the driver was ultimately deemed acceptable.

Who fixes the platform?

The kernel community runs on trust and reciprocation. To get things done in a timely manner one needs to build up a cache of trust capital. One quick way to build this capital is to fix bugs or otherwise lower the overall maintenance burden of the kernel. Fixing a bug in a core library, or spotting some duplicated patterns that can be unified are golden opportunities to demonstrate proficiency and build trust.

The attitude of aiming to become a co-implementer of common kernel infrastructure is an inherently foreign idea for a developer with a proprietary environment background. The proprietary OS vendor provides an interface sandbox for the driver to play in that is assumed to be rigid, documented and supported. A similar assumption was carried to the isci driver; for example, it initially contained workarounds for bugs (real and perceived) in libsas and the other upper layers. The assumption behind those workarounds seems to be that the "vendor's" (maintainer's) interface is broken and the vendor is on the hook for a fix. This is, of course, the well-known "platform problem."

In the particular case of libsas, SCSI maintainer James Bottomley noted: "there's no overall maintainer, it's jointly maintained by its users." Internal kernel interfaces evolve to meet the needs of their users, but the users that engender the most trust tend to have an easier time getting their needs met. In this case, root-causing the bugs or allowing time to clarify the documentation for libsas ahead of the driver's introduction might have streamlined the acceptance process; it certainly would have saved the effort of developing local workarounds to global problems.

We the community

Just as internal kernel interfaces evolve at a different pace than their documentation, so too do the community's expectations for mainline-acceptable code evolve apart from the documented submission requirements. However, in contrast to the interface question, where the current code can be used to clarify interface details, the same cannot be done to determine the current set of requirements for mainline acceptance. The reality is that code exists in the mainline tree that would not be acceptable if it were being re-submitted for inclusion today.

A driver with an OS-abstracted core can be found in the tree, but over time the maintenance burden incurred by that architecture has precluded future drivers from taking the same approach. As a result, attempting to understand "the community" from an external position is a sure-fire way to underestimate the current set of requirements for acceptable code. The only way to acquire this knowledge is ongoing participation. Read other drivers, read the code reviews from other developers, and try to answer the question "would someone external to the development team have a chance at maintaining the driver without assistance?".

One clear "no" answer to this question from the isci driver experience came from the simple usage of C99 structure initializers. The common core was targeted for reuse in environments where there was no compiler support for this syntax. However, the state machine implementation in the driver had dozens of tables filled with, in some cases, hundreds of function pointers. Modifying such tables by counting commas and trusting comments is error prone. The conversion to C99-style struct initialization made the code safer to edit (compiler verifiable), more readable, and, consequently, allowed many of those tables to be removed. These initializations were a simple example of lowest-common-denominator programming, and a nuance that an "external" Linux kernel developer should not need to understand when modifying the driver, especially when the change enables the next round of cleanups.
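
For illustration, the difference looks roughly like this (the struct and function names are invented, not the driver's actual tables):

    /* Hypothetical state-machine ops table, for illustration only. */
    struct sci_state_ops {
            void (*enter)(void);
            void (*run)(void);
            void (*exit)(void);
    };

    void stopped_enter(void);
    void stopped_exit(void);

    /* Lowest-common-denominator positional style: each entry must be
     * counted against the struct definition by hand. */
    static const struct sci_state_ops stopped_positional = {
            stopped_enter,
            NULL,
            stopped_exit,
    };

    /* C99 designated initializers: fields are matched by name, the
     * compiler rejects unknown members, and omitted ones are zeroed. */
    static const struct sci_state_ops stopped_designated = {
            .enter = stopped_enter,
            .exit  = stopped_exit,
    };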

Can the OS be hidden?

OS abstraction defenders may look at that last example and propose automated ways to convert the code for different environments. The dangerous assumptions of automated abstraction replacement engines are the same as listed above. The Linux kernel internal interface is not static, so the abstraction needs to keep up with the pace of API change, but that effort is better spent participating in the Linux interface evolution.

More dangerous is the assumption that similar-looking interfaces from different environments can be properly captured by a unified abstraction. The prominent example from the isci driver was memory mapping: converting between virtual and physical addresses. As far as your author knows, Linux is one of the few environments that utilizes IOMMUs (I/O memory management units) to protect standard streaming DMA mappings requested by device drivers (via the DMA API). The isci abstraction had a virtual-to-physical abstraction that mapped to virt_to_phys() (broken, but straightforward to fix), but it also had a physical-to-virtual abstraction mapped to phys_to_virt() which was not straightforward to fix. The assumption that physical-to-virtual translation was a viable mechanism led to an implementation that not only missed the DMA API requirements, but also overlooked the need to use kmap() when accessing pages that may be located in high memory. The lesson is that convenient interfaces in other environments can lead to a diversionary search for equivalent functionality in Linux and magnify the eventual rework effort.
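
A minimal sketch of the Linux idioms involved, assuming a device and buffer obtained from the driver's probe path (the function names here are hypothetical):

    #include <linux/dma-mapping.h>
    #include <linux/highmem.h>
    #include <linux/errno.h>

    /* Streaming DMA goes through the DMA API, which honors any IOMMU,
     * rather than through virt_to_phys(). */
    static int sci_map_for_device(struct device *dev, void *buf,
                                  size_t len, dma_addr_t *handle)
    {
            *handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, *handle))
                    return -EIO;
            return 0;
    }

    /* There is no general physical-to-virtual path; a page that may
     * live in high memory must be mapped temporarily with kmap(). */
    static void sci_inspect_page(struct page *page)
    {
            void *vaddr = kmap(page);
            /* ... examine the data at vaddr ... */
            kunmap(page);
    }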

Conclusion

The initial patch added 60,896 lines to the kernel over 159 files. Once the rework was done, the number of new files was cut down to 34 and the overall diffstat for the effort was:

    192 files changed, 23575 insertions(+), 60895 deletions(-)

There is no question that adherence to current Linux coding principles resulted in a simpler implementation of the isci driver. The community's mainline acceptance criteria are designed to maximize the portability and effectiveness of a kernel developer's skills across drivers. Any locally-developed convenience mechanisms that diminish that global portability will almost always result in requests for changes and prevent mainline acceptance. In the end, participation and getting one's hands dirty in the evolution of the native interfaces is the requirement for mainline acceptance, and it is very difficult to achieve that through a level of indirection.

I want to thank Christoph Hellwig and James Bottomley for their review and recognize Dave Jiang, Jeff Skirvin, Ed Nadolski, Jacek Danecki, Maciej Trela, Maciej Patelczyk and the rest of the isci development team that accomplished this rework.


That's computer science, I guess

Posted Aug 12, 2011 21:19 UTC (Fri) by ncm (guest, #165) [Link] (3 responses)

All problems in computer engineering can be solved by another level of indirection -- except sluggishness. That's the difference between computer science and computer engineering. Understanding of that difference is what I probe in interviews.

That's computer science, I guess

Posted Aug 13, 2011 0:29 UTC (Sat) by ringerc (subscriber, #3071) [Link] (1 responses)

Except sluggishness, maintainability, reliability and compiled binary size.

Over-abstraction has plenty of costs. I've been guilty of it enough to know.

That's computer science, I guess

Posted Aug 14, 2011 19:36 UTC (Sun) by nowster (subscriber, #67) [Link]

Except sluggishness, maintainability, reliability, compiled binary size and a fanatical devotion to quoting Monty Python scripts.

I'll come in again...

That's computer science, I guess

Posted Aug 13, 2011 2:22 UTC (Sat) by elanthis (guest, #6227) [Link]

The real problem that most CS grads have is that they don't understand the difference between "adding layers of abstraction" and "coding to the abstraction."

More layers of abstraction is rarely useful. Building the _correct_ layers of abstraction is massively useful. If you feel that you need another layer, you should first check and make sure that the layer(s) you already have isn't just a poor design for the task at hand.

Avoiding the OS abstraction trap

Posted Aug 12, 2011 21:24 UTC (Fri) by dlang (guest, #313) [Link] (4 responses)

so what is the one line that didn't change in the process? :-)

Avoiding the OS abstraction trap

Posted Aug 12, 2011 21:38 UTC (Fri) by philipstorry (subscriber, #45926) [Link] (2 responses)

I have no idea.

But I'm guessing the smart money is on this one:
int main(void)

;-)

Avoiding the OS abstraction trap

Posted Aug 12, 2011 21:56 UTC (Fri) by dlang (guest, #313) [Link] (1 responses)

what, you mean it's not

}

? ;-)

it was just that the stats of the initial commit being 60,896 lines and the diffstat showing 60,895 lines deleted made me wonder.

Avoiding the OS abstraction trap

Posted Aug 13, 2011 13:13 UTC (Sat) by krisis (guest, #70792) [Link]

Renaming files counts as lines deleted so my guess is that the "sole survivor" line is in the makefile.

Avoiding the OS abstraction trap

Posted Aug 14, 2011 2:20 UTC (Sun) by csamuel (✭ supporter ✭, #2624) [Link]

/* FIXME */

:-)

Avoiding the OS abstraction trap

Posted Aug 12, 2011 22:07 UTC (Fri) by thedevil (guest, #32913) [Link] (21 responses)

That's a very happy ending for the Linux driver, but what happened to the other platforms? Does the team now have to maintain 2 or 3 separate drivers duplicating almost all of the code? If yes, was the ending really so happy?

That's why people attempt abstraction

Posted Aug 12, 2011 22:54 UTC (Fri) by JoeBuck (subscriber, #2330) [Link] (12 responses)

A device manufacturer typically wants to support not only Linux, but also BSD (e.g. Darwin/MacOSX) and Windows (various flavors), without writing three completely different drivers. Abstraction has its costs, but so does the approach outlined in the article.

That's why people attempt abstraction

Posted Aug 12, 2011 23:27 UTC (Fri) by djbw (subscriber, #78104) [Link] (7 responses)

Teams can collaborate and maybe find some highest-common-factor code to share, but at the end of the day each environment has its nuances that need special attention. The cost of getting the abstraction wrong is high, and the chance of getting abstraction layers right is low (for non-trivial drivers).

That's why people attempt abstraction

Posted Aug 13, 2011 3:00 UTC (Sat) by quotemstr (subscriber, #45331) [Link] (2 responses)

Yet, somehow, nvidia successfully develops its hugely complex drivers on top of an abstraction layer. In fact, these drivers often work better on Linux than bespoke drivers for other graphics hardware. I find it difficult to accept that writing a platform-specific driver for every platform is really the optimal approach.

That's why people attempt abstraction

Posted Aug 13, 2011 3:14 UTC (Sat) by mjg59 (subscriber, #23239) [Link] (1 responses)

nvidia benefit hugely from the fact that opengl is already a pretty effective abstraction layer, and the vast majority of their driver is devoted to turning opengl commands into the card's internal format. On the other hand, their lack of integration with the rest of Linux does carry costs. Their driver fails in a variety of situations where nouveau succeeds (handling multi-GPU laptops, systems where the EDID has to be obtained via ACPI, thermal monitoring, backlight control, integration with xrandr) because nouveau is able to take advantage of the infrastructure present in Linux. And nouveau's 2D performance is massively better than the nvidia blob, because nvidia have nothing to abstract 2D operations between Windows and Linux.

So it depends. If you want to do nothing but run 3D applications then their approach is successful, and if you want to do anything else their approach is dreadful.

That's why people attempt abstraction

Posted Aug 14, 2011 19:16 UTC (Sun) by mlankhorst (subscriber, #52260) [Link]

there are tales of horror in the nvidia glue code, look up os_smp_*_barrier in the glue code. Or their shotgun approach to cache flushing, wbinvd instead of a bunch of clflush calls. ;)

That's why people attempt abstraction

Posted Aug 13, 2011 19:18 UTC (Sat) by cpeterso (guest, #305) [Link]

Perhaps some platform-specific abstraction layers can be avoided if the driver design is inverted: have a cross-platform core that is called *from* the platform-specific code. The core can use event callbacks to invoke platform code.

The abstraction layers described in the article sound like they are focused on implementation details instead of capturing a higher-level "domain model". Admittedly, designing a platform-independent domain model for a kernel driver manipulating hardware sounds challenging.

https://secure.wikimedia.org/wikipedia/en/wiki/Domain-dri...
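
In a rough sketch of that inversion (all names hypothetical), the portable core would reach the OS only through callbacks handed to it by the platform glue:

    /* Hypothetical inverted design: a portable core that makes no
     * direct OS calls and reaches back through an ops table supplied
     * by the platform-specific glue. */
    struct core_ops {
            void (*complete_io)(void *os_ctx, int status);
    };

    struct core_ctx {
            const struct core_ops *ops;
            void *os_ctx;           /* opaque handle owned by the glue */
    };

    /* Platform code calls in; the core only ever calls back out. */
    static void core_handle_completion(struct core_ctx *core, int hw_status)
    {
            core->ops->complete_io(core->os_ctx, hw_status ? -1 : 0);
    }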

That's why people attempt abstraction

Posted Aug 16, 2011 21:01 UTC (Tue) by mjthayer (guest, #39183) [Link] (2 responses)

> Teams can collaborate and maybe find some highest-common-factor code to share, but at the end of the day each environment has its nuances that need special attention. The cost of getting the abstraction wrong is high, and the chance of getting abstraction layers right is low (for non-trivial drivers).

I could potentially imagine the review process of an "abstraction layer-based driver" going along the lines of "this abstraction is acceptable, this one is not, this one could be acceptable if it were changed a bit". You might then end up with things like some code which is generic for all other supported OSes (or most other? You might find that one or two other target OSes also benefit from having their own implementations) being re-implemented for Linux, but still keep your somewhat refactored core code OS-independent. And of course, like the Linux kernel API, your abstraction layer need not be set in stone and "right the first time". It can be refactored as problems are found or new needs appear.

I wonder though more generally whether it would be acceptable to the Linux kernel community to have code files in the kernel for which kernel.org is not the "primary" site? Otherwise there is little chance of a driver in the upstream kernel having any sort of shared core with other OSes.

That's why people attempt abstraction

Posted Aug 16, 2011 21:05 UTC (Tue) by dlang (guest, #313) [Link] (1 responses)

there is already code in the kernel that is primarily developed and maintained elsewhere.

however, the kernel developers don't feel much need to respect the idea of 'this code is read-only on kernel.org', if they make an interface change in the kernel, they will make that change even on these drivers that have their 'primary source' external to the kernel, and it's up to that external source to pick up the changes so that they don't break with the next merge request.

That's why people attempt abstraction

Posted Aug 18, 2011 20:34 UTC (Thu) by mjthayer (guest, #39183) [Link]

> there is already code in the kernel that is primarily developed and maintained elsewhere.
>
> however, the kernel developers don't feel much need to respect the idea of 'this code is read-only on kernel.org', if they make an interface change in the kernel, they will make that change even on these drivers that have their 'primary source' external to the kernel, and it's up to that external source to pick up the changes so that they don't break with the next merge request.

I suppose the easiest way around that would be to make sure that the code parts which are "kernel-external" only interface to other parts in the driver, and not to anything outside. Without letting those other parts become too much of a simple abstraction layer, see above. Another problem that I see though is coding style - having code shared between the kernel and something else in this way forces that something else to adopt the kernel coding style, which is likely not to be the same as their preferred style.

That's why people attempt abstraction

Posted Aug 13, 2011 11:44 UTC (Sat) by justincormack (subscriber, #70439) [Link] (2 responses)

Iscsi is not a device though, so that argument does not really apply.

That's why people attempt abstraction

Posted Aug 13, 2011 12:42 UTC (Sat) by ftc (subscriber, #2378) [Link] (1 responses)

The article is not about an iSCSI (SCSI over IP) driver, it's about the Intel C600 series chipset SAS controller, which goes by the somewhat unfortunate name of isci (with only one "s").

isc(s)i

Posted Aug 15, 2011 20:08 UTC (Mon) by wilck (guest, #29844) [Link]

> the somewhat unfortunate name of isci ...
... indeed. This name has caused more confusion than any other driver name I encountered. It's sort of a running gag already.

That's why people attempt abstraction

Posted Aug 15, 2011 4:04 UTC (Mon) by broonie (subscriber, #7078) [Link]

The problem you end up with is that you get a cross platform driver that doesn't work particularly well on any of the platforms; it ends up being non-idiomatic and often reimplementing the wheel.

Avoiding the OS abstraction trap

Posted Aug 13, 2011 9:24 UTC (Sat) by intgr (subscriber, #39733) [Link] (5 responses)

> Does the team now have to maintain 2 or 3 separate drivers duplicating
> almost all of the code?

So instead of one 60kLOC driver, they would now have three 20kLOC drivers. I'm not sure that's a bad thing.

Avoiding the OS abstraction trap

Posted Aug 14, 2011 2:12 UTC (Sun) by JoeBuck (subscriber, #2330) [Link] (4 responses)

... unless they decide to make two 20Kloc drivers and leave the one for Linux out because it's too much of a PITA.

Avoiding the OS abstraction trap

Posted Aug 14, 2011 11:31 UTC (Sun) by ebirdie (guest, #512) [Link] (3 responses)

Did you read the article, or are you lacking in understanding of the text, or am I? I'll try my shot.

On Linux there is no need to maintain any kloc for the isci driver, if the initial driver development is done the "Linux community way", i.e. shared, until the hardware gets revisions, which may need new bits and pieces added in by the vendor development team.

To put it another way, with oversimplification and a familiar sound: write once and forget. ;-)

Avoiding the OS abstraction trap

Posted Aug 14, 2011 16:01 UTC (Sun) by bfields (subscriber, #19510) [Link] (2 responses)

"Write once and forget" overstates the case--the driver will need some ongoing maintenance to keep up with the rest of the kernel, and it's hard for people who don't know the specific driver well to completely take over maintenance. At a bare minimum, you need someone who has access to the hardware and can test new kernels.

We certainly hope contributors can reduce their maintenance load, but if we lead them to expect it to be reduced to zero, then we risk ending up with a lot of bit-rotted drivers.

Avoiding the OS abstraction trap

Posted Aug 15, 2011 16:38 UTC (Mon) by misiu_mp (guest, #41936) [Link] (1 responses)

The point of this rewriting is that future maintenance will be possible by someone outside the original development team. So although you will probably need a person with the hardware to test it, it could be an ordinary savvy user. That's because the driver is now much simpler and similar to other drivers, instead of being the very specific and complicated cake of layers that it used to be. It's got a better bus factor.

Avoiding the OS abstraction trap

Posted Aug 16, 2011 6:20 UTC (Tue) by ebirdie (guest, #512) [Link]

"the driver is now much simpler and similar to other drivers, instead of being the very specific and complicated cake of layers"

I love this, because to me it implies that the piece of hardware (ie. a chip and its surrounding implementation) has better chances to shine on its own as hardware, rather than being leveled down to other similar hardware by a software development method that does not directly produce any good for me.

Avoiding the OS abstraction trap

Posted Aug 15, 2011 9:08 UTC (Mon) by linusw (subscriber, #40300) [Link] (1 responses)

I always nurtured the idea that we grossly overestimate the pain of writing code, especially concerning device drivers. Coding is easy, know-how is hard.

So writing three different device drivers tailored for three different OSes isn't necessarily that bad, as long as you have people who are actually comfortable with diving around all three codebases and making sure the know-how about the hardware is shared between all three codebases.

The key assumption I make is that if a developer focuses on one technical aspect at a time across several codebases, rather than on one specific codebase across several technical aspects at a time, the outcome may be better.

This may be true to varying extent in different contexts, it's just my intuitive feeling.

Avoiding the OS abstraction trap

Posted Aug 15, 2011 17:14 UTC (Mon) by misiu_mp (guest, #41936) [Link]

"Coding is easy, know-how is hard."
So true. To write a good driver you need to know both the hardware's secrets and the kernel's secrets. It is basically impossible to have a driver that does not need kernel secrets - which is what an abstracted driver tries to be (the exception being graphics drivers based on the OpenGL specs).

There are many more people with kernel knowledge than with knowledge of any particular piece of hardware. A well-documented driver, written in a language that is easy for kernel hackers to understand, will widen the potential circle of hardware-knowledgeable people. Once the know-how is out there or easy to reach, the writing is a formality.

libsas

Posted Aug 12, 2011 23:14 UTC (Fri) by dougg (guest, #1894) [Link]

Author and ex-maintainer: Luben Tuikov

Avoiding the OS abstraction trap

Posted Aug 12, 2011 23:25 UTC (Fri) by neilbrown (subscriber, #359) [Link] (2 responses)

Several good points but this is my favourite:

> The "community" is external to the development team

It is nonsense in much the same way that a "stable ABI" is nonsense.

Integrating with the rest of the community certainly requires an investment, and if you cannot afford the time, then "Arrang[ing] for a developer with at least 100 upstream commits to be permanently assigned to the development team" sounds like a suitable alternative investment strategy.

I can just see a new standard line in resumes:
> Linux Kernel Contributions: XXX commits over YYY releases.
:-)

Avoiding the OS abstraction trap

Posted Aug 13, 2011 5:58 UTC (Sat) by agrover (guest, #55381) [Link] (1 responses)

I definitely had a line like that on my resume, the last time I was job-seeking.

We don't want # of contributions to be a major career boost (otherwise people will game the system with more patches) but it does loosely correlate with something of value on a CV.

Avoiding the OS abstraction trap

Posted Aug 16, 2011 8:20 UTC (Tue) by Np237 (guest, #69585) [Link]

As long as these patches are useful additions or real bug fixes (which the acceptance guidelines should guarantee), this is not “gaming the system”. Quite the opposite actually, it means giving more incentive to improve the system.

Avoiding the OS abstraction trap

Posted Aug 13, 2011 1:05 UTC (Sat) by cesarb (subscriber, #6266) [Link]

Another reason to avoid abstraction layers like these is that it helps the evolution of both the kernel and the driver.

Without an abstraction layer padding things, you can compare the "shape" of the API desired by the driver with the "shape" of the API as given by the kernel, and see if they have a good "fit" (picture puzzle pieces trying to fit together). Places where both sides are not fitting together very well point to things which need to be changed either in the kernel or in the driver (or in both).

With an abstraction layer, the temptation is to just add more padding instead of adjusting both sides of the API. Given enough time, you will end up buried in foam.

Avoiding the OS abstraction trap

Posted Aug 15, 2011 22:39 UTC (Mon) by PaulWay (guest, #45600) [Link]

This is a really great article, Dan: well written, thorough, and with a great ending for both the isci driver and the kernel. This is why I pay my subscription to LWN!

Avoiding the OS abstraction trap

Posted Aug 16, 2011 7:32 UTC (Tue) by wilck (guest, #29844) [Link] (1 responses)

Dan, it is interesting to see you fully take the kernel maintainers' point of view, assuming that you're mainly paid for being a driver developer. But there's a flip side of the coin, too.

Am I mistaken in assuming that OS abstraction would have worked for your driver for all OSes except Linux? The technical problems you describe look valid, but they could have been solved with a few simple macros (the phys_to_virt() example was caused not by abstraction per se, but by an oversimplified abstraction resulting from coding for some other OS first).

What really matters here is not the technical reasons but the political ones. The maintainers don't like common code, compatibility layers and such. They don't care that you need to develop your driver for several OSes and several flavours of Linux. They want you to write the leanest possible code and if possible, do it in a general way that fixes problems or implements features in the entire subsystem you're working with. That's of course also an abstraction which causes further complexity for you, but this time the community benefits, so this kind of abstraction is not only welcome, it also helps you earn credit points.

You could have opted to develop your driver off-tree and focus on support for the main (enterprise) distribution kernels. You decided to be a good Linux citizen and buy into the 'Develop In Tree, Stable API Is Nonsense' doctrine. If I am reading correctly, the upstream acceptance process cost your group almost 6 months of work. The result is a leaner driver, but I assume that the common code base still exists (after all, the code for the other OSes needs to be maintained as well), so what you now have is the old code plus the new one, and the duty to maintain both.

In order to get your device actually supported for customers, you need to do back-porting to distribution kernels. I suspect that solving "platform problems" in upstream subsystems benefits you little for that task, with the OS kernels still including the old broken subsystem code. What do you do - try to convince distributions to pull in the modified subsystems, re-enable the former workarounds you made in your driver, or not support certain distributions at all?

You are in the lucky situation that your employer understands and accepts the development rules the community decreed. Other companies may see things differently. Your article illustrates beautifully that, contrary to common Linux myth, it's harder and takes longer to get a device supported under (upstream) Linux than on other platforms. Apart from having to adopt the kernels' development paradigm, it also requires a social exercise and lengthy trust-and-credit-earning process. You seem to have a boss who is willing to pay for that. Good for you.

Avoiding the OS abstraction trap

Posted Aug 18, 2011 15:06 UTC (Thu) by josh (subscriber, #17465) [Link]

The process tends to go much more smoothly if the developers start working with upstream sooner than the point where they have a full cross-platform abstraction-layer-based driver. Among other things, it means the platform can get fixed sooner, the driver workarounds never need to get written, and the development doesn't need to happen twice.

Talk to the kernel community during the design phase, not during the 'it's "done", please merge' phase.

Avoiding the OS abstraction trap

Posted Aug 16, 2011 8:27 UTC (Tue) by Np237 (guest, #69585) [Link] (20 responses)

OS abstraction is indeed a bad idea, and one that doesn’t depend on the involved OSes.

But in the issues described in this article, some are caused by OS abstraction, and others were caused by the “stable API nonsense” fallacy, which is a Linux-specific problem. Not having any kind of stable kernel interface makes it harder for device driver developers to work on their driver. Instead they have to work on the kernel itself, coping with its ever-changing needs to update their work instead of improving it.

I don’t mean that we should stop changing these APIs. But we should do so over larger timeframes and with a clear roadmap. Currently it’s just a giant mess and anyone can decide to rewrite large stacks without any care for the consequences on out-of-tree drivers.

Avoiding the OS abstraction trap

Posted Aug 18, 2011 1:56 UTC (Thu) by dmag (guest, #17775) [Link] (19 responses)

> I don’t mean that we should stop changing these APIs. But we should do so over larger timeframes and with a clear roadmap.

The 2.4 kernel series tried to do what you propose: many large, invasive patches were put off until later. Developers had to carry their patches around for years. Then, when 'later' came, there was much pain trying to stabilize many massive changes at once.

So during 2.6, they decided to optimize for kernel developers, not anonymous 3rd parties who aren't maintaining the kernel. Patches were integrated when they were ready, instead of at some artificial deadline. The USB API was rewritten to make drivers easier to write. The network API was rewritten to allow switching from interrupts to polling during high loads (saving a lot of CPU when it's needed most). A suspend/resume API was globally added to every driver. New APIs were created to better abstract memory management (especially under memory pressure). etc, etc, etc.

Huge swaths of the kernel are rewritten constantly, including internal "driver APIs".

> rewrite large stacks without any care for the consequences on out-of-tree drivers.

Be careful what you wish for. This is what economists call an externality.

The kernel developers want to refactor kernel code globally (without worrying about making all data structures and APIs back-compatible, which would take a lot of extra work.)

The 3rd parties (with outside code) want a stable API so they can do less work of trying to keep up with the changes in the kernel. The 3rd parties wouldn't pay any of the costs, since they (generally) aren't core kernel maintainers. The 3rd parties would literally choose to slow kernel development in order to make their lives easier.

Luckily, 3rd parties don't get to make the choice, the kernel developers do.

Avoiding the OS abstraction trap

Posted Aug 18, 2011 7:53 UTC (Thu) by Np237 (guest, #69585) [Link] (18 responses)

What you are describing is not the solution, it is the problem.

Many large-scale open source projects have found elegant solutions to similar problems. They let several APIs coexist for reasonable timeframes, phasing out unneeded ones and forcing people to migrate to new APIs after 1 or 2 years. I fail to see in what way the kernel would need to be different.

Said otherwise, I’m not asking for refactorings to wait for 3 years. I’m asking that they wait 6 months and get done cleanly.

Avoiding the OS abstraction trap

Posted Aug 18, 2011 9:14 UTC (Thu) by dmag (guest, #17775) [Link] (16 responses)

> Many large-scale open source projects have found elegant solutions to similar problems

Most plug-in APIs (like browser plug-ins) have a small 'surface area'. This is not true with kernel drivers. There is a massive number of internal APIs/datastructures. Trying to keep back-compatibility would slow development.

For example, if the kernel would be faster if we eliminated the "jiffies" variable, should we delay that change because someone somewhere _may_ have a proprietary driver that _might_ use it?

> They let several APIs coexist for reasonable timeframes

Either you refactor the API or you don't. If your code is required to be back-compatible for some time, then you can't _really_ refactor your code. You can only "create a new API, shuffle things around a little bit." You are in a straitjacket, prevented from making radical simplifying changes.

> forcing people to migrate to new APIs after 1 or 2 years.

That implies forcing people to keep their radical patches for 1 or 2 years.

> I’m not asking for refactorings to wait for 3 years.

Well, you just said 2. Even 1 year is too long in my book.

> I fail to see in what way the kernel would need to be different.

If an internal API is deemed to be defective, you are saying it would have to be supported for years. Today, it is simply deleted and rewritten, so the defective API can no longer cause problems. Much simpler, no maintenance burden. (For examples, read the excellent articles here on how the RCU API has changed through the years.)

You're asking the kernel developers to take on a *lot* more maintenance burden. But what do they get in return?

Avoiding the OS abstraction trap

Posted Aug 18, 2011 9:27 UTC (Thu) by Np237 (guest, #69585) [Link] (4 responses)

To claim that the current kernel development model means less resources is a fallacy. Compared to other open source projects of similar size (KDE, GNOME, Mozilla, LibreOffice), the Linux kernel involves around 10 times more developers per kloc. The very reason for requiring so many people is that the current model IS the burden. Every time you rewrite the SATA stack or the USB one, it involves an insane amount of work to update every single driver.

1 year is too long in your book? Sorry, but that’s nice for you, living in a pure-developer world where you can break everything every 2 weeks. There’s also a world out there where computers need to just work, and to do the same thing for decades.

Avoiding the OS abstraction trap

Posted Aug 18, 2011 9:56 UTC (Thu) by dmag (guest, #17775) [Link]

> To claim that the current kernel development model means less resources is a fallacy.

Well, I guess I (and the kernel developers) agree to disagree.

> Compared to other open source projects of similar size (KDE, GNOME, Mozilla, LibreOffice)

It's not news that an Operating System Kernel requires more work than a Word Processor or a Browser. Want to bet that Linux supports more CPU architectures than your Word Processor supports file formats?

> Every time you rewrite the SATA stack or the USB one, it involves an insane amount of work to update every single driver.

Heh. It only looks that way on TV. Real programmers write programs to do the programming for them.

http://lwn.net/Articles/315686/

> There’s also a world out there where computers need to just work and to do the same thing for decades.

Exactly. That's the job of the distributors. They fork the kernel and support it for as long as people will pay for it.

Avoiding the OS abstraction trap

Posted Aug 25, 2011 0:25 UTC (Thu) by chloe_zen (guest, #8258) [Link] (2 responses)

> To claim that the current kernel development model means less resources is a fallacy. Compared to other open source projects of similar size (KDE, GNOME, Mozilla, LibreOffice), the Linux kernel involves around 10 times more developers per kloc.
I don't know if that statistic is correct, but if it is, I'd attribute that to the difficulty of developing a kernel vs. developing an application. I've done both, at least a bit -- I identified and fixed an SMP-specific dentry race condition in NFS, and wrote the boot-time DHCP support -- so I'm not just guessing.

From The Tao of Programming, 3.3:

There was once a programmer who was attached to the court of the warlord of Wu. The warlord asked the programmer: "Which is easier to design: an accounting package or an operating system?"

"An operating system," replied the programmer.

The warlord uttered an exclamation of disbelief. "Surely an accounting package is trivial next to the complexity of an operating system," he said.

"Not so," said the programmer, "when designing an accounting package, the programmer operates as a mediator between people having different ideas: how it must operate, how its reports must appear, and how it must conform to the tax laws. By contrast, an operating system is not limited by outside appearances. When designing an operating system, the programmer seeks the simplest harmony between machine and ideas. This is why an operating system is easier to design."

The warlord of Wu nodded and smiled. "That is all good and well, but which is easier to debug?"

The programmer made no reply.

kloc

Posted Aug 25, 2011 0:31 UTC (Thu) by chloe_zen (guest, #8258) [Link] (1 responses)

Let us also consider the possibility that more effort per kloc means they're working harder to make better code.

Writing big code is easy! Writing small yet good code is much harder.

kloc

Posted Aug 25, 2011 4:41 UTC (Thu) by dlang (guest, #313) [Link]

when was the last time anyone found a million lines of dead code in the kernel (like what was removed by libreoffice from the openoffice.org codebase)?

how many of the 'easy' office suites that existed 10-20 years ago are still in use today?

Avoiding the OS abstraction trap

Posted Aug 18, 2011 10:26 UTC (Thu) by fuhchee (guest, #40059) [Link] (10 responses)

"But what do they get in return?"

Perhaps more drivers from companies who cannot afford to chase upstream, but who'd be willing to produce mediocre/working ones for a stabler API.

This is the problem, not a solution...

Posted Aug 18, 2011 15:43 UTC (Thu) by khim (subscriber, #9252) [Link] (9 responses)

Yeah, "more crappy drivers" is a real problem, and an unstable ABI solves this problem quite handily, so what are the advantages?

It's actually better not to have any drivers at all rather than to have crappy drivers - not just long-term, but also medium-term. If there are no drivers then someone will write them or the hardware will just be abandoned, but crappy drivers make users' lives miserable for a long, long time.

This is the problem, not a solution...

Posted Aug 18, 2011 15:47 UTC (Thu) by Np237 (guest, #69585) [Link] (6 responses)

Of course, this is why we have such excellent drivers for wi-fi and graphics cards and the ones with crappy drivers have all been abandoned.

Wait…

If you tried irony then you failed. Utterly and completely.

Posted Aug 18, 2011 16:11 UTC (Thu) by khim (subscriber, #9252) [Link] (5 responses)

WiFi is actually a pretty good example. We used to have a bunch of crappy WiFi drivers, but they were mostly abandoned and replaced.

As for video drivers... well, the vendors decided that they absolutely, positively need to produce crap - and that's why video drivers are the crappiest drivers on every platform. I think open-source drivers will eventually fix the problem, but this process is slow-going because most vendors are not just indifferent - they actively hurt these efforts.

If you tried irony then you failed. Utterly and completely.

Posted Aug 19, 2011 22:06 UTC (Fri) by zlynx (guest, #2285) [Link] (4 responses)

Yeah.

And for years (2004-2008) I used ndiswrapper and the WINDOWS wireless driver for my laptop because the Linux drivers either didn't work, crashed, broke suspend or changed their behavior. This seemed to vary with each kernel release.

As for the open source video drivers, for most of them I don't believe the problem is vendors hurting the effort. AMD and Intel have been supportive. I believe the problem is just not enough resources.

It needs a massive test effort that will verify correctness in all major OpenGL applications on each card version. It needs exhaustive OpenGL test suites that exercise all the code paths. That's a hundred test machines right there.

It needs dozens of developers who analyze OpenGL use by software and determine how to optimize the code. The commercial drivers often have optimized code paths for specific applications.

If you want to match Nvidia binary drivers, you have to do those things. It's expensive. Really expensive.

These are totally different issues...

Posted Aug 20, 2011 10:07 UTC (Sat) by khim (subscriber, #9252) [Link] (3 responses)

> And for years (2004-2008) I used ndiswrapper and the WINDOWS wireless driver for my laptop because the Linux drivers either didn't work, crashed, broke suspend or changed their behavior. This seemed to vary with each kernel release.

Yup, but this period stretched on for so long because "use the ndiswrapper" was the standard answer, and thus the native drivers were poorly tested and few developers spent time fixing them. This is an obvious trade-off: a stable ABI makes life easier for users short-term but does not improve the situation all that much long-term, yet it makes the life of developers miserable both short-term and long-term... and kernel developers rarely accept any decision which promises tiny short-term relief at the price of huge long-term pain. That's why Linux is still around and a lot of other OSes are not.

> It needs dozens of developers who analyze OpenGL use by software and determine how to optimize the code.

I couldn't care less about "optimized code paths". The fact that all these complex and fragile compilers live in the driver is madness. They don't belong there! Just like format converters for webcams don't belong there. The problems with drivers I do care about are the things the driver should do: set up the video mode, handle DMA, etc. And there are two problems with these needs:
1. There is no documentation: AMD's is probably the best, Intel is slightly worse, NVidia does not have anything.
2. These sequences change between releases (and sometimes between steppings) for no good reason.

Note that problem #2 is significantly worse, and it's brought to light entirely because Windows offers a stable ABI for driver writers. This means that decisions like "save 0.01% of the transistor budget and fix all the crap in drivers" look viable. This then leads to problems in both Windows and Linux, but as long as the thing more-or-less works in Windows, the chip can be sold.

> If you want to match Nvidia binary drivers, you have to do those things. It's expensive. Really expensive.

It's not hard to match the NVidia driver's quality. They may offer fast 3D in games - but I don't care about that. I do care about silly things like suspend, thermal monitoring, etc - and there the NVidia drivers are awful. Open-source drivers are better in this regard - but just barely; they are still awful, nothing like the [currently quite stable] WiFi drivers.

These are totally different issues...

Posted Aug 20, 2011 19:30 UTC (Sat) by BenHutchings (subscriber, #37955) [Link] (1 responses)

> This means that decisions like "save 0.01% of the transistor budget and fix all the crap in drivers" look viable.

The decision is more likely "don't delay tape-out to fix this hardware bug". And it has nothing to do with the ability to hide the workarounds in a proprietary driver. Like programs, all chips have bugs, and you can see workarounds for them in almost any Linux driver you care to look at.

I'm not talking about bugs...

Posted Aug 21, 2011 8:19 UTC (Sun) by khim (subscriber, #9252) [Link]

A lot of drivers contain workarounds for buggy hardware. I'm not talking about those. I'm talking about the fact that each new generation of GPU chips has totally different initialization sequences, and even basic operations often require a totally different driver.

AFAIK video drivers are more-or-less unique in this regard. An 802.11n WiFi card has a driver almost identical to what 802.11b had - even if the chip internals are wildly different. Yes, I know, GPUs are significantly more complicated... well, they are not more complicated than CPUs, and those can reuse older software just fine. About the only pieces of silicon comparably buggy were winmodems - and these, too, were brought to life by the promise to paper over hardware problems in software.

These are totally different issues...

Posted Aug 21, 2011 18:37 UTC (Sun) by Np237 (guest, #69585) [Link]

Yup, you point out the very problem. Once you care about speed (and at work, we do, a lot), you’re basically screwed. You’re forced to buy nVidia since they are the only ones with fast Linux drivers, and you get a shitload of bugs and no support for modern desktops.

This is the problem, not a solution...

Posted Aug 18, 2011 15:47 UTC (Thu) by fuhchee (guest, #40059) [Link] (1 responses)

"Yeah, "more crappy drivers" is real problem" ...

You are misparaphrasing me. Mediocre/working can be good enough for users. There exist other popular operating systems that demonstrate this quite well.

What "other popular operating systems" are you talking about?

Posted Aug 18, 2011 16:26 UTC (Thu) by khim (subscriber, #9252) [Link]

If you are talking about Windows then it's an unmitigated disaster: driver voodoo is a huge part of that culture - and it only exists because neither users nor hardware vendors can abandon this crazy platform. It's because it has a monopoly and a lot of software only works under Windows - not because it's such fun to use. Often drivers only work for one particular version of the OS (Windows 98 or XP or whatever), and even then they are a huge PITA.

And all other platforms have died for the lack of drivers: such diverse OSes as Solaris, Symbian and many many others were abandoned and replaced with Linux because Linux has the most drivers available.

Think Android: it's a brand-new OS and we all know Google does not like the GPL - yet it uses Linux, not NetBSD or something like that. Why? Because they needed a "bag of drivers" and Linux was the only sane choice.

Or think of RIM. They have chosen to switch to QNX. Well, QNX is a good OS, but it does use your favored "stable ABI" driver model... and surprisingly enough this means it does not have as many drivers as Linux, and the ones it does have are often worse. This means RIM's hardware selection is limited, and that means RIM is already dead: it's a question of "when", not a question of "if".

Avoiding the OS abstraction trap

Posted Aug 18, 2011 18:44 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link]

> What you are describing is not the solution, it is the problem.
I'm pretty sure the kernel maintainers would disagree violently; they would say the problem is the existence of out-of-tree drivers. They have said on numerous occasions that they are happy to write their own drivers for any hardware manufacturer that is willing to provide the documentation necessary to write them. If you want them to share their kernel code but don't want to return the favor by sharing your driver code, you have no standing to turn around and demand they change their development process to make your life easier.

OS abstraction - not always a trap

Posted Aug 18, 2011 2:07 UTC (Thu) by Eliot (guest, #19701) [Link] (13 responses)

Can I provide a counterexample?
The ALSA asihpi driver is about 90% shared code with the OS X and Windows drivers.

The private source is transformed by (scary) sed and python scripts into the public version visible in the kernel. Part of this process replaces OS-abstraction wrappers with their equivalent direct kernel calls. Datatypes and symbol names are also transformed to kernel style. Other-OS-specific ifdef blocks are removed using a simplified preprocessor.

The result is not particularly pretty, but it allows the driver to exist by minimizing the Linux-specific development and maintenance. It is much less likely that this driver would exist if the entire thing had to be rewritten and maintained separately.

OS abstraction - not always a trap

Posted Aug 18, 2011 7:55 UTC (Thu) by Np237 (guest, #69585) [Link] (1 responses)

What you describe makes me feel positively sure that it would be less costly to maintain the 3 drivers separately.

OS abstraction - not always a trap

Posted Aug 18, 2011 19:08 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

I shudder contemplating the maintenance of said "scary sed/Python/whatever" scripts, keeping up with Linux changes.

OS abstraction - not always a trap

Posted Aug 18, 2011 11:42 UTC (Thu) by cortana (subscriber, #24596) [Link] (7 responses)

Does this mean that the version distributed on kernel.org and by other distributors is not the "preferred form of modification"?

I guess the answer is irrelevant until the copyright owner of the code in question decides to turn nasty.

OS abstraction - not always a trap

Posted Aug 18, 2011 14:32 UTC (Thu) by cladisch (✭ supporter ✭, #50193) [Link] (6 responses)

> Does this mean that the version distributed on kernel.org and by other distributors is not the "preferred form of modification"?

Apparently not.

(For most hackers, the "preferred form of modification" would actually be a cloned git tree.)

> I guess the answer is irrelevant until the copyright owner of the code in question decides to turn nasty.

Which could be defended against with estoppel because the generation of the kernel code and then labelling it as GPL was done by the copyright owner.

OS abstraction - not always a trap

Posted Aug 19, 2011 2:22 UTC (Fri) by Eliot (guest, #19701) [Link] (5 responses)

>> Does this mean that the version distributed on kernel.org and by other distributors is not the "preferred form of modification"?

> Apparently not.

Of course it is the "preferred form" for anybody working with the kernel sources. It exists in its own right independent of our private version. The scripted conversion just saves us the process of porting changes manually when our version changes.
(note that the 'original' sources are also available under GPL licence)

There is no sign of anyone else trying to submit other than very minor patches to this code. If that happens we'll have to work out what to do...

> (For most hackers, the "preferred form of modification" would actually be a cloned git tree.)

>> I guess the answer is irrelevant until the copyright owner of the code in question decides to turn nasty.

I don't see how this is possible once the code has been released under GPL.

OS abstraction - not always a trap

Posted Aug 19, 2011 22:11 UTC (Fri) by giraffedata (guest, #1954) [Link] (4 responses)

> Which could be defended against with estoppel because the generation of the kernel code and then labelling it as GPL was done by the copyright owner.

Wrong code. We're talking about the copyright on the rest of the kernel. The owners of that have authorized folks to prepare and distribute derivative works only if they provide their source code in "preferred form." So if the generated code isn't preferred form, and a copyright owner gets nasty, there could be trouble both for the authors of the asihpi driver and anyone who distributes a kernel that contains it.

> It exists in its own right independent of our private version. The scripted conversion just saves us the process of porting changes manually when our version changes.

Note that you could say the same thing about an ELF version which the author compiles from C, and everyone agrees ELF is not "the preferred form."

The fact that the generated code is C, and presumably pretty normal C, is a much more persuasive argument that it is "the preferred form."

> > I guess the answer is irrelevant until the copyright owner of the code in question decides to turn nasty.

> I don't see how this is possible once the code has been released under GPL.

First, to be clear: "the code in question" is the Linux kernel in its entirety. And if the asihpi driver code is not in the preferred form, then the code in question has not been released - its release was conditional on everything lumped with it being available in preferred form.

OS abstraction - not always a trap

Posted Aug 20, 2011 19:34 UTC (Sat) by BenHutchings (subscriber, #37955) [Link] (3 responses)

> The owners of that have authorized folks to prepare and distribute derivative works only if they provide their source code in "preferred form."

And they have also made it clear that a "portable" version of the driver would not be the preferred form. So, where's the problem?

OS abstraction - not always a trap

Posted Aug 20, 2011 21:01 UTC (Sat) by giraffedata (guest, #1954) [Link] (2 responses)

> And they have also made it clear that a "portable" version of the driver would not be the preferred form. So, where's the problem?

How have they made that clear?

Every one of them has given a license to anyone who chooses to use it that is described in its entirety by one document: GPL V2. That document gives no detail at all on what "the preferred form of the work for making modifications to it" is.

And incidentally, if one of the kernel authors did decide to be more specific, he'd be violating some previous author's copyright, because of GPL's condition that one offer one's own additions under GPL.

OS abstraction - not always a trap

Posted Oct 1, 2011 9:48 UTC (Sat) by Duncan (guest, #6647) [Link] (1 responses)

> Every one of them has given a license to anyone who chooses to use it that is described in its entirety by one document: GPL V2.

AFAIK, the GPLv2 describes the collected work, and individual files/patches must of course have compatible licenses, but no, the GPLv2 does not describe "in its entirety" the license of every individual part, as some parts are more liberally licensed (2-Clause BSD, LGPLv2, GPLv2 or later instead of specific v2 as applies to the kernel as a whole, etc.).

Additionally, I do not believe the GPLv2 describing the license "in its entirety" is fully correct for even the kernel as a whole, due to Linus's specific waiver of the "derived work" claim for userspace as opposed to kernelspace, or as specifically worded in the kernel's GPLv2 preamble in the COPYING file, "user programs that use kernel services by normal system calls".

FWIW, your statement might have been "loosely acceptable" if not entirely accurate, had it not used the words "in its entirety", but it's clearly incorrect with the inclusion of the "entirety" claim, because that claim is clearly factually incorrect.

Finally, there's the various binary firmware blobs, some of which are still in the kernel, which don't really comply with the GPLv2 at all, and which make a lot of people nervous. However, they're gradually being moved out of the kernel, there's various "deblobbed" kernel sources available, and should someone legally force the issue, it's now to the point where they could be stripped and packaged separately with, probably less technical disturbance than for instance the switch to the two-digit kernel 3.0 numbering scheme caused... which might be why nobody's bothered asserting legal claim to force the issue, since it's hardly worth it now and the separation is gradually happening on its own anyway.

Meanwhile, just in case someone gets the wrong idea... I'm not a lawyer, nor do I even pretend to play one, anywhere. However, it's not like this actually requires a lawyer to see, since the waiver's clearly there in the COPYING file, individual files clearly carry their own various licenses, and the firmware issues are well known.

Not that many will likely read this, over a month after original publication of the parent article, but there's always the few coming in via Google, or following links in later LWN articles referring back to this one.

Duncan

OS abstraction - not always a trap

Posted Oct 2, 2011 0:39 UTC (Sun) by dlang (guest, #313) [Link]

I'll also point out that there are a lot of people who don't see binary blobs that are not executed on the main CPU as being any problem.

OS abstraction - not always a trap

Posted Aug 18, 2011 21:24 UTC (Thu) by djbw (subscriber, #78104) [Link] (1 responses)

Interesting... I briefly looked at COG (http://nedbatchelder.com/code/cog/) when I was still in the "maintain the abstraction" stance. But it was ultimately untenable. I cannot speak to sound drivers, but isci needs to inter-operate with libata, libsas/sas transport class, and the rest of the scsi midlayer. Duplicating all those common definitions was a big problem. Redoing work that an upper layer had already handled was another item. General non-idiomatic expressions and code organization got in the way...

OS abstraction - not always a trap

Posted Aug 19, 2011 2:28 UTC (Fri) by Eliot (guest, #19701) [Link]

> Interesting... I briefly looked at COG (http://nedbatchelder.com/code/cog/) when I was still in the "maintain the abstraction" stance. But it was ultimately untenable. I cannot speak to sound drivers, but isci needs to inter-operate with libata, libsas/sas transport class, and the rest of the scsi midlayer. Duplicating all those common definitions was a big problem. Redoing work that an upper layer had already handled was another item. General non-idiomatic expressions and code organization got in the way...

For the driver I refer to, the majority of it is 'below' the ALSA midlayer, and handles the detailed interface to the hardware. There is one ALSA-specific file that uses APIs provided by our shared code.
Sometimes this does get in the way, and the driver may slowly change to cut out the middle-man, but meanwhile, it is still usable.

And yes, I'm familiar with COG. We use it a bit, but not for this purpose.

OS abstraction - not always a trap

Posted Aug 26, 2011 14:50 UTC (Fri) by slashdot (guest, #22014) [Link]

A much simpler solution is to code the driver to the Linux interface, and implement the parts of that interface that are used on Windows and OS X as well.

