
Choosing between portability and innovation

Posted Mar 2, 2011 19:40 UTC (Wed) by mezcalero (subscriber, #45103)
Parent article: Choosing between portability and innovation

What I actually suggested in that interview was not so much that the BSDs should adopt the Linux APIs, but instead that people should just forget about the BSDs. Full stop.

In the first sentence the article declares "portability" a key concept in the open source ecosystem. That's a claim I don't agree with at all. "Portability" might be something rooted in the darker parts of Unix tradition but I don't see how this matters to open source at all. The fact that Unix was forked so often is a weak point of it, not a strong point. And only that forking put portability between OSes on the agenda at all. So it has something to do with the history of Unix, not with the idea of Free Software. And caring for portability slows down development.

I have a hard time understanding why "portability" in itself should be a good thing. It doesn't help us create better software or a better user experience. It just slows us down and makes things complicated. If you decide to care about portability you should have good reasons for it, but "just because" and "it's a key concept of open source" are bad and wrong reasons. (Good reasons sound more like: "I need to reach customers using Windows.")

There are a couple of other statements in this text I cannot agree with. Claiming that the fast pace of development on Linux was "a problem" is just bogus. It's neither a bad thing for Linux nor for BSD, since even the latter gets more features more quickly this way: in many areas the work on Linux does trickle down to the BSDs after all. What *really* might be a problem for BSD is that BSD development is otherwise really slow.

And on Linux, if we ever want to catch up with MacOS or Windows we probably even need a much faster pace.

And the last sentence of the article is actually the wrongest statement of them all: "The innovation of Linux inevitably comes at a price: Linux is the de facto Unix platform now, ..." -- wow, this is just so wrong. I'd claim right the contrary! The Unix landscape was splintered and balkanized during most of its history. We are finally overcoming that, and that is a good thing -- that is a fantastic thing. That's not a price you pay, that's a reason to party!

Lennart



Choosing between portability and innovation

Posted Mar 2, 2011 20:51 UTC (Wed) by rfunk (subscriber, #4054) [Link] (7 responses)

While I agree with the idea of making use of what Linux provides, especially in low-level software, what you said in this particular comment is wrong in an incredible number of ways.

For one thing, your historical interpretation dismisses the fact that Unix was expressly written in a portable language, which was revolutionary at the time, and without that would never have gone beyond DEC minicomputers. It also dismisses the benefits that occurred specifically because of forks/reimplementations such as BSD and Linux. You seem to think that Linux is the pinnacle of Unix evolution, and beyond it is nothing but better Linux, which seems rather short-sighted to me.

Alternative implementations of parts of a system improve the longevity of the system, since different implementations may end up adapting better to new circumstances down the road.

Portability has mattered to the free software ecosystem since day one. Just like no one company has all the best programmers, no one operating system has all the best programmers either. We can gain from people on the BSD platforms, and they can gain from us. Remember that much Free Software started on Solaris or other non-Linux platforms, and even now MacBooks are becoming increasingly popular as programming platforms, and there's no reason to shun those people. The Free Software Foundation releases software portable to Windows (with great derision from the OpenBSD folks), treating that software as a gateway drug to help lead users away from Windows.

Yes, we should take advantage of advancements in Linux, and push for more, but we also shouldn't completely throw away portability.

Choosing between portability and innovation

Posted Mar 2, 2011 21:04 UTC (Wed) by halla (subscriber, #14185) [Link] (1 responses)

Erm... Unix was written in C to be portable between various kinds of hardware, right? Not between various kinds of Unix. Linux is fantastically portable between various kinds of hardware, and so is Linux software. Even I, as a stupid application developer, am thankful to receive the odd ARM patch for my code...

Choosing between portability and innovation

Posted Mar 3, 2011 9:18 UTC (Thu) by roblucid (guest, #48964) [Link]

Look at Linux history and you'll see the feature sets applications depend on changing. Programs written to POSIX that keep out of system details haven't needed much maintenance.
Those sacrificing portability and delving too deep are the pesky things that keep breaking.

What's needed is a kernel/user space cooperation to develop good useful APIs, and make them easy to use for applications that can be later emulated on the BSDs with a different underlying implementation.

If it proves very difficult to write another implementation, that probably suggests the API is a crock of **** that won't stand the test of time and the ever-onward march of progress, but will instead be an ossified handicap to it. Good abstractions are things that can be built upon.

Choosing between portability and innovation

Posted Mar 2, 2011 21:05 UTC (Wed) by mezcalero (subscriber, #45103) [Link] (1 responses)

Don't mix up portability between CPU architectures and portability between OSes. This article is about the latter. Portability between CPU architectures is relatively easy and unproblematic in userspace, and much less of a headache (unless you hack assembly). I have no issue with CPU portability.

Choosing between portability and innovation

Posted Mar 2, 2011 21:19 UTC (Wed) by rfunk (subscriber, #4054) [Link]

In the Free Software world, I actually think the two types of portability are similar, have similar benefits, and are both important; they're both platforms on which our code runs, and ideally can be swapped out underneath. But only one sentence of what I wrote was about CPU portability.

Choosing between portability and innovation

Posted Mar 2, 2011 22:36 UTC (Wed) by airlied (subscriber, #9104) [Link] (2 responses)

The thing is, application portability support more likely came about because of fragmentation than as a means to allow fragmentation to happen.

Choosing between portability and innovation

Posted Mar 3, 2011 9:31 UTC (Thu) by roblucid (guest, #48964) [Link] (1 responses)

The AT&T and BSD strands of UNIX diverged on innovated features such as:

1) Reliable Signals
2) Virtual Memory
3) Shared Memory / Semaphores
4) Streams
5) FIFOs
6) TERMCAP & curses(3) (re-implemented by AT&T as termlib with enhancements)

Given the widely perceived fragmentation of Linux distros, with tweaked feature sets (for instance deb vs. rpm), it's naive to think you can write without regard to portability. Even with the FHS, applications end up having to account for cosmetic differences between distros.

Portability is a requirement; otherwise you prevent change and innovation, and trust me, you would not get much done stuck on Version 6 UNIX.

Look at the problems discussed with the move to IPv6 last month! Portability allows the whole FOSS ecosystem to evolve and adapt.

Choosing between portability and innovation

Posted Mar 3, 2011 23:49 UTC (Thu) by jmorris42 (guest, #2203) [Link]

> Portability allows the whole FOSS ecosystem to evolve and adapt.

Amen. Without portability we wouldn't be where we are now. More importantly, we will eventually be boned without it. Sooner or later Linux comes to the end of the road. The landscape will eventually change such that someone will realize it is time for a total redesign of what an OS is, and if nothing is portable that new effort will fail, leaving us in a dead end to wither away.

Choosing between portability and innovation

Posted Mar 2, 2011 20:54 UTC (Wed) by hamjudo (guest, #363) [Link] (9 responses)

While I don't use BSD, I do have many embedded Linux systems that can't run a modern desktop, or in some cases, any desktop. Good monitors cost a lot more than most of those systems, so I really want to be sitting at the machine with my only good monitors.

I don't see a huge loss in losing access to modern desktop features in a particular OS. I've been remotely accessing systems for more than 3 decades. I don't see any reason to stop doing that now.

Standards are a wonderful thing. They let us innovate, while continuing to work with the rest of the infrastructure. I can "work on" all of my headless systems all I want, without having to make them try to keep up with my desktop. Likewise, I get to use the latest and greatest desktop, without worrying about what it will break on my development systems.

Choosing between portability and innovation

Posted Mar 2, 2011 21:35 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (8 responses)

"While I don't use BSD, I do have many embedded Linux systems that can't run a modern desktop, or in some cases, any desktop. Good monitors cost a lot more than most of those systems, so I really want to be sitting at the machine with my only good monitors."

So write a web-interface for these systems. Or use ssh/telnet. What's the problem?

Getting both portability and innovation

Posted Mar 2, 2011 21:59 UTC (Wed) by hamjudo (guest, #363) [Link] (7 responses)

I should have written: "While I don't use BSD, I do have many embedded Linux systems that can't run a modern desktop, or in some cases, any desktop. Good monitors cost a lot more than most of those systems anyways, so I sit at the system with the best monitors, and access the development systems using X Windows."

Standards like X11, ssh, NFS, tar, and many more that I don't even realize I'm using, make remote operation easy and painless.

BSD developers who want the latest desktop software can use it, if they want it. It just means that they'll need a copy of Linux running on a real, or virtual host.

Getting both portability and innovation

Posted Mar 2, 2011 22:29 UTC (Wed) by nix (subscriber, #2304) [Link] (6 responses)

You mean the X11 which Wayland is trying to shoot in the head? I'm sure when half your apps are Wayland apps tied to your physical machine that remote work using them will be *such* a lot of fun.

But nobody needs remote access, anyway. That's obsolete, like portability.

Bah.

Getting both portability and innovation

Posted Mar 2, 2011 23:05 UTC (Wed) by einstein (subscriber, #2052) [Link] (2 responses)

> You mean the X11 which Wayland is trying to shoot in the head? I'm sure when half your apps are Wayland apps tied to your physical machine that remote work using them will be *such* a lot of fun.

On the contrary, Wayland will include X11 support -- but it won't be loaded by default. That makes sense, since 98% of Linux users never use the remote desktop features of X11 -- but for those who need it, it will be there. Think of OS X's X11 support, done right.

Getting both portability and innovation

Posted Mar 3, 2011 0:06 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link] (1 responses)

The death isn't going to come from lack of X11 support in the system, though. It's going to come when apps are written for Wayland rather than X11. Yes, legacy apps will still work with network transparency, but all the development effort will be done on apps that depend on Wayland-only features. The stuff that works over the network will die a slow death from bitrot.

Getting both portability and innovation

Posted Mar 3, 2011 9:34 UTC (Thu) by roblucid (guest, #48964) [Link]

Exactly! And pixel scraping won't work too well, when there's no desktop to scrape pixels off.

Getting both portability and innovation

Posted Mar 3, 2011 0:57 UTC (Thu) by drag (guest, #31333) [Link]

Every single day I use remote desktop applications at work. So do most people at my organization.

The clincher is that none of it depends on X... at all.

Getting rid of X as your display server does not eliminate the possibility of using remote applications. Nor does it remove the possibility of using X11 apps either.

It's mostly a red herring when discussing Wayland vs. the X server.

Besides all that...

X11 is obsolete, slow, and a poor match for today's technology. It could be something nice, but that would require X12, and if you ever noticed: nobody is working on that.

In fact I think that people are now using more remote applications than they ever did in the past. It's just that relatively few people actually use X11 networking to do it. It's a poor choice for a variety of reasons. I am not describing what I would like it to be... I'm just telling it the way it is. The remote access boat has sailed and its captain is named 'Citrix'.

I would like to change this fact, but X11 networking is not going to cut it.

wayland does not preclude application remoting

Posted Mar 3, 2011 1:28 UTC (Thu) by nwnk (guest, #52271) [Link] (1 responses)

I grow increasingly tired of this strawman.

No one has defined a remoting protocol specifically for a Wayland compositing environment yet. This is good, not bad, because it means we have the opportunity to define a good one. The tight binding between window system and presentation in X11 means that while purely graphical interaction is relatively efficient, any interaction between apps is disastrously bad, because X doesn't define IPC; it defines a generic communication channel in which you can build the IPC you want, which is a cacophony of round trips.

You want a compositing window system. You probably want your remoting protocol in the compositor. The compositing protocol and the remoting protocol should probably mesh nicely but they're not the same problem and conflating them is a fallacy.

You'll note that wayland also does not define a rendering protocol. In fact it goes out of its way not to. Yet for some reason, wayland is not accused of killing OpenGL, or killing cairo, or killing any other rendering API (besides core X11 I suppose). If anything, that people so tightly mentally bind remoting, rendering, and composition is a tribute to the worse-is-better success of X11.

Unlearn what you have learned.

wayland does not preclude application remoting

Posted Mar 3, 2011 16:38 UTC (Thu) by nix (subscriber, #2304) [Link]

I hadn't thought of putting remoting in the compositor. It seems... odd, but not fundamentally any odder than doing it in the thing which draws the graphics, and I suppose it should work, since the compositor sees everything flowing to and from the user. I suppose you could put all of X11 support in there as well and then not need to support remoting anywhere else. (Which makes me wonder why the X11 compatibility stuff isn't already being done that way... or is it?)

Choosing between portability and innovation

Posted Mar 2, 2011 22:42 UTC (Wed) by brugolsky (guest, #28) [Link] (3 responses)

Yes, and the biggest crying shame is that the networking setup in most distributions makes so little use of features (like traffic shaping) that have been available since Linux 2.2 in 1999! [ALT Linux and the /etc/net framework are an exception here.]

Choosing between portability and innovation

Posted Mar 3, 2011 16:39 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

Available... but documented? In English? Anywhere? (lartc.org, I suppose...)

Choosing between portability and innovation

Posted Mar 3, 2011 21:51 UTC (Thu) by xyzzy (guest, #1984) [Link]

I'd have to agree, last time (quite a few years ago) I tried to use the traffic shaping the documentation wasn't much help. The lartc howtos were great as cookbook "do it this way" examples, but if I started from scratch I was unable to write a tc ruleset that worked. That's not-worked as in wouldn't pass any traffic at all.

If there's better documentation now I'd love to know about it.
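[For readers hunting for the cookbook style the lartc material is praised for, a minimal egress-shaping recipe with tc looks like the sketch below. The device name and rates are illustrative assumptions, and the commands require root; this is a sketch, not a tuned ruleset.]

```shell
# Attach a token-bucket filter (tbf) as the root qdisc on eth0,
# limiting egress to 1 Mbit/s with a 32 kbit bucket and 400 ms latency cap.
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

# Show the qdisc and its statistics to confirm traffic is flowing through it.
tc -s qdisc show dev eth0

# Remove the shaping again, restoring the default qdisc.
tc qdisc del dev eth0 root
```

The tbf qdisc is the simplest case precisely because it is classless; the documentation problems people describe mostly start with classful qdiscs like htb and the filter syntax.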

Choosing between portability and innovation

Posted Mar 4, 2011 16:45 UTC (Fri) by jeremiah (subscriber, #1221) [Link]

LOL... amen. Gotta love the high black arts of Linux network administration. Maybe it's time for O'Reilly and Olaf to come out with a revised version of the Linux Network Administrator's Guide. My version is from 1995.

On Tradition of Portability in UNIX

Posted Mar 3, 2011 9:07 UTC (Thu) by roblucid (guest, #48964) [Link] (1 responses)

Portability was a key feature of UNIX itself from the early days. It was often simpler to port the OS to new architectures than to port applications to different OSes. That included moving from 16-bit minis to 32-bit mainframe-style machines, and application programs benefited from a sane and simple model for file handling (compared to most '60s and early-'70s OSes).

There's nothing wrong with components implementing a layer that takes advantage of system-specific features. The API provided by that layer should, however, be designed to be cleanly re-implementable and "cut" where much detail can be hidden, so that applications see reduced complexity and focus on the essentials rather than implementation details.

Exposing everything to applications just leads to a morass of complexity and poorly done, buggy reimplementations of the same old thing. The whole Linux audio story with OSS/ALSA and your difficulties with PulseAudio ought to show why it's VERY BAD to have implementation specifics leak into widely distributed applications.

Design of APIs is a KEY selling point, and if the BSDs truly have problems emulating a hot disk layer for the desktop, then it suggests that a highly coupled, badly abstracted implementation replaced HAL.

On Tradition of Portability in UNIX

Posted Mar 3, 2011 17:52 UTC (Thu) by dlang (guest, #313) [Link]

If I am not mistaken, the person calling for ignoring BSD and portability is the author of PulseAudio.

Choosing between portability and innovation

Posted Mar 3, 2011 9:08 UTC (Thu) by dgm (subscriber, #49227) [Link] (15 responses)

> it doesn't help us create better software or better user experience.

Wrong. It *does* help create better software. Every time I try to build my software with another compiler I discover things that can be done better. Every time I try to port my code to a new platform I find better abstractions for what I was trying to achieve.

Maybe you don't care about that. Maybe you don't care about good, solid code that works because it is properly written. Maybe you just care about getting the thing out and declare you're done?

I guess I will stay away from systemd for a while, then.

Choosing between portability and innovation

Posted Mar 3, 2011 11:00 UTC (Thu) by Seegras (guest, #20463) [Link] (4 responses)

> > it doesn't help us create better software or better user experience.
>
> Wrong. It *does* help create better software. Every time I try to build my
> software with another compiler I discover things that can be done better.
> Every time I try to port my code to a new platform I find better
> abstractions for what I was trying to achieve.

Absolutely.

I've got to hammer this home some more. Porting increases code quality.

("All the world is Windows" is, by the way, why games written for Windows are sometimes so shabby when released. Only after they've been ported to some other platform do they get patched to be usable on Windows as well.)

Choosing between portability and innovation

Posted Mar 3, 2011 12:32 UTC (Thu) by lmb (subscriber, #39048) [Link] (1 responses)

That is true: porting increases quality.

However, only to a point, because one ends up with ifdefs strewn through the code, or internal abstraction layers to cover the differences between supported platforms of varying ages. This does not indefinitely improve the quality, readability, or maintainability of the code.

Surely one should employ caution when using new abstractions, and clearly define one's target audience. And be open to clean patches that make code portable (if they come with a maintenance commitment).

But one can't place the entire burden of this on the main author or core team, which also has limited time and resources. Someone else can contribute compatibility libraries or port the new APIs to other platforms - it's open source, after all. ;-)

And quite frankly, a number of POSIX APIs suck. Signals are one such example, IPC is another, and let us not even get started about the steaming pile that is threads. Insisting that one sticks with these forever is not really in the interest of software quality. They are hard to get right, easy to get wrong, and have weird interactions; not exactly an environment in which quality flourishes.

If signalfd() et al for example really are so cool (and they look quite clean to me), maybe it is time they get adopted by other platforms and/or POSIX.

Choosing between portability and innovation

Posted Mar 3, 2011 17:53 UTC (Thu) by nix (subscriber, #2304) [Link]

Linux innovations like this routinely get adopted by POSIX (obviously not things like sysfs, but I would not be remotely surprised to find things like signalfd getting adopted, since it cleans up the horror of signal handling enormously), but new POSIX revisions are not frequent and even after that it takes a long time for new POSIX revisions to get implemented in some of the BSDs (and, for that matter, in Linux, in the rare case that it didn't originate the change).

Choosing between portability and innovation

Posted Mar 3, 2011 13:02 UTC (Thu) by epa (subscriber, #39769) [Link] (1 responses)

> Porting increases code quality.

By more than if you spent the same number of hours on some other development task without worrying about portability? Working on portability does have positive side effects even for the main platform, but it takes away programmer time from other things. So you have to decide if it's the best use of effort.

Choosing between portability and innovation

Posted Mar 3, 2011 17:54 UTC (Thu) by dlang (guest, #313) [Link]

Yes, because dealing with portability forces you to think more about the big picture, and you benefit from seeing different ways that things can be done.

Choosing between portability and innovation

Posted Mar 3, 2011 14:51 UTC (Thu) by mezcalero (subscriber, #45103) [Link] (9 responses)

If you want to improve the quality of your software the best thing you can do is actually to use a tool whose purpose is exactly that. So, sit down and reread your code, or sit down and use a static analyzer on it. But porting it to other systems is not the right tool for the job. It might find you a bug or two by side-effect. But if you want to find bugs then your time is much better invested in a tool that checks your code more comprehensively and actually looks for problems rather than just a small set of incompatibilities with your software.

Porting is in fact quite a bad tool for this job, since it primarily shows you issues that are of little interest on the platform you are actually interested in. Also, due to the need for abstraction, it complicates reading the code, and the glue code increases the chance of bugs, since it is more code to maintain.

So, yup. If you want to review your code, then go and review your code. If you want to statically analyze your code, then do so. However, by porting you will probably find fewer new issues than the porting itself introduces.

Choosing between portability and innovation

Posted Mar 3, 2011 17:43 UTC (Thu) by jensend (guest, #1385) [Link] (8 responses)

But static analyzers, etc won't help you realize how your API etc is badly designed and where it needs improving, while porting will. I know it's hard for you to conceive since you think that your gift of PulseAudio to mortals was the biggest deal since Prometheus gave fire to mortals, but not everybody thinks that it's terribly well designed. In its earlier versions, it was miserable to deal with.

The pattern keeps repeating itself: those who are developing new frameworks where portability is an afterthought tend to have tunnel vision, and the resulting design is awful. Sure, the software gets written, but it's only to be replaced by another harebrained API a couple of years down the road. This is what gets us the stupid churn which is one of the prime reasons the Linux desktop hasn't really gotten very far in the past decade. I'll give two examples:

If people had sat down and said "what should a modern Unix audio subsystem look like? What are the proper abstractions and interfaces regardless of what kernel is under the hood?" we wouldn't have had the awful trainwreck which is ALSA and half of the complications we have today would have been averted.

The only people doing 3d work on Linux who don't treat BSD as a third-class citizen are the nV developers. Not coincidentally, they're the only ones who have an architecture which works well enough to be consistently competitive with Windows. The DRM/Mesa stack has seen a dozen new acronyms come and go in the past few years, without much real improvement for end users. Frameworks have often been designed for Intel integrated graphics on Linux and only bludgeoned into working for other companies' hardware and - only recently- for other kernels. Even for Intel on Linux the result is crap.

Choosing between portability and innovation

Posted Mar 3, 2011 18:08 UTC (Thu) by nix (subscriber, #2304) [Link] (3 responses)

> The DRM/Mesa stack has seen a dozen new acronyms come and go in the past few years, without much real improvement for end users

Shaders and a shader compiler and acceleration and 3D support for lots of new cards isn't enough for you?

Choosing between portability and innovation

Posted Mar 4, 2011 3:30 UTC (Fri) by jensend (guest, #1385) [Link] (2 responses)

It's true that there have been a lot of changes and advances in hardware and in OpenGL; keeping up with these takes effort and new ideas.

But if you go back a decade to when Linux graphics performance and hardware support (Utah-GLX and DRI) were closer to parity with Windows, things weren't simple then either. There were dozens of hardware vendors rather than just three (and a half, if you want to count VIA), each chip was more radically different from its competitors and even from other chips by the same company, etc.

While there's been progress in an absolute sense, relative to Mac and Windows, Linux has lagged significantly in graphics over the past decade. Graphics support is a treadmill, and Linux has often been perilously close to falling off the back of it.

I don't mean to say the efforts of those working on the Linux X/Mesa stack alphabet soup have all been pointless; nor do I claim that all of the blame rests with them. The ARB deserves a lot of the blame for letting OpenGL stagnate so long. It's a real shame that other graphics vendors and developers from other Unices haven't taken a more active role in helping design and implement the graphics stack, and while I think more could have been done to solicit their input and design things with other hardware and kernels in mind, they're responsible for their own non-participation.

Choosing between portability and innovation

Posted Mar 4, 2011 8:53 UTC (Fri) by drag (guest, #31333) [Link]

It's because the graphics stack in Linux was not so much designed as puked up by accident. It's just something that has been cobbled together and extended to meet new needs instead of undergoing an entire rework like every other relevant OS (i.e. Windows and OS X). You have no fewer than four separate drivers running three separate graphics stacks. They all have overlapping jobs, use the same hardware, and drastically need to work together in a way that is nearly impossible.

It's one thing to treasure portability, but it's quite another when the OS you care about does not even offer the basic functionality that you need to run your applications and improve your graphics stack.

Forcing Linux developers to not only improve and fix Linux's problems, but also drag the BSD's kicking and screaming into the 21st century is completely unreasonable.

Ultimately if you really care about portability the BSD OSes are the least of your worries. It's OS X and Windows that actually matter.

Choosing between portability and innovation

Posted Mar 4, 2011 9:15 UTC (Fri) by airlied (subscriber, #9104) [Link]

The problem was while Windows and Mac OSX got major investments in graphics due to being desktop operating systems, Linux got no investment for years.

So while Linux was making major inroads into server technologies, there was no money behind desktop features such as graphics. I would guess that, compared to the manpower a single vendor puts into one cross-platform or Windows driver, open source development across drivers for all the hardware is about 10% of the size.

Choosing between portability and innovation

Posted Mar 3, 2011 19:33 UTC (Thu) by mezcalero (subscriber, #45103) [Link] (2 responses)

Uh, you got it all backwards. PA has been portable from the very beginning. I am pretty sure that this fact didn't improve the API in any way; in fact it's not really visible in the API at all. This completely destroys your FUD-filled example, doesn't it?

I think the major problem with the PA API is mostly its fully asynchronous nature, which makes it very hard to use. I am humble enough to admit that.

If you want to figure out if your API is good, then porting won't help you. Using it yourself however will.

From your comments I figure you have never bothered with hacking on graphics or audio stacks yourself, have you?

Choosing between portability and innovation

Posted Mar 3, 2011 22:59 UTC (Thu) by nix (subscriber, #2304) [Link]

Still, I think the asynchronous API was the right approach, if just because (as the PA simple API shows) it is possible to implement a synchronous API in terms of it, but not vice versa.

Mandatorily blocking I/O is a curse, even if nonblocking I/O is always trickier to use. Kudos for making the right choice here.

Choosing between portability and innovation

Posted Mar 4, 2011 4:19 UTC (Fri) by jensend (guest, #1385) [Link]

I'm no expert in this area, but I thought I remembered people grumbling that Pulse was cross-platform in name only and that it wasn't just a lack of people putting in time to make it work but also a number of design issues. I could be wrong. My main example here is ALSA, not Pulse.

Choosing between portability and innovation

Posted Mar 3, 2011 22:23 UTC (Thu) by airlied (subscriber, #9104) [Link]

nvidia are being paid money by a customer, AFAIK, to make stuff work on FreeBSD. If you pay me money I'll make any graphics card work on FreeBSD, but it would cost you a lot of money.

otherwise you have no clue what you are talking about. Please leave lwn and go back to posting comments on slashdot.

Choosing between portability and innovation

Posted Mar 8, 2011 0:49 UTC (Tue) by lacos (guest, #70616) [Link]

> people should just forget about the BSDs

You are promoting vendor lock-in. In this respect, it does not matter whether a given application is free software: unless the end user has the resources to make the port happen, he's forced to use the kernel/libc you have chosen for him.

Portability of application source code is about adhering to standards that were distilled with meticulous work, considering as many implementations as possible. The answer to divergent implementations is not killing all of them except one, and taking away the choice from the user. The answer is standardization, and a clearly defined set of extensions per implementation, and allowing the user to choose the platform (for whatever reasons) he'll run the application on.

> The Unix landscape was splintered and balkanized during most its history. We finally are overcoming that, and that is a good thing

You seem to intend to overcome diversity by becoming a monopoly. The freeness of software (free as in freedom) doesn't matter here, see above.

Choosing between portability and innovation

Posted Mar 11, 2011 14:04 UTC (Fri) by viro (subscriber, #7872) [Link]

Dear duhveloper. Do not use the worst misdesigns shoved down our throats by Scamantec and its ilk. Yes, I mean fanotify(). No, the fact that it's got in does *NOT* mean it's good to use or should be enabled. Sigh... Lusers - can't live with them, can't dispose of the bodies without violating hazmat regs...


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds