Choosing between portability and innovation
Posted Mar 2, 2011 19:40 UTC (Wed) by mezcalero (subscriber, #45103)
Parent article: Choosing between portability and innovation
In the first sentence the article declares "portability" a key concept in the open source ecosystem. That's a claim I don't agree with at all. "Portability" might be something rooted in the darker parts of Unix tradition but I don't see how this matters to open source at all. The fact that Unix was forked so often is a weak point of it, not a strong point. And only that forking put portability between OSes on the agenda at all. So it has something to do with the history of Unix, not with the idea of Free Software. And caring for portability slows down development.
I have a hard time understanding why "portability" in itself should be a good thing. It doesn't help us create better software or a better user experience. It just slows us down and makes things complicated. If you decide to care about portability you should have good reasons for it, but "just because" and "it's a key concept of open source" are bad and wrong reasons. (Good reasons sound more like: "I need to reach customers using Windows.")
There are a couple of other statements in this text I cannot agree with. Claiming that the fast pace of development on Linux was "a problem" is just bogus. It's neither a bad thing for Linux nor for BSD, since even the latter gets more features more quickly this way: in many areas the work on Linux does trickle down to the BSDs after all. What *really* might be a problem for BSD is that BSD development is otherwise really slow.
And on Linux, if we ever want to catch up with MacOS or Windows we probably even need a much faster pace.
And the last sentence of the article is actually the wrongest statement of them all: "The innovation of Linux inevitably comes at a price: Linux is the de facto Unix platform now, ..." -- wow, this is just so wrong. I'd claim quite the contrary! The Unix landscape was splintered and balkanized during most of its history. We finally are overcoming that, and that is a good thing -- that is a fantastic thing. That's not a price you pay, that's a reason to party!
Lennart
Posted Mar 2, 2011 20:51 UTC (Wed)
by rfunk (subscriber, #4054)
[Link] (7 responses)
For one thing, your historical interpretation dismisses the fact that Unix was expressly written in a portable language, which was revolutionary at the time, and without which it would never have gone beyond DEC minicomputers. It also dismisses the benefits that came specifically from forks/reimplementations such as BSD and Linux. You seem to think that Linux is the pinnacle of Unix evolution, and that beyond it is nothing but better Linux, which seems rather short-sighted to me.
Alternative implementations of parts of a system improve the longevity of the system, since different implementations may end up adapting better to new circumstances down the road.
Portability has mattered to the free software ecosystem since day one. Just like no one company has all the best programmers, no one operating system has all the best programmers either. We can gain from people on the BSD platforms, and they can gain from us. Remember that much Free Software started on Solaris or other non-Linux platforms, and even now MacBooks are becoming increasingly popular as programming platforms, and there's no reason to shun those people. The Free Software Foundation releases software portable to Windows (with great derision from the OpenBSD folks), treating that software as a gateway drug to help lead users away from Windows.
Yes, we should take advantage of advancements in Linux, and push for more, but we also shouldn't completely throw away portability.
Posted Mar 2, 2011 21:04 UTC (Wed)
by halla (subscriber, #14185)
[Link] (1 response)
Posted Mar 3, 2011 9:18 UTC (Thu)
by roblucid (guest, #48964)
[Link]
Those sacrificing portability and delving too deep are those pesky things that keep breaking.

What's needed is a kernel/user space cooperation to develop good useful APIs, and make them easy to use for applications, so that they can be later emulated on the BSDs with a different underlying implementation.

If it proves very difficult to write another implementation, that probably suggests the API is a crock of **** that won't stand the test of time and the ever onward march of progress, but will instead be an ossified handicap to future progress. Good abstractions are things that can be built upon.
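To illustrate with a concrete sketch (an editorial example, not from the original comment): a tiny "wait until this fd is readable" call is the kind of API that can be specified once and then implemented natively on each system, with epoll(7) on Linux and kqueue(2) on the BSDs hidden behind it.

    /* A minimal sketch of a re-implementable API: one contract,
     * two native back ends.  Error handling kept to a minimum. */
    #include <unistd.h>

    #if defined(__linux__)
    #include <sys/epoll.h>

    /* Returns 1 if fd is readable, 0 on timeout, -1 on error. */
    int wait_readable(int fd, int timeout_ms)
    {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
        int ep = epoll_create1(0);
        if (ep < 0)
            return -1;
        if (epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev) < 0) {
            close(ep);
            return -1;
        }
        int n = epoll_wait(ep, &ev, 1, timeout_ms);
        close(ep);
        return n;
    }
    #else /* the BSDs */
    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>

    /* Same contract, different underlying implementation. */
    int wait_readable(int fd, int timeout_ms)
    {
        struct kevent ev;
        struct timespec ts = { timeout_ms / 1000,
                               (timeout_ms % 1000) * 1000000L };
        int kq = kqueue();
        if (kq < 0)
            return -1;
        EV_SET(&ev, fd, EVFILT_READ, EV_ADD | EV_ONESHOT, 0, 0, NULL);
        int n = kevent(kq, &ev, 1, &ev, 1, &ts);
        close(kq);
        return n;
    }
    #endif

An application written against wait_readable() ports for free; only the small layer underneath is system-specific, which is exactly the kind of "cut" that keeps implementation detail out of applications.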
Posted Mar 2, 2011 21:05 UTC (Wed)
by mezcalero (subscriber, #45103)
[Link] (1 response)
Posted Mar 2, 2011 21:19 UTC (Wed)
by rfunk (subscriber, #4054)
[Link]
Posted Mar 2, 2011 22:36 UTC (Wed)
by airlied (subscriber, #9104)
[Link] (2 responses)
The thing is, application portability support more likely came about because of fragmentation than as a means to allow fragmentation to happen.
Posted Mar 3, 2011 9:31 UTC (Thu)
by roblucid (guest, #48964)
[Link] (1 response)
1) Reliable Signals
2) Virtual Memory
3) Shared Memory / Semaphores
4) Streams
5) FIFO's
6) TERMCAP & curses(3) (re-implemented by ATT as termlib with enhancements)
Given the widely perceived fragmentation of Linux distros, with tweaked feature sets, for instance deb vs. rpm, it's naive to think you can write without regard to portability. Even with the FHS, applications end up having to account for cosmetic differences between distros.

Portability is a requirement; otherwise you prevent change and innovation, and trust me, you would not get much done stuck on Version 6 UNIX.

Look at the problems discussed with the move to IPv6 last month! Portability allows the whole FOSS ecosystem to evolve and adapt.
Posted Mar 3, 2011 23:49 UTC (Thu)
by jmorris42 (guest, #2203)
[Link]
Amen. Without portability we wouldn't be where we are now. More importantly, we will eventually be boned without it. Sooner or later Linux comes to the end of the road. The landscape will eventually change such that someone will realize it is time for a total redesign of what an OS is, and if nothing is portable that new effort will fail, leaving us in a dead end to wither away.
Posted Mar 2, 2011 20:54 UTC (Wed)
by hamjudo (guest, #363)
[Link] (9 responses)
I don't see a huge loss in losing access to modern desktop features in a particular OS. I've been remotely accessing systems for more than 3 decades. I don't see any reason to stop doing that now.

While I don't use BSD, I do have many embedded Linux systems that can't run a modern desktop, or in some cases, any desktop. Good monitors cost a lot more than most of those systems, so I really want to be sitting at the machine with my only good monitors.

Standards are a wonderful thing. They let us innovate, while continuing to work with the rest of the infrastructure. I can "work on" all of my headless systems all I want, without having to make them try to keep up with my desktop. Likewise, I get to use the latest and greatest desktop, without worrying about what it will break on my development systems.
Posted Mar 2, 2011 21:35 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (8 responses)
So write a web-interface for these systems. Or use ssh/telnet. What's the problem?
Posted Mar 2, 2011 21:59 UTC (Wed)
by hamjudo (guest, #363)
[Link] (7 responses)
I should have written: "While I don't use BSD, I do have many embedded Linux systems that can't run a modern desktop, or in some cases, any desktop. Good monitors cost a lot more than most of those systems anyways, so I sit at the system with the best monitors, and access the development systems using X Windows."

Standards like X11, ssh, NFS, tar, and many more that I don't even realize I'm using, make remote operation easy and painless.

BSD developers who want the latest desktop software can use it, if they want it. It just means that they'll need a copy of Linux running on a real or virtual host.
Posted Mar 2, 2011 22:29 UTC (Wed)
by nix (subscriber, #2304)
[Link] (6 responses)
But nobody needs remote access, anyway. That's obsolete, like portability.
Bah.
Posted Mar 2, 2011 23:05 UTC (Wed)
by einstein (subscriber, #2052)
[Link] (2 responses)
On the contrary, Wayland will include X11 support, but it won't be loaded by default. That makes sense, since 98% of Linux users never use the remote desktop features of X11; but for those who need it, it will be there. Think of OS X's X11 support, done right.
Posted Mar 3, 2011 0:06 UTC (Thu)
by rgmoore (✭ supporter ✭, #75)
[Link] (1 response)

The death isn't going to come from lack of X11 support in the system, though. It's going to come when apps are written for Wayland rather than X11. Yes, legacy apps will still work with network transparency, but all the development effort will go into apps that depend on Wayland-only features. The stuff that works over the network will die a slow death from bitrot.
Posted Mar 3, 2011 9:34 UTC (Thu)
by roblucid (guest, #48964)
[Link]
Posted Mar 3, 2011 0:57 UTC (Thu)
by drag (guest, #31333)
[Link]
The clincher is that none of it depends on X... at all.
Getting rid of X as your display server does not eliminate the possibility of using remote applications. Nor does it remove the possibility of using X11 apps either.
It's mostly a red herring when discussing Wayland vs Xfree Server.
Besides all that...
X11 is obsolete, slow, and a poor match for today's technology. It could be something nice, but that would require X12, and if you ever noticed: nobody is working on that.

In fact I think that people are now using more remote applications than they ever did in the past. It's just that relatively few people actually use X11 networking to do it. It's a poor choice for a variety of reasons. I am not describing what I would like it to be... I am just telling it the way it is. The remote access boat has sailed, and its captain is named 'Citrix'.

I would like to change this fact, but X11 networking is not going to cut it.
Posted Mar 3, 2011 1:28 UTC (Thu)
by nwnk (guest, #52271)
[Link] (1 response)
No one has defined a remoting protocol specifically for a wayland compositing environment yet. This is good, not bad, because it means we have the opportunity to define a good one. The tight binding between window system and presentation in X11 means that while purely graphical interaction is relatively efficient, any interaction between apps is disastrously bad, because X doesn't define IPC; it defines a generic communication channel in which you can build the IPC you want, which is a cacophony of round trips.
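To make the round-trip point concrete, here is an editorial sketch (error handling omitted, DEST_PROP is an arbitrary property name) of the canonical X11 client-to-client exchange, fetching the CLIPBOARD selection; every numbered step is a synchronous trip through the server or a wait on the other client:

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        Window w = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                       0, 0, 1, 1, 0, 0, 0);

        /* 1: each atom lookup is a full round trip (unless cached). */
        Atom clipboard = XInternAtom(dpy, "CLIPBOARD", False);
        Atom utf8 = XInternAtom(dpy, "UTF8_STRING", False);
        Atom dest = XInternAtom(dpy, "DEST_PROP", False);

        /* 2: ask the owning client (another process!) to write the
         * data into a property on our window... */
        XConvertSelection(dpy, clipboard, utf8, dest, w, CurrentTime);

        /* 3: ...and wait while that client does its own round trips. */
        XEvent e;
        do {
            XNextEvent(dpy, &e);
        } while (e.type != SelectionNotify);

        /* 4: one more round trip to read the property back. */
        Atom type; int fmt; unsigned long len, left; unsigned char *data;
        XGetWindowProperty(dpy, w, dest, 0, 4096, True, AnyPropertyType,
                           &type, &fmt, &len, &left, &data);
        if (data) {
            printf("%.*s\n", (int)len, data);
            XFree(data);
        }
        XCloseDisplay(dpy);
        return 0;
    }

And this is the easy case; a selection larger than one transfer chunk switches to the ICCCM INCR protocol, which is nothing but further round trips.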
You want a compositing window system. You probably want your remoting protocol in the compositor. The compositing protocol and the remoting protocol should probably mesh nicely but they're not the same problem and conflating them is a fallacy.
You'll note that wayland also does not define a rendering protocol. In fact it goes out of its way not to. Yet for some reason, wayland is not accused of killing OpenGL, or killing cairo, or killing any other rendering API (besides core X11 I suppose). If anything, that people so tightly mentally bind remoting, rendering, and composition is a tribute to the worse-is-better success of X11.
Unlearn what you have learned.
Posted Mar 3, 2011 16:38 UTC (Thu)
by nix (subscriber, #2304)
[Link]
Posted Mar 2, 2011 22:42 UTC (Wed)
by brugolsky (guest, #28)
[Link] (3 responses)
Posted Mar 3, 2011 16:39 UTC (Thu)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Mar 3, 2011 21:51 UTC (Thu)
by xyzzy (guest, #1984)
[Link]
If there's better documentation now I'd love to know about it.
Posted Mar 4, 2011 16:45 UTC (Fri)
by jeremiah (subscriber, #1221)
[Link]
Posted Mar 3, 2011 9:07 UTC (Thu)
by roblucid (guest, #48964)
[Link] (1 response)
There's nothing wrong with components implementing a layer that takes advantage of system-specific features. The API provided by that layer should, however, be designed to be cleanly re-implementable, and "cut" where much detail can be hidden, so that applications see reduced complexity and focus on the essence of the task rather than implementation details.

Exposing everything to applications just leads to a morass of complexity and poorly done, buggy reimplementations of the same old thing. The whole Linux audio story with OSS/ALSA and your difficulties with PulseAudio ought to show why it's VERY BAD to have implementation specifics leak into widely distributed applications.

Design of APIs is a KEY selling point, and if the BSDs truly have problems emulating a hot disk layer for the desktop, then that suggests the implementation that replaced HAL is overly coupled and badly abstracted.
Posted Mar 3, 2011 17:52 UTC (Thu)
by dlang (guest, #313)
[Link]
Posted Mar 3, 2011 9:08 UTC (Thu)
by dgm (subscriber, #49227)
[Link] (15 responses)
Wrong. It *does* help create better software. Every time I try to build my software with another compiler I discover things that can be done better. Every time I try to port my code to a new platform I find better abstractions for what I was trying to achieve.
Maybe you don't care about that. Maybe you don't care about good, solid code that works because it is properly written. Maybe you just care about getting the thing out and declaring you're done?
I guess I will stay away from systemd for a while, then.
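(An editorial aside, since this claim is often met with skepticism: the classic example of a bug that only a port flushes out is the signedness of plain char, which the C standard leaves implementation-defined.)

    #include <stdio.h>

    int main(void)
    {
        /* Works on x86, where plain char is signed.  On ARM or
         * PowerPC, where plain char is unsigned, c can never hold
         * EOF (-1), so this loop never terminates.  getchar()
         * returns int, and c should be an int. */
        char c;
        while ((c = getchar()) != EOF)
            putchar(c);
        return 0;
    }

The code passes every test on the developer's machine; the port is what reveals it was wrong all along.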
Posted Mar 3, 2011 11:00 UTC (Thu)
by Seegras (guest, #20463)
[Link] (4 responses)
> Wrong. It *does* help create better software. Every time I try to build my software with another compiler I discover things that can be done better. Every time I try to port my code to a new platform I find better abstractions for what I was trying to achieve.

Absolutely.
I've got to hammer this home some more. Porting increases code quality.
("All the world is windows" is, by the way, why games written for windows are sometimes so shabby when released. And only after they've been ported to some other platform they get patched to be useable on windows as well).
Posted Mar 3, 2011 12:32 UTC (Thu)
by lmb (subscriber, #39048)
[Link] (1 response)
However, only to a point, because one ends up with ifdefs strewn through the code, or internal abstraction layers to cover the differences between supported platforms of varying ages. This does not indefinitely improve the quality, readability, or maintainability of the code.
Surely one should employ caution when using new abstractions, and clearly define one's target audience. And be open to clean patches that make code portable (if they come with a maintenance commitment).
But one can't place the entire burden of this on the main author or core team, which also has limited time and resources. Someone else can contribute compatibility libraries or port the new APIs to other platforms - it's open source, after all. ;-)
And quite frankly, a number of POSIX APIs suck. Signals are one such example, IPC is another, and let us not even get started about the steaming pile that is threads. Insisting that one sticks with these forever is not really in the interest of software quality. They are hard to get right, easy to get wrong, and have weird interactions; not exactly an environment in which quality flourishes.
If signalfd() et al for example really are so cool (and they look quite clean to me), maybe it is time they get adopted by other platforms and/or POSIX.
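(For readers who haven't met it: signalfd() turns signal delivery into bytes on a file descriptor that any event loop can poll, instead of asynchronous handlers with their reentrancy restrictions. A minimal sketch, Linux-only by definition:)

    #include <signal.h>
    #include <stdio.h>
    #include <sys/signalfd.h>
    #include <unistd.h>

    int main(void)
    {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGINT);
        sigaddset(&mask, SIGTERM);

        /* Block normal delivery; we read the signals from the fd
         * instead. */
        sigprocmask(SIG_BLOCK, &mask, NULL);

        int sfd = signalfd(-1, &mask, 0);
        if (sfd < 0)
            return 1;

        /* An ordinary read() in an ordinary event loop: no handlers,
         * no async-signal-safety worries, no self-pipe trick. */
        struct signalfd_siginfo si;
        while (read(sfd, &si, sizeof(si)) == sizeof(si)) {
            printf("got signal %u\n", si.ssi_signo);
            if (si.ssi_signo == SIGTERM)
                break;
        }
        close(sfd);
        return 0;
    }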
Posted Mar 3, 2011 17:53 UTC (Thu)
by nix (subscriber, #2304)
[Link]
Posted Mar 3, 2011 13:02 UTC (Thu)
by epa (subscriber, #39769)
[Link] (1 response)

> Porting increases code quality.

By more than if you spent the same number of hours on some other development task without worrying about portability? Working on portability does have positive side effects even for the main platform, but it takes away programmer time from other things. So you have to decide if it's the best use of effort.
Posted Mar 3, 2011 17:54 UTC (Thu)
by dlang (guest, #313)
[Link]
Posted Mar 3, 2011 14:51 UTC (Thu)
by mezcalero (subscriber, #45103)
[Link] (9 responses)
Porting is in fact quite a bad tool for this job, since it primarily shows you issues that are of little interest on the platform you actually care about. Also, due to the need for abstraction it complicates reading the code, and the glue code increases the chance of bugs, since it is more code to maintain.

So, yup. If you want to review your code, then go and review your code. If you want to statically analyze your code, then do so. By porting, however, you will probably find fewer new issues than the porting itself introduces.
Posted Mar 3, 2011 17:43 UTC (Thu)
by jensend (guest, #1385)
[Link] (8 responses)
The pattern keeps repeating itself: those who develop new frameworks where portability is an afterthought tend to have tunnel vision, and the resulting design is awful. Sure, the software gets written, but only to be replaced by another harebrained API a couple of years down the road. This is what gives us the stupid churn that is one of the prime reasons the Linux desktop hasn't really gotten very far in the past decade. I'll give two examples:
If people had sat down and said "what should a modern Unix audio subsystem look like? What are the proper abstractions and interfaces regardless of what kernel is under the hood?" we wouldn't have had the awful trainwreck which is ALSA and half of the complications we have today would have been averted.
The only people doing 3D work on Linux who don't treat BSD as a third-class citizen are the nV developers. Not coincidentally, they're the only ones with an architecture that works well enough to be consistently competitive with Windows. The DRM/Mesa stack has seen a dozen new acronyms come and go in the past few years, without much real improvement for end users. Frameworks have often been designed for Intel integrated graphics on Linux and only bludgeoned into working for other companies' hardware and, only recently, for other kernels. Even for Intel on Linux the result is crap.
Posted Mar 3, 2011 18:08 UTC (Thu)
by nix (subscriber, #2304)
[Link] (3 responses)
Posted Mar 4, 2011 3:30 UTC (Fri)
by jensend (guest, #1385)
[Link] (2 responses)
But if you go back a decade to when Linux graphics performance and hardware support (Utah-GLX and DRI) were closer to parity with Windows, things weren't simple then either. There were dozens of hardware vendors rather than just three (and a half, if you want to count VIA), each chip was more radically different from its competitors and even from other chips by the same company, etc.
While there's been progress in an absolute sense, relative to Mac and Windows, Linux has lagged significantly in graphics over the past decade. Graphics support is a treadmill; Linux has often been perilously close to falling off the back of it.
I don't mean to say the efforts of those working on the Linux X/Mesa stack alphabet soup have all been pointless; nor do I claim that all of the blame rests with them. The ARB deserves a lot of the blame for letting OpenGL stagnate so long. It's a real shame that other graphics vendors and developers from other Unices haven't taken a more active role in helping design and implement the graphics stack, and while I think more could have been done to solicit their input and design things with other hardware and kernels in mind, they're responsible for their own non-participation.
Posted Mar 4, 2011 8:53 UTC (Fri)
by drag (guest, #31333)
[Link]
It's one thing to treasure portability, but it's quite another when the OS you care about does not even offer the basic functionality that you need to run your applications and improve your graphics stack.
Forcing Linux developers to not only improve and fix Linux's problems, but also drag the BSDs kicking and screaming into the 21st century, is completely unreasonable.
Ultimately if you really care about portability the BSD OSes are the least of your worries. It's OS X and Windows that actually matter.
Posted Mar 4, 2011 9:15 UTC (Fri)
by airlied (subscriber, #9104)
[Link]
So while Linux was making major inroads into server technologies, there was no money behind desktop features such as graphics. I would guess that, compared to the manpower a single vendor puts into one cross-platform or Windows driver, open source development across drivers for all the hardware is about 10% the size.
Posted Mar 3, 2011 19:33 UTC (Thu)
by mezcalero (subscriber, #45103)
[Link] (2 responses)
I think the major problem with the PA API is mostly its fully asynchronous nature, which makes it very hard to use. I am humble enough to admit that.
If you want to figure out if your API is good, then porting won't help you. Using it yourself however will.
From your comments I figure you have never bothered with hacking on graphics or audio stacks yourself, have you?
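(To make the asynchronous-nature point concrete for bystanders, an editorial sketch of just the connection phase follows: the native API wants a mainloop, a context, and a state callback before you may even create a stream, and nothing below blocks.)

    #include <pulse/pulseaudio.h>
    #include <stdio.h>

    /* Called whenever the context changes state; even "connect to
     * the daemon" is an asynchronous operation in the native API. */
    static void state_cb(pa_context *c, void *userdata)
    {
        pa_mainloop *ml = userdata;

        switch (pa_context_get_state(c)) {
        case PA_CONTEXT_READY:
            printf("connected; now you may start creating streams\n");
            pa_mainloop_quit(ml, 0);
            break;
        case PA_CONTEXT_FAILED:
        case PA_CONTEXT_TERMINATED:
            pa_mainloop_quit(ml, 1);
            break;
        default: /* intermediate connection states; just wait */
            break;
        }
    }

    int main(void)
    {
        pa_mainloop *ml = pa_mainloop_new();
        pa_context *ctx = pa_context_new(pa_mainloop_get_api(ml), "demo");

        pa_context_set_state_callback(ctx, state_cb, ml);
        pa_context_connect(ctx, NULL, PA_CONTEXT_NOFLAGS, NULL);

        int ret = 0;
        pa_mainloop_run(ml, &ret); /* everything happens in callbacks */

        pa_context_unref(ctx);
        pa_mainloop_free(ml);
        return ret;
    }

Compare the blocking pa_simple wrapper, where connecting and writing are one call each; the boilerplate above is the price of an API that never blocks the caller.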
Posted Mar 3, 2011 22:59 UTC (Thu)
by nix (subscriber, #2304)
[Link]
Mandatorily blocking I/O is a curse, even if nonblocking I/O is always trickier to use. Kudos for making the right choice here.
Posted Mar 4, 2011 4:19 UTC (Fri)
by jensend (guest, #1385)
[Link]
Posted Mar 3, 2011 22:23 UTC (Thu)
by airlied (subscriber, #9104)
[Link]
otherwise you have no clue what you are talking about. Please leave lwn and go back to posting comments on slashdot.
Posted Mar 8, 2011 0:49 UTC (Tue)
by lacos (guest, #70616)
[Link]

> people should just forget about the BSDs

You are promoting vendor lock-in. In this respect, it does not matter if a given application is free software: unless the end user has the resources to make the port happen, he's forced to use the kernel/libc you have chosen for him.

Portability of application source code is about adhering to standards that were distilled with meticulous work, considering as many implementations as possible. The answer to divergent implementations is not killing all of them except one and taking away the choice from the user. The answer is standardization, a clearly defined set of extensions per implementation, and allowing the user to choose the platform (for whatever reasons) he'll run the application on.

> The Unix landscape was splintered and balkanized during most of its history. We finally are overcoming that, and that is a good thing

You seem to intend to overcome diversity by becoming a monopoly. The freeness of the software (free as in freedom) doesn't matter here; see above.
Posted Mar 11, 2011 14:04 UTC (Fri)
by viro (subscriber, #7872)
[Link]