
Choosing between portability and innovation

Posted Mar 3, 2011 9:08 UTC (Thu) by dgm (subscriber, #49227)
In reply to: Choosing between portability and innovation by mezcalero
Parent article: Choosing between portability and innovation

> it doesn't help us create better software or better user experience.

Wrong. It *does* help create better software. Every time I try to build my software with another compiler I discover things that can be done better. Every time I try to port my code to a new platform I find better abstractions for what I was trying to achieve.

Maybe you don't care about that. Maybe you don't care about good, solid code that works because it is properly written. Maybe you just care about getting the thing out the door and declaring you're done?

I guess I will stay away from systemd for a while, then.



Choosing between portability and innovation

Posted Mar 3, 2011 11:00 UTC (Thu) by Seegras (guest, #20463) [Link]

> > it doesn't help us create better software or better user experience.
>
> Wrong. It *does* help create better software. Every time I try to build my
> software with another compiler I discover things that can be done better.
> Every time I try to port my code to a new platform I find better
> abstractions for what I was trying to achieve.

Absolutely.

I've got to hammer this home some more. Porting increases code quality.

("All the world is Windows" is, by the way, why games written for Windows are sometimes so shabby when released. Only after they've been ported to some other platform do they get patched to be usable on Windows as well.)

Choosing between portability and innovation

Posted Mar 3, 2011 12:32 UTC (Thu) by lmb (subscriber, #39048) [Link]

That is true: porting increases quality.

However, only to a point: one ends up with ifdefs strewn through the code, or internal abstraction layers to paper over the differences between supported platforms of varying ages. Past that point, porting no longer improves the quality, readability, or maintainability of the code.

Surely one should employ caution when using new abstractions, and clearly define one's target audience. And be open to clean patches that make code portable (if they come with a maintenance commitment).

But one can't place the entire burden of this on the main author or core team, which also has limited time and resources. Someone else can contribute compatibility libraries or port the new APIs to other platforms - it's open source, after all. ;-)

And quite frankly, a number of POSIX APIs suck. Signals are one such example, IPC is another, and let us not even get started about the steaming pile that is threads. Insisting that one sticks with these forever is not really in the interest of software quality. They are hard to get right, easy to get wrong, and have weird interactions; not exactly an environment in which quality flourishes.

If signalfd() et al for example really are so cool (and they look quite clean to me), maybe it is time they get adopted by other platforms and/or POSIX.

Choosing between portability and innovation

Posted Mar 3, 2011 17:53 UTC (Thu) by nix (subscriber, #2304) [Link]

Linux innovations like this routinely get adopted by POSIX (obviously not things like sysfs, but I would not be remotely surprised to see signalfd adopted, since it cleans up the horror of signal handling enormously). But new POSIX revisions are infrequent, and even then it takes a long time for a new revision to be implemented in some of the BSDs (and, for that matter, in Linux, in the rare case that the change didn't originate there).

Choosing between portability and innovation

Posted Mar 3, 2011 13:02 UTC (Thu) by epa (subscriber, #39769) [Link]

> Porting increases code quality.

By more than if you spent the same number of hours on some other development task, without worrying about portability? Working on portability does have positive side effects even for the main platform, but it takes programmer time away from other things. So you have to decide whether it's the best use of effort.

Choosing between portability and innovation

Posted Mar 3, 2011 17:54 UTC (Thu) by dlang (subscriber, #313) [Link]

Yes, because dealing with portability forces you to think more about the big picture, and exposes you to different ways that things can be done.

Choosing between portability and innovation

Posted Mar 3, 2011 14:51 UTC (Thu) by mezcalero (subscriber, #45103) [Link]

If you want to improve the quality of your software the best thing you can do is actually to use a tool whose purpose is exactly that. So, sit down and reread your code, or sit down and use a static analyzer on it. But porting it to other systems is not the right tool for the job. It might find you a bug or two by side-effect. But if you want to find bugs then your time is much better invested in a tool that checks your code more comprehensively and actually looks for problems rather than just a small set of incompatibilities with your software.

Porting is in fact quite a bad tool for this job, since it primarily shows you issues that are of little interest on the platform you actually care about. Also, the need for abstraction complicates reading the code, and the glue code increases the chance of bugs, since it is more code to maintain.

So, yup. If you want to review your code, then go and review your code. If you want to statically analyze your code, then do so. Porting, however, will probably find you fewer new issues than it introduces.

Choosing between portability and innovation

Posted Mar 3, 2011 17:43 UTC (Thu) by jensend (guest, #1385) [Link]

But static analyzers, etc won't help you realize how your API etc is badly designed and where it needs improving, while porting will. I know it's hard for you to conceive since you think that your gift of PulseAudio to mortals was the biggest deal since Prometheus gave fire to mortals, but not everybody thinks that it's terribly well designed. In its earlier versions, it was miserable to deal with.

The pattern keeps repeating itself: those who develop new frameworks where portability is an afterthought tend to have tunnel vision, and the resulting design is awful. Sure, the software gets written, but only to be replaced by another harebrained API a couple of years down the road. This is what gets us the stupid churn that is one of the prime reasons the Linux desktop hasn't really gotten very far in the past decade. I'll give two examples:

If people had sat down and said "what should a modern Unix audio subsystem look like? What are the proper abstractions and interfaces regardless of what kernel is under the hood?" we wouldn't have had the awful trainwreck which is ALSA and half of the complications we have today would have been averted.

The only people doing 3d work on Linux who don't treat BSD as a third-class citizen are the nV developers. Not coincidentally, they're the only ones who have an architecture which works well enough to be consistently competitive with Windows. The DRM/Mesa stack has seen a dozen new acronyms come and go in the past few years, without much real improvement for end users. Frameworks have often been designed for Intel integrated graphics on Linux and only bludgeoned into working for other companies' hardware and, only recently, for other kernels. Even for Intel on Linux the result is crap.

Choosing between portability and innovation

Posted Mar 3, 2011 18:08 UTC (Thu) by nix (subscriber, #2304) [Link]

> The DRM/Mesa stack has seen a dozen new acronyms come and go in the past few years, without much real improvement for end users

Shaders and a shader compiler and acceleration and 3D support for lots of new cards isn't enough for you?

Choosing between portability and innovation

Posted Mar 4, 2011 3:30 UTC (Fri) by jensend (guest, #1385) [Link]

It's true that there have been a lot of changes and advances in hardware and in OpenGL; keeping up with these takes effort and new ideas.

But if you go back a decade to when Linux graphics performance and hardware support (Utah-GLX and DRI) were closer to parity with Windows, things weren't simple then either. There were dozens of hardware vendors rather than just three (and a half, if you want to count VIA), each chip was more radically different from its competitors and even from other chips by the same company, etc.

While there's been progress in an absolute sense, relative to Mac and Windows, Linux has lagged significantly in graphics over the past decade. Graphics support is a treadmill, and Linux has often been perilously close to falling off the back of it.

I don't mean to say the efforts of those working on the Linux X/Mesa stack alphabet soup have all been pointless; nor do I claim that all of the blame rests with them. The ARB deserves a lot of the blame for letting OpenGL stagnate so long. It's a real shame that other graphics vendors and developers from other Unices haven't taken a more active role in helping design and implement the graphics stack, and while I think more could have been done to solicit their input and design things with other hardware and kernels in mind, they're responsible for their own non-participation.

Choosing between portability and innovation

Posted Mar 4, 2011 8:53 UTC (Fri) by drag (subscriber, #31333) [Link]

It's because graphics on Linux was not so much designed as puked up by accident. It has just been cobbled together and extended to meet new needs, instead of undergoing an entire rework like every other relevant OS (i.e. Windows and OS X). You have no fewer than 4 separate drivers running 3 separate graphics stacks. They all have overlapping jobs, use the same hardware, and badly need to work together in a way that is nearly impossible.

It's one thing to treasure portability, but it's quite another when the OS you care about does not even offer the basic functionality that you need to run your applications and improve your graphics stack.

Forcing Linux developers to not only improve and fix Linux's problems, but also drag the BSD's kicking and screaming into the 21st century is completely unreasonable.

Ultimately if you really care about portability the BSD OSes are the least of your worries. It's OS X and Windows that actually matter.

Choosing between portability and innovation

Posted Mar 4, 2011 9:15 UTC (Fri) by airlied (subscriber, #9104) [Link]

The problem was while Windows and Mac OSX got major investments in graphics due to being desktop operating systems, Linux got no investment for years.

So while Linux was making major inroads into server technologies, there was no money behind desktop features such as graphics. I would guess that, compared to the manpower a single vendor puts behind a single cross-platform or Windows driver, open source development across drivers for all the hardware is about 10% the size.

Choosing between portability and innovation

Posted Mar 3, 2011 19:33 UTC (Thu) by mezcalero (subscriber, #45103) [Link]

Uh, you got it all backwards. PA has been portable from the very beginning. I am pretty sure that this fact didn't improve the API in any way, in fact it's not really visible in the API at all. This completely destroys your FUD-filled example, doesn't it?

I think the major problem with the PA API is mostly its fully asynchronous nature, which makes it very hard to use. I am humble enough to admit that.

If you want to figure out if your API is good, then porting won't help you. Using it yourself however will.

From your comments I figure you have never bothered with hacking on graphics or audio stacks yourself, have you?

Choosing between portability and innovation

Posted Mar 3, 2011 22:59 UTC (Thu) by nix (subscriber, #2304) [Link]

Still, I think the asynchronous API was the right approach, if just because (as the PA simple API shows) it is possible to implement a synchronous API in terms of it, but not vice versa.

Mandatorily blocking I/O is a curse, even if nonblocking I/O is always trickier to use. Kudos for making the right choice here.

Choosing between portability and innovation

Posted Mar 4, 2011 4:19 UTC (Fri) by jensend (guest, #1385) [Link]

I'm no expert in this area, but I thought I remembered people grumbling that Pulse was cross-platform in name only and that it wasn't just a lack of people putting in time to make it work but also a number of design issues. I could be wrong. My main example here is ALSA, not Pulse.

Choosing between portability and innovation

Posted Mar 3, 2011 22:23 UTC (Thu) by airlied (subscriber, #9104) [Link]

nvidia are being paid money by a customer, AFAIK, to make stuff work on FreeBSD. If you pay me money I'll make any graphics card work on FreeBSD, but it would cost you a lot of money.

otherwise you have no clue what you are talking about. Please leave lwn and go back to posting comments on slashdot.


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds