
Choosing between portability and innovation


Posted Mar 3, 2011 5:43 UTC (Thu) by drag (subscriber, #31333)
In reply to: Choosing between portability and innovation by josephrjustice
Parent article: Choosing between portability and innovation

> I can think of one reason why portability can be desirable, and why focusing development efforts solely on the Linux platform might be undesirable, that I haven't seen mentioned yet. Namely, on the grounds of avoiding an operating system software monoculture.

The thing is that an OS and desktop environment exist for the sole purpose of running applications. They're an abstraction layer of sorts, designed to facilitate the use and development of software.

Therefore, anything that makes the system more stable, makes developers' lives easier, makes users' lives easier, makes things run smoother or faster, and so on is worthwhile. Anything the OS does to improve the behavior of the programs running on it is a fantastic thing.

Avoiding monoculture for the sake of avoiding monoculture is an extremely dubious approach and just ends up making systems worse, not better.

> We know that Unix-like and Linux-based operating systems are known to have vulnerabilities too, and if we don't know this we can simply look at the ongoing series of vulnerabilities announced by CERT and listed in LWN every week.

The biggest problem for Linux on the desktop is that it's entirely unnecessary for somebody to actually root your system to do very significant damage to you. Your user account and /home/username/ are a soft target: that is where all your passwords are stored, where you carry out online commerce, where you communicate, and so on. Your only line of defense is not your OS but your applications. Firefox or Chrome is the software that keeps you safe, not the Linux kernel.

At least not yet. I'm hoping that Ubuntu's use of AppArmor (or Smack or SELinux) will eventually provide some layered security.

Some divergence and trying different approaches is valuable, but security does not happen by accident. It's not as if 'we are different, therefore we are secure'. Diversity has limited utility against 'dumb' automated attacks from very basic viruses and worms... but it provides almost no real benefit against real attacks other than by complete accident. Against an intelligent attacker with a directed focus it's just a paper tiger.

Windows security sucked not because everybody was using Windows 2000, but because Windows security was a pile of shit. Microsoft has since made huge strides, and security is no longer a significant selling point for Linux desktop usage, if it ever was.


Software is not a biological thing. Be careful of drawing conclusions from false analogies. If there is a problem with the software it's fixable. Not so much with DNA. At least not yet.

Especially if we use layered designs with (a minimal number of) formal APIs/ABIs. That way you can fix problems in one layer without perturbing the software above or below it, unless it's really necessary.

Remember the 'layered design' concept of Unix and TCP/IP?

That's exactly what the kernel does. The formal API/ABI layer between it and userspace creates a significant amount of freedom for the kernel to change and develop. Just don't break userspace, and developers can do almost anything they want if they are smart enough to pull it off. It's not perfect (sysfs), but that may not be a big deal as long as breakage is kept to a minimum.

Compare and contrast this with a non-layered approach like the X server from a couple of years ago. The same application that had access to your PCI bus to configure hardware also provided network services, provided your terminal services, and touched almost every single application you used. Even the stuff that ran in your xterms had to get its input from you filtered through X. Which also, of course, ran setuid root. It's not only extremely questionable design security-wise... it makes for a very fragile system.

Thank goodness for DRI2/KMS/GEM/TTM/etc...


Choosing between portability and innovation

Posted Mar 3, 2011 5:57 UTC (Thu) by dlang (subscriber, #313) [Link]

Linux was able to grow and prosper because existing code was portable and therefore Linux was able to run it.

If Sun had managed to get people programming just for its OS (the way people are now advocating programming only for Linux) back in the days when it was the premier OS, Linux would have had a much harder time getting started.

Linux developers today owe it to everyone (including themselves) not to unnecessarily raise the bar for the eventual Linux replacement.

That being said, having software take advantage of the latest features is a good thing, but the software should degrade gracefully in the absence of those features. This may mean falling back to something not as good, or it may mean disabling some features where there is no fallback.

Choosing between portability and innovation

Posted Mar 3, 2011 7:38 UTC (Thu) by airlied (subscriber, #9104) [Link]

So you should have a whole lot of fallbacks that nobody is testing? They will most likely bitrot to hell, since nobody runs them except maybe some hero once every 2-3 years.

Choosing between portability and innovation

Posted Mar 3, 2011 10:13 UTC (Thu) by roblucid (subscriber, #48964) [Link]

It's called error handling, a major pain yes but robust programs have it.

If applications are no longer designed that way, then when there's a call for Linux 3.0 for some currently unforeseeable reason, there'll be a terrible chicken-and-egg problem that will make the KDE 4.0 saga look minor.

The X redevelopments are actually a good example of the need for this, as they have NOT provided uninterrupted functionality to end users; many have complained about breakage and missing features over the last couple of years.

Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds