
Ubuntu's multisearch surprise


Posted Aug 7, 2009 23:51 UTC (Fri) by MattPerry (guest, #46341)
Parent article: Ubuntu's multisearch surprise

This issue is just a symptom of the problem of distributions packaging applications themselves rather than focusing on a simple base operating system and letting users get applications from the application provider. I love Linux for servers, but I hate it for the desktop. I've tried Ubuntu, Fedora, and a few others, and I'm a paying Red Hat user at work, but they all suffer from the same problem: the distribution packages the applications instead of leaving that to the application provider. Worse, they fiddle with them and change them before packaging them, so what you get may not be what the developers wrote. Just look at the mess that was caused when a Debian developer screwed around with OpenSSL. Or how Red Hat hosed Perl's performance and didn't fix the issue for years. If users are going to use Ubuntu's packaged Firefox, they can hardly act surprised when Ubuntu makes a change, even one that benefits them.

Packaging the applications as part of the distro is great for newbies who want a controlled environment and experience à la Apple. Packages are mostly irrelevant to Linux experts who want to do their own system administration and are happy to compile and install applications themselves. But power users like me, in the middle of that spectrum, have to choose between the two extremes. I administer systems and compile software all day long at work. I don't want to have to do it when I come home, as that's time I won't get to spend with my family. At the same time, I don't want the versions of my applications dictated to me by my distro.

It sucks when I upgrade my Ubuntu system to the latest release and it upgrades all of my applications to new versions, even if I didn't want that. The converse is also true: just because I want to stick with a "long term stable" release of Ubuntu doesn't mean that I don't want to upgrade my music player or OpenOffice to the latest version.

It's disheartening that the community isn't addressing this issue. Maybe it's just me and I'm the only person who wants the separation between maintaining the OS and maintaining the apps that users of other operating systems enjoy. When using a Linux distro for my desktop, I miss the convenience of downloading an installer for the version of an app I choose, clicking on the icon, and installing it. If we could reach that point, the drama that spawned this LWN article likely wouldn't have occurred.

Note: I know each distro has different libraries and there are different architectures, but the linux community is really good at solving technical problems. I don't see that as something that can't be overcome. Also, if you're planning on posting an immature comment such as "Maybe you should ask your vendor for a refund," there is a story at Slashdot on this same Ubuntu issue where you can post such comments. I'd rather have an adult discussion here on LWN about the merits of this issue, or lack thereof.



Ubuntu's multisearch surprise

Posted Aug 8, 2009 0:48 UTC (Sat) by ms (subscriber, #41272) [Link] (11 responses)

I think what you're after is NixOS.

I broadly agree with the gist of what you're saying, which is that self-compiled packages and prebuilt binaries don't mix in a system. I think that's not enormously related to this article though.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 3:11 UTC (Sat) by MattPerry (guest, #46341) [Link] (10 responses)

> I think that's not enormously related to this article though.

I disagree. This entire problem exists because there is a middleman between the software vendor, Mozilla, and the end user. Only those with significant technical knowledge can successfully circumvent that middleman. Finding a way for users to easily bypass the middleman without being a linux expert will eliminate these problems in the future.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 9:20 UTC (Sat) by ikm (guest, #493) [Link] (6 responses)

With Mozilla, that's actually not true. Go to the download page and see that what you get there is essentially a precompiled binary. Untar it and run it.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 15:33 UTC (Sat) by MattPerry (guest, #46341) [Link] (5 responses)

> With Mozilla, that's actually not true. Go to the download page and see
> that what you get there is essentially a precompiled binary. Untar it and
> run it.

Right, so why is Ubuntu shipping that binary rather than leaving it up to the user to get it for themselves?

Ubuntu's multisearch surprise

Posted Aug 8, 2009 17:25 UTC (Sat) by ikm (guest, #493) [Link]

How else are they supposed to inject a multisearch addon there?

Ubuntu's multisearch surprise

Posted Aug 10, 2009 0:35 UTC (Mon) by ringerc (subscriber, #3071) [Link] (1 responses)

Because Linux systems aren't always binary compatible. There's no guarantee that the FF binary shipped by Mozilla will actually run on a given distro. It's rather difficult for the FF folks to ensure that the binary is as widely compatible as it is, and they make some sacrifices in the process. For example, they bundle their own versions of quite a few libraries, which may not be built with the same settings as other apps' bundled versions and thus may not be perfectly compatible. FF's NSS library is a good example of this.

If there's a security hole in libpng, on a distro-shipped app all you need to do is update one package. If you're using vendor-supplied packages you need to find out which of your apps are affected and manually update them.

Why not share a common libpng, you ask? Well, in the case of libpng that's not unreasonable, but not all libraries have strong API/ABI compatibility guarantees. One app may need version 1.1 and another needs version 1.2.

OK, so parallel install the libraries. Sounds ok, right? Well, it does until you realize that the 1.1 and 1.2 libraries are _both_ linked into one process via a 3rd-hand library dependency, and everything falls in a heap. To work around this you need strict symbol versioning, and even then it's far from reliable.
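Strict symbol versioning, as mentioned above, means tagging each exported symbol with a version node at link time so the dynamic linker can tell the two copies apart. A minimal sketch of a GNU ld version script (the library and symbol names are invented for illustration):

```
# libfoo.map -- hypothetical version script for the 1.2 build
LIBFOO_1.2 {
    global:
        foo_process;    # exported under the LIBFOO_1.2 version node
    local:
        *;              # everything else stays private to the library
};
```

Linking with `gcc -shared -Wl,--version-script=libfoo.map ...` then lets one process hold both a LIBFOO_1.1 and a LIBFOO_1.2 `foo_process` at once. As the comment notes, though, this only works if every library in the dependency chain versions its symbols consistently.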

I see two ways to tackle app distribution. One way is for the app to be completely self-contained, including all dependencies not part of the OS itself. This results in bloated installs (lots of duplicate libraries and data), security problems, app compatibility problems ("DLL Hell"), memory bloat (DLLs/shared libs can't be shared in memory between different apps) and all sorts of other issues. If some rules are very strictly followed it can work, as Microsoft Windows proves. Ever packaged an app for Windows, though? It's not fun. Coding for the platform is also more complex: you must, for example, never assume that other DLLs are using the same C runtime as you, so you can't free() memory they've malloc()'d or vice versa, you can't pass a FILE* between DLLs, etc etc.

The other way is to ensure that the OS has a way to provide all dependencies in a central way, so apps don't carry them at all. To make this work practically the OS has to be responsible for app packaging and installation too, as it's hard to guarantee perfect compatibility of all libraries across distros especially given the problems with multi-versioning. This means that the app user is somewhat dependent on the OS to provide updated versions, but allows the OS vendor to ensure app compatibility and stability, eliminate shared library conflicts, etc.

I don't think either approach is "right" ... and personally, I wouldn't mind seeing improved distro compatibility in package formats, package names, etc so a package could be produced that'd reliably install on multiple distros. Sometimes, though, the package simply _couldn't_ be installed due to library versioning conflicts, so you'd still have to maintain a couple of versions or start bundling your libraries.

It's not a simple problem, and all the current solutions kind of suck.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 6:29 UTC (Mon) by nix (subscriber, #2304) [Link]

All your examples are rather bad. FF doesn't use the system libpng even if
you build your own copy, because FF relies on some patches that were
rejected by upstream. And FF ships NSS because, um, they were shipping NSS
many years before any Linux distros were using it for anything else at all
(and a good few still don't: its only real advantage is certification
stuff, and if you don't care about certifications GnuTLS has a far less
ugly interface and OpenSSL is faster).

Ubuntu's multisearch surprise

Posted Aug 10, 2009 10:02 UTC (Mon) by dunlapg (guest, #57764) [Link]

For the same reason Ubuntu doesn't just ship the kernel and expect users to download bash themselves. Users *expect* that when they buy an "operating system", it will come with certain key components. These days, for desktops, that includes a graphical interface (X) and a desktop (Gnome &c), along with a web browser and multimedia applications.

Re the "middleman", I definitely feel like the "middleman" is a feature. Look at all the crap that Windows users end up installing because there is no middleman! Because there's no one vetting the software and making sure that it's up to snuff, each individual piece of software wants to at least install its own update-checker and its own useless piece of junk in your status bar, if not its own spyware. With Ubuntu, they can have a reasonable policy across the board for that stuff, and as a result, my Ubuntu desktop is always more clean and usable than my Windows desktop. Even with the current "spyware" issue, I trust Ubuntu a whole lot more than I trust all those random "application providers" that might want to install who-knows-what on my system.

As someone else has already pointed out, Google and Mozilla both know everything that Canonical would know anyway. And after taking some business classes, I can sympathize with wanting to understand what your users are doing so that you can serve them better. (Asking is of limited use, because only certain kinds of people answer, and even when they answer, it may not be representative.)

My main concern is that new versions of Ubuntu not end up like a new Dell box, packed full of junk that I have to uninstall before it's remotely usable. That's one of the things that I *really* appreciate about Linux distros: not having to deal with random junk.

Ubuntu's multisearch surprise

Posted Aug 21, 2009 21:25 UTC (Fri) by sigra (guest, #57156) [Link]

> Right, so why is Ubuntu shipping that binary rather than leaving it up
> to the user to get it for themselves?

...using wget I assume?

Ubuntu's multisearch surprise

Posted Aug 8, 2009 19:18 UTC (Sat) by k8to (guest, #15413) [Link] (2 responses)

The middleman is a feature. It is fundamentally why I use Linux at all, and why it is superior to other technically similar platforms for typical users.

If you're in an environment where you care about certain things a great deal, then yes cut the middleman out of those things, as far down as you need to. If that means no longer using a distribution at all, or moving to another platform, so be it.

For typical medium shops, the middleman feature can make "the rest" of the system work fine with better quality and less effort.

For smaller shops, the middleman feature can be used everywhere.

Of course, if the middleman becomes untrustworthy, get a new one.

I could point to literally *hundreds* of times that the middleman has made my use of software vastly more pleasant than simply acquiring the software from the creator. I can point to about 2 where it has gone the other way.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 23:16 UTC (Sat) by foom (subscriber, #14868) [Link]

Well said, and I 100% agree.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 11:26 UTC (Mon) by iq-0 (subscriber, #36655) [Link]

Amen. Couldn't have said it better :-)

Ubuntu's multisearch surprise

Posted Aug 8, 2009 2:10 UTC (Sat) by drag (guest, #31333) [Link] (23 responses)

The ultimate solution is that packaging gets done upstream, not by distributions.

Instead of it being:
./configure
make && make install

It should really be:

./configure
make && make rpm
rpm --install *.rpm

Take a mundane example like 'gnome-games'. You have openSUSE, Red Hat, Debian, Ubuntu, Gentoo, etc.

So you have the 'gnome-games' source code associated with Gnome 2.24.

Each distro has to individually package, build, and troubleshoot the packages. There are subtle and rather insignificant implementation differences that make it difficult to track bugs and work together to create better software. And each distro has to redo all the work that every other distro does.

So with openSUSE, Fedora, Debian, Ubuntu, and Mandriva... assuming each organization has one person in charge of maintaining 'gnome-games', you have five individuals doing the same packaging, with the same software versions, the same goals, and the same potential audience.

That's 500% more work than it takes to produce an installer for Windows. And it doesn't really make sense. For every man-hour needed to get software packaged, you waste four man-hours redoing all the same work.

That's 4 units of wasted effort for every 1 unit of real work done. That's 4 units of time and money and effort that could go into bug squashing or improving documentation or working on some other piece of software.

And, ironically, the binaries produced for Debian vs Ubuntu vs Fedora, etc., are all compatible with each other.

When people like Opera produce proprietary software for Linux, they have dozens and dozens of packages, one for each distro... but if you look at the checksums on the binaries and library files they install, it's all identical. Opera installs the exact same hunk of software on every distro, except very old ones (and that is due to the C++ ABI breakage from years ago).

So it's mostly stuff as silly as version numbers, and how large projects are carved up into smaller packages, that causes the incompatibilities.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 2:19 UTC (Sat) by drag (guest, #31333) [Link] (5 responses)

Well besides my bad math I am sure everybody gets the point. :)

Really Debian needs Ubuntu and Ubuntu needs Debian.

They have to work together. It's reality.

Ubuntu could not exist without the high quality packages that Debian produces. It would be _impossible_.

And Ubuntu is able to provide something that Debian has so far utterly failed to: a friendly Linux desktop. Debian _can't_ do it. I don't know why. But they can't.

And I actually prefer Debian over Ubuntu for my desktop. But compared to Fedora or Ubuntu, the Debian desktop experience is nothing but a huge PITA. It takes about 4-5 hours of nerd work to make their Gnome stuff as usable as what I can get from Fedora in 10 minutes.

I can't even get PA to work properly for me. The volume controls for Gnome and Compiz and all that are all goofed up. I don't know what to do to fix it. With Fedora it was just right.

Debian needs Ubuntu for that sort of stuff.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 3:03 UTC (Sat) by srivasta (guest, #7075) [Link] (4 responses)

I am not sure why you think Debian needs Ubuntu to make a friendly desktop -- when a couple of lines down you confess you prefer Debian's desktop. (As do I, for what it's worth).

What does Debian gain from _other_ operating systems having an (allegedly) friendlier desktop anyway?

Ubuntu's multisearch surprise

Posted Aug 8, 2009 7:01 UTC (Sat) by tajyrink (subscriber, #2750) [Link] (3 responses)

They can import the usability-enhancing patches where applicable?

Ubuntu's multisearch surprise

Posted Aug 8, 2009 7:34 UTC (Sat) by drag (guest, #31333) [Link] (2 responses)

And design.

There is more to getting a good desktop than just doing the 'make install'. A user-friendly desktop must provide all-working functionality as well as the normal set of expected functionality.

Something like that. It's a total package thing. Just providing working software is a first step.

-----

The trick is that, whatever it takes, Debian hasn't been able to do it. If Debian had the ability to get good default configuration out to users on a timely basis, there would never have been any need for Ubuntu in the first place. There would have been no market for it... people would have just used Debian.

Ubuntu's multisearch surprise

Posted Aug 9, 2009 17:15 UTC (Sun) by srivasta (guest, #7075) [Link] (1 responses)

Hi,

Part of the so-called improvements in the desktop have been the pruning away of choices presented to the user. Instead of 23 MUAs, ship 3. Who gets to decide which 3? Why, your distro overlords, of course. The other part consists of streamlining the distribution down to a couple of thousand packages, relegating the rest to a non-core multiverse. However, the former is probably the major reason for the popularity: when one removes choices, and makes decisions on behalf of the end user, one can offer a slick presentation -- as long as you like the decisions made; and most people usually do not care.

Debian went the direction of choices, allowing people to tailor the distribution to their liking. This makes for more questions, perhaps more configuration choices, and perhaps confusion for the novice. But one is only a novice for so long, and I am glad that the choices proffered by Debian exist.

If there is a way of creating something that is slick _and_ manages to offer the choices, I think Debian folk would be happy to hear about it.

manoj


Ubuntu's multisearch surprise

Posted Aug 11, 2009 17:03 UTC (Tue) by kov (subscriber, #7423) [Link]

You're helping him make his point, from my perspective.

Need EASY for END-USER install of source code

Posted Aug 8, 2009 14:45 UTC (Sat) by dwheeler (guest, #1216) [Link] (5 responses)

I agree in principle that a source code install should work with the package manager (so that you can easily remove it later, upgrade, know what you're getting, etc.). And behind-the-scenes, the list of operations seems reasonable in principle.

But what is needed is a trivial point-and-click interface that auto-configures, makes, and installs (working with the distro package manager) -- one that works well enough automatically that people don't NEED to read the README file to install it the "usual" way. If developers would simply follow the standard conventions for source distribution, this would work well; it'd be even easier with software that automates DESTDIR (I intend to release software soon that does the latter). That wouldn't solve all the problems (in particular, there need to be conventions so that dependencies are downloaded and installed), but it would help.
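The DESTDIR convention mentioned here is simple to demonstrate: a well-behaved makefile prefixes every install path with $(DESTDIR), so the whole install can be staged into a scratch directory. A minimal, self-contained sketch (the toy project, paths, and file names are made up for illustration):

```shell
# A toy project whose makefile honors the DESTDIR convention.
workdir=$(mktemp -d)
cd "$workdir"
echo 'hello' > hello.txt

# Write a minimal makefile; recipe lines need leading tabs, hence printf '\t'.
printf 'PREFIX = /usr/local\n' > Makefile
printf 'install:\n' >> Makefile
printf '\tmkdir -p $(DESTDIR)$(PREFIX)/share/demo\n' >> Makefile
printf '\tcp hello.txt $(DESTDIR)$(PREFIX)/share/demo/\n' >> Makefile

# Stage the install instead of touching the live system; a package
# tool can then archive the contents of $stage as a package.
stage="$workdir/stage"
make install DESTDIR="$stage"
```

With the staging tree populated, something like `tar -C "$stage" -czf demo.tar.gz .` (or rpmbuild, dpkg-deb, etc.) can turn it into an installable, removable artifact.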

Need EASY for END-USER install of source code

Posted Aug 8, 2009 17:39 UTC (Sat) by nix (subscriber, #2304) [Link] (3 responses)

Nice paper.

One point though. You said:

> In most software, the "make install" command only uses a few simple commands to actually install the software. In my experience, the most common command by far is "install", which is hardly surprising. Other common commands used in "make install" that might need redirecting from privileged directories (like /bin, /usr, and /etc) include cp, mkdir, ln, mv, touch, chmod, chown, ls, rm, and rmdir.

You forgot the horror which is libtool. That runs the compiler at install time to do a link, so you have to wrap the linker! You may get away without wrapping it if it turns out that it always asks the compiler to put things in the srcdir, and then uses 'install' to move them out.

Need EASY for END-USER install of source code

Posted Aug 9, 2009 1:35 UTC (Sun) by dwheeler (guest, #1216) [Link] (2 responses)

> Nice paper.

Thanks.

> You forgot the horror which is libtool. That runs the compiler at install
> time to do a link, so you have to wrap the linker! You may get away
> without wrapping it if it turns out that it always asks the compiler to
> put things in the srcdir, and then uses 'install' to move them out.

Ah, libtool. Well, I'm definitely aware of libtool; I wrote the Program Library HOWTO, after all. But from what I've seen, someone who uses libtool will typically support DESTDIR as well (I speculate that if you're a libtool user, you're probably more interested in portability and thus more likely to support DESTDIR).

And even if that's not true, my current approach can accommodate it anyway. I've ended up deciding that to automate DESTDIR, it's best to simply create "wrappers" of the same name and put their location at the front of the PATH. It seems hackish, but it has absolutely NO security issues, and it runs really quickly. Which means that it is more likely to actually be ACCEPTABLE to distros and users. I haven't tried to wrap libtool (yet), but I think that should work well if it turns out to be necessary. I've already wrapped cp, ln, and so on, and tried them out on several test programs without problems.
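The wrapper approach described above can be sketched in a few lines of shell. This is not Auto-DESTDIR itself, just an illustration of the idea: a fake `install` earlier in $PATH rewrites the destination argument under $DESTDIR before calling the real tool (a real wrapper must also handle options like -t/-d and multiple destinations, which this deliberately ignores):

```shell
# Set up a scratch area, a wrapper directory, and a staging tree.
workdir=$(mktemp -d)
mkdir -p "$workdir/wrappers" "$workdir/stage/etc"

# A wrapper named like the real command; putting its directory at the
# front of PATH makes install steps pick it up transparently.
cat > "$workdir/wrappers/install" <<'EOF'
#!/bin/bash
args=("$@")
last=$(( ${#args[@]} - 1 ))
# Redirect an absolute destination (the final argument) under DESTDIR.
[[ ${args[$last]} == /* ]] && args[$last]="$DESTDIR${args[$last]}"
exec /usr/bin/install "${args[@]}"   # assumed location of the real install
EOF
chmod +x "$workdir/wrappers/install"

# An "install step" that targets a privileged path is transparently
# redirected into the staging tree instead of /etc.
echo 'demo' > "$workdir/app.conf"
DESTDIR="$workdir/stage" PATH="$workdir/wrappers:$PATH" \
    install -m 644 "$workdir/app.conf" /etc/app.conf
```

The attraction of this design, as the comment says, is that nothing needs root privileges and no makefile has to change: only the environment of the install step does.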

Need EASY for END-USER install of source code

Posted Aug 9, 2009 10:26 UTC (Sun) by nix (subscriber, #2304) [Link]

> But from what I've seen, someone who uses libtool will typically support
> DESTDIR as well (I speculate that if you're a libtool user, you're
> probably more interested in portability and thus more likely to support
> DESTDIR).

True. A quick audit here shows few examples: APR/apr-util/apache are the primary ones, but we know they're seriously weird to the point of not supporting biarch building without ugly hacks (and they have INSTALL_ROOT which you can use instead). (I mean, keeping around a copy of libtool from your configure process and reusing it in other projects? Ew.)

Auto-DESTDIR released

Posted Aug 17, 2009 18:25 UTC (Mon) by dwheeler (guest, #1216) [Link]

By the way, my "Auto-DESTDIR" program has now been released at http://www.dwheeler.com/auto-destdir - it supports DESTDIR in source installs, even when the original makefile doesn't support DESTDIR. Once you have Auto-DESTDIR installed, just run make-redir instead of make when you do an install, i.e., use "DESTDIR=... make-redir install" instead of "make install".

Need EASY for END-USER install of source code

Posted Aug 9, 2009 14:07 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

And then you'll need it to be able to upgrade itself, track dependencies, uninstall, etc.

In short, you need a PACKAGE system. So back to square 1.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 15:46 UTC (Sat) by MattPerry (guest, #46341) [Link] (3 responses)

> The ultimate solution is that packaging gets done upstream, not by
> distributions.

Yes, exactly.

> ./configure
> make && make rpm
> rpm --install *.rpm

That's a great idea. If that could be reduced to an action that happens when someone double-clicks on an installable package then it would be ideal.

> For every 1 man-hour that is needed to get software packaged you waste 4
> man-hours re-doing all the same work. [...] And, ironically, it's those
> binaries produced for Debian vs Ubuntu vs Fedora, etc etc... all are
> compatible with each other.

I thought that the Linux Standard Base was supposed to make it so that LSB-compliant binaries would work on LSB-compliant distros. If more effort were put into LSB compliance by distros and application providers, maybe we could achieve single-click installation and execution for applications. It would be a win for end users, and for the distros too, as they wouldn't spend time duplicating work.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 19:15 UTC (Sat) by drag (guest, #31333) [Link] (1 responses)

LSB works for some things, but obviously it's limited in scope.

Different standards organizations work in different ways. For example, Freedesktop.org has its XDG stuff, which was 'next gen' when it was developed. This has helped considerably in giving users consistent behavior and in addressing many of the annoyances that come from having different desktop environments.

A big example: before things like "xdg-open", application developers who wanted to call a separate browser for documentation or online links had to hardcode it into their installation scripts or their applications. The best they could do was to let people go into their application preferences and set a default browser. Most people just hardcoded 'mozilla' since that was the premiere browser at the time.

Obviously this is less than preferable: either forcing everybody to use 'mozilla', or forcing everybody to configure a browser for each and every application that happened to need one.
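The difference is easy to see in a small shell function. The fallback branch is all an application had before xdg-utils; with xdg-open, the user's configured default browser is used (the 'mozilla' fallback here mirrors the hard-coding described above, and the function name is made up):

```shell
open_url() {
    # Prefer the XDG helper, which dispatches to the desktop's
    # configured default browser...
    if command -v xdg-open >/dev/null 2>&1; then
        xdg-open "$1"
    else
        # ...otherwise fall back to a hard-coded browser, as
        # applications once had to.
        mozilla "$1"
    fi
}
```

An application then calls `open_url "https://example.org/manual.html"` and no longer needs its own browser preference.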

LSB is a bit different, though. It is a 'best practices' type of organization that finds what is consistent and supportable across all distros and then formalizes it. The amount of stuff that LSB tries to impose on distros is relatively small, and distros tend to respond very negatively when it does (like standardizing packages on RPM or forcing a new directory structure).

So LSB doesn't assume a position of making people work together. Its position is much more centered around documenting, and making formal, the ways in which they already do. That way, third-party developers (ISVs) and people new to Linux have a place to go to learn what parts of the "Linux OS" they can depend on.

So the things that cause incompatibility and problems for developers targeting multiple distros, and that LSB doesn't cover, mostly persist because distros refuse to work together on them. It's not LSB's position to try to force those sorts of issues.

Traditionally the biggest problem is going to be GUI applications. The 'GNU' part in 'GNU/Linux' is normally very consistent, but the differences in how people set up KDE vs Gnome vs whatever cause all sorts of headaches.

------------------

The latest thing LSB is trying to deal with is GUI applications, through the Moblin v2 specifications developed by Intel.

This is only targeting smaller devices, but I am hoping that it will extend to desktops and workstations.

Moblin dictates that you have a Gnome-based environment and all the dependencies it says you should have for it. It includes specific libraries and specific versions of libraries. It also assumes that if you have newer than the specified libraries, those libraries should be backwards compatible with the versions it dictates. It has bunches of PDFs outlining all of this, plus test cases and a certification process.

So far it seems that Novell, Ubuntu, and Fedora are releasing Moblin-compliant versions of their distros, as are smaller custom-commercial companies like Xandros.

--------------

The main problem with Moblin v2 is that it's aimed at smaller devices, but I am hoping that it'll move 'up' and include desktops and workstations. It's heavily influencing the direction of Gnome 3.0, I believe.

The other big problem is that it's very Gnome centric and thus KDE fans and people who care about older and slower devices will have a tendency to dismiss it early on.

My thoughts on this goes like this:

Gnome stuff, while heavy when running, does not really take up that much disk space, so the price you pay for Moblin compatibility is rather low. Even old computers should be able to handle the additional disk space without blinking.

And what it gets you is a solid and constant base for users to build on.

----------------------------

I learned this when dealing with OpenLDAP.

People have, in the past, had huge issues learning LDAP with OpenLDAP. There is a wealth of documentation and examples on LDAP, but very little actually covers the initial setup and deployment of OpenLDAP. When you install OpenLDAP from Debian and Fedora it's built correctly, but it's a blank canvas...

So users will read the documentation and want to experiment with OpenLDAP, but the initial setup to get an actual working example is a very high barrier, and they get confused and dismayed and usually give up.

So if they were given a nice default setup, even if it isn't ultimately what they need, it would make things MUCH easier to learn and deploy and would increase the value of the software by a large amount.
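As a concrete illustration, the "nice default setup" being asked for could be as small as a pre-seeded suffix and admin entry, loaded with slapadd at package-install time, so that the user's first ldapsearch returns something instead of nothing. A hypothetical sketch (the example.com suffix and all names are placeholders):

```
# seed.ldif -- hypothetical data a package could load by default
dn: dc=example,dc=com
objectClass: top
objectClass: dcObject
objectClass: organization
o: Example Org
dc: example

# A role to bind as while learning.
dn: cn=admin,dc=example,dc=com
objectClass: organizationalRole
cn: admin
description: Directory administrator
```

Even this much gives the learner a working tree to query, extend, and eventually replace.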

This is the advantage that companies get with Microsoft's Small Business Server. Everything is designed to work together by default. Even if SBS is not a good match for the company initially, having a consistent and working basis (as well as documentation) is invaluable to build on and modify so that it does end up being a good match.

------------------------

Linux desktops and desktop application development is not like that. There is no solid basis, no solid and constant foundation.

It's like walking on shifting sand... everything is moving around under your feet. People get the feeling that they are building a house on wet clay: no matter how well and how strongly you build your house, if the foundations are not done correctly and consistently, you're screwed. The more effort/time/money you spend on it, the worse off you are.

If users and developers are presented with a solid, working Desktop, that has all the basic features and services done correctly and is working then that makes using and customizing the desktop much easier. Even if 'Moblin' or 'Gnome' is not what they want, having that solid working base to fall back to and build off of makes things much much easier.

What is the fun in getting Awesome WM configured and all EXCELLENT if you have to struggle around with Pulseaudio or Alsa and need to compile patches in Network-Manager (or uninstall it entirely) to get basic OS features working correctly?

With everything 'working by default', developers only have to worry about the stuff they care about. They don't have to worry about things they don't want to care about and don't have in-depth knowledge of.

--------------------------

'Usable By Default' is along the same lines as 'Secure By Default'.

When a user installs a secure OS like OpenBSD or Debian's 'base install' it has almost no chance of providing what the user wants ultimately. They want a network router or a web server or email server or something like that.

However, what it gets you is that you can deploy an HTTP server without having to worry about SNMP security and SMTP security and SMB security and NFS security, etc. You only have to concentrate on the stuff you need to concentrate on, and you still end up with a secure system.

---------------------------

Say I am developing a video game using Blender. I don't want to care about sound APIs. I don't want to have to worry about whether PulseAudio or Alsa or OSS is being used. I don't want to worry about whether users can connect to wireless networks correctly for multiplayer, or whether they have a decent browser that is compatible with my documentation. I don't want to worry about whether their distro has a good MESA implementation, or whether the user is on proprietary drivers, or doesn't know that drivers need installing at all. I don't want to care whether they can fulfill the Python requirements or have goofed-up Python libraries.

If I have to worry and deal with every little detail with the desktop and have to provide work-arounds and documentation and support for every little thing that can screw my game up... then I won't have any time to make the actual game!

OpenLDAP

Posted Aug 12, 2009 4:15 UTC (Wed) by TRS-80 (guest, #1804) [Link]

People have, in the past, had huge issues learning LDAP with OpenLDAP. There is a wealth of documentation and examples on LDAP, but very little actually covers the initial setup and deployment of OpenLDAP. When you install OpenLDAP from Debian and Fedora it's built correctly, but it's a blank canvas...

So users will read the documentation and want to experiment with OpenLDAP, but the initial setup to get an actual working example is a very high barrier, and they get confused and dismayed and usually give up.

So if they were given a nice default setup, even if it wasn't what they ultimately needed, it would make things MUCH easier to learn and deploy, and would increase the value of the software by a large amount.

The Debian packages these days actually provide a base entry and admin in the database, although this isn't well documented and requires a purge to do it again if you mess up. There are a few documents that properly cover setup, like Debian GNU: Setting up OpenLDAP, although that assumes you'll use Kerberos instead of SSL to protect passwords, and my own LDAP for the Lazy Sysadmin, which aims to be useful, if a bit ranty. But yes, most LDAP documentation assumes you already know LDAP and is more of a reference, or is just cargo-cult "do this to make it work" with no insight into what's going on and how to fix it when it breaks.
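For readers who want the flavor of that default setup: the seed data for a fresh directory amounts to just a base entry plus an admin account. The LDIF below is a hypothetical sketch (the dc=example,dc=com suffix and all names are illustrative, not necessarily what any particular package creates):

```
# Hypothetical minimal seed data for a fresh OpenLDAP directory;
# the dc=example,dc=com suffix and all names are illustrative only.
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
o: Example Inc.
dc: example

dn: cn=admin,dc=example,dc=com
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: Directory administrator
userPassword: {SSHA}replace-me-with-slappasswd-output
```

Loaded with something like `ldapadd -x -D cn=admin,dc=example,dc=com -W -f seed.ldif` (binding as the rootdn configured for the database), this is enough of a canvas to start experimenting against.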

Ubuntu's multisearch surprise

Posted Aug 9, 2009 17:20 UTC (Sun) by srivasta (guest, #7075) [Link]

I think you are paying far too little attention to systems integration. Anyone can package individual packages, but a bunch of independently packaged software does not a good distribution make.

By far the most impressive bit of Debian is the technical policy manual, and how the packages create a more integrated whole by following the dictum of policy (which, BTW, is not a static, dead set of rules, but a vibrant, breathing, changing document). Very little upstream-packaged software pays much attention to it, as can be seen by running lintian -vi on the packages produced.

manoj

Ubuntu's multisearch surprise

Posted Aug 10, 2009 0:42 UTC (Mon) by ringerc (subscriber, #3071) [Link] (5 responses)

That looks good in principle, but tends to break easily when faced with different apps requiring different versions of libraries that don't maintain ABI compatibility. You can change the soname, but you're still in trouble if you encounter a linkage chain like:

  • theapplication
    • libx1.2
    • libthirdparty
      • libx1.1

Symbol versioning of libx can help, but doesn't seem to solve all the issues.
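For what it's worth, the usual mechanism for that is a GNU ld version script; a hypothetical one for libx might look like this (the symbol names are made up for illustration):

```
/* libx.map -- hypothetical version script for libx, passed to the
   linker as: gcc -shared -Wl,--version-script=libx.map ...
   Each release tags its exported symbols, so when both libx 1.1 and
   1.2 end up mapped into one process, each caller binds to the
   version it was linked against. */
LIBX_1.1 {
    global:
        libx_open;
        libx_close;
    local:
        *;              /* everything else stays private */
};

LIBX_1.2 {
    global:
        libx_open_ex;   /* new in 1.2 */
} LIBX_1.1;             /* inherits from the 1.1 version node */
```

As the comment above says, this only disambiguates symbol binding; it does nothing for shared state, or for objects handed from one version of the library to the other.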

Ubuntu's multisearch surprise

Posted Aug 10, 2009 11:21 UTC (Mon) by mjthayer (guest, #39183) [Link] (4 responses)

That is more a consequence of problems in the *nix linking model, where all shared objects land in a global namespace. That is desirable for a few things like the C library, but for most shared objects it causes more problems than it solves. It would be so nice if build-time linking had the equivalent of dlopen's RTLD_LOCAL flag. I think this should even work without breaking LD_PRELOAD, as preloaded libraries would still be loaded as "RTLD_GLOBAL" libraries, so would still override the subsequently RTLD_LOCAL-loaded ones.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 18:53 UTC (Mon) by nix (subscriber, #2304) [Link] (3 responses)

It can still be a problem even if you have RTLD_LOCAL, if objects managed by one version of a shared library end up being passed to another version in the same address space, or if both of them are contending trying to manage some shared resource (classic examples, both from libc: wtmp and the malloc arena).

What RTLD_LOCAL fixes is the 'whoops, symbol name clashes between totally different shared libraries because they dared to use a symbol with the same name' problem.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 19:08 UTC (Mon) by mjthayer (guest, #39183) [Link] (2 responses)

Right, those are the "few things like the C library" that I meant: things which are global to the application by nature. Having a link-time RTLD_LOCAL would not be a panacea, and would be bound to introduce a few problems of its own. On the other hand, since it would be an opt-in thing for each shared object linked into any ELF binary, the problems could be sorted out incrementally.

Ubuntu's multisearch surprise

Posted Aug 11, 2009 9:22 UTC (Tue) by ringerc (subscriber, #3071) [Link] (1 responses)

It affects more than just a few C library details. In particular, `static' variables can be a problem. Link-time RTLD_LOCAL appears to imply the presence of multiple instances of the library executable mapped into memory, or at least multiple copies of their rw segments. Otherwise you'd encounter funky behaviour differences depending on whether two unrelated libs happened to link to the same versions of some 3rd lib or different versions of it.

That makes it hard when libraries at the same linkage "level" (say libh and libi) want to pass around objects that may, either overtly or behind the scenes, rely on shared static data (library-internal caches or the like) in a library (say libj) they both use. Of course, they can't pass libj objects across library boundaries anyway unless they know they're using at least a compatible version of libj, since the objects' layout and/or meaning might not be compatible, so they may as well share the same instance of libj's rw data segment too, as if it were RTLD_GLOBAL.

This sort of thing is very common in things like application frameworks (qt/gtk/etc) and plugins.

I seem to remember that Mac OS X tackles this by defaulting to something like RTLD_LOCAL linkage, but allowing objects to specify global linkage instead. That's vague memory only though, and I haven't hunted down any documentation to confirm it.

Ubuntu's multisearch surprise

Posted Aug 11, 2009 9:34 UTC (Tue) by mjthayer (guest, #39183) [Link]

Still, I would have thought that whoever is linking in the shared object would have a good idea at link time whether or not they needed to share objects from that object with others. To take your example, if libh and libi share objects created by libj, then they should both know to link in libj into the global scope - that is, not to opt in to local linkage for it.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 8:13 UTC (Mon) by mjthayer (guest, #39183) [Link]

> The ultimate solution is that packaging gets done upstream, not by distributions.

I would suggest a slight correction to your assertion. At least for "true" FOSS applications, the ultimate solution might be for the distribution packagers to work directly on the upstream code base instead of on a distribution fork. The upstream code base would then support your "make rpm", but it would be Redhat/Fedora/SUSE/whoever who would actually run that to create the packages. And the distribution packagers would then have the additional task of monitoring the changes made by packagers for other distributions and making them harmonise (or at the very worst adding upstream "#ifndef FEDORA"s and suchlike) with their own needs, and making sure that upstream version branches fit in with their own versioning requirements.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 2:20 UTC (Sat) by ldarby (guest, #41318) [Link] (8 responses)

I totally agree with all that, and that's partly why I use Slackware almost exclusively. It's the only distro I know of that stays out of my way when it comes to application software (I'm listening to music playing in XMMS 1 and typing this in Opera 10 beta). It also rarely patches upstream, so issues like this and the openssh fiasco just don't happen. If upstream were responsible for something like that, well that would affect all distros anyway.

Also, I subscribed to lwn just for this article :)

Ubuntu's multisearch surprise

Posted Aug 8, 2009 8:56 UTC (Sat) by tzafrir (subscriber, #11501) [Link] (7 responses)

xmms1? With what front-end?

It is a program that has been unmaintained for years. It is based on gtk1.2 which is likewise unmaintained.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 9:48 UTC (Sat) by jengelh (subscriber, #33263) [Link] (6 responses)

It's a crappy world.

- xmms2: do any of the clients provide me with the traditional winamp/xmms1 interface and skin support?
- audacious 1.x: blatant bug: the whole program is not servicing any X events when waiting to connect to a stream.
- audacious 2.1: the alsa output plugin hangs - playback just sits there at 00:00 - works with OSS. Does not play Internet streams at all. Dude, that's crap.

So, back to xmms1.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 9:51 UTC (Sat) by jengelh (subscriber, #33263) [Link] (5 responses)

And, because just bitching is not gonna help, here's what it spouts:

network stream:
** (audacious2:5101): WARNING **: could not open 'http://72.26.204.18:6314', no transport plugin available
Unable to read from http://72.26.204.18:6314, giving up.

alsa:
ERROR: ALSA: alsa-core.c:226 (alsaplug_write_buffer): (write) snd_pcm_recover: Input/output error

Ubuntu's multisearch surprise

Posted Aug 10, 2009 12:25 UTC (Mon) by SEJeff (guest, #51588) [Link] (4 responses)

Perhaps you should file a bug report on the project's bug tracker. Because we all know that open source developers look on LWN for bug reports.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 12:30 UTC (Mon) by jengelh (subscriber, #33263) [Link] (3 responses)

Perhaps I should. But then I have better things to do and xmms1 is just ready at hand.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 12:36 UTC (Mon) by SEJeff (guest, #51588) [Link] (2 responses)

I was just pointing out that griping about something when you have done _nothing_ to help out the developers helps no one.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 21:34 UTC (Mon) by k8to (guest, #15413) [Link]

It helps me. I can continue to not bother moving away from xmms1, as I have for years. Periodically I check the others, but this thread saved me my yearly attempt.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 22:18 UTC (Mon) by jengelh (subscriber, #33263) [Link]

FWIW it's reported now.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 2:23 UTC (Sat) by smoogen (subscriber, #97) [Link] (2 responses)

One issue with application vendors shipping stuff is that they would probably only support one distribution, and none of them would agree on which distribution that was. And after that, many application people would not want to deal with stuff like "Oh, you have an XYZ video card... I don't, so that video glitch is probably just you." [Which is basically what drives people to ask the distribution people, "can you package this up for me?"]

Ubuntu's multisearch surprise

Posted Aug 8, 2009 15:57 UTC (Sat) by MattPerry (guest, #46341) [Link] (1 responses)

> One issue with application vendors shipping stuff is they will probably
> only support one distribution and none of them would agree on which
> distribution that was.

The Linux Standard Base was supposed to fix this. From what I'm told, it didn't, for reasons I do not know. I still think that fixing the problems with the LSB, and providing a way to audit distros for LSB compliance, would allow application vendors to target a binary package at a particular LSB standard, and that package would work on any LSB-compliant distro of the same LSB version.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 18:31 UTC (Sat) by lmb (subscriber, #39048) [Link]

The only way to completely fix it would be to have LSB mandate everything, reducing the diversity of Linux distributions to zero. As long as there is any variation, you will have to take it into account somewhere.

Distributions aren't as much of a problem as this discussion makes them out to be.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 7:18 UTC (Sat) by cuboci (subscriber, #9641) [Link] (6 responses)

> Worse, they fiddle with them and change them before they package them up, so what you get may not be what the developers wrote. Just look at the mess that was caused when some Debian developer screwed around with OpenSSH. Or how Red Hat hosed up Perl's performance and didn't fix the issues for years.

That's like saying upstream doesn't make mistakes and all bugs are introduced by distributors. Just think of all the security holes distributors patch before any upstream developer even takes notice of them.

Also, I wouldn't trust upstream developers to package software correctly for the distribution I use unless all distributions are the same - in which case there wouldn't be any. And just look at the state of software packaging on Windows. Same base for all, no package management that deserves the name.

Distributors only providing a base and upstream developers packaging the software themselves (which is a lot of work, btw!) would probably lead to something similar.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 16:59 UTC (Sat) by MattPerry (guest, #46341) [Link] (4 responses)

> That's like saying upstream doesn't make mistakes and all bugs are
> introduced by distributors.

It's not like that at all. Each person who modifies the code increases the chance of introducing errors. I'd prefer that the developers of the software review the patches that go into a program I use. Mistakes will still happen, but at least a process is in place where the people who know the program best can review what goes into it.

The Debian SSH problem is a great example. The packager did seek feedback from the OpenSSH team about their change and was told it was okay. But they continued to make further changes to a similar piece of code that introduced a security issue. Had that additional change been reviewed by the original developers, the chance of it being caught would have been increased.

> Just think of all the security holes distributors patch before any
> upstream developer even takes notice of them.

That sounds like a communication problem.

* Why did the distro packager not notify the upstream developer of the problem and coordinate a fix?
* Do other distro packagers know about this problem?
* Will this one distro have the fix while other distros and the official source will be vulnerable?
* How does the distro packager know that their fix doesn't introduce another bug, security hole, or damage functionality?

If multiple distro packagers know about the problem and are applying the patch, you are then duplicating work among distros. See drag's comment above about all of the effort that is duplicated and wasted between distro packagers doing the same task over and over.

> Also, I wouldn't trust upstream developers to package software correctly
> for the distribution I use unless all distributions are the same - in
> which case there wouldn't be any.

I'd trust them to package things if we created standards and processes to make packaging easy for them. The Linux Standard Base is supposed to provide such a standard. Maybe we as a community need to revisit such standardization and bring pressure upon distros and application providers to properly conform to said standards.

> And just look at the state of software packaging on Windows. Same base
> for all, no package management that deserves the name.

Windows' package management may pale in comparison to what Linux-based package systems provide, but it's far from broken. From an end-user perspective, it may be simple but it works. For example, I can download Firefox or OpenOffice from the developer's web site and use the same package to install on a variety of Windows versions without issue. Removing a package is equally easy and problem free.

> Distributors only providing a base and upstream developers packaging the
> software themselves (which is a lot of work, btw!) would probably lead
> to something similar.

Plenty of open source and free software vendors successfully package and distribute working install packages of their software for Windows, and they install, work, and can be removed flawlessly.

Examples: Cygwin, OpenOffice, Vuze, Firefox, CDex, Calibre, Pidgin, VirtualBox, Vim, Emacs, XEmacs, OpenVPN, GIMP, VLC Media Player, Audacity, Scribus, Ekiga, AbiWord, Apache, PHP, PostgreSQL, MySQL.

So they can successfully do this for Windows but doing the same for Linux distros is somehow not possible? I don't believe that for a second.

Ubuntu's multisearch surprise

Posted Aug 9, 2009 10:29 UTC (Sun) by dtucker (subscriber, #6575) [Link] (1 responses)

> The packager did seek feedback from the OpenSSH team about their change
> and was told it was okay. But they continued to make further changes to a
> similar piece of code that introduced a security issue.

The incident I think you're referring to (http://lwn.net/Articles/282038/) was a change to OpenSSL, not OpenSSH. OpenSSH was impacted because it uses OpenSSL's RNG functions but it wasn't the location of the change in question.

Ubuntu's multisearch surprise

Posted Aug 11, 2009 14:46 UTC (Tue) by MattPerry (guest, #46341) [Link]

Thanks for the correction. I knew it was one of those but had forgotten which one.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 0:52 UTC (Mon) by ringerc (subscriber, #3071) [Link] (1 responses)

FF, OO.o, etc work on Windows because they bundle every dependency directly into the app download. They each maintain private copies of libpng, libjpeg, gtk, etc etc etc.

These libraries cannot be shared in memory and use extra disk space. Who cares these days, right? However, the separate libraries mean that the app vendor must issue a security update if any of the libraries they use in the app suffer from a security hole. Each vendor must update its app(s) separately, since they have private copies of the affected libraries. The poor user has to find out about these updates - or is deluged with hard-to-verify-as-authentic update dialogs from a bunch of different apps.

App auto updates are dangerous when the user has no way to verify that (say) the OO.o update dialog really is from OO.o, not a web page trying to trick them into downloading malware. I've already seen fake Adobe Updater dialogs, so this is far from a theoretical problem. To you or I it might seem fairly obvious how to check, but most users just haven't the foggiest, and tend to either reject all updates (leaving their machine with gaping security holes) or accept them all (opening them to malware risks).

FF works around this by not giving the user a choice. That's fine so long as the updates are trivial, you're absolutely certain they'll never break anything, and the user has the access permissions to actually install the updates. The latter, however, is often an issue in FF installs, and tends to leave users confused.

An OS-provided central secure update channel would help mitigate these issues. Think "Microsoft Update" but with 3rd party providers allowed to subscribe their apps. This would put MS in the implied position of being expected to vet all those updates, though, and they're never going to allow it. WHQL drivers already give them quite enough trouble.

As if the security issues weren't unpleasant enough, there's also the fun of "DLL hell" or "shared library hell" caused by ABI-incompatible libraries with the same soname being loaded into the one executable. This would seem trivial to avoid by bundling all your own libraries, and these days _mostly_ is, but you still run into unpleasant issues with LoadLibrary/dlopen(), user PATH / LD_LIBRARY_PATH settings, etc.

Essentially: Yes, the Windows app distribution model works, but it (IMO) suffers from a collection of annoying flaws just like the Linux distro model. They're just different annoying flaws.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 16:26 UTC (Mon) by Cato (guest, #7643) [Link]

Re Firefox not giving users a choice over updates: it's quite easy to disable all extension updates through the Preferences dialogue box, but of course that could mean they miss a security update. What would help is if extension authors categorised updates as security or other, and Firefox subscribed you to security updates only by default.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 11:28 UTC (Mon) by mjthayer (guest, #39183) [Link]

As per my comment above, a better solution might be for the packagers to work directly on the upstream source instead of on local trees. That way the upstream source supports packaging for all distributions interested in shipping it, packagers' modifications get reviewed by the upstream developers and packagers don't have to duplicate the efforts of other packagers. It complicates version management a bit, but that just boils down to packagers picking a given stable branch of the programme to be packaged for a given distribution release, and ensuring that that branch is maintained for as long as they need it. Which they have to do anyway, but usually with a private distribution branch.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 7:46 UTC (Sat) by tzafrir (subscriber, #11501) [Link]

OpenOffice is indeed a good example.

Linux distributions had to deal with a misbehaving upstream. This should be an interesting test-case for you.

I'm also a proud user of the Debian package mozilla-noscript . The Debian packager there has kept me from the mis-behavings of upstream.

I think the opposite

Posted Aug 8, 2009 9:47 UTC (Sat) by coriordan (guest, #7544) [Link] (1 responses)

The thing I love most about GNU/Linux distros, from a practical point of view, is that all the software has been made to work together, and has been audited by someone I choose.

In the 90s, when I used a proprietary OS, the default video player played all videos. Then I'd install QuickTime, and it would impose itself as the default video player plus default image viewer, despite its image-viewing capabilities being awful. Then I'd install RealPlayer: same problem again. One would do this via the registry, another would do it via start-up programs, and others did it in ways I never got to the bottom of.

Then I moved to GNU/Linux. I can install ten video players, and they don't fight with each other. What a relief.

If everyone got their Firefox direct from Mozilla, and if the Mozilla developers were told to say nothing about a privacy problem, then who would have noticed the privacy problem? When distros modify packages, that introduces an independent review into the supply chain.

I think the opposite

Posted Aug 8, 2009 11:48 UTC (Sat) by jospoortvliet (guest, #33164) [Link]

Indeed. Distributions do screw up sometimes, but everyone does. Generally the distro does a better job at packaging than the individual projects ever could; besides, those don't have the resources to package for all those distributions out there. I think the current scheme, while far from perfect, is the best possible.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 17:34 UTC (Sat) by TRS-80 (guest, #1804) [Link] (2 responses)

For newer packages on a released distribution, have you heard of backports?

Where do you draw the line between the OS and applications? Are perl and python part of the OS? GTK+, Qt?

You say it's a technical problem, but the technical side has been solved several times. Autopackage has been around for years and got approximately zero traction. I haven't changed my views on the utility of distributions since the last time this was discussed on LWN so I won't repeat myself.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 11:31 UTC (Mon) by mjthayer (guest, #39183) [Link]

While the Autopackage web page is a great information resource on binary compatibility issues, Autopackage is not a great package format, and this is probably one of the reasons it is not widely used.

Ubuntu's multisearch surprise

Posted Aug 12, 2009 0:15 UTC (Wed) by MattPerry (guest, #46341) [Link]

> For newer packages on a released distribution, have you heard of
> backports?

Yes, but the choice of backports is extremely limited. Taking a look at the backports for Ubuntu 9.04 (http://packages.ubuntu.com/jaunty-backports/allpackages?f...), there are only a handful of packages. One is left hoping that an experienced developer will backport your application of choice. If all applications were backported then it wouldn't be a problem.

For example, I use an ebook management program called Calibre. The version in the latest Ubuntu is 0.4.143. There have been more than 22 releases of Calibre since that time including two major revisions. The version in Ubuntu has tons of bugs that have since been fixed. Where's the backport?

Not even Firefox 3.5, which is a very popular application, has been backported. When googling about installing FF 3.5, the suggestion from everyone is to download it from the vendor's web site. If that's the best solution, why aren't we facilitating that on a larger scale right from the beginning?

> Where do you draw the line between the OS and applications? Are perl and
> python part of the OS? GTK+, Qt?

My opinion is that if it's not part of the standard Gnome desktop and it's not needed to boot to the desktop, then it shouldn't be installed. If Perl and Python are not needed to get to that point, then they shouldn't be installed until they are needed. For that matter, Firefox should not be installed by default either. The Linux community loves to point fingers at Microsoft for bundling Internet Explorer with Windows, yet linux distro providers do the same thing with Firefox. There are other browsers for Linux than Firefox, and not everyone needs a browser. Any justification for including it can equally be used to justify the inclusion of IE in Windows, monopoly status or not. The same goes for all the other stuff that distros install (word processors, web cam programs, IM & email programs, games, drawing programs, etc).

> You say it's a technical problem, but the technical side has been solved
> several times.

Then maybe it's a management problem. Maybe the community needs to work to find a way to provide application developers with the tools, methods, and information to allow them to create and distribute a single package that will install and work on all modern Linux distros. As an end-user, I don't find that goal to be unreasonable.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 17:21 UTC (Mon) by davide.del.vento (guest, #59196) [Link]

> But for power users like me that are in the
> middle of that spectrum, they have to choose
> between the two extremes. I administrate
> systems and compile software all day long
> at work. I don't want to have to do it when
> I come home as that's time I won't get to spend
> with my family. At the same time, I don't want
> to have the versions of applications dictated
> to me by my distro.

So let me try to understand what you want.

a) you don't want to be "forced" to do what your distro does
b) you don't want to do those things yourself, because you did them all day long at work

So what? I don't see a third option! I think what you really want is for your distro to read your mind and do what you'd like. But... wait a minute, that is what this post was about! So Canonical is right: if they study your behavior they might be able to do what you'd like!

As for me, I'm really happy with 95% of Ubuntu LTS's choices. For the other 5%, I download, compile, and install outside of Synaptic. I think it's a really great compromise: I don't want the hassle of managing security updates on everything, only on the very few things that I need at the "bleeding edge". I don't care if OpenOffice or Inkscape (which I do like and use) are not the latest versions!

Ubuntu's multisearch surprise

Posted Aug 10, 2009 17:40 UTC (Mon) by MattPerry (guest, #46341) [Link]

Hi everyone. I want to say thanks for all of the thoughtful and detailed responses that everyone has provided. You all have definitely given me a lot to think about regarding this issue. The most important thing I have taken away from this is that I need to start using a Linux system for my daily desktop again in spite of these problems. I won't be able to refine my perception of the issue, or even see if it's still a problem, unless I am using the system daily.

Ubuntu's multisearch surprise

Posted Aug 22, 2009 7:14 UTC (Sat) by DarthCthulhu (guest, #50384) [Link]

Allow me to proselytize for GoboLinux again (www.gobolinux.org). It pretty much gives you exactly what you want in so far as ease of use and flexibility. All recipes are upstream versions and you can upgrade them however you want while still keeping around older versions. If you update to a new version of software and it sucks, you can easily go back to using the older version. In most cases, you can even use both versions at the same time.

It really is wonderful.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds