
Some unreliable predictions for 2015

Posted Jan 14, 2015 23:51 UTC (Wed) by dlang (guest, #313)
In reply to: Some unreliable predictions for 2015 by raven667
Parent article: Some unreliable predictions for 2015

RHEL wants to run extremely well-tested versions of software, even if they are old.

Fedora wants to run the latest versions of software, even if they aren't well tested.

How are these two going to pick the same version?

There are other differences between distros: do you compile the software to depend on GNOME, KDE, etc.? Do you compile it to put its data in SQLite, MySQL, or PostgreSQL?

Some distros won't compile some options in because they provide support for (or depend on) proprietary software.

Is an option useful, or is it bloat? Different distros will decide differently for the same option.

Gentoo says "whatever the admin wants" for all of these things; other distros pick a set of options (or a few sets of options) and build packages based on that.

As an example, Ubuntu has multiple versions of the gnuplot package: one that depends on X (and requires all the X libraries to be installed), one that requires Qt, and one that requires neither.
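To make the variant idea concrete, here is a minimal sketch (not real packaging tooling; the option and library names are invented for illustration) of how one upstream source plus several build-option sets yields distinct binary packages, each dragging in different runtime dependencies:

```python
# Hypothetical option sets a packager might choose for a single source,
# loosely modeled on Ubuntu's gnuplot variants.
VARIANTS = {
    "gnuplot-x11": {"x11"},
    "gnuplot-qt":  {"qt"},
    "gnuplot-nox": set(),          # no GUI toolkit at all
}

# Hypothetical mapping from a build option to the runtime deps it pulls in.
OPTION_DEPS = {
    "x11": ["libx11", "libxt"],
    "qt":  ["libqt5core", "libqt5gui"],
}

def runtime_deps(options):
    """Collect the runtime dependencies implied by a set of build options."""
    deps = []
    for opt in sorted(options):
        deps.extend(OPTION_DEPS[opt])
    return deps

for name, opts in VARIANTS.items():
    print(name, "->", runtime_deps(opts))
```

The point of the sketch is that the dependency set is a function of the chosen options, which is why a distro that wants to cover several option sets ends up shipping several packages from the same source.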



Some unreliable predictions for 2015

Posted Jan 15, 2015 4:39 UTC (Thu) by raven667 (subscriber, #5198)

All interesting points, to be sure, but I'd be more interested in imagining what would need to happen to make this work. A Linux Foundation standard ABI would look a lot like an enterprise distro, maybe shipping new versions of leaf software (like Firefox) but never breaking core software, or like a proprietary system such as Mac OS X. Right now every distro claims to be suitable for end-user deployment and to be a target for third-party development. That would be true if there were comprehensive compatibility standards, or only one dominant vendor to support, but there are not, so every distro's advertising is false on this point.

These deployment and compatibility problems have been solved on the proprietary platforms, some of which are even based on Linux, and the Linux kernel team provides the same or better ABI compatibility than many proprietary systems offer. Why can't userspace library developers and distros have the same level of quality control that the kernel has?

Some unreliable predictions for 2015

Posted Jan 15, 2015 9:28 UTC (Thu) by Wol (subscriber, #4433)

> why can't userspace library developers and distros have the same level of quality control that the kernel has?

Maybe because too much userspace software is written by Computer Science guys, and the kernel is run by an irascible engineer?

There is FAR too much reliance on theory in the computer space in general, and Linux (the OS, not the kernel) is no exception. Indeed, I would go as far as to say that the database realm in particular has been seriously harmed by this ... :-)

There are far too few engineers out there - people who say "I want it to work in practice, not in theory".

Cheers,
Wol

Some unreliable predictions for 2015

Posted Feb 2, 2015 20:34 UTC (Mon) by nix (subscriber, #2304)

> There is FAR too much reliance on theory in the computer space in general, and linux (the OS, not kernel) is no exception.

This may be true in the database realm you're single-mindedly focused on (and I suspect in that respect it is only true with respect to one single theory which you happen to dislike and which, to be honest, RDBMSes implement about as closely as they implement flying to Mars), but it's very far from true everywhere else. GCC gained hugely from its switch to a representation that allowed it to actually use algorithms from the published research. The developers of most things other than compilers and Mesa aren't looking at research of any kind. In many cases, there is no research of any kind to look at.

Some unreliable predictions for 2015

Posted Jan 15, 2015 17:50 UTC (Thu) by HIGHGuY (subscriber, #62277)

Actually, you haven't made any point that I think is not technically solvable. Let's say we build this system:
- app/library developers do as they've always done: write software
- packagers do as they've always done, except that their output is a container holding the app and any necessary libraries. They can even choose to build a small number of different containers, each with slightly different options.
- there's software available to aid in creating new containers and updating/patching existing ones. Much as Docker lets you modify part of an existing container and call it a new one, you can apply a patched (yet backwards-compatible) library or application in a container and ship it as an update to the old one.
- the few "normal-use" distros that are left (sorry Gentoo, you're not one ;) ) then pick from the existing packages and compose according to their wishes. Fedora would likely be a spawning ground for the latest versions of all packages, while Red Hat might pick some older (but well-maintained) package with all the patching it has seen. This also means that Red Hat could reuse packages that originally spawned in Fedora or elsewhere.
- those that care enough can still build and publish a new container for a package with whatever options they like.

In this scheme, a package with a particular set of options gets built just once. Users and distros get to pick and match as much as they like. Distros can reuse work done by other distros and users, and differentiate only where they need to. Much of the redundant work is gone.
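The build-once/reuse-everywhere property described above can be sketched as a content-addressed cache keyed on (name, version, option set). This is only an illustration with invented package names and versions, not a real build system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Build:
    """Identity of one container build: name, version, and option set."""
    name: str
    version: str
    options: frozenset

_pool = {}          # shared pool of already-built containers
build_count = 0     # how many expensive builds actually ran

def get_build(name, version, options=()):
    """Return a cached container build, building only on first request."""
    global build_count
    key = Build(name, version, frozenset(options))
    if key not in _pool:
        build_count += 1           # pretend the expensive build runs here
        _pool[key] = key
    return _pool[key]

# Fedora composes from the latest versions; Red Hat picks an older,
# patched browser -- but both share the identical libssl build.
fedora = [get_build("firefox", "35.0"), get_build("libssl", "1.0.1k")]
rhel   = [get_build("firefox", "31.4-esr"), get_build("libssl", "1.0.1k")]

print(build_count)  # 3, not 4: the shared libssl build was reused
```

The design choice here is that a build's identity is purely its inputs, so two distros requesting the same inputs get the same artifact for free; differentiation costs a new build only where the inputs actually differ.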

Some unreliable predictions for 2015

Posted Jan 15, 2015 18:32 UTC (Thu) by dlang (guest, #313)

The problem has never been that it's not technically solvable.

The problem is that the software (app and library) authors don't do what everyone thinks they should (which _is_ technically impossible, because people have conflicting opinions about what they should do).

Let's talk about older, but well maintained versions of packages.

Who is doing that maintenance? In many cases, software developers only really support the current version of an application; they may support one or two versions back, but anything more than that is really unusual.

It's usually the distro packagers/maintainers who do much of the work of maintaining the older versions they ship. And the maintenance of old versions has the same 'include all changes' vs. 'only include what's needed' (with the problem of defining what's needed) issue that distros face in choosing which versions to ship in the first place.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds