
Some unreliable predictions for 2015

Posted Jan 14, 2015 19:26 UTC (Wed) by dlang (guest, #313)
In reply to: Some unreliable predictions for 2015 by dlang
Parent article: Some unreliable predictions for 2015

Putting it another way: people can't even agree to use only one text editor, so what makes you think they will agree to configure that editor only one way?

Gnome keeps trying this "this is the only way you should work" approach, and every time it pushes, it loses lots of users. Why do you keep thinking that things need to be so standardized?



Some unreliable predictions for 2015

Posted Jan 14, 2015 21:28 UTC (Wed) by raven667 (subscriber, #5198) [Link] (25 responses)

> people can't even decide to use only one text editor,

That's kind of moving the goal posts isn't it? Weren't we talking about having a better defined standard ABI of system libraries that applications could depend on, like a much more comprehensive version of LSB, rather than each distro and each version of that distro effectively being it's own unique snowflake ABI that has to be specifically targeted by developers because of the lack of standards across distros? No body cares about what applications you use, but there is some care about file formats and a large amount of concern about libraries and ABI.

Standardization has had some success in network protocols like IP or kernel interfaces like POSIX or file formats like JPEG, why could that success not continue forward with an expanded LSB? Right now the proposals on the table are to give up on standardizing anything and just making it as easy as possible to package up whole distros for containers to rely on. I guess it really is that hard to define a userspace ABI that would be useful, or we are at a Nash Equilibrium where no one can do better on their own to make the large global change to kickstart the process to define standards.

Some unreliable predictions for 2015

Posted Jan 14, 2015 21:48 UTC (Wed) by dlang (guest, #313) [Link] (18 responses)

Standardization has had some success in network protocols like IP, kernel interfaces like POSIX, and file formats like JPEG, so why couldn't that success continue forward with an expanded LSB? Right now the proposals on the table are to give up on standardizing anything and just make it as easy as possible to package up whole distros for containers to rely on. I guess it really is that hard to define a userspace ABI that would be useful, or we are at a Nash equilibrium where no one can do better on their own by making the large global change needed to kickstart the process of defining standards.

> That's kind of moving the goalposts, isn't it? Weren't we talking about having a better-defined standard ABI of system libraries that applications could depend on

I didn't think that was what we were talking about.

We were talking about distro packaging and why the same package of a given application isn't suitable for all distros (the reason being that distros opt for different options when they compile the application).

As far as each distro having a different ABI for applications goes, the only reason some aren't a subset of others (assuming they have the same libraries installed) is that the library authors don't maintain backwards compatibility. There's nothing a distro can do to solve this problem except for all of them to ship exactly the same version and never upgrade it.

And since some distros want the latest, up-to-the-minute version of that library, while other distros want to use a version that's been tested more, you aren't going to have the distros all ship the same version, even for distro releases that happen at the same time (and if one distro releases in April, and another in June, how would they decide which version of the library to run?)

Some unreliable predictions for 2015

Posted Jan 14, 2015 22:35 UTC (Wed) by raven667 (subscriber, #5198) [Link] (6 responses)

> I didn't think that is what we were talking about.

That confusion may be my fault as well, reading back in the thread. You can blame me, it scales well 8-)

> the same package of a given application isn't suitable for all distros
> they opt for different options when they compile

I'd be interested in taking a serious look at what kinds of options are more likely to change, and why, to see if there are any broad categories which could be standardized if some other underlying problem (like ABI instability) were sufficiently resolved. My gut feeling (not data) is that the vast majority of the differences are not questions of functionality but of integrating with the base OS.

Of course distros like Gentoo will continue to stick around for those who want to easily build whatever they want however they want, but those distros aren't leading the pack or defining industry standards now and I don't expect that to change. From my Ameri-centric view this would seem to require RHEL/Fedora, Debian/Ubuntu and (Open)SuSE to get together and homogenize as much as possible, so that upstream developers effectively have a single target (Linux-ABI-2020) ecosystem and could distribute binaries along with code for the default version of the application.

I guess I just don't care about the technical differences between these distros; it all seems like a wash to me, pointless differentiation for marketing purposes. It's not too much to ask developers to package once, if the process is straightforward enough.

Some unreliable predictions for 2015

Posted Jan 14, 2015 23:51 UTC (Wed) by dlang (guest, #313) [Link] (5 responses)

RHEL wants to run extremely well-tested versions of software, even if they are old.

Fedora wants to run the latest version of software, even if it isn't well tested.

How are these two going to pick the same version?

There are other differences between distros: do you compile an application to depend on Gnome, KDE, etc.? Do you compile it to put its data in SQLite, MySQL or PostgreSQL?

Some distros won't compile some options in because those options provide support for (or depend on) proprietary software.

Is an option useful, or is it bloat? Different distros will decide differently for the same option.

Gentoo says "whatever the admin wants" for all of these things; other distros pick a set of options, or a few sets of options, and make packages based on that.

As an example, Ubuntu has multiple versions of the gnuplot package: one that depends on X (and requires all the X libraries to be installed), one that requires Qt, and one that requires neither.
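
For the curious, the split looks roughly like this on a Debian/Ubuntu system (descriptions paraphrased from memory; the exact set of packages varies by release):

$ apt-cache search --names-only '^gnuplot'
gnuplot - command-line driven interactive plotting program (metapackage)
gnuplot-nox - version built without X11 or Qt display support
gnuplot-qt - version with the Qt-based display terminal
gnuplot-x11 - version with the X11 display terminal

Same upstream source, same version, three different dependency footprints.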

Some unreliable predictions for 2015

Posted Jan 15, 2015 4:39 UTC (Thu) by raven667 (subscriber, #5198) [Link] (2 responses)

All interesting points, to be sure, but I'd be more interested in imagining what would need to happen to make this work. A Linux Foundation standard ABI would look a lot like an enterprise distro, maybe shipping new versions of leaf software (like Firefox) but never breaking core software, or like a proprietary system such as Mac OS X. Right now every distro claims to be suitable for end-user deployment and to be a target for third-party development; that would be true if there were comprehensive compatibility standards, or only one dominant vendor to support, but there are not, so every distro's advertising is false on this point.

These deployment and compatibility problems have been solved on the proprietary platforms, some of which are even based on Linux, and the Linux kernel team provides the same or better ABI compatibility than many proprietary systems offer. Why can't userspace library developers and distros have the same level of quality control that the kernel has?

Some unreliable predictions for 2015

Posted Jan 15, 2015 9:28 UTC (Thu) by Wol (subscriber, #4433) [Link] (1 responses)

> why can't userspace library developers and distros have the same level of quality control that the kernel has?

Maybe because too much userspace software is written by Computer Science guys, and the kernel is run by an irascible engineer?

There is FAR too much reliance on theory in the computer space in general, and linux (the OS, not kernel) is no exception. Indeed, I would go as far as to say that the database realm in particular has been seriously harmed by this ... :-)

There are far too few engineers out there - people who say "I want it to work in practice, not in theory".

Cheers,
Wol

Some unreliable predictions for 2015

Posted Feb 2, 2015 20:34 UTC (Mon) by nix (subscriber, #2304) [Link]

> There is FAR too much reliance on theory in the computer space in general, and linux (the OS, not kernel) is no exception.

This may be true in the database realm you're single-mindedly focussed on (and I suspect in that respect it is only true with respect to one single theory which you happen to dislike and which, to be honest, RDBMSes implement about as closely as they implement flying to Mars), but it's very far from true everywhere else. GCC gained hugely from its switch to a representation that allowed it to actually use algorithms from the published research. The developers of most things other than compilers and Mesa aren't looking at research of any kind. In many cases, there is no research of any kind to look at.

Some unreliable predictions for 2015

Posted Jan 15, 2015 17:50 UTC (Thu) by HIGHGuY (subscriber, #62277) [Link] (1 responses)

Actually, you haven't made any point that I think is not technically solvable. Let's say we build this system:
- app/lib-developers do as they've always done: write software
- packagers do as they've always done, with the exception that their output is a container that contains the app and any necessary libraries. They can even choose to build a small number of different containers, each with slightly different options.
- There's S/W available to aid in creating new and updating/patching existing containers. Much like Docker allows you to modify part of an existing container and call it a new one, you can apply a patched (yet backwards compatible) library or application in a container and ship it as an update to the old one.
- the few "normal-use" distros that are left (i.e. sorry Gentoo, you're not it ;) ) then pick from the existing packages and compose according to their wishes. Fedora would likely be a spawning ground for the latest versions of all packages, while Red Hat might pick some older (but well-maintained) package with all the patching that it has seen. This also means that Red Hat could reuse packages that originally spawned in Fedora or elsewhere.
- Those that care enough can still build and publish a new container for a package with whatever options they like.

In this scheme, a package with a particular set of options gets built just once. Users and distros get to pick and match as much as they like. Distros can reuse work done by other distros/users and differentiate only where they need to. Much of the redundant work is gone.
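
A rough sketch of what the packager's side of that could look like with today's Docker tooling (every image, file and package name below is made up for illustration):

$ cat Dockerfile
# shared runtime layer chosen by (or shared between) distros - hypothetical image
FROM base-runtime:2015.1
# the application binary, built once by its packager
COPY myapp /usr/bin/myapp
# bundle only the libraries the base image does not already provide
COPY libfoo.so.3.2 /usr/lib/
$ docker build -t myapp:1.0 .
$ docker run --rm myapp:1.0 myapp --version

A patched but ABI-compatible libfoo then becomes one more layer on top of the same image - roughly the "ship it as an update to the old one" step described above.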

Some unreliable predictions for 2015

Posted Jan 15, 2015 18:32 UTC (Thu) by dlang (guest, #313) [Link]

The problem has never been that it's not technically solvable.

The problem is that the software (app and library) authors don't do what everyone thinks they should (which _is_ technically impossible, because people have conflicting opinions about what they should do)

Let's talk about older, but well maintained versions of packages.

Who is doing that maintenance? In many cases, software developers only really support the current version of an application; they may support one or two versions back, but any more than that is really unusual.

It's usually the distro packagers/maintainers that do a lot of the work of maintaining the older versions that they ship. And the maintenance of the old versions has the same 'include all changes' vs 'only include what's needed (with the problem of defining what's needed)' issue that the distros face in deciding what versions to ship in the first place.

Some unreliable predictions for 2015

Posted Jan 16, 2015 1:43 UTC (Fri) by vonbrand (subscriber, #4458) [Link]

Add that distributions (or users) select different packages for the same functionality: different web servers, C/C++ compilers, editors, document/image viewers, ...

Some unreliable predictions for 2015

Posted Jan 16, 2015 11:51 UTC (Fri) by hitmark (guest, #34609) [Link] (9 responses)

Much of the problem goes away if the package managers could just tolerate having multiple versions of the same lib installed. Versioned sonames exist for a reason...
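
For reference, this is roughly how a versioned soname gets baked into a library at build time; libhello is a made-up example, the toolchain invocations are standard GNU ones:

$ gcc -fPIC -shared -Wl,-soname,libhello.so.1 -o libhello.so.1.0 hello.c
$ objdump -p libhello.so.1.0 | grep SONAME
  SONAME               libhello.so.1
$ ln -s libhello.so.1.0 libhello.so.1   # the name the dynamic loader resolves

Programs linked against it record only libhello.so.1 as a dependency, so 1.0 and a compatible 1.1 can be swapped freely, while an incompatible libhello.so.2 can sit alongside without conflict - provided the packaging allows both to be installed.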

Some unreliable predictions for 2015

Posted Jan 16, 2015 12:41 UTC (Fri) by anselm (subscriber, #2796) [Link] (7 responses)

I don't see why that would be a problem. On my Debian system I have multiple versions of, say, libreadline, libprocps and libtcl installed at the same time, in each case from separate packages, so the support seems to be there already.

Some unreliable predictions for 2015

Posted Jan 16, 2015 15:44 UTC (Fri) by cortana (subscriber, #24596) [Link] (6 responses)

This doesn't do what you think it does.

$ ls -l /lib/x86_64-linux-gnu/libreadline.so.{5,6}
lrwxrwxrwx 1 root root 18 Apr 27  2013 /lib/x86_64-linux-gnu/libreadline.so.5 -> libreadline.so.5.2
lrwxrwxrwx 1 root root 18 Jan 13 03:25 /lib/x86_64-linux-gnu/libreadline.so.6 -> libreadline.so.6.3

Looks promising at first glance, but what if my application wants libreadline 5.1?

The SONAME is not strongly tied to the version of the library, but to the compatibility level.
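
Concretely, both files from the listing above advertise only the major compatibility number in their SONAME; a check on such a system looks something like this (output trimmed):

$ objdump -p /lib/x86_64-linux-gnu/libreadline.so.5.2 | grep SONAME
  SONAME               libreadline.so.5
$ objdump -p /lib/x86_64-linux-gnu/libreadline.so.6.3 | grep SONAME
  SONAME               libreadline.so.6

An application built against 5.1 therefore just asks the loader for libreadline.so.5 and gets 5.2; the soname mechanism offers no way to request 5.1 specifically.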

Some unreliable predictions for 2015

Posted Jan 16, 2015 15:56 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (5 responses)

Then someone goofed. Either you're depending on unspecified behavior (which should be un-exported if possible), or a soname bump was missed upstream between 5.1 and 5.2.

Some unreliable predictions for 2015

Posted Jan 16, 2015 16:19 UTC (Fri) by raven667 (subscriber, #5198) [Link] (2 responses)

I think it is true that there are a ton of library ABI breaks where the soname isn't changed, because many library authors don't know when the changes they make break it; they just recompile, which hides a lot of issues. Distros like Gentoo and tools like OBS rebuild applications when a library dependency changes even if the soname is the same, for exactly this reason; they wouldn't bother to do this if the soname were a reliable indicator of compatibility.
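
As an aside, tooling such as libabigail's abidiff aims to catch exactly these silent breaks by comparing two builds of a library directly (library names here are hypothetical):

$ abidiff libhello.so.1.0 libhello.so.1.1   # non-zero exit if the ABIs differ

It reports removed or changed functions and variables even when both builds carry the same soname, which is the kind of check a distro could gate uploads on.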

Some unreliable predictions for 2015

Posted Jan 16, 2015 17:45 UTC (Fri) by peter-b (guest, #66996) [Link] (1 responses)

So we have a perfectly adequate system that some people don't use properly, in fact. This is news?

Some unreliable predictions for 2015

Posted Jan 16, 2015 18:55 UTC (Fri) by cortana (subscriber, #24596) [Link]

No, we have an inadequate system that is only useful as long as upstream developers, downstream developers and distributors never make mistakes.

Some unreliable predictions for 2015

Posted Jan 16, 2015 18:56 UTC (Fri) by cortana (subscriber, #24596) [Link] (1 responses)

BTW, what happens if I actually need libreadline.so.4?

Some unreliable predictions for 2015

Posted Jan 16, 2015 21:22 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

Either bundle or ask for that version to be packaged. Or provide it in your repo (since you're not likely to have such a package shipped by Debian itself without them handing back patches to update it).
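
For the "bundle" route, a minimal sketch (all paths and version numbers are illustrative) is to ship the old runtime next to the application and point the loader at it:

$ mkdir -p /opt/myapp/lib
$ cp libreadline.so.4.3 /opt/myapp/lib/
$ ln -s libreadline.so.4.3 /opt/myapp/lib/libreadline.so.4
$ patchelf --set-rpath '$ORIGIN/../lib' /opt/myapp/bin/myapp
$ ldd /opt/myapp/bin/myapp | grep readline
        libreadline.so.4 => /opt/myapp/lib/libreadline.so.4 (0x...)

That keeps the rest of the system on its packaged libreadline, while only this one application drags the old version around - with all the security-update burden that implies.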

Some unreliable predictions for 2015

Posted Jan 16, 2015 13:55 UTC (Fri) by cesarb (subscriber, #6266) [Link]

> Much of the problem goes away if just the package managers could tolerate to have multiple versions of the same lib installed. versioned sonames exist for a reason...

It's not that simple. Suppose a program is linked against library A which in turn is linked against libpng2, and that program is also linked against library B which in turn is linked against libpng3.

Now imagine the program gets from library A a pointer to a libpng structure, which it then passes to library B.
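
You can often spot that situation with ldd; here is a hypothetical binary pulling in two libpng generations through its dependencies (the sonames are real ones, the binary is made up):

$ ldd ./myapp | grep libpng
        libpng12.so.0 => /usr/lib/x86_64-linux-gnu/libpng12.so.0 (0x...)
        libpng16.so.16 => /usr/lib/x86_64-linux-gnu/libpng16.so.16 (0x...)

The dynamic linker happily loads both, since the sonames differ; the trouble only starts when a png_struct allocated by one copy is handed to code expecting the other copy's layout, exactly as described above.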

Some unreliable predictions for 2015

Posted Jan 15, 2015 20:22 UTC (Thu) by flussence (guest, #85566) [Link] (5 responses)

> Standardization has had some success in network protocols like IP or kernel interfaces like POSIX or file formats like JPEG

JPEG is a bad example to use there... everyone dropped the official reference implementation after its maintainer went off the rails and started changing the format in backwards-incompatible ways: http://www.libjpeg-turbo.org/About/Jpeg-9

Some unreliable predictions for 2015

Posted Jan 15, 2015 21:40 UTC (Thu) by raven667 (subscriber, #5198) [Link]

The point is that there is more than one interoperable implementation, not that everyone has forked the reference implementation to get interoperability. A JPEG made on Ubuntu 12.04 will work on Ubuntu 10.10, Ubuntu 14.10, Fedora 21 and Fedora 15 without "recompiling" or converting, whereas software built on one can't run on the others (while using OS-provided shared libraries), because they are too different in the tiny details even though they are broadly running the same software.

JPEG / JFIF

Posted Jan 16, 2015 11:24 UTC (Fri) by tialaramex (subscriber, #21167) [Link] (3 responses)

Well, the deal is a bit more complicated than you've suggested. You've focused on the IJG's library, but there's much more.

JPEG per se isn't a file format. The committee weren't focused on storing the data from their compression algorithm as files; they were thinking you'd transmit it somewhere and it'd get decompressed and then used. So the actual international standard is completely silent about files on disk.

Early on, people who did think we should store the data in files wrote JPEG-compressed data to the pseudo-standard TIFF. But TIFF is a complete mess, conceived as the minimal way to store output from a drum or flatbed scanner on a computer, and thus permitting absolutely everything but making nothing easy - and its attempt to handle JPEG led to incompatible (literally, as in "I sent you the file" "No, my program says it's corrupt" "OK, try this" "Did you mean for it to be black and white?") implementations. There was then a series of Adobe "technical notes" for TIFF that tried to fix things, several times attempting a fresh start with little success.

JFIF is the "real" name for the file format we all use today, and it's basically where the IJG comes into the picture. Instead of TIFF's mess of irrelevant or nonsensical parameters you've got the exact parameters needed for the codec being used, and then you've got all this raw data to pass into the decoder. And there's this handy free library of code to read and write the files, so everybody just uses that.

So initially the IJG are great unifiers - instead of three or four incompatible attempts to store JPEG data in a TIFF you get these smaller and obviously non-TIFF JPG files and either the recipient can read them or they can't, no confusion as to what they mean. But then they proved (and libpng followed them for a while) incapable of grasping what an ABI is.

JPEG / JFIF

Posted Jan 16, 2015 12:35 UTC (Fri) by peter-b (guest, #66996) [Link] (2 responses)

In TIFF's defence, I used TIFF files and libtiff extensively during my PhD, since they're the only sane way of storing and communicating remote sensing datasets (complex 32-bit fixed point pixel values on 6 image planes? no problem).

I didn't experience any problems that weren't due to my own incompetence.

JPEG / JFIF

Posted Jan 16, 2015 13:27 UTC (Fri) by rleigh (guest, #14622) [Link]

TIFF is certainly complex, but that's made up for by its unmatched power and sophistication. I've recently been working on TIFF reading/writing of microscopy imaging data, and testing all the different combinations of PhotometricInterpretation with and without LUTs, pixel type and depth (including complex floating point types), orientation, large numbers of channels, all sorts of combinations of tile and strip sizes, bigtiff, etc. It's quite surprising how many programs and graphics libraries get it wrong. The worst I found was the Microsoft TIFF support on Windows, e.g. for thumbnailing and viewing, which was incorrect for most cases, and apparently it's been much improved! Support on FreeBSD and Linux with free software viewers was better, but still not perfect for many cases.

I think this is primarily due to most authors staying well inside the 8-bit grey/RGB "comfort zone" (sometimes this extends to 12/16-bit or maybe float) and not testing with more sophisticated data.

Most of that is simply the author screwing up. For example, when dealing with strips and tiles, it's amazing how many people mess up the image data by failing to deal with the strip/tile overlapping the image bounds when it's not a multiple of the image size, sometimes for particular pixel types, e.g. 1- or 2-bit data. Just a simple miscalculation or failure to check particular TIFF tags.

I'm not sure what the solution is here. A collection of images which exercise usage of all the baseline tags with all the special cases (and their interactions) would be a good start. I currently generate a test set of around 4000 64×64 TIFF images for the set of tags I care about, but it's still far from comprehensive. I know it works for the most part, but even then it's going to fail for tags I don't currently code for.

JPEG / JFIF

Posted Jan 17, 2015 10:32 UTC (Sat) by tialaramex (subscriber, #21167) [Link]

I am, since the name probably doesn't ring a bell, responsible for GIMP's TIFF loader, and in another previous life as a PhD research student I read and wrote a great many complex tiled TIFFs created by art historians studying/preserving various great European works.

So, I'm not saying it's garbage because I don't understand how to use it, I'm saying it's garbage because I do understand and I don't sympathise.

