Some unreliable predictions for 2015
Posted Jan 14, 2015 5:17 UTC (Wed)
by dlang (guest, #313)
In reply to: Some unreliable predictions for 2015 by raven667
Parent article: Some unreliable predictions for 2015
There was a reason for people to code the different options in the first place. If they thought it was important enough to write, who are you to decide that "there must be only one" and that it can't be used?
"there is only one way to do things" can be argued to be right for language syntax, but outside of that it's very much the wrong approach to take.
Posted Jan 14, 2015 16:44 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (28 responses)
> Everyone wants the variety to be reduced, they just would like everyone else to change to match what they do ;-)
Standardization is all about one set of options winning because it works better for the widest variety of cases, so that others can then trust the software. That doesn't mean someone can't create non-standard options and use the software in non-standard ways, but then it is clear that what they are doing is outside of the ecosystem. Right now, if you want to grow, you have to reduce the variety which is expected to be supported. That is what everyone wants, and what distro vendors do currently, but because none of them has "won" sufficiently outside of individual vertical markets, there is no "default" ABI for developers to target, which slows progress. There are reasons the most successful Linux desktop has been Android, which is a standard controlled by a single vendor, and not the cacophony of existing X11 software.
Posted Jan 14, 2015 19:14 UTC (Wed)
by dlang (guest, #313)
[Link] (27 responses)
But don't try to make me use your standardized system that doesn't do what I want it to do.
If you really believed in standardization, you would use only Microsoft products and prohibit anything else from existing.
Posted Jan 14, 2015 19:26 UTC (Wed)
by dlang (guest, #313)
[Link] (26 responses)
Gnome keeps trying this "this is the only way you should work" approach, and every time it pushes, it loses lots of users. Why do you keep thinking that things need to be so standardized?
Posted Jan 14, 2015 21:28 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (25 responses)
That's kind of moving the goalposts, isn't it? Weren't we talking about having a better-defined standard ABI of system libraries that applications could depend on, like a much more comprehensive version of the LSB, rather than each distro, and each version of that distro, effectively being its own unique snowflake ABI that has to be specifically targeted by developers because of the lack of standards across distros? Nobody cares about what applications you use, but there is some care about file formats and a large amount of concern about libraries and ABI.
Standardization has had some success in network protocols like IP, kernel interfaces like POSIX, and file formats like JPEG; why could that success not continue forward with an expanded LSB? Right now the proposals on the table are to give up on standardizing anything and just make it as easy as possible to package up whole distros for containers to rely on. I guess it really is that hard to define a userspace ABI that would be useful, or we are at a Nash equilibrium where no one can do better on their own to make the large global change that would kickstart the process of defining standards.
Posted Jan 14, 2015 21:48 UTC (Wed)
by dlang (guest, #313)
[Link] (18 responses)
I didn't think that is what we were talking about.
We were talking about distro packaging and why the same package of a given application isn't suitable for all distros (the reason being that they opt for different options when they compile the application).
As far as each distro having a different ABI for applications, the only reason that some aren't a subset of others (assuming they have the same libraries installed) is that the library authors don't maintain backwards compatibility. There's nothing that a distro can do to solve this problem except for all of them to ship exactly the same version and never upgrade it.
And since some distros want the latest, up-to-the-minute version of that library, while other distros want to use a version that's been tested more, you aren't going to have the distros all ship the same version, even for distro releases that happen at the same time (and if one distro releases in April, and another in June, how would they decide which version of the library to run?)
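For what it's worth, the toolchain does give library authors a way to keep old binaries working even as the API moves on: GNU symbol versioning, which is how glibc manages it. A minimal sketch in C (the function and version-node names here are invented, and building the .so also needs a matching linker version script):

/* two implementations of "parse" live in the same shared library;
 * binaries linked against the old one keep getting it forever */
#include <stddef.h>

int parse_v1(const char *s)              /* historical behaviour */
{
    return s ? 1 : 0;
}

int parse_v2(const char *s, size_t len)  /* current behaviour */
{
    return (s && len > 0) ? 1 : 0;
}

/* bind each implementation to a version node; "@@" marks the default
 * that newly linked programs will pick up */
__asm__(".symver parse_v1, parse@LIBFOO_1.0");
__asm__(".symver parse_v2, parse@@LIBFOO_2.0");

Few libraries outside of glibc actually take on this maintenance burden, which is rather the point.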
Posted Jan 14, 2015 22:35 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (6 responses)
That confusion may be my fault as well, reading back in the thread. You can blame me, it scales well 8-)
> the same package of a given application isn't suitable for all distros
> they opt for different options when they compile
I'd be interested in taking a serious look at what kinds of options are more likely to change and why to see if there are any broad categories which could be standardized if some other underlying problem (like ABI instability) was sufficiently resolved. My gut feeling (not data) is that the vast majority of the differences are not questions of functionality but of integrating with the base OS. Of course distros like Gentoo will continue to stick around for those who want to easily build whatever they want however they want but those distros aren't leading the pack or defining industry standards now and I don't expect that to change. From my Ameri-centric view this would seem to require RHEL/Fedora, Debian/Ubuntu and (Open)SuSE to get together and homogenize as much as possible so that upstream developers effectively have a single target (Linux-ABI-2020) ecosystem and could distribute binaries along with code for the default version of the application. I guess I just don't care about the technical differences between these distros, it all seems like a wash to me, pointless differentiation for marketing purposes. It's not too much to ask developers to package once, if the process is straightforward enough.
Posted Jan 14, 2015 23:51 UTC (Wed)
by dlang (guest, #313)
[Link] (5 responses)
Fedora wants to run the latest versions of software, even if they aren't well tested; other distros want versions that have had time to prove themselves. How are these two camps going to pick the same version?
Other differences between distros: do you compile it to depend on Gnome, KDE, etc.? Do you compile it to put its data in SQLite, MySQL or PostgreSQL?
Some distros won't compile some options in because they provide support for (or depend on) proprietary software.
Is an option useful, or is it bloat? Different distros will decide differently for the same option.
Gentoo says "whatever the admin wants" for all of these things; other distros pick a set of options, or a few sets of options, and make packages based on this.
As an example, Ubuntu has multiple versions of the gnuplot package: one that depends on X (and requires all the X libraries to be installed), one that requires Qt, and one that requires neither.
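To make that concrete, here is a hypothetical sketch (the macro and function names are invented) of how one compile-time switch drags in, or drops, an entire dependency tree, which is why one binary can't serve every distro's choices:

/* PLOT_UI_X11 would be set (or not) by the distro's ./configure flags */
#ifdef PLOT_UI_X11
#include <X11/Xlib.h>   /* this build now depends on all the X client libs */
#endif

void show_plot(const char *data)
{
#ifdef PLOT_UI_X11
    /* open a Display and draw the plot into an X11 window */
#else
    /* terminal-only build: render the plot as ASCII art instead */
#endif
    (void)data;
}

Each distinct combination of such flags is effectively a different binary package with a different dependency list.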
Posted Jan 15, 2015 4:39 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (2 responses)
These deployment and compatibility problems have been solved on the proprietary platforms, some of which are even based on Linux, and the Linux kernel team provides the same or better ABI compatibility than many proprietary systems offer. Why can't userspace library developers and distros have the same level of quality control that the kernel has?
Posted Jan 15, 2015 9:28 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (1 responses)
Maybe because too much userspace software is written by Computer Science guys, and the kernel is run by an irascible engineer?
There is FAR too much reliance on theory in the computer space in general, and linux (the OS, not kernel) is no exception. Indeed, I would go as far as to say that the database realm in particular has been seriously harmed by this ... :-)
There are far too few engineers out there - people who say "I want it to work in practice, not in theory".
Cheers,
Wol
Posted Feb 2, 2015 20:34 UTC (Mon)
by nix (subscriber, #2304)
[Link]
> There is FAR too much reliance on theory in the computer space in general, and linux (the OS, not kernel) is no exception.

This may be true in the database realm you're singlemindedly focussed on (and I suspect in that respect it is only true with respect to one single theory which you happen to dislike and which, to be honest, RDBMSs implement about as closely as they implement flying to Mars), but it's very far from true everywhere else. GCC gained hugely from its switch to a representation that allowed it to actually use algorithms from the published research. The developers of most things other than compilers and Mesa aren't looking at research of any kind. In many cases, there is no research of any kind to look at.
Posted Jan 15, 2015 17:50 UTC (Thu)
by HIGHGuY (subscriber, #62277)
[Link] (1 responses)
- app/lib-developers do as they've always done: write software
- packagers do as they've always done, with the exception that their output is a container that contains the app and any necessary libraries. They can even choose to build a small number of different containers, each with slightly different options.
- There's software available to aid in creating new containers and updating/patching existing ones. Much like Docker allows you to modify part of an existing container and call it a new one, you can apply a patched (yet backwards-compatible) library or application in a container and ship it as an update to the old one.
- the few "normal-use" (i.e., sorry Gentoo, you're not it ;)) distros that are left then pick from the existing packages and compose according to wishes. Fedora would likely be a spawning ground for the latest versions of all packages, while Red Hat might pick some older (but well-maintained) package with all the patching that it has seen. This also means that Red Hat could reuse packages that originally spawned in Fedora or elsewhere.
- Those that care enough can still build and publish a new container for a package with whatever options they like.
In this scheme, a package with a particular set of options gets built just once. Users and distros get to mix and match as much as they like. Distros can reuse work done by other distros/users and differentiate only where they need to. Much of the redundant work is gone.
Posted Jan 15, 2015 18:32 UTC (Thu)
by dlang (guest, #313)
[Link]
The problem is that the software (app and library) authors don't do what everyone thinks they should (which _is_ technically impossible, because people have conflicting opinions about what they should do)
Let's talk about older, but well maintained versions of packages.
Who is doing that maintenance? In many cases, software developers only really support the current version of an application, they may support one or two versions back, but any more than that is really unusual.
It's usually the distro packagers/maintainers who do a lot of the work of maintaining the older versions that they ship. And the maintenance of the old versions has the same 'include all changes' vs. 'only include what's needed (with the problem of defining what's needed)' issue that the distros have in choosing which versions they ship in the first place.
Posted Jan 16, 2015 1:43 UTC (Fri)
by vonbrand (subscriber, #4458)
[Link]
Add that distributions (or users) select different packages for the same functionality: different web servers, C/C++ compilers, editors, document/image viewers, ...
Posted Jan 16, 2015 11:51 UTC (Fri)
by hitmark (guest, #34609)
[Link] (9 responses)
Posted Jan 16, 2015 12:41 UTC (Fri)
by anselm (subscriber, #2796)
[Link] (7 responses)
I don't see why that would be a problem. On my Debian system I have multiple versions of, say, libreadline, libprocps and libtcl installed at the same time, in each case from separate packages, so the support seems to be there already.
$ ls -l /lib/x86_64-linux-gnu/libreadline.so.{5,6}
lrwxrwxrwx 1 root root 18 Apr 27 2013 /lib/x86_64-linux-gnu/libreadline.so.5 -> libreadline.so.5.2
lrwxrwxrwx 1 root root 18 Jan 13 03:25 /lib/x86_64-linux-gnu/libreadline.so.6 -> libreadline.so.6.3
Posted Jan 16, 2015 15:44 UTC (Fri)
by cortana (subscriber, #24596)
[Link] (6 responses)
This doesn't do what you think it does.
It looks promising at first glance, but what if my application wants libreadline 5.1?
The SONAME is not strongly tied to the version of the library, but to the compatibility level.
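To illustrate the distinction, a minimal C sketch (build with -ldl; whether each call succeeds depends entirely on which packages are installed):

/* programs name the SONAME -- the compatibility level -- and the
 * dynamic linker maps it to whatever release is installed */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* satisfied by libreadline.so.6.2, 6.3, ...: same compat level */
    void *h6 = dlopen("libreadline.so.6", RTLD_NOW);
    printf("libreadline.so.6: %s\n", h6 ? "found" : dlerror());

    /* an app built against the 5.x level needs the .5 SONAME on disk;
     * no 6.x release can satisfy it, and no SONAME pins exactly 5.1 */
    void *h5 = dlopen("libreadline.so.5", RTLD_NOW);
    printf("libreadline.so.5: %s\n", h5 ? "found" : dlerror());
    return 0;
}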
Posted Jan 16, 2015 15:56 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link] (5 responses)
Posted Jan 16, 2015 16:19 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Posted Jan 16, 2015 17:45 UTC (Fri)
by peter-b (guest, #66996)
[Link] (1 responses)
Posted Jan 16, 2015 18:55 UTC (Fri)
by cortana (subscriber, #24596)
[Link]
Posted Jan 16, 2015 18:56 UTC (Fri)
by cortana (subscriber, #24596)
[Link] (1 responses)
Posted Jan 16, 2015 21:22 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
Posted Jan 16, 2015 13:55 UTC (Fri)
by cesarb (subscriber, #6266)
[Link]
It's not that simple. Suppose a program is linked against library A which in turn is linked against libpng2, and that program is also linked against library B which in turn is linked against libpng3.
Now imagine the program gets from library A a pointer to a libpng structure, which it then passes to library B.
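A hypothetical C sketch of why that pointer hand-off goes wrong (the struct layouts are invented stand-ins for libpng2's and libpng3's incompatible png_info, and the cast is itself undefined behaviour, which is the point):

#include <stdio.h>

/* the layout library A was compiled against */
struct png_info_v2 { unsigned width, height; };

/* the layout library B was compiled against: a field was inserted */
struct png_info_v3 { unsigned width, bit_depth, height; };

int main(void)
{
    /* "library A" allocates a v2 struct; the guard word stands in for
     * whatever unrelated memory happens to follow it */
    struct { struct png_info_v2 info; unsigned guard; } a = { { 640, 480 }, 0xdead };

    /* the program passes A's pointer to "library B", which reads the
     * same bytes through the v3 layout */
    struct png_info_v3 *b = (struct png_info_v3 *)&a;

    printf("A wrote height=%u, B reads height=%u\n",
           a.info.height, b->height);   /* 480 vs. 0xdead: silent corruption */
    return 0;
}

Each library is internally consistent; it's only the shared data structure crossing the boundary that breaks.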
Posted Jan 15, 2015 20:22 UTC (Thu)
by flussence (guest, #85566)
[Link] (5 responses)
JPEG is a bad example to use there... everyone dropped the official reference implementation after its maintainer went off the rails and started changing the format in backwards-incompatible ways: http://www.libjpeg-turbo.org/About/Jpeg-9
Posted Jan 15, 2015 21:40 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
Posted Jan 16, 2015 11:24 UTC (Fri)
by tialaramex (subscriber, #21167)
[Link] (3 responses)
JPEG per se isn't a file format. The committee weren't focused on storing the data from their compression algorithm as files; they were thinking you'd transmit it to somewhere, and it'd get decompressed and then used. So the actual international standard is completely silent about files on disk.
Early on people who did think we should store data in files wrote JPEG compressed data to the pseudo-standard TIFF. But TIFF is a complete mess, conceived as the minimal way to store output from a drum or flatbed scanner on a computer and thus permitting absolutely everything but making nothing easy - and its attempt to handle JPEG led to incompatible (literally, as in "I sent you the file" "No, my program says it's corrupt" "OK, try this" "Did you mean for it to be black and white?") implementations. There were then a series of Adobe "technical notes" for TIFF that try to fix things, several times attempting a fresh start with little success.
JFIF is the "real" name for the file format we all use today, and it's basically where the IJG comes into the picture. Instead of TIFF's mess of irrelevant or nonsensical parameters you've got the exact parameters needed for the codec being used, and then you've got all this raw data to pass into the decoder. And there's this handy free library of code to read and write the files, so everybody just uses that.
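A small C sketch of how simple that framing is (the signature bytes checked here are the documented JFIF ones; everything else is illustrative):

#include <stdio.h>
#include <string.h>

/* a JFIF file starts with the JPEG SOI marker followed by an APP0
 * segment carrying the "JFIF" identifier */
int looks_like_jfif(const char *path)
{
    unsigned char hdr[11];
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    size_t n = fread(hdr, 1, sizeof hdr, f);
    fclose(f);
    if (n != sizeof hdr)
        return 0;
    return hdr[0] == 0xFF && hdr[1] == 0xD8        /* SOI */
        && hdr[2] == 0xFF && hdr[3] == 0xE0        /* APP0 */
        && memcmp(hdr + 6, "JFIF", 5) == 0;        /* identifier + NUL */
}

int main(int argc, char **argv)
{
    if (argc > 1)
        printf("%s: %s\n", argv[1], looks_like_jfif(argv[1]) ? "JFIF" : "not JFIF");
    return 0;
}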
So initially the IJG are great unifiers - instead of three or four incompatible attempts to store JPEG data in a TIFF you get these smaller and obviously non-TIFF JPG files and either the recipient can read them or they can't, no confusion as to what they mean. But then they proved (and libpng followed them for a while) incapable of grasping what an ABI is.
Posted Jan 16, 2015 12:35 UTC (Fri)
by peter-b (guest, #66996)
[Link] (2 responses)
I didn't experience any problems that weren't due to my own incompetence.
Posted Jan 16, 2015 13:27 UTC (Fri)
by rleigh (guest, #14622)
[Link]
I think this is primarily due to most authors staying well inside the 8-bit grey/RGB "comfort zone". Sometimes this extends to 12/16-bit or maybe float, but they don't test with more sophisticated data.
Most of that is simply the author screwing up. For example, when dealing with strips and tiles, it's amazing how many people mess up the image data by failing to deal with a strip/tile overlapping the image bounds when the image size isn't a multiple of the strip/tile size, sometimes only for particular pixel types, e.g. 1- or 2-bit data. Just a simple miscalculation, or a failure to check particular TIFF tags.
I'm not sure what the solution is here. A collection of images which exercise usage of all the baseline tags with all the special cases (and their interactions) would be a good start. I currently generate a test set of around 4000 64×64 TIFF images for the set of tags I care about, but it's still far from comprehensive. I know it works for the most part, but even then it's going to fail for tags I don't currently code for.
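As a sketch of that specific mistake, using libtiff's tile API (error handling and the actual pixel copy are omitted; the clamping lines are the part people forget):

#include <tiffio.h>

/* read a tiled image; tiles at the right/bottom edges still occupy a
 * full tile-sized buffer, but only part of them is real image data */
void read_tiles(TIFF *tif)
{
    uint32 w, h, tw, th, x, y;
    TIFFGetField(tif, TIFFTAG_IMAGEWIDTH,  &w);
    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &h);
    TIFFGetField(tif, TIFFTAG_TILEWIDTH,   &tw);
    TIFFGetField(tif, TIFFTAG_TILELENGTH,  &th);

    tdata_t buf = _TIFFmalloc(TIFFTileSize(tif));
    for (y = 0; y < h; y += th) {
        for (x = 0; x < w; x += tw) {
            TIFFReadTile(tif, buf, x, y, 0, 0);
            /* clamp to the image bounds before copying out of buf */
            uint32 valid_w = (x + tw > w) ? w - x : tw;
            uint32 valid_h = (y + th > h) ? h - y : th;
            /* copy valid_w x valid_h pixels to the destination; copying
             * the full tw x th here is exactly the corruption above */
            (void)valid_w; (void)valid_h;
        }
    }
    _TIFFfree(buf);
}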
Posted Jan 17, 2015 10:32 UTC (Sat)
by tialaramex (subscriber, #21167)
[Link]
So, I'm not saying it's garbage because I don't understand how to use it, I'm saying it's garbage because I do understand and I don't sympathise.