SELF: Anatomy of an (alleged) failure
Posted Jun 23, 2010 20:10 UTC (Wed)
by cmccabe (guest, #60281)
Parent article: SELF: Anatomy of an (alleged) failure
The reason why Apple invented FAT binaries is because they were interested in maintaining extensive binary compatibility with their old systems. Linux has never had this policy. Binaries that worked great on Fedora Core 9 probably won't work on Fedora Core 12, or Ubuntu 9.04, or whatever.
> One might question the wisdom of using Hans Reiser as an example of the
> kernel development process gone wrong
This just might be the understatement of the day!
Posted Jun 23, 2010 20:13 UTC (Wed)
by jzb (editor, #7867)
[Link] (2 responses)
Posted Jun 24, 2010 16:27 UTC (Thu)
by fuhchee (guest, #40059)
[Link] (1 responses)
Posted Jun 24, 2010 19:22 UTC (Thu)
by jldugger (guest, #57576)
[Link]
Posted Jun 23, 2010 20:38 UTC (Wed)
by drag (guest, #31333)
[Link] (50 responses)
The point of it is to make things easier for users to deal with... forcing them to deal with UnionFS (especially when it's not part of the kernel and does not ever seem likely to be incorporated) and using layered file systems by default on every Linux install sounds like a huge PITA to deal with.
Having 'Fat' binaries is really the best solution for OSes that want to support multiple arches in the easiest and most user-friendly way possible (especially on x86-64, where it can run 32-bit and 64-bit code side by side).
It's not just a matter of supporting Adobe Flash or something like that; it's simply a superior technical solution at all levels from a user and system administration perspective.
> The reason why Apple invented FAT binaries is because they were interested in maintaining extensive binary compatibility with their old systems. Linux has never had this policy. Binaries that worked great on Fedora Core 9 probably won't work on Fedora Core 12, or Ubuntu 9.04, or whatever.
Actually Apple is very average when it comes to backwards compatibility. They certainly are no Microsoft. The point of fat binaries is just to make things easier for users and developers... which is exactly the entire point of having an operating system in the first place.
Some Linux kernel developers like to maintain that they support a stable ABI for userland and brag that software written for Linux in the 2.0 era will still work on 2.6. In fact it seems that maintaining the userspace ABI/API is a high priority for them. (Much higher than for typical userland developers, anyway. Application libraries are usually a bigger problem than anything in the kernel in terms of compatibility issues.)
Posted Jun 23, 2010 21:40 UTC (Wed)
by dlang (guest, #313)
[Link] (48 responses)
using a 64 bit kernel makes a huge difference in a system, but unless a single application uses more than 3G of ram it usually won't matter much to the app if it's 32 bit or 64 bit. there are some apps where it will matter, but those are special cases and probably not where a universal binary would be applicable.
Posted Jun 23, 2010 22:10 UTC (Wed)
by drag (guest, #31333)
[Link] (32 responses)
I do actually use a 64bit kernel with 32bit userland. With Fat binaries I would not have to give a shit one way or the other.
> but unless a single application uses more than 3G of ram it usually won't matter much to the app if it's 32 bit or 64 bit. there are some apps where it will matter, but those are special cases and probably not where a universal binary would be applicable.
Here are some issues:
* The fat binary solves the problems you run into with the transition process of moving to a 64-bit system. This makes it easier for users and Linux distribution developers to cover the multitude of corner cases. For example: installing 'Pure 64' versions of Linux for a period of time meant that you had to give up the ability to run OpenOffice.org. This is solved now, but it's certainly not an isolated issue.
* People who actually need to run 64-bit software for performance enhancements or memory requirements will have their applications 'just work' (completely regardless of whether they were 32-bit or 64-bit) with no need for complicated multi-lib setups, chroots, and other games that users have to play. They just install it and it will 'just work'.
* Currently, if you do not need 64-bit compatibility you will probably want to install only 32-bit binaries. However, if in the future you run into software that requires 64-bit compatibility, the status quo would require you to re-install the OS.
* Distributions would not have to supply multiple copies of the same software packages in order to support the arches they need to support.
* Application developers (both OSS and otherwise) can devote their time more efficiently to meeting the needs of their users and can treat 64-bit compatibility as an optional feature that they support when it's appropriate for them, rather than being forced to move to 64-bit as dictated by Linux OS design limitations.
Yeah, FAT binaries only really solve 'special case' issues with supporting multiple arches, but the number of special cases is actually high and diverse. When you examine the business market, where everybody uses custom in-house software, the special cases are even more numerous than the typical problems you run into with home users.
Sure, it's not absolutely required and there are lots of workarounds for each issue you run into. On a scale of 1-10 in terms of importance (where 10 is most important and 1 is least) it ranks about a 3 or a 4. But the point is that FAT binaries are simply a superior technical solution to what we have right now, would solve a lot of usability issues, and come from an application developer who has to deal with _real_world_ issues caused by the lack of fat binaries, working on software that is really desirable for a significant number of potential Linux users.
He would not have spent all this time and effort implementing FatELF if it did not solve a severe issue for him.
Posted Jun 23, 2010 22:52 UTC (Wed)
by cmccabe (guest, #60281)
[Link] (10 responses)
> * Currently, if you do not need 64-bit compatibility you will probably want to install only 32-bit binaries. However, if in the future you run into software that requires 64-bit compatibility, the status quo would require you to re-install the OS.
When you get a new computer, normally you reinstall the OS and copy over your /home directory. For all but a few highly technical users, this is the norm. Windows even has a special "feature" called Windows Genuine Advantage that forces you to reinstall the OS when the hardware has changed. You *cannot* use your previous install.
Anyway, running a Linux installer and then doing some apt-get only takes an hour or two.
> * Application developers (both OSS and otherwise) can devote their time more efficiently to meeting the needs of their users and can treat 64-bit compatibility as an optional feature that they support when it's appropriate for them, rather than being forced to move to 64-bit as dictated by Linux OS design limitations.
FATELF has nothing to do with whether software is 64-bit clean. If some doofus is assuming that sizeof(long) == 4, FATELF is not going to ride to the rescue. (Full disclosure: sometimes that doofus has been me in the past.)
> He would not have spent all this time and effort implementing FatELF if it did not solve a severe issue for him.
I can't think of even a single issue that FATELF "solves," except maybe to allow people distributing closed-source binaries to have one download link rather than two. In another 3 or 4 years, 32-bit desktop systems will be a historical curiosity, like dot-matrix printers or commodore 64s, and we will be glad we didn't put some kind of confusing and complicated binary-level compatibility system into the kernel.
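For illustration only (this example is not from the thread): the kind of code cmccabe is alluding to, which assumes sizeof(long) == 4 and stays broken on 64-bit no matter how the binary is packaged.

    /* not-64bit-clean.c -- a sketch of code that is not 64-bit clean.
     * On ia32 (ILP32) long is 4 bytes; on Linux x86_64 (LP64) it is 8. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        long x = 0;
        printf("assumed 4, actual sizeof(long) = %zu\n", sizeof x);

        /* Classic bug: stuffing a pointer into a 32-bit integer
         * silently discards the high bits on a 64-bit system. */
        int truncated = (int)(intptr_t)&x;
        printf("pointer %p, truncated to 0x%x\n", (void *)&x, (unsigned)truncated);
        return 0;
    }

No amount of FatELF packaging fixes this; only porting the code does.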
Posted Jun 24, 2010 0:25 UTC (Thu)
by drag (guest, #31333)
[Link]
Windows sucks in a lot of ways, but Windows sucking has nothing to do with whether Linux sucks too. You can improve Linux and make it easier to use without giving a crap what anybody in Redmond is doing.
If I am your plumber and you pay me money to fix your plumbing and I do a really shitty job of fixing it, and you complain to me about it... does it comfort you when I tell you that whenever your neighbor washes his dishes his basement floods? Does it make your plumbing better knowing that somebody else has it worse than you?
Posted Jun 24, 2010 12:07 UTC (Thu)
by nye (subscriber, #51576)
[Link] (4 responses)
I know FUD is the order of the day here at LWN, but this has gone beyond that point and I feel the need to call it:
You are a liar.
Posted Jun 25, 2010 8:26 UTC (Fri)
by k8to (guest, #15413)
[Link]
Posted Jun 27, 2010 12:12 UTC (Sun)
by nix (subscriber, #2304)
[Link] (2 responses)
(Certainly when WGA fires, it does make it *appear* that you have to reinstall the OS, because it demands that you pay MS a sum of money equivalent to a new OS install. But, no, they don't give you a new OS for that. You pay piles of cash and get a key back instead, which makes your OS work again -- until you have the temerity to change too much hardware at once; the scoring system used to determine which hardware is 'too much' is documented, but not by Microsoft.)
Posted Jun 28, 2010 10:03 UTC (Mon)
by nye (subscriber, #51576)
[Link] (1 responses)
I've never actually *seen* WGA complain about a hardware change; the only times I've ever seen it are when reinstalling on exactly the same hardware (eg 3 times in a row because of a problem with slipstreaming drivers).
In principle though, if you change more than a few items of hardware at once (obviously this would include transplanting the disk into another machine), or whenever you reinstall, Windows will ask to be reactivated. If you reactivate too many times over a short period, it will demand that you call the phone number to use automated phone activation. At some point it will escalate to non-automated phone activation where you actually speak to a person. This is the furthest I've ever seen it go, though I believe there's a further level where you speak to the person and you have to give them a plausible reason for why you've installed the same copy of Windows two dozen times in the last week. If you then can't persuade them, this would be the point where you have to pay for a new license.
This is obnoxious and hateful, to be sure, but it is entirely unlike the behaviour described. The half-truths and outright untruths directed at Windows from some parts of the open source community make it hard to maintain credibility when describing legitimate grievances or technical problems, and this undermines us all.
Posted Jun 28, 2010 13:25 UTC (Mon)
by nix (subscriber, #2304)
[Link]
I suspect that WGA's behaviour (always ill-documented) has shifted over time, and that as soon as you hit humans on phone lines you become vulnerable to the varying behaviour of those humans. I suspect all the variability can be explained away that way.
Still, give me free software any day. No irritating license enforcer and hackability both.
Posted Jun 24, 2010 12:28 UTC (Thu)
by Cato (guest, #7643)
[Link]
Linux is much better at this generally, but this ability is not unique to Linux.
Posted Jun 24, 2010 17:26 UTC (Thu)
by jschrod (subscriber, #1646)
[Link] (2 responses)
> When you get a new computer, normally you reinstall the OS and copy over your /home directory.
And if you use it for anything beyond office/Web surfing, you configure the system for a few days afterwards... (Except if you have a professional setup with some configuration management behind it, which the target group of this proposal most probably doesn't have.)
> Windows even has a special "feature" called Windows Genuine Advantage that forces you to reinstall the OS when the hardware has changed. You *cannot* use your previous install.
OK, that shows that you are not a professional. This is bullshit, plain and simple: For private and SOHO users, WGA may trigger reactivation, but no reinstall. (Enterprise-class users use deployment tools anyhow and do not come in such a situation.)
Posted Jun 24, 2010 19:04 UTC (Thu)
by cmccabe (guest, #60281)
[Link] (1 responses)
Thank you for the correction. I do not use Windows at work. It's not even installed on my work machine. So I'm not familiar with enterprise deployment tools for Windows. I wasn't trying to spread FUD-- just genuinely did not know there was a way around WGA in this case.
However, the point I was trying to make is that most home users expect that new computer == new OS install. Some people in this thread have been claiming that Linux distributions need to support moving a hard disk between 32-bit and 64-bit machines in order to be a serious contender as a desktop operating system. (And they're unhappy with the obvious solution of using 32-bit everywhere.)
I do not think that most home users, especially nontechnical ones, are aware that this is even possible with Windows. I certainly don't think they would view it as a reason not to switch.
Posted Jun 24, 2010 19:50 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link]
It is much simpler than that: Very few people do move disks from one computer to the next. And those who do have the technical savvy to handle any resulting mess.
Posted Jun 23, 2010 23:29 UTC (Wed)
by dlang (guest, #313)
[Link] (19 responses)
as for transitioning, install a 64 bit system and 32 bit binaries; as long as you have the libraries on the system they will work. fatelf doesn't help you here (it may help if your libraries were all fat, but I fail to see how that's really much better than having /lib32 and /lib64 -- your hard drive may be large enough to double the size of everything stored on it, but mine sure isn't)
distros would still have to compile and test all the different copies of their software for all the different arches they support, they would just combine them together before shipping them (at which point they would have to ship more CDs/DVDs and/or pay higher bandwidth charges to get people copies of the binaries that don't do them any good)
Posted Jun 24, 2010 0:37 UTC (Thu)
by drag (guest, #31333)
[Link] (3 responses)
I do have to care about it if, in the future, I want to run an application that benefits from 64bit-ness.
Some operations are faster in 64bit and many applications, such as games, already benefit from the larger address space.
> (it may help if your libraries were all fat, but I fail to see how that's really much better than having /lib32 /lib64 (your hard drive may be large enough to double the size of everything stored on it, but mine sure isn't)
Yes. That is what I am talking about. Getting rid of architecture-specific directories and going with FatElf for everything.
You're wrong in thinking that having 64-bit and 32-bit support in a binary means that you're doubling your system's footprint. Generally speaking, the architecture-specific files in a software package are small compared to the overall size of the application. Most ELF files are only a few K big. Only rarely do they get up past half a dozen MB.
My user directory is about 4.1GB. Adding FatELF support for 32-bit/64-bit applications would probably only plump it up by 400-600 MB or so.
Posted Jun 24, 2010 7:56 UTC (Thu)
by dlang (guest, #313)
[Link] (2 responses)
take some distro (say Ubuntu, since it supports multiple architectures), download the repository (when I did this a couple years ago it was 600G, nowadays it's probably larger, so it may take $150 or so to buy a 2TB USB drive, and it will take you a while to download everything), then create a unified version of the distro, making all the binaries and libraries 'fat', and advertise the result. I'm willing to bet that if you did this as a plain repackaging of Ubuntu with no changes you would even be able to get people to host it for you (you may even be able to get Canonical to host it if your repackaging script is simple enough)
I expect that the size difference is going to be larger than you think (especially if you include every architecture that Ubuntu supports, not just i486 and AMD64), and this size will end up costing performance as well as having effects like making it hard to create an install CD etc.
I may be wrong and it works cleanly, in which case there will be real ammunition to go back to the kernel developers with (although you do need to show why you couldn't just use additional ELF sections with a custom loader instead as was asked elsewhere)
If you could do this and make a CD like the ubuntu install CD, but one that would work on multiple architectures (say i486, AMD64, powerPC) that would get people's attention. (but just making a single disk that does this without having the rest of a distro to back it up won't raise nearly the interest that you will get if you can script doing this to an entire distro)
Posted Jun 24, 2010 12:12 UTC (Thu)
by nye (subscriber, #51576)
[Link] (1 responses)
Because the subject of this article already did that: http://icculus.org/fatelf/vm/
It's not as well polished as it could be - I got the impression that he didn't see much point in improving it after it was dismissed out of hand.
Posted Jun 24, 2010 20:47 UTC (Thu)
by MisterIO (guest, #36192)
[Link]
Posted Jun 24, 2010 1:24 UTC (Thu)
by cesarb (subscriber, #6266)
[Link] (14 responses)
And some are very small indeed. One of my machines has only a 4 gigabyte "hard disk" (gigabyte, not terabyte). It is an EeePC 701 4G. (And it is in fact a small SSD, thus the quotes.)
There are also the Live CDs/DVDs, which are limited to a fixed size. Fedora is moving to use LZMA to cram even more stuff into its live images (https://fedoraproject.org/wiki/Features/LZMA_for_Live_Images). Note also that installing from a live image, at least on Fedora and IIRC on Ubuntu, is done by simply copying the whole live image to the target disk, so the size limitations of live images directly influence what is installed by default.
Posted Jun 24, 2010 9:06 UTC (Thu)
by ncm (guest, #165)
[Link] (2 responses)
Posted Jun 24, 2010 20:56 UTC (Thu)
by speedster1 (guest, #8143)
[Link] (1 responses)
Posted Jun 26, 2010 9:56 UTC (Sat)
by ncm (guest, #165)
[Link]
While we're way, way off topic, you might also want to go to desktop/gnome/interface and change gtk_key_theme to "Emacs" so that the text edit box keybindings (except in Epiphany, grr) are Emacs-style.
Contempt, thy name is Gnome.
Getting back on topic, fat binaries make perfect sense for shared libraries, so they can all go in /lib and /usr/lib. However, there's no reason to think anybody would force them on you for an Eee install.
Posted Jun 24, 2010 12:30 UTC (Thu)
by Cato (guest, #7643)
[Link]
Posted Jun 24, 2010 17:03 UTC (Thu)
by chad.netzer (subscriber, #4257)
[Link] (9 responses)
Posted Jun 24, 2010 19:40 UTC (Thu)
by dlang (guest, #313)
[Link] (8 responses)
do you have a SSD?
are you memory constrained (decompressing requires that you have more space than the uncompressed image)
do you page out parts of the code and want to read in just that page later (if so, you would have to uncompress the entire binary to find the appropriate page)
what compression algorithm do you use? many binaries don't actually compress that well, and some decompression algorithms (bzip2 for example) are significantly slower than just reading the raw data.
I actually test this fairly frequently in dealing with processing log data. in some conditions having the data compressed and uncompressing it when you access it is a win, in other cases it isn't.
Posted Jun 25, 2010 0:14 UTC (Fri)
by chad.netzer (subscriber, #4257)
[Link] (7 responses)
Still, why the heck must my /bin/true executable take 30K on disk? And /bin/false is a separate executable that takes *another* 30K, even though they are both dynamically linked to libc??? Time to move to busybox on the desktop...
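As an aside (not from the thread): stripped of coreutils' option parsing and i18n, true and false reduce to almost nothing, which is why the 30K on disk grates. A minimal sketch:

    /* mini-true.c -- the whole program; build with: gcc -Os -o true mini-true.c
     * A mini-false would be identical except for "return 1;". */
    int main(void)
    {
        return 0;   /* exit status 0 means success, i.e. "true" */
    }

Even this typically still weighs a few KB once linked, mostly ELF headers and alignment padding; the "Teensy ELF" tutorial linked in the reply below shows how far you can push that if you abandon the normal toolchain.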
Posted Jun 25, 2010 0:38 UTC (Fri)
by dlang (guest, #313)
[Link] (4 responses)
http://www.muppetlabs.com/~breadbox/software/tiny/teensy....
A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux
Posted Jun 25, 2010 2:41 UTC (Fri)
by chad.netzer (subscriber, #4257)
[Link] (3 responses)
To be fair getting your executable much smaller than the minimal disk block size is just a fun exercise. Whereas coreutils /bin/true may actually benefit from an extent based filesystem. :) Anyway, it's just a silly complaint I'm making, though it has always annoyed me a tiny bit.
Posted Jun 25, 2010 12:25 UTC (Fri)
by dark (guest, #8483)
[Link] (2 responses)
Yes, but GNU true does so much more! It supports --version, which tells you all about who wrote it and about the GPL and the FSF. It also supports --help, which explains true's command-line options (--version and --help). Then there is the i18n support, so that people from all over the world can learn about --help and --version. You just don't get all that with a minimalist ELF binary.
Posted Jun 25, 2010 15:38 UTC (Fri)
by intgr (subscriber, #39733)
[Link] (1 responses)
PS: Shells like zsh actually ship builtin "true" and "false" commands
Posted Jun 29, 2010 23:03 UTC (Tue)
by peter-b (guest, #66996)
[Link]
The shell builtin ':' is equivalent to true.
The following command is equivalent to false:
! :
I regularly use both when writing shell scripts.
Posted Jun 27, 2010 16:42 UTC (Sun)
by nix (subscriber, #2304)
[Link]
(I think this rule makes more sense on non-GNU platforms, where it is common to rename *everything* via --program-prefix=g or something similar, to prevent conflicts with the native tools. But why should those of us using the GNU toolchain everywhere be penalized for this?)
Posted Jun 27, 2010 16:46 UTC (Sun)
by nix (subscriber, #2304)
[Link]
And gnulib, because it has no stable API or ABI, is always statically linked to its users.
26Kb for a printf implementation isn't bad.
Posted Jun 27, 2010 12:08 UTC (Sun)
by nix (subscriber, #2304)
[Link]
Please. There are good arguments for FatELF, but this is not one of them.
Posted Jun 23, 2010 23:34 UTC (Wed)
by cortana (subscriber, #24596)
[Link] (11 responses)
So I could use Flash.
So I could buy a commercial Linux game and run it without having to waste time setting up an i386 chroot or similar.
Both areas that contribute to the continuing success of Windows and Mac OS X on the desktop.
Posted Jun 24, 2010 2:27 UTC (Thu)
by BenHutchings (subscriber, #37955)
[Link] (9 responses)
So, because some distribution's biarch support sucks enough that it can't install a bunch of 64-bit dependencies into /lib64 and /usr/lib64 when you install a 64-bit binary, we need a kernel hack?
Posted Jun 24, 2010 9:10 UTC (Thu)
by cortana (subscriber, #24596)
[Link] (8 responses)
Posted Jun 24, 2010 10:43 UTC (Thu)
by michich (guest, #17902)
[Link] (7 responses)
But what you describe already works today and FatELF is not needed for it. It's called multilib.
Posted Jun 24, 2010 11:00 UTC (Thu)
by cortana (subscriber, #24596)
[Link] (6 responses)
Even if Debian did have an automatic setup for compiling all library packages with both architectures, you are then screwed because they put the amd64 libraries in /lib (with a symlink at /lib64) and the i386 libraries in /lib32. So your proprietary i386 software that tries to dlopen files in /lib fails because they are of the wrong architecture.
You could argue that these are Debian-specific problems. You might be right. But they are roadblocks to greater adoption of Linux on the desktop, and now that the FatELF way out is gone, we're back to the previous situation: waiting for the 'multiarch' fix (think FatELF but with all libraries in /lib/$(arch-triplet)/libfoo.so rather than the code for several architectures in a FatELF-style, single /lib/libfoo.so), which has failed to materialise in the 6 years since I first saw it mentioned. And which still won't fix proprietary software that expects to find its own architecture's files at /lib.
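A sketch of the failure mode cortana describes (not code from the thread): a 32-bit process with /lib hard-wired trips over a 64-bit library there and can only recover by guessing another directory. The library name and the fallback path are invented for the example.

    /* build: gcc -m32 -o dlopen-fallback dlopen-fallback.c -ldl */
    #include <stdio.h>
    #include <dlfcn.h>

    static void *open_with_fallback(const char *primary, const char *fallback)
    {
        void *h = dlopen(primary, RTLD_NOW);
        if (!h) {
            /* On an architecture mismatch glibc reports something like
             * "wrong ELF class: ELFCLASS64". */
            fprintf(stderr, "%s: %s\n", primary, dlerror());
            h = dlopen(fallback, RTLD_NOW);
        }
        return h;
    }

    int main(void)
    {
        void *h = open_with_fallback("/lib/libfoo.so.1", "/lib32/libfoo.so.1");
        if (!h) {
            fprintf(stderr, "giving up: %s\n", dlerror());
            return 1;
        }
        dlclose(h);
        return 0;
    }

Proprietary software that never anticipated the /lib32 split simply stops at the first dlopen() and fails.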
Posted Jun 24, 2010 17:44 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link]
That multilib doesn't work on Debian is squarely Debian's fault (my Fedora here is still not completely 32-bit free, but getting there). No need to burden the kernel for that.
Posted Jun 27, 2010 12:31 UTC (Sun)
by nix (subscriber, #2304)
[Link] (4 responses)
I've run LFS systems with the /lib / /lib32 layout for many years (because I consider /lib64 inelegant on a principally 64-bit system). You know how many things I've had to fix because they had lib hardwired into them? *Three*. And two of those were -config scripts (which says how old they are right then and there, modern stuff would use pkg-config). Not one was a dlopen(): they all seem to be using $libdir as they should.
This simply is not a significant problem.
Posted Jun 27, 2010 13:14 UTC (Sun)
by cortana (subscriber, #24596)
[Link] (1 responses)
Posted Jun 27, 2010 17:48 UTC (Sun)
by nix (subscriber, #2304)
[Link]
(Words cannot express how much I don't care about statically linked apps.)
Posted Jul 10, 2010 12:31 UTC (Sat)
by makomk (guest, #51493)
[Link] (1 responses)
Posted Jul 10, 2010 20:24 UTC (Sat)
by nix (subscriber, #2304)
[Link]
Posted Jun 24, 2010 18:43 UTC (Thu)
by Spudd86 (subscriber, #51683)
[Link]
Posted Jun 24, 2010 7:55 UTC (Thu)
by jengelh (guest, #33263)
[Link] (2 responses)
Hell it will. Unless the program in question directly uses hand-tuned assembler, the 32-bit one will usually not be built to use SSE2, just the olde x87, which is slower; so will any computation involving larger-than-32-bit integers.
Posted Jun 24, 2010 18:08 UTC (Thu)
by pkern (subscriber, #32883)
[Link] (1 responses)
Which is only partly true. Look into (/usr)?/lib/i686 and you'll see libs that will be loaded by the linker in preference to the plain ia32 ones if the hardware supports more than the least common denominator. It even works with /usr/lib/sse2 here on Debian if the package has support for it (see ATLAS or speex). But of course, normally you don't rely on newer features everywhere, breaking support for older machines. Ubuntu goes i686 now, Fedora's already there, I think; and if you want more optimization I guess Gentoo is the way to go because you don't have to think portable. ;-)
Posted Jun 24, 2010 18:25 UTC (Thu)
by jengelh (guest, #33263)
[Link]
Posted Jun 24, 2010 7:43 UTC (Thu)
by tzafrir (subscriber, #11501)
[Link]
Posted Jun 23, 2010 20:47 UTC (Wed)
by Frej (guest, #4165)
[Link]
Assuming it's not a sin to distribute binaries, how would unionfs help when you download a binary?
>The reason why Apple invented FAT binaries is because they were interested in maintaining extensive binary compatibility with their old systems. Linux has never had this policy. Binaries that worked great on Fedora Core 9 probably won't work on Fedora Core 12, or Ubuntu 9.04, or whatever.
Well, without FatELF you need two binaries for each Fedora Core release. But of course if you just want Linux for servers and admins, FatELF won't matter that much.
Posted Jun 23, 2010 21:32 UTC (Wed)
by RCL (guest, #63264)
[Link] (17 responses)
Posted Jun 23, 2010 21:53 UTC (Wed)
by dlang (guest, #313)
[Link] (14 responses)
what features are you willing to give up to get your universal compatibility?
as a trivial example, if an application needs to store some data and the upstream supports sqlite, mysql, postgresql, flat files, or various 'desktop storage' APIs, which one should the universal binary depend on? and why?
KDE and Gnome each have their 'standard' tool for storing contact information; should Gnome users be forced to load KDE libraries and applications (or KDE users forced to use the Gnome ones) to maintain compatibility?
what if someone comes up with something new, should that be forbidden/ignored so that a universal binary can work on older systems that don't have the new software?
Posted Jun 23, 2010 22:53 UTC (Wed)
by RCL (guest, #63264)
[Link] (4 responses)
1) A single entity (with dictatorship rights) is designated to maintain the core "system" in a way similar to how the Linux kernel itself (or *BSD) is maintained. A new platform name is defined (or, ideally, "Linux" is redefined to mean kernel + core system).
2) The entity picks a set of core libraries which it is actually capable of maintaining (and guaranteeing backward compatibility for), and no compatible system is allowed to replace/enhance/modify them in any way (even recompiling the kernel locally) without losing (official) compatibility and the ability to use the platform name (which should be made a trademark).
3) Versioning policy is similar to Apple or Windows: every update (other than security fixes) should have a given name and version (with means to check that from code). The platform is updated in its entirety only, bugfixes are accumulated and introduced all at once.
I think that it is sufficient for the above set of libraries to include only functionality needed to write a game (generally speaking, any application with low-latency interactive video and audio).
In some ways it is similar to creating another distro dedicated to binary stability and binary multimedia applications, but it is not intended to be a full-blown distro with its own package management and policies, just a well-defined set of binary libraries + kernel.
Posted Jun 23, 2010 23:16 UTC (Wed)
by dlang (guest, #313)
[Link] (1 responses)
unfortunately in practice it just doesn't work. This may be because they don't have sufficient dictatorial powers, but nobody wants to give them that much power ;-)
as for redefining what 'Linux' means, good luck with that windmill.
Posted Jun 23, 2010 23:41 UTC (Wed)
by RCL (guest, #63264)
[Link]
And overall... well, I'm not going to fight for that binary compatibility. I'm a game developer, sympathetic to Linux, but my target platforms are wildly different.
Posted Jun 24, 2010 16:29 UTC (Thu)
by sorpigal (guest, #36106)
[Link] (1 responses)
Posted Jun 27, 2010 12:14 UTC (Sun)
by nix (subscriber, #2304)
[Link]
Posted Jun 23, 2010 23:03 UTC (Wed)
by drag (guest, #31333)
[Link] (8 responses)
Well presumably with 'Fat Binary Support' the Linux distribution will take advantage of that to provide Fat binaries for their main OS.
That way you avoid having to do ugly hacks like maintaining separate */lib and */lib64 trees. So the application author should not have to deal with issues like that (unless I am missing some aspect of SQL database datatype differences between 32-bit and 64-bit arches.)
A distro moving to a "Fat binary support" model should simultaneously be able to support backwards compatibility with 32-bit legacy applications and be prepared to deal with the shiny new 64-bit future, without forcing users and application developers to deal with the details.
--------------------------------
From my personal experience sharing my home directory between multiple versions of Debian with different arches (64-bit/32-bit, and PPC 32-bit), the only big compatibility issue with application storage was with X-Moto and its use of sqlite to store game information. It had to do with endianness issues between x86 and PPC, but I think it was actually fixed at a later date...
Posted Jun 23, 2010 23:19 UTC (Wed)
by dlang (guest, #313)
[Link] (7 responses)
the OP was wanting a single flat binary that would run on every distro, doing that requires that all distros agree on what datastore to use when the application can be compiled to work with many different ones.
Posted Jun 23, 2010 23:49 UTC (Wed)
by drag (guest, #31333)
[Link] (6 responses)
Of course not. It just makes it more difficult and irritating for users and developers and distribution makers to support multi-arch and play games with having multiple locations and packages for the same pieces of software.
There is a reason I run 64bit kernel with 32bit userland on my Linux systems nowadays.. I tried running 64bit only and things like that, but it's a PITA to do that in Linux while 64bit application support is trivial (for end users) in OS X...
> you just need the right libraries on the system, and a fat binary doesn't help you there (if you are on a 64 bit system but only have 32 bit versions of some library that the app needs, should you run the 32 bit version?)
Well, if I only have a 32-bit-only version of a library (instead of the vastly preferable 32/64 fat library), then that would presume that only 32-bit versions of that library exist.
Therefore a 64-bit version (or a 32-bit/64-bit 'fat binary' version) of an application that depends on that library would be impossible to have in the first place... right?
Either way it's not really any different from what we have to deal with now, but with fat binary support it would be handled intelligently by the system, whereas right now it requires a lot of manual intervention and a significant technical understanding on the part of end users to deal with these sorts of compatibility issues.
Posted Jun 23, 2010 23:53 UTC (Wed)
by drag (guest, #31333)
[Link]
Well for a better understanding; I was running Debian while attempting to juggle both 64bit and 32bit compatibility for the various applications I needed to run. Fedora is a bit better...
Posted Jun 24, 2010 0:59 UTC (Thu)
by dlang (guest, #313)
[Link] (4 responses)
I will admit I don't run binary-only software (i.e. commercial games) on most of my systems, but that's more due to the lack of commercial games available for linux than anything else.
Posted Jun 24, 2010 1:11 UTC (Thu)
by RCL (guest, #63264)
[Link] (3 responses)
Posted Jun 24, 2010 19:59 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link]
I'd guess 32 bit (for oldish machines and netbooks). But some serious gamers I know spend more on their graphics card than I do on a complete machine, so for high-end, 64 bit with 2 (or even 4) cores is probably the way to go.
Posted Jun 24, 2010 21:41 UTC (Thu)
by MisterIO (guest, #36192)
[Link] (1 responses)
Posted Jun 25, 2010 13:50 UTC (Fri)
by vonbrand (subscriber, #4458)
[Link]
On current Fedora, you can install 32-bit and 64-bit versions happily (most of the time); the installed packages do share non-architecture-dependent stuff (like manpages and whatnot). Yes, it does require some delicate juggling when building the packages to make sure said manpages and such are exactly equal, and some other considerations.
Posted Jun 24, 2010 1:37 UTC (Thu)
by akumria (guest, #7773)
[Link]
What counts as "wide masses" in your view?
1% of the global population?
10% of all computer users?
100% of all operating systems?
In some areas it has been the "year of the Linux desktop" since 1997, for others, they are just starting.
An example is in this week's LWN. Poseidon Linux. Year of the scientific Linux desktop since 2004.
Anand
Posted Jun 24, 2010 8:08 UTC (Thu)
by jengelh (guest, #33263)
[Link]
Posted Jun 23, 2010 21:53 UTC (Wed)
by Tara_Li (guest, #26706)
[Link] (29 responses)
You know, somewhere along that line, you're dropping a 2 or 3 gigabyte binary file on my machine just to run Mozilla?
Bah. I really don't see a good case for FatELF.
Posted Jun 24, 2010 3:23 UTC (Thu)
by ccurtis (guest, #49713)
[Link] (5 responses)
Posted Jun 24, 2010 5:01 UTC (Thu)
by bronson (subscriber, #4806)
[Link] (2 responses)
Until then, it seems like you're trying to merge the solution before the problem even exists.
Posted Jun 24, 2010 13:37 UTC (Thu)
by ccurtis (guest, #49713)
[Link] (1 responses)
Personally, I like the idea of having solutions rather than problems.
Posted Jun 24, 2010 17:11 UTC (Thu)
by chad.netzer (subscriber, #4257)
[Link]
Posted Jun 24, 2010 20:02 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link]
Then ship the application as a .jar (or whatever the virtual machine du jour might be) file. Problem solved.
Posted Jun 25, 2010 15:45 UTC (Fri)
by intgr (subscriber, #39733)
[Link]
Posted Jun 24, 2010 17:49 UTC (Thu)
by pj (subscriber, #4506)
[Link] (3 responses)
Posted Jun 24, 2010 19:13 UTC (Thu)
by tzafrir (subscriber, #11501)
[Link]
Specifically, I have libfoo installed for i386 from my distro. I now want to install libfoo for mips (or even worse: the powerpc variant of the day). Does it mean I have to modify /usr/lib/libfoo.so.1 as shipped by my distro?
Posted Jun 24, 2010 19:43 UTC (Thu)
by dlang (guest, #313)
[Link]
sometimes you need different versions of compilers for different architectures.
go read Rob Landley's blog for ongoing headaches in cross compiling.
having the results all in one file is trivial compared to all the other problems.
Posted Jun 25, 2010 15:59 UTC (Fri)
by vonbrand (subscriber, #4458)
[Link]
Your "FatELF aware toolchain" is the sum total of the separate cross-toolchains, so there is no real gain here. That said, GCC has been the cross compiler of choice for most of its life, so it has quite a set of options for doing what you want, cleanly. Not your everyday use, sure, so it can be rough going.
Posted Jun 24, 2010 23:36 UTC (Thu)
by Tet (guest, #5433)
[Link] (18 responses)
Yeesh. Everyone is bringing up countless examples of where FatELF could be abused and claiming that it's therefore useless. But no one has mentioned that FatELF solves some very real problems, problems that I encounter on a fairly regular basis. Here's a hint: if you don't want to use fat binaries, then don't. I'll guarantee you that even if it were included upstream, Fedora/Debian/OpenSUSE/Ubuntu etc would continue to release architecture specific images. But for some of us, that's not good enough, and FatELF is one solution to the problem. If people want to suggest others, I'm all ears...
Posted Jun 25, 2010 13:54 UTC (Fri)
by vonbrand (subscriber, #4458)
[Link] (17 responses)
Please enlighten us to the recurring problems you have that FatELF would solve.
For my part, I haven't run into any situation that didn't have a simple solution which did not involve changing the kernel and the whole buildchain. Doing so adds so much overhead that the problem would have to be humongous to make it worth my while, but you can leave that consideration out if you like.
Posted Jun 25, 2010 18:57 UTC (Fri)
by Tet (guest, #5433)
[Link] (16 responses)
There's only one ~/.mozilla/plugins
Since my $HOME is NFS mounted across a mix of 32-bit and 64-bit OSes, I'm basically screwed. 32-bit plugins won't work with a 64-bit Firefox and vice versa. Yes, you could argue the application should be fixed, but the same applies to gimp and to countless other apps, which means an awful lot of applications are out there to fix. If I could get a fat libflashplayer.so, for example, everything would Just Work™. I'm not suggesting that the whole OS should be fat binaries/shared libraries. But I'd like the option to use them where they make sense, as I believe they do here. Again, if you have a simple solution that doesn't involve FatELF or something similar, please let me know.
Posted Jun 25, 2010 19:35 UTC (Fri)
by dlang (guest, #313)
[Link] (10 responses)
users of 64 bit desktops still use the 32 bit libflashplayer.so run through an nspluginwrapper layer.
so this is not a case where FatELF would help in practice.
even in theory, firefox doesn't have to have the plugin binaries under ~/ so if you don't install them there and instead install them in one of the other places that it can live you would be able to NFS mount $HOME without a problem.
the same thing goes for any application that uses plugins. the plugin binaries should be able to be installed outside of $HOME. If they can be, then the application can work if $HOME is shared.
Posted Jun 25, 2010 21:16 UTC (Fri)
by Tet (guest, #5433)
[Link] (9 responses)
Ye gods, does it really take much effort to see past the lack of a 64-bit flash plugin (which incidentally, I do have, even if it's been discontinued by Adobe)? The same applies to any plugin. Forget that I mentioned flash, and think instead about a java plugin or an acroread plugin, or any other plugin you care to think of.
the same thing goes for any application that uses plugins. the plugin binaries should be able to be installed outside of $HOME. If they can be, then the application can work if $HOME is shared.
Yes, but here in the real world, they're not. Even if you could find a suitable location that would be writeable by a non-privileged user, it would mean changing the applications, and as I mentioned, there are many, many of those. Simply making a fat shared library possible would be a much easier and cleaner solution, and would have negligible impact on those that didn't want or need to use it. I don't understand why so many are opposed to it.
Posted Jun 25, 2010 21:27 UTC (Fri)
by dlang (guest, #313)
[Link] (8 responses)
you do realize that the version you have is vulnerable to exploits that are being used in the wild. Adobe decided to discontinue the 64 bit version instead of fixing it.
a fat plugin is only useful if you also have fat libraries everywhere. This directly contradicts posts earlier that said not to worry about the bloat as the distros would still ship non-fat distros.
by the way, do you expect plugins to work across different operating systems as well so that you can have your $HOME NFS mounted on Mac OS as well? where do you draw the line at what you insist is needed?
Posted Jun 25, 2010 21:40 UTC (Fri)
by Tet (guest, #5433)
[Link] (7 responses)
Yes, I do know about the vulnerabilities with 64-bit flash. But like I said, this conversation isn't about flash. Despite your claim, I don't need fat libraries everywhere. On the 32-bit machines, I would already have the corresponding 32-bit libraries installed. And on the 64-bit machines, I'd have the 64-bit libraries installed. No, I don't use OS X, nor is it relevant to this discussion. If a particular approach solves a problem (as it will here), it's probably worthwhile, even if it doesn't solve every problem. I say again, why are you so anti fat binaries/libraries?
Posted Jun 25, 2010 22:17 UTC (Fri)
by dlang (guest, #313)
[Link] (5 responses)
these two conflict, if you are going to support FAT binaries for every possible combination of options the distribution problem is much larger. If you are going to want the fat binary to support every possible system in a single binary it's going to be substantially larger.
it's not that I am so opposed to the idea of fat binaries as it is I don't see them as being that useful/desirable. the problems they are trying to address seem to be solvable by other means pretty easily, and there is not much more than hand waving over the cost.
Posted Jun 26, 2010 7:33 UTC (Sat)
by Tet (guest, #5433)
[Link] (4 responses)
I'm not claiming fat binaries solve any particular distribution problem, nor do I believe that their existence means that fat binaries must cover every possible combination. In fact, just shipping a combined ia32 and x86_64 binary would cover 99% of the real world machines. But even if you don't want to ship a fat binary, it's not hard to envisage tools that would allow an end user to create a fat binary from two (or more) slim ones.
I've outlined a case where it would be both useful and desirable to have them, and to date, I haven't seen any sensible alternatives being proposed.
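As an illustration of the "glue two slim binaries into one" tool Tet envisages (a toy sketch, not Ryan Gordon's actual FatELF format or tooling; the 8-byte magic and header layout are invented for the example):

    /* toyglue.c -- concatenate two per-arch ELF files behind a tiny header.
     * A matching loader would read the header, pick the record for the
     * running machine, and hand that slice to the normal ELF machinery.
     * Usage: ./toyglue output elf-a elf-b */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    static uint64_t append_file(FILE *out, const char *path)
    {
        FILE *in = fopen(path, "rb");
        if (!in) { perror(path); exit(1); }
        char buf[4096];
        size_t n;
        uint64_t total = 0;
        while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
            fwrite(buf, 1, n, out);
            total += n;
        }
        fclose(in);
        return total;
    }

    int main(int argc, char **argv)
    {
        if (argc != 4) {
            fprintf(stderr, "usage: %s output elf-a elf-b\n", argv[0]);
            return 1;
        }
        FILE *out = fopen(argv[1], "wb");
        if (!out) { perror(argv[1]); return 1; }

        uint64_t sizes[2] = { 0, 0 };
        fwrite("TOYFAT01", 1, 8, out);           /* invented magic */
        long hdr = ftell(out);
        fwrite(sizes, sizeof sizes[0], 2, out);  /* placeholder sizes */

        sizes[0] = append_file(out, argv[2]);
        sizes[1] = append_file(out, argv[3]);

        fseek(out, hdr, SEEK_SET);               /* patch in real sizes */
        fwrite(sizes, sizeof sizes[0], 2, out);
        fclose(out);
        return 0;
    }

The hard part, of course, is not the gluing but teaching the kernel and the dynamic linker to pick the right slice, which is what the rejected FatELF patches did.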
Posted Jun 26, 2010 15:08 UTC (Sat)
by vonbrand (subscriber, #4458)
[Link] (3 responses)
Just use two binary packages, with the non-architecture-dependent stuff exactly the same, and arrange for the package manager to manage files belonging to several packages. RPM does this, and it works.
No need to screw around with the kernel, no need to have 3 versions of the package (arch 1, arch 2, fat).
Posted Jun 26, 2010 18:20 UTC (Sat)
by tzafrir (subscriber, #11501)
[Link] (1 responses)
But what you want to do here requires that rpm be able to merge files from different packages into a single file on disk. This won't work.
Posted Jun 26, 2010 21:49 UTC (Sat)
by dlang (guest, #313)
[Link]
Posted Jun 27, 2010 16:11 UTC (Sun)
by Tet (guest, #5433)
[Link]
Your "solution" doesn't solve the problem where application plugins are concerned. Firstly, the majority them are not installed using the system package manager in the first place, and secondly, it's utterly irrelevant anyway. You can't package the achitecture specific bits separately, because the application only looks for them in one place. As I said right at the start, it would be good to fix the applications, but there are a hell of a lot of them. Fat binaries would solve the problem. Your suggestions wouldn't, without first also patching the apps.
Posted Jun 27, 2010 17:17 UTC (Sun)
by nix (subscriber, #2304)
[Link]
Posted Jun 27, 2010 17:15 UTC (Sun)
by nix (subscriber, #2304)
[Link] (4 responses)
So we don't need completely arbitrary fat binaries at all: we need a 'fat dlopen()'.
I suspect -- though it's a kludge -- you could do this with an LD_PRELOADed wrapper around dlopen() which tweaks the filename appropriately, and slight changes to the downloading parts of e.g. firefox to put its dynamically loaded stuff in per-arch subdirectories when autodownloaded. dlopen() is not a hidden symbol so should be vulnerable to interposition.
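A rough sketch of the interposer nix describes, under the assumptions of this comment (the per-arch subdirectory names are an invention of the sketch, not an existing convention): build it as a shared object and LD_PRELOAD it, and dlopen("plugins/libfoo.so") gets redirected to "plugins/x86_64/libfoo.so" when such a file exists.

    /* build: gcc -shared -fPIC -o dlopen-shim.so dlopen-shim.c -ldl
     * use:   LD_PRELOAD=./dlopen-shim.so firefox */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #if defined(__x86_64__)
    #define ARCH_DIR "x86_64"
    #else
    #define ARCH_DIR "i386"
    #endif

    void *dlopen(const char *file, int mode)
    {
        /* look up the real dlopen once */
        static void *(*real_dlopen)(const char *, int);
        if (!real_dlopen)
            real_dlopen = (void *(*)(const char *, int))dlsym(RTLD_NEXT, "dlopen");

        if (file) {
            const char *slash = strrchr(file, '/');
            if (slash) {
                char tweaked[4096];
                /* insert the per-arch directory before the file name */
                snprintf(tweaked, sizeof tweaked, "%.*s/%s/%s",
                         (int)(slash - file), file, ARCH_DIR, slash + 1);
                if (access(tweaked, R_OK) == 0)
                    return real_dlopen(tweaked, mode);
            }
        }
        return real_dlopen(file, mode);
    }

It only papers over the plugin-directory case, which is exactly the "kludge" caveat above; a FatELF-aware dlopen() would make the same decision inside the loader instead.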
Posted Jun 27, 2010 17:55 UTC (Sun)
by Tet (guest, #5433)
[Link] (3 responses)
To solve this particular problem, yes. But then Ryan's FatELF release supported dlopen()ing fat shared libraries.
Posted Jun 27, 2010 20:36 UTC (Sun)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Jun 28, 2010 8:53 UTC (Mon)
by Tet (guest, #5433)
[Link] (1 responses)
<paxman>
Answer the question!
</paxman>
Posted Jun 28, 2010 13:22 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Jun 24, 2010 7:35 UTC (Thu)
by epa (subscriber, #39769)
[Link] (1 responses)
Might this not change? Perhaps one reason Linux has never kept backwards compatibility as well as Apple (or Windows, or Solaris) is because we haven't had the infrastructure and tools to do so easily. A mechanism for fat binaries might be one piece of the puzzle.
Be careful not to fall into the classic trap of equating 'my favourite system cannot support X' with 'X is unworkable' or even 'X is morally the wrong thing to do'.
Posted Jun 25, 2010 3:36 UTC (Fri)
by ajf (guest, #10844)
[Link]
It's a misunderstanding to say that Apple uses fat binaries because they care about backward compatibility; what they cared about, and implemented fat binaries to support, was cross-platform compatibility. (The distinction is that they wanted new software to work with new operating system releases on both old and new hardware; they're less interested in new software working with old operating systems.)
Posted Jun 24, 2010 17:35 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link]
Au contraire. I do believe the a.out binaries from the very first days of Linux still run fine on current kernels. What has changed is the environment: The currently most popular binary format is completely different, new libraries, languages have new ABIs, new ways to communicate among components are common today, ...
A "FatELF binary" doesn't do any good if the right libraries, configuration files, devices, ... aren't available. Adding all that in would result in GargantuanELF.
In any case, the idea makes no sense, as this can be handled some other ways: Just pack stuff up into a cpio(1) or some such file package plus a custom header, and create a special loader that handles that header. No kernel change needed (heck, if you can run Java or Win32 apps like native, you certainly can do this). The binary format will be different in any case, use that freedom to create something that doesn't require kernel changes.
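The "no kernel change needed" route vonbrand gestures at is binfmt_misc, the same hook that lets Java or Windows binaries be launched directly. Below is a minimal sketch of registering such a handler; the format name, magic string and loader path are all invented for the example, and it must run as root with binfmt_misc mounted.

    /* register-loader.c -- hook a custom packaged-app format into the kernel's
     * binfmt_misc so that executing such a file starts our userspace loader. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/fs/binfmt_misc/register", "w");
        if (!f) { perror("binfmt_misc register"); return 1; }

        /* Fields are :name:type:offset:magic:mask:interpreter:flags
         * ('M' means: match the magic bytes at the given offset). */
        fputs(":cpioapp:M:0:CPIOAPP1::/usr/local/bin/cpioapp-loader:", f);

        if (fclose(f) != 0) { perror("binfmt_misc register"); return 1; }
        return 0;
    }

After that, running ./game.cpioapp would invoke /usr/local/bin/cpioapp-loader with the file as its argument, and the loader can unpack and run the right per-arch pieces itself.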
Posted Jun 27, 2010 17:34 UTC (Sun)
by da4089 (subscriber, #1195)
[Link]
Actually, Apple inherited fat binaries in Mach-O from NeXT.
NeXT supported fat binaries because NeXTSTEP (and later OpenStep) was available for multiple CPU architectures (68k, x86, SPARC32, PARISC), and they wanted to enable ISVs to ship binary applications that worked on all platforms.
To make that work effectively, they:
a) maintain ABIs, use weak-linking, etc.
b) distribute applications as a bundle
c) support fat binaries
This is a viable approach, as Apple has recently demonstrated.
But it's a very different model to the usual Linux distribution, which needs none of those things, and relies on the dependency resolution of the packaging system and rigorous testing of API compatibility when building the consistent package set.
I don't think attempting to move Linux towards the NeXT/Apple model is useful, but I also don't see why those that want to can't maintain an out-of-tree patch to make it work.
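For readers unfamiliar with point (a) above, here is a minimal sketch of the weak-linking idiom as it looks with GCC on ELF; the function name is made up for the example.

    /* weak-demo.c: reference an API that may not exist on older releases and
     * decide at run time, instead of failing to load at all. */
    #include <stdio.h>

    extern void shiny_new_api(void) __attribute__((weak));

    int main(void)
    {
        if (shiny_new_api)              /* NULL where the system lacks it */
            shiny_new_api();
        else
            printf("falling back to the old code path\n");
        return 0;
    }

Combined with a stable ABI and application bundles, this is what lets a single fat binary take advantage of newer OS releases while still launching on older ones.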