
SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 20:10 UTC (Wed) by cmccabe (guest, #60281)
Parent article: SELF: Anatomy of an (alleged) failure

FATELF seems unnecessary. Why not just put your 32-bit binaries in one filesystem and your 64-bit ones in another? Then use unionfs to merge one or the other into your rootfs, depending on which architecture you're on. No need for a big new chunk of potentially insecure and buggy kernel code.

The reason Apple invented fat binaries is that they were interested in maintaining extensive binary compatibility with their old systems. Linux has never had this policy. Binaries that worked great on Fedora Core 9 probably won't work on Fedora Core 12, or Ubuntu 9.04, or whatever.

> One might question the wisdom of using Hans Reiser as an example of the
> kernel development process gone wrong

This just might be the understatement of the day!



SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 20:13 UTC (Wed) by jzb (editor, #7867) [Link] (2 responses)

Thanks. I was going for understated. ;-)

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 16:27 UTC (Thu) by fuhchee (guest, #40059) [Link] (1 responses)

Isn't it ad hominem to discount someone's technical ideas merely because of homicide?

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 19:22 UTC (Thu) by jldugger (guest, #57576) [Link]

Ad hominem isn't always a fallacy. If the argument is that the LKML doesn't play well with others, and you use Reiser as an example, demonstrating that Reiser also doesn't play well with others makes it harder to assign blame.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 20:38 UTC (Wed) by drag (guest, #31333) [Link] (50 responses)

> FATELF seems unnecessary. Why not just put your 32-bit binaries in one filesystem and your 64-bit ones in another? Then use unionfs to merge one or the other into your rootfs, depending on which architecture you're on. No need for a big new chunk of potentially insecure and buggy kernel code.

The point of it is to make things easier for users to deal with... forcing them to deal with unionfs (especially when it's not part of the kernel and does not seem ever likely to be incorporated) and using layered file systems by default on every Linux install sounds like a huge PITA.

Having 'fat' binaries is really the best solution for OSes that want to support multiple arches in the easiest and most user-friendly way possible (especially on x86-64, where it can run 32-bit and 64-bit code side by side).

It's not just a matter of supporting Adobe Flash or something like that; it's simply a superior technical solution at every level, from both a user's and a system administrator's perspective.

> The reason Apple invented fat binaries is that they were interested in maintaining extensive binary compatibility with their old systems. Linux has never had this policy. Binaries that worked great on Fedora Core 9 probably won't work on Fedora Core 12, or Ubuntu 9.04, or whatever.

Actually, Apple is very average when it comes to backwards compatibility. They certainly are no Microsoft. The point of fat binaries is just to make things easier for users and developers... which is exactly the entire point of having an operating system in the first place.

Some Linux kernel developers like to point out that they support a stable ABI for userland and brag that software written in the Linux 2.0 era will still work on 2.6. It does seem that maintaining the userspace ABI/API is a high priority for them. (Much higher than for the typical userland developer, anyway; application libraries are usually a bigger problem than anything in the kernel when it comes to compatibility issues.)

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 21:40 UTC (Wed) by dlang (guest, #313) [Link] (48 responses)

Why would you need a fat binary for an AMD64 system? If you care, you just use the 32-bit binaries everywhere.

Using a 64-bit kernel makes a huge difference in a system, but unless a single application uses more than 3G of RAM it usually won't matter much to the app whether it's 32-bit or 64-bit. There are some apps where it will matter, but those are special cases and probably not where a universal binary would be applicable.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 22:10 UTC (Wed) by drag (guest, #31333) [Link] (32 responses)

> Using a 64-bit kernel makes a huge difference in a system

I do actually use a 64-bit kernel with a 32-bit userland. With fat binaries I would not have to give a shit one way or the other.

> but unless a single application uses more than 3G of RAM it usually won't matter much to the app whether it's 32-bit or 64-bit. There are some apps where it will matter, but those are special cases and probably not where a universal binary would be applicable.

Here are some issues:

* The fat binary solves the problems you run into in the transition to a 64-bit system. This makes it easier for users and Linux distribution developers to cover the multitude of corner cases. For example: for a period of time, installing 'pure 64' versions of Linux meant that you had to give up the ability to run OpenOffice.org. This is solved now, but it's certainly not an isolated issue.

* People who actually need to run 64-bit software for performance or memory requirements will have their applications 'just work' (completely regardless of whether they are 32-bit or 64-bit), with no need for complicated multilib setups, chroots, and other games that users have to play. They just install it and it will 'just work'.

* Currently, if you do not need 64-bit compatibility you will probably want to install only 32-bit binaries. However, if in the future you run into software that requires 64-bit compatibility, the status quo would require you to reinstall the OS.

* Distributions would not have to supply multiple copies of the same software packages in order to cover the architectures they support.

* Application developers (both OSS and otherwise) can devote their time more efficiently to meeting the needs of their users and can treat 64-bit compatibility as an optional feature that they support when it's appropriate for them, rather than being forced to move to 64-bit as dictated by Linux OS design limitations.

Yeah, fat binaries only really solve 'special case' issues with supporting multiple arches, but the number of special cases is actually high and diverse. When you examine the business market, where everybody uses custom in-house software, the special cases are even more numerous than the typical problems you run into with home users.

Sure, it's not absolutely required and there are lots of workarounds for each issue you run into. On a scale of 1-10 in terms of importance (where 10 is most important and 1 is least) it ranks about a 3 or a 4. But the point is that fat binaries are simply a superior technical solution to what we have right now, would solve a lot of usability issues, and the proposal comes from an application developer who has to deal with _real_world_ issues caused by the lack of fat binaries, and who works on software that is really desirable for a significant number of potential Linux users.

He would not have spent all this time and effort implementing FatELF if it did not solve a severe issue for him.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 22:52 UTC (Wed) by cmccabe (guest, #60281) [Link] (10 responses)

> * Currently, if you do not need 64-bit compatibility you will probably
> want to install only 32-bit binaries. However, if in the future you run
> into software that requires 64-bit compatibility, the status quo would
> require you to reinstall the OS.

When you get a new computer, normally you reinstall the OS and copy over your /home directory. For all but a few highly technical users, this is the norm. Windows even has a special "feature" called Windows Genuine Advantage that forces you to reinstall the OS when the hardware has changed. You *cannot* use your previous install.

Anyway, running a Linux installer and then doing some apt-get only takes an hour or two.

> * Application developers (both OSS and otherwise) can devote their time
> more efficiently to meeting the needs of their users and can treat 64-bit
> compatibility as an optional feature that they support when it's
> appropriate for them, rather than being forced to move to 64-bit as
> dictated by Linux OS design limitations.

FATELF has nothing to do with whether software is 64-bit clean. If some doofus is assuming that sizeof(long) == 4, FATELF is not going to ride to the rescue. (Full disclosure: sometimes that doofus has been me in the past.)
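
To illustrate the point, here is a contrived sketch (not code from any particular project) of the kind of assumption that a recompile for x86-64 does not magically fix:

    /* Contrived sketch of a sizeof(long) == 4 assumption. A record laid
     * out with "long" fields changes size between ILP32 and LP64, so data
     * written by a 32-bit build will not read back correctly in a 64-bit
     * build. FatELF cannot fix this; the fix is a fixed-width type such
     * as int32_t. */
    #include <stdio.h>
    #include <stdint.h>

    struct on_disk_record {
        long id;          /* 4 bytes on ILP32, 8 bytes on LP64 */
        long timestamp;
    };

    int main(void)
    {
        printf("record size using long:    %zu bytes\n",
               sizeof(struct on_disk_record));
        printf("record size using int32_t: %zu bytes\n",
               2 * sizeof(int32_t));
        return 0;
    }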

> He would not have spent all this time and effort implementing FatELF if
> it did not solve a severe issue for him.

I can't think of even a single issue that FATELF "solves," except maybe to allow people distributing closed-source binaries to have one download link rather than two. In another 3 or 4 years, 32-bit desktop systems will be a historical curiosity, like dot-matrix printers or Commodore 64s, and we will be glad we didn't put some kind of confusing and complicated binary-level compatibility system into the kernel.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 0:25 UTC (Thu) by drag (guest, #31333) [Link]

> Windows even has a special "feature" called Windows Genuine Advantage that forces you to reinstall the OS when the hardware has changed. You *cannot* use your previous install.

Windows sucks in a lot of ways, but Windows sucking has nothing to do with whether Linux sucks too. You can improve Linux and make it easier to use without giving a crap what anybody in Redmond is doing.

If I am your plumber and you pay me money to fix your plumbing and I do a really shitty job of fixing it... and you complain to me about it... does it comfort you when I tell you that whenever your neighbor washes his dishes, his basement floods? Does it make your plumbing any better knowing that somebody else has it worse than you?

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 12:07 UTC (Thu) by nye (subscriber, #51576) [Link] (4 responses)

>When you get a new computer, normally you reinstall the OS and copy over your /home directory. For all but a few highly technical users, this is the norm. Windows even has a special "feature" called Windows Genuine Advantage that forces you to reinstall the OS when the hardware has changed. You *cannot* use your previous install.

I know FUD is the order of the day here at LWN, but this has gone beyond that point and I feel the need to call it:

You are a liar.

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 8:26 UTC (Fri) by k8to (guest, #15413) [Link]

I'm confused. FUD is the order of the day?

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 12:12 UTC (Sun) by nix (subscriber, #2304) [Link] (2 responses)

Well, to be charitable, WGA is an appalling intentionally-user-hostile mess that MS keep very much underdocumented, so it is reasonable to believe that this is what WGA does without being a liar. One could simply be mistaken.

(Certainly when WGA fires, it does make it *appear* that you have to reinstall the OS, because it demands that you pay MS a sum of money equivalent to a new OS install. But, no, they don't give you a new OS for that. You pay piles of cash and get a key back instead, which makes your OS work again -- until you have the temerity to change too much hardware at once; the scoring system used to determine which hardware is 'too much' is documented, but not by Microsoft.)

SELF: Anatomy of an (alleged) failure

Posted Jun 28, 2010 10:03 UTC (Mon) by nye (subscriber, #51576) [Link] (1 responses)

For the record, my experience of WGA is as follows:

I've never actually *seen* WGA complain about a hardware change; the only times I've ever seen it are when reinstalling on exactly the same hardware (e.g. 3 times in a row because of a problem with slipstreaming drivers).

In principle, though, if you change more than a few items of hardware at once (obviously this would include transplanting the disk into another machine), or whenever you reinstall, Windows will ask to be reactivated. If you reactivate too many times over a short period, it will demand that you call the phone number to use automated phone activation. At some point it will escalate to non-automated phone activation where you actually speak to a person. This is the furthest I've ever seen it go, though I believe there's a further level where you speak to the person and have to give them a plausible reason for why you've installed the same copy of Windows two dozen times in the last week. If you then can't persuade them, this would be the point where you have to pay for a new license.

This is obnoxious and hateful, to be sure, but it is entirely unlike the behaviour described. The half-truths and outright untruths directed at Windows from some parts of the open source community make it hard to maintain credibility when describing legitimate grievances or technical problems, and this undermines us all.

SELF: Anatomy of an (alleged) failure

Posted Jun 28, 2010 13:25 UTC (Mon) by nix (subscriber, #2304) [Link]

Well, that's quite different from my experience (it fired once and demanded I phone a number where a licensing goon tried to extract the cost of an entire Windows license from me despite my giving them a key: 'that key is no longer valid because WGA has fired', wtf?).

I suspect that WGA's behaviour (always ill-documented) has shifted over time, and that as soon as you hit humans on phone lines you become vulnerable to the varying behaviour of those humans. I suspect all the variability can be explained away that way.

Still, give me free software any day: no irritating license enforcer, and hackability too.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 12:28 UTC (Thu) by Cato (guest, #7643) [Link]

Windows does make it hard to re-use an existing installation on new hardware, but it is certainly possible. Enterprises do this every day, and some backup tools make it possible to restore Windows partition images onto arbitrary hardware, including virtual machines.

Linux is much better at this generally, but this ability is not unique to Linux.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 17:26 UTC (Thu) by jschrod (subscriber, #1646) [Link] (2 responses)

> When you get a new computer, normally you reinstall the OS and copy over
> your /home directory.

And if you use it for anything beyond office/Web surfing, you configure the system for a few days afterwards... (Except if you have a professional setup with some configuration management behind it, which the target group of this proposal most probably doesn't have.)

> Windows even has a special "feature" called Windows Genuine Advantage
> that forces you to reinstall the OS when the hardware has changed. You
> *cannot* use your previous install.

OK, that shows that you are not a professional. This is bullshit, plain and simple: for private and SOHO users, WGA may trigger reactivation, but no reinstall. (Enterprise-class users use deployment tools anyhow and do not end up in such a situation.)

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 19:04 UTC (Thu) by cmccabe (guest, #60281) [Link] (1 responses)

> OK, that shows that you are not a professional. This is bullshit, plain
> and simple: for private and SOHO users, WGA may trigger reactivation, but
> no reinstall. (Enterprise-class users use deployment tools anyhow and do
> not end up in such a situation.)

Thank you for the correction. I do not use Windows at work. It's not even installed on my work machine. So I'm not familiar with enterprise deployment tools for Windows. I wasn't trying to spread FUD-- just genuinely did not know there was a way around WGA in this case.

However, the point I was trying to make is that most home users expect that new computer == new OS install. Some people in this thread have been claiming that Linux distributions need to support moving a hard disk between 32- and 64-bit machines in order to be a serious contender as a desktop operating system. (And they're unhappy with the obvious solution of using 32-bit everywhere.)

I do not think that most home users, especially nontechnical ones, are aware that this is even possible with Windows. I certainly don't think they would view it as a reason not to switch.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 19:50 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

It is much simpler than that: Very few people do move disks from one computer to the next. And those who do have the technical savvy to handle any resulting mess.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 23:29 UTC (Wed) by dlang (guest, #313) [Link] (19 responses)

You happily run a 32-bit userspace on a 64-bit kernel; you already don't have to care about this.

As for transitioning, install a 64-bit system and 32-bit binaries; as long as you have the libraries on the system they will work. FatELF doesn't help you here. It may help if your libraries were all fat, but I fail to see how that's really much better than having /lib32 and /lib64. (Your hard drive may be large enough to double the size of everything stored on it, but mine sure isn't.)

Distros would still have to compile and test all the different copies of their software for all the different arches they support; they would just combine them before shipping (at which point they would have to ship more CDs/DVDs and/or pay higher bandwidth charges to get people copies of binaries that don't do them any good).

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 0:37 UTC (Thu) by drag (guest, #31333) [Link] (3 responses)

> You happily run a 32-bit userspace on a 64-bit kernel; you already don't have to care about this.

I do have to care about it if, in the future, I want to run an application that benefits from 64-bitness.

Some operations are faster in 64-bit mode, and many applications, such as games, already benefit from the larger address space.

> It may help if your libraries were all fat, but I fail to see how that's really much better than having /lib32 and /lib64. (Your hard drive may be large enough to double the size of everything stored on it, but mine sure isn't.)

Yes. That is what I am talking about: getting rid of architecture-specific directories and going with FatELF for everything.

You're wrong in thinking that having 64-bit and 32-bit support in a binary means that you're doubling your system's footprint. Generally speaking, the architecture-specific files in a software package are small compared to the overall size of the application. Most ELF files are only a few KB; only rarely do they get past half a dozen MB.

My user directory is about 4.1 GB. Adding FatELF support for 32-bit/64-bit applications would probably only plump it up by 400-600 MB or so.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:56 UTC (Thu) by dlang (guest, #313) [Link] (2 responses)

If it really is such low overhead and as useful as you say, why don't you (alone or with help from others who believe this) demonstrate it for us on a real distro?

Take some distro (say Ubuntu, since it supports multiple architectures), download the repository (when I did this a couple of years ago it was 600G; nowadays it's probably larger, so it may take $150 or so to buy a 2TB USB drive, and it will take you a while to download everything), then create a unified version of the distro, making all the binaries and libraries 'fat', and advertise the result. I'm willing to bet that if you did this as a plain repackaging of Ubuntu with no changes you would even be able to get people to host it for you (you may even be able to get Canonical to host it if your repackaging script is simple enough).

I expect that the size difference is going to be larger than you think (especially if you include every architecture that Ubuntu supports, not just i486 and AMD64), and this size will end up costing performance as well as having effects like making it hard to create an install CD, etc.

I may be wrong and it may work cleanly, in which case there will be real ammunition to go back to the kernel developers with (although you do need to show why you couldn't just use additional ELF sections with a custom loader instead, as was asked elsewhere).

If you could do this and make a CD like the Ubuntu install CD, but one that would work on multiple architectures (say i486, AMD64, PowerPC), that would get people's attention. (But just making a single disc that does this, without the rest of a distro to back it up, won't raise nearly the interest that you will get if you can script doing this to an entire distro.)

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 12:12 UTC (Thu) by nye (subscriber, #51576) [Link] (1 responses)

> If it really is such low overhead and as useful as you say, why don't you (alone or with help from others who believe this) demonstrate it for us on a real distro?

Because the subject of this article already did that: http://icculus.org/fatelf/vm/

It's not as well polished as it could be - I got the impression that he didn't see much point in improving it after it was dismissed out of hand.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 20:47 UTC (Thu) by MisterIO (guest, #36192) [Link]

IMO the problem with FatELF isn't that nobody proved that it was doable (because they did that), but that nobody really acknowledged that there's any real problem with the current situation.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 24, 2010 1:24 UTC (Thu) by cesarb (subscriber, #6266) [Link] (14 responses)

> Your hard drive may be large enough to double the size of everything stored on it, but mine sure isn't.

And some are very small indeed. One of my machines has only a 4 gigabyte "hard disk" (gigabyte, not terabyte). It is an EeePC 701 4G. (And it is in fact a small SSD, thus the quotes.)

There are also the live CDs/DVDs, which are limited to a fixed size. Fedora is moving to LZMA to cram even more stuff into its live images (https://fedoraproject.org/wiki/Features/LZMA_for_Live_Images). Note also that installing from a live image, at least on Fedora and IIRC on Ubuntu, is done by simply copying the whole live image to the target disk, so the size limitations of live images directly influence what is installed by default.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 24, 2010 9:06 UTC (Thu) by ncm (guest, #165) [Link] (2 responses)

This, very incidentally, is one of the reasons I object to Gnome forcing a dependency on Nautilus into gnome-session. In practice, Gnome works fine without Nautilus, once you jimmy the gnome-session package install and poke exactly one gconf entry. That saves 60M on disk, and a gratifying amount of RAM/swap. It's only arrogance and contempt that makes upstream keep the dependency.

Disconnecting nautilus from gnome session

Posted Jun 24, 2010 20:56 UTC (Thu) by speedster1 (guest, #8143) [Link] (1 responses)

I know this is off-topic... but would you mind giving a little more detail on how to remove the nautilus dependency?

Disconnecting nautilus from gnome session

Posted Jun 26, 2010 9:56 UTC (Sat) by ncm (guest, #165) [Link]

In gconf-editor, go to desktop/gnome/session/, and change required_components_list to "[windowmanager,panel]".

While we're way, way off topic, you might also want to go to desktop/gnome/interface and change gtk_key_theme to "Emacs" so that the text edit box keybindings (except in Epiphany, grr) are Emacs-style.

Contempt, thy name is Gnome.

Getting back on topic, fat binaries make perfect sense for shared libraries, so they can all go in /lib and /usr/lib. However, there's no reason to think anybody would force them on you for an Eee install.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 24, 2010 12:30 UTC (Thu) by Cato (guest, #7643) [Link]

Nobody is suggesting that everyone should have to double the size of their binaries - most distros would use single architecture binaries. FatELF is a handy feature for many special cases, that's all.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 24, 2010 17:03 UTC (Thu) by chad.netzer (subscriber, #4257) [Link] (9 responses)

Which reminds me, why aren't loadable binaries compressed on disk and decompressed on the fly? Surely any decompression overhead is lower than a rotating-disk seek, and commonly used binaries would get cached uncompressed anyway. I suppose it's because it conflicts with mmap() or something.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 24, 2010 19:40 UTC (Thu) by dlang (guest, #313) [Link] (8 responses)

It all depends on your system.

Do you have an SSD?

Are you memory constrained? (Decompressing requires enough memory to hold the uncompressed image as well.)

Do you page out parts of the code and want to read in just that page later? (If so, you would have to decompress the binary to find the appropriate page.)

What compression algorithm do you use? Many binaries don't actually compress that well, and some decompression algorithms (bzip2, for example) are significantly slower than just reading the raw data.

I actually test this fairly frequently when processing log data. In some conditions having the data compressed and decompressing it on access is a win; in other cases it isn't.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 25, 2010 0:14 UTC (Fri) by chad.netzer (subscriber, #4257) [Link] (7 responses)

Yeah, I made reference to some of the gotchas (spindles, mmap/paging). Actually, it sounds like the kind of thing that, should you care about it, is better handled by a compressed filesystem mounted onto the bin directories, rather than some program loader hackery.

Still, why the heck must my /bin/true executable take 30K on disk? And /bin/false is a separate executable that takes *another* 30K, even though they are both dynamically linked to libc??? Time to move to busybox on the desktop...

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 25, 2010 0:38 UTC (Fri) by dlang (guest, #313) [Link] (4 responses)

re: size of binaries

http://www.muppetlabs.com/~breadbox/software/tiny/teensy....

A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 25, 2010 2:41 UTC (Fri) by chad.netzer (subscriber, #4257) [Link] (3 responses)

Yes, I'm familiar with that old bit of cleverness. :) Note that the GNU coreutils stripped /bin/true and /bin/false executables are more than an order of magnitude larger than the *starting* binary that is whittled down in that demonstration. Now, *that* is code bloat.

To be fair, getting your executable much smaller than the minimal disk block size is just a fun exercise, whereas coreutils /bin/true may actually benefit from an extent-based filesystem. :) Anyway, it's just a silly complaint I'm making, though it has always annoyed me a tiny bit.
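
For scale, the entire observable job of true fits in a couple of lines of C (a minimal sketch; the coreutils version adds --help, --version, i18n and statically linked library scaffolding on top of this):

    /* Minimal "true": do nothing and exit successfully.
     * A minimal "false" is the same program returning 1 instead. */
    int main(void)
    {
        return 0;
    }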

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 25, 2010 12:25 UTC (Fri) by dark (guest, #8483) [Link] (2 responses)

Yes, but GNU true does so much more! It supports --version, which tells you all about who wrote it and about the GPL and the FSF. It also supports --help, which explains true's command-line options (--version and --help). Then there is the i18n support, so that people from all over the world can learn about --help and --version. You just don't get all that with a minimalist ELF binary.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 25, 2010 15:38 UTC (Fri) by intgr (subscriber, #39733) [Link] (1 responses)

Indeed, I use those features every day! ;)

PS: Shells like zsh actually ship builtin "true" and "false" commands

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 29, 2010 23:03 UTC (Tue) by peter-b (guest, #66996) [Link]

So does POSIX sh. The following command is equivalent to true:

:

The following command is equivalent to false:

! :

I regularly use both when writing shell scripts.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 27, 2010 16:42 UTC (Sun) by nix (subscriber, #2304) [Link]

There are two separate binaries because the GNU Project thinks it is confusing to have single binaries whose behaviour changes depending on what name they are run as, even though this is ancient hoary Unix tradition. Apparently people might go renaming the binaries and then get confused when they don't work the same. Because we do that all the time, y'know.

(I think this rule makes more sense on non-GNU platforms, where it is common to rename *everything* via --program-prefix=g or something similar, to prevent conflicts with the native tools. But why should those of us using the GNU toolchain everywhere be penalized for this?)
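
For reference, the multi-call pattern being described looks roughly like this (a busybox-style sketch, not how GNU coreutils is actually built): one binary, dispatching on the name it was invoked as.

    /* Multi-call sketch: behave as "true" or "false" depending on argv[0].
     * Install one binary and hard-link or symlink both names to it. */
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        const char *name = basename(argv[0]);   /* strip any leading path */

        (void)argc;
        if (strcmp(name, "true") == 0)
            return 0;                           /* invoked as "true" */
        if (strcmp(name, "false") == 0)
            return 1;                           /* invoked as "false" */

        fprintf(stderr, "%s: unknown applet name\n", name);
        return 2;
    }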

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 27, 2010 16:46 UTC (Sun) by nix (subscriber, #2304) [Link]

The size, btw, is probably because the gnulib folks have found bugs in printf which the glibc folks refuse to fix (they only cause buffer overruns or bus errors on invalid input, after all, how problematic could that be?) so GNU software that uses gnulib will automatically replace glibc's printf with gnulib's at configure time. (That this happens even for things like /bin/true, which will never print the class of things that triggers the glibc printf bugs, is a flaw, but not a huge one.)

And gnulib, because it has no stable API or ABI, is always statically linked to its users.

26KB for a printf implementation isn't bad.

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 12:08 UTC (Sun) by nix (subscriber, #2304) [Link]

> * Currently, if you do not need 64-bit compatibility you will probably want to install only 32-bit binaries. However, if in the future you run into software that requires 64-bit compatibility, the status quo would require you to reinstall the OS.

So, because some distribution's biarch support sucks enough that it can't install a bunch of 64-bit dependencies into /lib64 and /usr/lib64 when you install a 64-bit binary, we need a kernel hack?

Please. There are good arguments for FatELF, but this is not one of them.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 23:34 UTC (Wed) by cortana (subscriber, #24596) [Link] (11 responses)

> Why would you need a fat binary for an AMD64 system? If you care, you just use the 32-bit binaries everywhere.

So I could use Flash.

So I could buy a commercial Linux game and run it without having to waste time setting up an i386 chroot or similar.

Both areas that contribute to the continuing success of Windows and Mac OS X on the desktop.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 2:27 UTC (Thu) by BenHutchings (subscriber, #37955) [Link] (9 responses)

FatELF might make it somewhat easier for Adobe or the game developer to distribute x86-64 binaries to those that can use them, but if they don't intend to build and support x86-64 binaries then it doesn't solve your problem.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 9:10 UTC (Thu) by cortana (subscriber, #24596) [Link] (8 responses)

FatELF would have made it easier for distributors to ship a combined i386/amd64 distro. This would have made it possible for them to ship the i386 libraries that are required to support i386 web browsers (for Flash), and i386 games and other proprietary software.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 10:43 UTC (Thu) by michich (guest, #17902) [Link] (7 responses)

But what you describe already works today and FatELF is not needed for it. It's called multilib.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 11:00 UTC (Thu) by cortana (subscriber, #24596) [Link] (6 responses)

But it doesn't work today, at least on any Debian-derived distribution. You have to rely on the ia32-libs conglomeration package having picked up the right version of the library you want when it was last updated, which is not a regular occurrence (bugs asking for the libraries needed by Flash 10 are still open two years later).

Even if Debian did have an automatic setup for compiling all library packages for both architectures, you are then screwed because they put the amd64 libraries in /lib (with a symlink at /lib64) and the i386 libraries in /lib32. So your proprietary i386 software that tries to dlopen files in /lib fails because they are of the wrong architecture.

You could argue that these are Debian-specific problems. You might be right. But they are roadblocks to greater adoption of Linux on the desktop, and now that the FatELF way out is gone, we're back to the previous situation: waiting for the 'multiarch' fix (think FatELF but with all libraries in /lib/$(arch-triplet)/libfoo.so rather than the code for several architectures in a FatELF-style, single /lib/libfoo.so), which has failed to materialise in the 6 years since I first saw it mentioned. And which still won't fix proprietary software that expects to find its own architecture's files at /lib.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 17:44 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

That multilib doesn't work on Debian is squarely Debian's fault (my Fedora here is still not completely 32-bit free, but getting there). No need to burden the kernel for that.

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 12:31 UTC (Sun) by nix (subscriber, #2304) [Link] (4 responses)

> Even if Debian did have an automatic setup for compiling all library packages for both architectures, you are then screwed because they put the amd64 libraries in /lib (with a symlink at /lib64) and the i386 libraries in /lib32. So your proprietary i386 software that tries to dlopen files in /lib fails because they are of the wrong architecture.

I've run LFS systems with the /lib and /lib32 layout for many years (because I consider /lib64 inelegant on a principally 64-bit system). You know how many things I've had to fix because they had lib hardwired into them? *Three*. And two of those were -config scripts (which says how old they are right there; modern stuff would use pkg-config). Not one was a dlopen(): they all seem to be using $libdir as they should.

This simply is not a significant problem.
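
For anyone wondering what "using $libdir as they should" amounts to in code, here is a rough sketch: the library directory is baked in at build time (the LIBDIR macro here stands in for whatever the build system passes, e.g. -DLIBDIR='"/usr/lib64"'), so a /lib vs /lib32 vs /lib64 layout never appears as a hard-wired string.

    /* Sketch of dlopen()ing a library from the configured libdir rather
     * than a hard-wired /lib. LIBDIR is assumed to come from the build
     * system; the fallback below exists only to keep the sketch
     * self-contained. Build with: cc -DLIBDIR='"/usr/lib"' foo.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    #ifndef LIBDIR
    #define LIBDIR "/usr/lib"
    #endif

    int main(void)
    {
        const char *path = LIBDIR "/libm.so.6";
        void *handle = dlopen(path, RTLD_NOW);

        if (!handle) {
            fprintf(stderr, "dlopen(%s): %s\n", path, dlerror());
            return 1;
        }
        printf("loaded %s\n", path);
        dlclose(handle);
        return 0;
    }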

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 13:14 UTC (Sun) by cortana (subscriber, #24596) [Link] (1 responses)

I'm very happy that you did not run into this problem, but I have. IIRC it was with Google Earth. strace clearly showed it trying to dlopen some DRI-related library, followed by it complaining about 'wrong ELF class' and quitting.

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 17:48 UTC (Sun) by nix (subscriber, #2304) [Link]

Well, DRI is a whole different kettle of worms. I suspect a problem with your OpenGL implementation, unless Google Earth has a statically linked one (ew).

(Words cannot express how much I don't care about statically linked apps.)

SELF: Anatomy of an (alleged) failure

Posted Jul 10, 2010 12:31 UTC (Sat) by makomk (guest, #51493) [Link] (1 responses)

Yeah, dlopen() problems with not finding libraries in /lib32 don't tend to happen, mostly because it's just easier to do it the right way from the start and let dlopen() search the appropriate directories. (Even on pure 32-bit systems, some libraries are in /lib on some systems, /usr/lib on others, and perhaps even in /usr/local/lib or $HOME/lib if they've been manually installed.)

SELF: Anatomy of an (alleged) failure

Posted Jul 10, 2010 20:24 UTC (Sat) by nix (subscriber, #2304) [Link]

dlopen() doesn't search directories for you, does it? Programs generally want to look in a subdirectory of the libdir, anyway. Nonetheless they almost all look in the right place.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 18:43 UTC (Thu) by Spudd86 (subscriber, #51683) [Link]

FatELF won't have any effect on the Flash situation at all. It has nothing to do with shipping one or two binaries; Adobe just doesn't care enough about 64-bit Linux to ship Flash for it, and FatELF won't change that. It won't magically make 32<->64-bit dynamic linking work either; they are different ABIs and you'd still need a shim layer.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:55 UTC (Thu) by jengelh (guest, #33263) [Link] (2 responses)

> but unless a single application uses more than 3G of RAM it usually won't matter much to the app whether it's 32-bit or 64-bit.

Hell it will. Unless the program in question directly uses hand-tuned assembler, the 32-bit build will usually not use SSE2, just the olde x87, which is slower; as will be any computations involving integers larger than 32 bits.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 18:08 UTC (Thu) by pkern (subscriber, #32883) [Link] (1 responses)

Which is only partly true. Look into (/usr)?/lib/i686 and you'll see libs that will be loaded by the linker in preference to the plain ia32 ones if the hardware supports more than the least common denominator. It even works with /usr/lib/sse2 here on Debian if the package has support for it (see ATLAS or speex).

But of course, normally you don't rely on newer features everywhere, breaking support for older machines. Ubuntu goes i686 now, Fedora's already there, I think; and if you want more optimization I guess Gentoo is the way to go because you don't have to think portable. ;-)

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 18:25 UTC (Thu) by jengelh (guest, #33263) [Link]

Indeed, but that is for libraries only; it does not cover code inside programs or dlopen()ed plugins.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:43 UTC (Thu) by tzafrir (subscriber, #11501) [Link]

If I make a multi-arch CD (i386+powerpc, for instance) I already have to work around a number of issues. The ability to use standard binaries from packages, rather than rebuilding my own as fat ones, is a Good Thing. I have to mess with a unionfs anyway, for the writable file system.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 20:47 UTC (Wed) by Frej (guest, #4165) [Link]

> FATELF seems unnecessary. Why not just put your 32-bit binaries in one filesystem and your 64-bit ones in another? Then use unionfs to merge one or the other into your rootfs, depending on which architecture you're on. No need for a big new chunk of potentially insecure and buggy kernel code.

Assuming it's not a sin to distribute binaries, how would unionfs help when you download a binary?

> The reason Apple invented fat binaries is that they were interested in maintaining extensive binary compatibility with their old systems. Linux has never had this policy. Binaries that worked great on Fedora Core 9 probably won't work on Fedora Core 12, or Ubuntu 9.04, or whatever.

Well, without FatELF you need two binaries for each Fedora Core release. But of course, if you just want Linux for servers and admins, FatELF won't matter that much.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 21:32 UTC (Wed) by RCL (guest, #63264) [Link] (17 responses)

Those who want the "year of the Linux desktop" (i.e. adoption by the wide masses) to come should treat maintaining binary compatibility (backward and/or between major distros) as the most important goal...

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 21:53 UTC (Wed) by dlang (guest, #313) [Link] (14 responses)

You have to establish compatibility before you can worry about maintaining it ;-)

What features are you willing to give up to get your universal compatibility?

As a trivial example, if an application needs to store some data and upstream supports SQLite, MySQL, PostgreSQL, flat files, or various 'desktop storage' APIs, which one should the universal binary depend on? And why?

KDE and Gnome each have their 'standard' tool for storing contact information; should Gnome users be forced to load KDE libraries and applications (or KDE users forced to use the Gnome ones) to maintain compatibility?

What if someone comes up with something new? Should that be forbidden/ignored so that a universal binary can work on older systems that don't have the new software?

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 22:53 UTC (Wed) by RCL (guest, #63264) [Link] (4 responses)

I don't really have a well-thought-out solution, but more or less it's like this:

1) A single entity (with dictatorship rights) is designated to maintain a core "system" in a way similar to how the Linux kernel itself (or *BSD) is maintained. A new platform name is defined (or, ideally, "Linux" is redefined to mean kernel + core system).

2) The entity picks a set of core libraries which it is actually capable of maintaining (and guaranteeing backward compatibility for), and no compatible system is allowed to replace/enhance/modify them in any way (even recompiling the kernel locally) without losing the (official) compatibility and the ability to use the platform name (which should be made a trademark).

3) The versioning policy is similar to Apple's or Windows': every update (other than security fixes) has a given name and version (with a means to check that from code). The platform is updated only in its entirety; bugfixes are accumulated and introduced all at once.

I think it is sufficient for the above set of libraries to include only the functionality needed to write a game (generally speaking, any application with low-latency interactive video and audio).

In some ways it is similar to creating another distro dedicated to binary stability and binary multimedia applications, but it is not intended to be a full-blown distro with its own package management and policies, just a well-defined set of binary libraries + kernel.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 23:16 UTC (Wed) by dlang (guest, #313) [Link] (1 responses)

This is exactly what the Linux Standard Base is attempting to do.

Unfortunately, in practice it just doesn't work. This may be because they don't have sufficient dictatorial powers, but nobody wants to give them that much power ;-)

As for redefining what 'Linux' means, good luck with that windmill.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 23:41 UTC (Wed) by RCL (guest, #63264) [Link]

I think the LSB is not exactly the same, but much wider in scope - it even dictates the installer. And it is a certification board, not a vendor, so it cannot produce/maintain the aforementioned core system.

And overall... well, I'm not going to fight for that binary compatibility. I'm a game developer, sympathetic to Linux, but my target platforms are wildly different.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 16:29 UTC (Thu) by sorpigal (guest, #36106) [Link] (1 responses)

This sounds an awfully lot like Debian, to me.

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 12:14 UTC (Sun) by nix (subscriber, #2304) [Link]

More like a sort of really crippled and inflexible FreeBSD, with all ports forced to update only when the OS has a major version number bump: if you want a bugfix you have to wait for another giant mass of features to land. Great idea, not.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 23:03 UTC (Wed) by drag (guest, #31333) [Link] (8 responses)

> As a trivial example, if an application needs to store some data and upstream supports SQLite, MySQL, PostgreSQL, flat files, or various 'desktop storage' APIs, which one should the universal binary depend on? And why?

Well, presumably with 'fat binary support' the Linux distribution would take advantage of it to provide fat binaries for its main OS.

That way you avoid having to do ugly hacks like maintaining separate */lib and */lib64 trees. So the application author should not have to deal with issues like that (unless I am missing some aspect of SQL database datatype differences between 32- and 64-bit arches).

A distro moving to a "fat binary support" model should simultaneously be able to support backwards compatibility with 32-bit legacy applications and be prepared for the shiny new 64-bit future, without forcing users and application developers to deal with the details.

--------------------------------

From my personal experience sharing my home directory between multiple versions of Debian on different arches (64-bit and 32-bit x86, and 32-bit PPC), the only big compatibility issue with application storage was with X-Moto and its use of SQLite to store game information. It had to do with endianness issues between x86 and PPC, but I think it was actually fixed at a later date...

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 23:19 UTC (Wed) by dlang (guest, #313) [Link] (7 responses)

You don't need fat binaries to run 32-bit binaries on 64-bit systems, you just need the right libraries on the system, and a fat binary doesn't help you there (if you are on a 64-bit system but only have 32-bit versions of some library that the app needs, should you run the 32-bit version?).

The OP wanted a single fat binary that would run on every distro; doing that requires that all distros agree on what datastore to use when the application can be compiled to work with many different ones.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 23:49 UTC (Wed) by drag (guest, #31333) [Link] (6 responses)

> You don't need fat binaries to run 32-bit binaries on 64-bit systems,

Of course not. It just makes it more difficult and irritating for users, developers, and distribution makers to support multi-arch, having to play games with multiple locations and packages for the same pieces of software.

There is a reason I run a 64-bit kernel with a 32-bit userland on my Linux systems nowadays... I tried running 64-bit only and things like that, but it's a PITA to do that in Linux, while 64-bit application support is trivial (for end users) in OS X...

> you just need the right libraries on the system, and a fat binary doesn't help you there (if you are on a 64-bit system but only have 32-bit versions of some library that the app needs, should you run the 32-bit version?)

Well, if I have only a 32-bit version of a library (instead of the vastly preferable 32/64 fat library), that would presume that only a 32-bit version of that library exists.

Therefore a 64-bit version (or a 32-bit/64-bit 'fat binary' version) of an application that depends on that library would be impossible to have in the first place... right?

Either way it's not really any different from what we have to deal with now, but with fat binary support it would be handled intelligently by the system, whereas right now it requires a lot of manual intervention and significant technical understanding on the part of end users to deal with these sorts of compatibility issues.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 23:53 UTC (Wed) by drag (guest, #31333) [Link]

> I tried running 64-bit only

Well, for a better understanding: I was running Debian while attempting to juggle both 64-bit and 32-bit compatibility for the various applications I needed to run. Fedora is a bit better...

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 0:59 UTC (Thu) by dlang (guest, #313) [Link] (4 responses)

I run 64-bit-only systems everywhere (although Ubuntu uses nspluginwrapper to run 32-bit Flash in Firefox) and don't run into problems.

I will admit I don't run binary-only software (i.e. commercial games) on most of my systems, but that's more due to the lack of commercial games available for Linux than anything else.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 1:11 UTC (Thu) by RCL (guest, #63264) [Link] (3 responses)

Just by the way... Given the above discussion, what would you recommend for a developer who wants to ship a binary-only Linux game (or game-like application) and target as wide a userbase as possible?

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 19:59 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

I'd guess 32-bit (for oldish machines and netbooks). But some serious gamers I know spend more on their graphics card than I do on a complete machine, so for the high end, 64-bit with 2 (or even 4) cores is probably the way to go.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 21:41 UTC (Thu) by MisterIO (guest, #36192) [Link] (1 responses)

I may be somewhat naive here, but what about 32- and 64-bit versions of .deb and .rpm packages?

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 13:50 UTC (Fri) by vonbrand (subscriber, #4458) [Link]

On current Fedora, you can install 32- and 64-bit versions happily (most of the time); the installed packages share the non-architecture-specific stuff (like manpages and whatnot). Yes, it does require some delicate juggling when building the packages to make sure said manpages and such are exactly equal, among other considerations.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 1:37 UTC (Thu) by akumria (guest, #7773) [Link]

What is "wide masses" in your regard?

1% of the global population?

10% of all computer users?

100% of all operating systems?

In some areas it has been the "year of the Linux desktop" since 1997; for others, it is just starting.

An example is in this week's LWN. Poseidon Linux. Year of the scientific Linux desktop since 2004.

Anand

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 8:08 UTC (Thu) by jengelh (guest, #33263) [Link]

Linux is backward compatible - you can still run binaries once compiled for Linux 2.0, and there are even reports that some dating back to 0.99 work. Ask tytso.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 21:53 UTC (Wed) by Tara_Li (guest, #26706) [Link] (29 responses)

Um... FatELF - OK, we're going to have one binary that runs on... i386, x86-64, Itanium, a dozen different ARMs, SPARCs... I think Alpha support got dropped somewhere along the way... IBM Sys/390s... You know, somewhere along that line, you're dropping a 2 or 3 gigabyte binary file on my machine just to run Mozilla?

Bah. I really don't see a good case for FatELF.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 3:23 UTC (Thu) by ccurtis (guest, #49713) [Link] (5 responses)

It seems fairly plain to me. Look at all the different flavors of ARM and MIPS and VIA and A3 and Atom cores that people carry around in their handheld computers. When the day comes that you don't have to depend on an iStore or the App Market or Obj-C or Dalvik or whatever, and you just want to ship your 5MB game binary with its 500MB of textures without making your customers dig through lists of every cell phone model in existence, FatELF might actually be rather handy.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 5:01 UTC (Thu) by bronson (subscriber, #4806) [Link] (2 responses)

IF that day comes (I'm skeptical -- architectural diversity seems to be increasing), I expect kernel devs will be more receptive to it.

Until then, it seems like you're trying to merge the solution before the problem even exists.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 13:37 UTC (Thu) by ccurtis (guest, #49713) [Link] (1 responses)

I'm not necessarily arguing for FatELF, but isn't anticipating the market and having a solution before something becomes a problem the very definition of innovation?

Personally, I like the idea of having solutions rather than problems.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 17:11 UTC (Thu) by chad.netzer (subscriber, #4257) [Link]

Except when you guess wrong, and burden everyone with a worse problem. (Many examples exist, though RAMBUS jumps to mind)

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 20:02 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

Then ship the application as a .jar (or whatever the virtual machine du jour might be) file. Problem solved.

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 15:45 UTC (Fri) by intgr (subscriber, #39733) [Link]

As has been mentioned above, this problem is already solved. Shell scripts run on pretty much all Linux devices and are perfectly adequate for choosing the right binary to execute.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 17:49 UTC (Thu) by pj (subscriber, #4506) [Link] (3 responses)

I'm less worried about Mozilla and more interested in lib*, so that a FatELF-aware gcc/linker can do cross-compiles easily. Ever tried doing a build for an embedded box like ppc or arm on a non-ppc or non-arm machine? The toolchain is *painful*, because you have to make sure to link all the right libs from all the right places for the destination arch, plus you have to tell them that although on the current system they're found in /lib/arch-foo/, on the destination system they'll be in /lib ... total PITA. FatELF would provide a solution to that: all the libs are in... /lib. Done, period, end of story; picking the right segment out of the ELF file is something that the linker should do (and complain if it's not found!).

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 19:13 UTC (Thu) by tzafrir (subscriber, #11501) [Link]

What will it take to create them?

Specifically, I have libfoo installed for i386 from my distro. I now want to install libfoo for mips (or even worse: the powerpc variant of the day). Does it mean I have to modify /usr/lib/libfoo.so.1 as shipped by my distro?

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 19:43 UTC (Thu) by dlang (guest, #313) [Link]

If only it worked that easily.

Sometimes you need different versions of compilers for different architectures.

Go read Rob Landley's blog for the ongoing headaches of cross-compiling.

Having the results all in one file is trivial compared to all the other problems.

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 15:59 UTC (Fri) by vonbrand (subscriber, #4458) [Link]

Your "FatELF aware toolchain" is the sum total of the separate cross-toolchains, so there is no real gain here. That said, GCC has been the cross compiler of choice for most of its life, so it has quite a set of options for doing what you want, cleanly. Not your everyday use, sure, so it can be rough going.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 23:36 UTC (Thu) by Tet (guest, #5433) [Link] (18 responses)

> You know, somewhere along that line, you're dropping a 2 or 3 gigabyte binary file on my machine just to run Mozilla? Bah. I really don't see a good case for FatELF.

Yeesh. Everyone is bringing up countless examples of where FatELF could be abused and claiming that it's therefore useless. But no one has mentioned that FatELF solves some very real problems, problems that I encounter on a fairly regular basis. Here's a hint: if you don't want to use fat binaries, then don't. I'll guarantee you that even if it were included upstream, Fedora/Debian/OpenSUSE/Ubuntu etc. would continue to release architecture-specific images. But for some of us, that's not good enough, and FatELF is one solution to the problem. If people want to suggest others, I'm all ears...

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 13:54 UTC (Fri) by vonbrand (subscriber, #4458) [Link] (17 responses)

Please enlighten us as to the recurring problems you have that FatELF would solve.

For my part, I haven't run into any situation that didn't have a simple solution which did not involve changing the kernel and the whole buildchain. Doing so adds so much overhead that the problem would have to be humongous to make it worth my while, but you can leave that consideration out if you like.

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 18:57 UTC (Fri) by Tet (guest, #5433) [Link] (16 responses)

> Please enlighten us as to the recurring problems you have that FatELF would solve.

There's only one ~/.mozilla/plugins

Since my $HOME is NFS-mounted across a mix of 32-bit and 64-bit OSes, I'm basically screwed: 32-bit plugins won't work with a 64-bit Firefox and vice versa. Yes, you could argue the application should be fixed, but the same applies to GIMP and to countless other apps, which means there are an awful lot of applications out there to fix. If I could get a fat libflashplayer.so, for example, everything would Just Work™. I'm not suggesting that the whole OS should be fat binaries/shared libraries, but I'd like the option to use them where they make sense, as I believe they do here. Again, if you have a simple solution that doesn't involve FatELF or something similar, please let me know.

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 19:35 UTC (Fri) by dlang (guest, #313) [Link] (10 responses)

Before you worry about getting a fat libflashplayer.so, there first needs to be a 64-bit libflashplayer.so for you to use and merge with the 32-bit one.

Users of 64-bit desktops still use the 32-bit libflashplayer.so, run through an nspluginwrapper layer.

so this is not a case where FatELF would help in practice.

Even in theory, firefox doesn't have to have the plugin binaries under ~/, so if you don't install them there, and instead install them in one of the other places a plugin can live, you would be able to NFS-mount $HOME without a problem.

the same thing goes for any application that uses plugins. the plugin binaries should be able to be installed outside of $HOME. If they can be, then the application can work if $HOME is shared.

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 21:16 UTC (Fri) by Tet (guest, #5433) [Link] (9 responses)

Ye gods, does it really take much effort to see past the lack of a 64-bit flash plugin (which, incidentally, I do have, even if it's been discontinued by Adobe)? The same applies to any plugin. Forget that I mentioned flash, and think instead about a java plugin or an acroread plugin, or any other plugin you care to think of.

the same thing goes for any application that uses plugins. the plugin binaries should be able to be installed outside of $HOME. If they can be, then the application can work if $HOME is shared.

Yes, but here in the real world, they're not. Even if you could find a suitable location that would be writeable by a non-privileged user, it would mean changing the applications, and as I mentioned, there are many, many of those. Simply making a fat shared library possible would be a much easier and cleaner solution, and would have negligible impact on those that didn't want or need to use it. I don't understand why so many are opposed to it.

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 21:27 UTC (Fri) by dlang (guest, #313) [Link] (8 responses)

re: 64 bit flash

You do realize that the version you have is vulnerable to exploits that are being used in the wild. Adobe decided to discontinue the 64-bit version instead of fixing it.

A fat plugin is only useful if you also have fat libraries everywhere. This directly contradicts earlier posts that said not to worry about the bloat, as the distros would still ship non-fat images.

By the way, do you expect plugins to work across different operating systems as well, so that you can have your $HOME NFS-mounted on Mac OS X too? Where do you draw the line at what you insist is needed?

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 21:40 UTC (Fri) by Tet (guest, #5433) [Link] (7 responses)

Yes, I do know about the vulnerabilities with 64-bit flash. But like I said, this conversation isn't about flash. Despite your claim, I don't need fat libraries everywhere. On the 32-bit machines, I would already have the corresponding 32-bit libraries installed. And on the 64-bit machines, I'd have the 64-bit libraries installed. No, I don't use OS X, nor is it relevant to this discussion. If a particular approach solves a problem (as it will here), it's probably worthwhile, even if it doesn't solve every problem. I say again, why are you so anti fat binaries/libraries?

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 22:17 UTC (Fri) by dlang (guest, #313) [Link] (5 responses)

People supporting this want a fat binary that's only fat enough for their particular system, but claim that having fat binaries would solve distribution problems because there wouldn't need to be multiple copies.

These two positions conflict: if you ship fat binaries tailored to every possible combination of options, the distribution problem gets much larger; if you want a single fat binary to support every possible system, that binary is going to be substantially larger.

It's not that I am so opposed to the idea of fat binaries as that I don't see them as being all that useful or desirable. The problems they are trying to address seem to be solvable by other means pretty easily, and there has been little more than hand-waving over the cost.

SELF: Anatomy of an (alleged) failure

Posted Jun 26, 2010 7:33 UTC (Sat) by Tet (guest, #5433) [Link] (4 responses)

I'm not claiming fat binaries solve any particular distribution problem, nor do I believe that their existence means that fat binaries must cover every possible combination. In fact, just shipping a combined ia32 and x86_64 binary would cover 99% of the real-world machines. But even if you don't want to ship a fat binary, it's not hard to envisage tools that would allow an end user to create a fat binary from two (or more) slim ones (see the sketch below).

I've outlined a case where it would be both useful and desirable to have them, and to date, I haven't seen any sensible alternatives being proposed.
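
For what it's worth, such a glue tool needn't be complicated. Here's a minimal sketch; the container layout (a record count, a table of {machine, offset, size} entries, then the raw files) is made up for illustration and is not FatELF's actual on-disk format, which as I understand it keys its records off ELF header fields rather than free-form machine strings:

    /* fatglue.c -- illustrative only: pack two (or more) slim binaries into
     * one file using a made-up layout.  A real format would use fixed-width,
     * endian-neutral header fields; this sketch just writes native structs. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct member { char machine[16]; long offset; long size; };

    static long file_size(FILE *f)
    {
        fseek(f, 0, SEEK_END);
        long n = ftell(f);
        fseek(f, 0, SEEK_SET);
        return n;
    }

    int main(int argc, char **argv)
    {
        /* usage: fatglue out machine1 file1 [machine2 file2 ...] */
        if (argc < 4 || (argc - 2) % 2) {
            fprintf(stderr, "usage: %s out machine file [machine file ...]\n",
                    argv[0]);
            return 1;
        }

        int count = (argc - 2) / 2;
        struct member *m = calloc(count, sizeof(*m));
        FILE *out = fopen(argv[1], "wb");
        if (!out || !m) { perror("fatglue"); return 1; }

        /* Header: count + table.  Payloads start right after the table. */
        long offset = sizeof(count) + count * sizeof(*m);
        for (int i = 0; i < count; i++) {
            FILE *in = fopen(argv[3 + 2 * i], "rb");
            if (!in) { perror(argv[3 + 2 * i]); return 1; }
            snprintf(m[i].machine, sizeof(m[i].machine), "%s", argv[2 + 2 * i]);
            m[i].size = file_size(in);
            m[i].offset = offset;
            offset += m[i].size;
            fclose(in);
        }
        fwrite(&count, sizeof(count), 1, out);
        fwrite(m, sizeof(*m), count, out);

        /* Append the payloads in the same order as the table. */
        for (int i = 0; i < count; i++) {
            FILE *in = fopen(argv[3 + 2 * i], "rb");
            char buf[65536];
            size_t n;
            while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
                fwrite(buf, 1, n, out);
            fclose(in);
        }
        fclose(out);
        return 0;
    }

Something like "fatglue fat-libfoo.so x86_64 lib64/libfoo.so i686 lib32/libfoo.so" would then produce one file; the hard part, as the rest of this thread shows, is getting the runtime loader and dlopen() to pick the right member out of it.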

SELF: Anatomy of an (alleged) failure

Posted Jun 26, 2010 15:08 UTC (Sat) by vonbrand (subscriber, #4458) [Link] (3 responses)

Just use two binary packages, with the non-architecture-dependent stuff exactly the same, and arrange for the package manager to manage files belonging to several packages. RPM does this, and it works.

No need to screw around with the kernel, no need to have 3 versions of the package (arch 1, arch 2, fat).

SELF: Anatomy of an (alleged) failure

Posted Jun 26, 2010 18:20 UTC (Sat) by tzafrir (subscriber, #11501) [Link] (1 responses)

It works for rpm when all those shared files are identical in every package.

But what you want here is for rpm to merge files from different packages into a single file on disk. This won't work.

SELF: Anatomy of an (alleged) failure

Posted Jun 26, 2010 21:49 UTC (Sat) by dlang (guest, #313) [Link]

and fat binaries won't work in cases where you need different config file options for different architectures and the config file is on a shared drive

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 16:11 UTC (Sun) by Tet (guest, #5433) [Link]

<paxman>
Answer the question!
</paxman>

Your "solution" doesn't solve the problem where application plugins are concerned. Firstly, the majority them are not installed using the system package manager in the first place, and secondly, it's utterly irrelevant anyway. You can't package the achitecture specific bits separately, because the application only looks for them in one place. As I said right at the start, it would be good to fix the applications, but there are a hell of a lot of them. Fat binaries would solve the problem. Your suggestions wouldn't, without first also patching the apps.

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 17:17 UTC (Sun) by nix (subscriber, #2304) [Link]

FatELF wouldn't help with OSX anyway: OSX doesn't use ELF.

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 17:15 UTC (Sun) by nix (subscriber, #2304) [Link] (4 responses)

Ah, right. So we don't need completely arbitrary fat binaries at all: we need a 'fat dlopen()'.

I suspect -- though it's a kludge -- you could do this with an LD_PRELOADed wrapper around dlopen() which tweaks the filename appropriately, and slight changes to the downloading parts of e.g. firefox to put its dynamically loaded stuff in per-arch subdirectories when autodownloaded. dlopen() is not a hidden symbol so should be vulnerable to interposition.
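
A minimal sketch of that wrapper, assuming the dynamically loaded bits get dropped into per-arch subdirectories named after uname's machine field (the file names here are made up):

    /* fatopen.c -- sketch of the LD_PRELOAD kludge described above.
     * Build:  gcc -shared -fPIC -o fatopen.so fatopen.c -ldl
     * Use:    LD_PRELOAD=$PWD/fatopen.so firefox                  */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/utsname.h>

    void *dlopen(const char *file, int mode)
    {
        static void *(*real_dlopen)(const char *, int);
        if (!real_dlopen)
            real_dlopen = (void *(*)(const char *, int))
                          dlsym(RTLD_NEXT, "dlopen");

        if (file) {
            const char *slash = strrchr(file, '/');
            if (slash) {
                struct utsname u;
                uname(&u);              /* u.machine: "x86_64", "i686", ... */

                /* Try an arch-specific sibling first, e.g.
                 * ~/.mozilla/plugins/x86_64/libflashplayer.so */
                char tweaked[4096];
                snprintf(tweaked, sizeof(tweaked), "%.*s/%s/%s",
                         (int)(slash - file), file, u.machine, slash + 1);
                void *h = real_dlopen(tweaked, mode);
                if (h)
                    return h;
            }
        }
        return real_dlopen(file, mode); /* fall back to the original name */
    }

Whether every application's plugin loading actually goes through a dlopen() call this can catch is another matter, of course.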

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 17:55 UTC (Sun) by Tet (guest, #5433) [Link] (3 responses)

So we don't need completely arbitrary fat binaries at all: we need a 'fat dlopen()'

To solve this particular problem, yes. But then Ryan's FatELF release supported dlopen()ing fat shared libraries.

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 20:36 UTC (Sun) by nix (subscriber, #2304) [Link] (2 responses)

Yes, but if that's all you need to do, the kernel side of FatELF is superfluous.

SELF: Anatomy of an (alleged) failure

Posted Jun 28, 2010 8:53 UTC (Mon) by Tet (guest, #5433) [Link] (1 responses)

Oh agreed, and in this particular case, it's not necessary. However, there are other situations where full fat binaries might be a win. I just get extremely annoyed by people claiming that the whole concept of multi-arch binaries is useless just because they happen to not have a valid use for them, and are unable to see that others might have.

SELF: Anatomy of an (alleged) failure

Posted Jun 28, 2010 13:22 UTC (Mon) by nix (subscriber, #2304) [Link]

The attitude appears to be 'distributors don't need them therefore they are useless'. This seems, to me, more than a little shortsighted...

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:35 UTC (Thu) by epa (subscriber, #39769) [Link] (1 responses)

The reason why Apple invented FAT binaries is because they were interested in maintaining extensive binary compatibility with their old systems. Linux has never had this policy.
Might this not change? Perhaps one reason Linux has never kept backwards compatibility as well as Apple (or Windows, or Solaris) is because we haven't had the infrastructure and tools to do so easily. A mechanism for fat binaries might be one piece of the puzzle.

Be careful not to fall into the classic trap of equating 'my favourite system cannot support X' with 'X is unworkable' or even 'X is morally the wrong thing to do'.

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 3:36 UTC (Fri) by ajf (guest, #10844) [Link]

Perhaps one reason Linux has never kept backwards compatibility as well as Apple (or Windows, or Solaris) is because we haven't had the infrastructure and tools to do so easily.
It's a misunderstanding to say that Apple uses fat binaries because they care about backward compatibility; what they cared about, and implemented fat binaries to support, was cross-platform compatibility. (The distinction is that they wanted new software to work with new operating system releases on both old and new hardware; they're less interested in new software working with old operating systems.)

FatELF?

Posted Jun 24, 2010 17:35 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

Au contraire. I do believe the a.out binaries from the very first days of Linux still run fine on current kernels. What has changed is the environment: The currently most popular binary format is completely different, new libraries, languages have new ABIs, new ways to communicate among components are common today, ... A "FatELF binary" doesn't do any good if the right libraries, configuration files, devices, ... aren't available. Adding all that in would result in GargantuanELF.

In any case, the idea makes no sense, as this can be handled in other ways: just pack stuff up into a cpio(1) or some such file package plus a custom header, and create a special loader that handles that header. No kernel change needed (heck, if you can run Java or Win32 apps as if they were native, you certainly can do this). The binary format will be different in any case; use that freedom to create something that doesn't require kernel changes.
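
A sketch of what such a loader could look like, using the same made-up layout as the glue sketch further up the thread (a record count, a table of {machine, offset, size} entries, then the raw binaries) rather than anything FatELF actually specifies. A real version would presumably be registered through binfmt_misc so the kernel hands matching files to it automatically, which is exactly how the Java and Win32 cases are handled today:

    /* packrun.c -- illustrative only: pick the member matching `uname -m`
     * out of a packed multi-arch file, extract it, and exec it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/stat.h>
    #include <sys/utsname.h>

    struct member { char machine[16]; long offset; long size; };

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s package [args...]\n", argv[0]);
            return 1;
        }

        FILE *pkg = fopen(argv[1], "rb");
        if (!pkg) { perror(argv[1]); return 1; }

        struct utsname u;
        uname(&u);                      /* u.machine: "x86_64", "i686", ... */

        int count;
        if (fread(&count, sizeof(count), 1, pkg) != 1) {
            fprintf(stderr, "%s: not a packed binary\n", argv[1]);
            return 1;
        }

        for (int i = 0; i < count; i++) {
            struct member m;
            if (fread(&m, sizeof(m), 1, pkg) != 1)
                break;
            if (strcmp(m.machine, u.machine) != 0)
                continue;

            /* Extract the matching payload to a temp file and run it. */
            char tmp[] = "/tmp/packrunXXXXXX";
            int fd = mkstemp(tmp);
            if (fd < 0) { perror("mkstemp"); return 1; }
            fchmod(fd, 0700);

            char *buf = malloc(m.size);
            fseek(pkg, m.offset, SEEK_SET);
            if (!buf || fread(buf, 1, m.size, pkg) != (size_t)m.size) {
                fprintf(stderr, "short read\n");
                return 1;
            }
            write(fd, buf, m.size);
            close(fd);

            argv[1] = tmp;              /* becomes argv[0] of the child */
            execv(tmp, argv + 1);
            perror("execv");
            return 1;
        }
        fprintf(stderr, "%s: no member for %s\n", argv[1], u.machine);
        return 1;
    }

That's it: no kernel patch, and the only things that need to agree on the format are the tool that packs the file and the loader that unpacks it.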

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 17:34 UTC (Sun) by da4089 (subscriber, #1195) [Link]

> The reason why Apple invented FAT binaries is because they were interested
> in maintaining extensive binary compatibility with their old systems.

Actually, Apple inherited fat binaries in Mach-O from NeXT.

NeXT supported fat binaries because NeXTSTEP (and later OpenStep) was available for multiple CPU architectures (68k, x86, SPARC32, PA-RISC), and they wanted to enable ISVs to ship binary applications that worked on all platforms.

To make that work effectively, they
a) Maintain ABIs, use weak-linking, etc
b) Distribute applications as a bundle
c) Support fat binaries

This is a viable approach, as Apple has recently demonstrated.

But it's a very different model to the usual Linux distribution, which needs none of those things, and relies on the dependency resolution of the packaging system and rigorous testing of API compatibility when building the consistent package set.

I don't think attempting to move Linux towards the NeXT/Apple model is useful, but I also don't see why those that want to can't maintain an out-of-tree patch to make it work.

