
SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 20:38 UTC (Wed) by drag (guest, #31333)
In reply to: SELF: Anatomy of an (alleged) failure by cmccabe
Parent article: SELF: Anatomy of an (alleged) failure

> FATELF seems unnecessary. Why not just put your 32-bit binaries in one filesystem, and your 64 bit ones in another? Then use unionFS to merge one or the other into your rootfs, depending on which architecture you're on. No need for a big new chunk of potentially insecure and buggy kernel code.

The point of it is to make things easier for users to deal with... forcing them onto UnionFS (especially when it's not part of the kernel and doesn't seem likely to ever be incorporated) and using layered file systems by default on every Linux install sounds like a huge PITA.

Having 'Fat' binaries is really the best solution for OSes that want to support multiple arches in the easiest and most user-friendly way possible (especially on x86-64, where it can run 32-bit and 64-bit code side by side).

It's not just a matter of supporting Adobe Flash or something like that; it's simply a superior technical solution at all levels, from both a user's and a system administrator's perspective.

> The reason why Apple invented FAT binaries is because they were interested in maintaining extensive binary compatibility with their old systems. Linux has never had this policy. Binaries that worked great on Fedora Core 9 probably won't work on Fedora Core 12, or Ubuntu 9.04, or whatever.

Actually Apple is very average when it comes to backwards compatibility. They certainly are no Microsoft. The point of fat binaries is just to make things easier for users and developers... which is exactly the entire point of having an operating system in the first place.

Some Linux kernel developers like to maintain that they support a stable ABI for userland and brag that software written for the Linux 2.0 era will still work on 2.6. In fact it seems that maintaining the userspace ABI/API is a high priority for them. (Much higher than for the typical userland developer, anyway. Application libraries are usually a bigger compatibility problem than anything in the kernel.)



SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 21:40 UTC (Wed) by dlang (guest, #313) [Link] (48 responses)

why would you need a fat binary for an AMD64 system? if you care, you just use the 32-bit binaries everywhere.

using a 64 bit kernel makes a huge difference in a system, but unless a single application uses more than 3G of ram it usually won't matter much to the app if it's 32 bit or 64 bit. there are some apps where it will matter, but those are special cases and probably not where a universal binary would be applicable.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 22:10 UTC (Wed) by drag (guest, #31333) [Link] (32 responses)

> using a 64 bit kernel makes a huge difference in a system

I do actually use a 64bit kernel with 32bit userland. With Fat binaries I would not have to give a shit one way or the other.

> but unless a single application uses more than 3G of ram it usually won't matter much to the app if it's 32 bit or 64 bit. there are some apps where it will matter, but those are special cases and probably not where a universal binary would be applicable.

Here are some issues:

* The fat binary solves the problems you run into during the transition to a 64bit system. This makes it easier for users and Linux distribution developers to cover the multitude of corner cases. For example: for a period of time, installing 'Pure 64' versions of Linux meant that you had to give up the ability to run OpenOffice.org. This is solved now, but it's certainly not an isolated issue.

* People who actually need to run 64bit software for performance enhancements or memory requirements will have their applications 'just work' (completely regardless of whether they were 32bit or 64bit) with no requirements for complicated multi-lib setups, chroots, and other games that users otherwise have to play. They just install it and it will 'just work'.

* Currently, if you do not need 64bit compatibility you will probably want to install only 32bit binaries. However, if in the future you run into software that requires 64bit compatibility, the status quo would require you to re-install the OS.

* Distributions would not have to supply multiple copies of the same software packages in order to support the arches they need to support.

* Application developers (both OSS and otherwise) can devote their time more efficiently to meet the needs of their users and can treat 64bit compatibility as an optional feature that they can support when it's appropriate for them, rather than being forced to move to 64bit as dictated by Linux OS design limitations.

Yeah, FAT binaries only really solve 'special case' issues with supporting multiple arches, but the number of special cases is actually high and diverse. When you examine the business market, where everybody uses custom in-house software, the special cases are even more numerous than the typical problems you run into with home users.

Sure, it's not absolutely required and there are lots of workarounds for each issue you run into. On a scale of 1-10 in terms of importance (where 10 is most important and 1 is least) it ranks about a 3 or a 4. But the point is that FAT binaries are simply a superior technical solution to what we have right now, would solve a lot of usability issues, and the proposal comes from an application developer who has to deal with _real_world_ issues caused by the lack of fat binaries, working on software that is really desirable to a significant number of potential Linux users.

He would not have spent all this time and effort implementing FatELF if it did not solve a severe issue for him.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 22:52 UTC (Wed) by cmccabe (guest, #60281) [Link] (10 responses)

> * Currently, if you do not need 64bit compatibility you will probably
> want to install only 32bit binaries. However, if in the future you run into
> software that requires 64bit compatibility, the status quo would
> require you to re-install the OS.

When you get a new computer, normally you reinstall the OS and copy over your /home directory. For all but a few highly technical users, this is the norm. Windows even has a special "feature" called Windows Genuine Advantage that forces you to reinstall the OS when the hardware has changed. You *cannot* use your previous install.

Anyway, running a Linux installer and then doing some apt-get only takes an hour or two.

> * Application developers (both OSS and otherwise) can devote their time
> more efficiently to meet the needs of their users and can treat 64bit
> compatibility as an optional feature that they can support when it's
> appropriate for them rather than being forced to move to 64bit as
> dictated by Linux OS design limitations.

FATELF has nothing to do with whether software is 64-bit clean. If some doofus is assuming that sizeof(long) == 4, FATELF is not going to ride to the rescue. (Full disclosure: sometimes that doofus has been me in the past.)
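
For illustration, a minimal sketch (mine, not from the thread) of the point: the same source compiled with gcc -m32 and gcc -m64 disagrees about sizeof(long), and a fat container would simply ship both builds, bugs included.

    /* sizes.c - illustrative only: ILP32 vs LP64 type widths. */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(long)   = %zu\n", sizeof(long));   /* 4 on 32-bit x86, 8 on x86-64 */
        printf("sizeof(void *) = %zu\n", sizeof(void *)); /* likewise */
        return 0;
    }

Any code that stores a pointer in a plain int or assumes long is 4 bytes has to be fixed in the source; no packaging format changes that.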

> He would not have spent all this time and effort implementing FatELF if
> it did not solve a severe issue for him.

I can't think of even a single issue that FATELF "solves," except maybe to allow people distributing closed-source binaries to have one download link rather than two. In another 3 or 4 years, 32-bit desktop systems will be a historical curiosity, like dot-matrix printers or Commodore 64s, and we will be glad we didn't put some kind of confusing and complicated binary-level compatibility system into the kernel.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 0:25 UTC (Thu) by drag (guest, #31333) [Link]

> Windows even has a special "feature" called Windows Genuine Advantage that forces you to reinstall the OS when the hardware has changed. You *cannot* use your previous install.

Windows sucks in a lot of ways, but Windows sucking has nothing to do with Linux sucking also. You can improve Linux and make it more easy to use without giving a crap what anybody in Redmond is doing.

If I am your plumber and you pay me money to fix your plumbing and I do a really shitty job of it... and you complain to me about it... does it comfort you when I tell you that whenever your neighbor washes his dishes, his basement floods? Does it make your plumbing better knowing that somebody else has it worse than you?

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 12:07 UTC (Thu) by nye (subscriber, #51576) [Link] (4 responses)

>When you get a new computer, normally you reinstall the OS and copy over your /home directory. For all but a few highly technical users, this is the norm. Windows even has a special "feature" called Windows Genuine Advantage that forces you to reinstall the OS when the hardware has changed. You *cannot* use your previous install.

I know FUD is the order of the day here at LWN, but this has gone beyond that point and I feel the need to call it:

You are a liar.

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 8:26 UTC (Fri) by k8to (guest, #15413) [Link]

I'm confused. FUD is the order of the day?

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 12:12 UTC (Sun) by nix (subscriber, #2304) [Link] (2 responses)

Well, to be charitable, WGA is an appalling intentionally-user-hostile mess that MS keep very much underdocumented, so it is reasonable to believe that this is what WGA does without being a liar. One could simply be mistaken.

(Certainly when WGA fires, it does make it *appear* that you have to reinstall the OS, because it demands that you pay MS a sum of money equivalent to a new OS install. But, no, they don't give you a new OS for that. You pay piles of cash and get a key back instead, which makes your OS work again -- until you have the temerity to change too much hardware at once; the scoring system used to determine which hardware is 'too much' is documented, but not by Microsoft.)

SELF: Anatomy of an (alleged) failure

Posted Jun 28, 2010 10:03 UTC (Mon) by nye (subscriber, #51576) [Link] (1 responses)

For the record, my experience of WGA is as follows:

I've never actually *seen* WGA complain about a hardware change; the only times I've ever seen it are when reinstalling on exactly the same hardware (eg 3 times in a row because of a problem with slipstreaming drivers).

In principle though, if you change more than a few items of hardware at once (obviously this would include transplanting the disk into another machine), or whenever you reinstall, Windows will ask to be reactivated. If you reactivate too many times over a short period, it will demand that you call the phone number to use automated phone activation. At some point it will escalate to non-automated phone activation where you actually speak to a person. This is the furthest I've ever seen it go, though I believe there's a further level where you speak to the person and you have to give them a plausible reason for why you've installed the same copy of Windows two dozen times in the last week. If you then can't persuade them, this would be the point where you have to pay for a new license.

This is obnoxious and hateful, to be sure, but it is entirely unlike the behaviour described. The half-truths and outright untruths directed at Windows from some parts of the open source community make it hard to maintain credibility when describing legitimate grievances or technical problems, and this undermines us all.

SELF: Anatomy of an (alleged) failure

Posted Jun 28, 2010 13:25 UTC (Mon) by nix (subscriber, #2304) [Link]

Well, that's quite different from my experience (it fired once and demanded I phone a number where a licensing goon tried to extract the cost of an entire Windows license from me despite my giving them a key: 'that key is no longer valid because WGA has fired', wtf?).

I suspect that WGA's behaviour (always ill-documented) has shifted over time, and that as soon as you hit humans on phone lines you become vulnerable to the varying behaviour of those humans. I suspect all the variability can be explained away that way.

Still, give me free software any day. No irritating license enforcer and hackability both.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 12:28 UTC (Thu) by Cato (guest, #7643) [Link]

Windows does make it hard to re-use an existing installation on new hardware, but it is certainly possible. Enterprises do this every day, and some backup tools make it possible to restore Windows partition images onto arbitrary hardware, including virtual machines.

Linux is much better at this generally, but this ability is not unique to Linux.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 17:26 UTC (Thu) by jschrod (subscriber, #1646) [Link] (2 responses)

> When you get a new computer, normally you reinstall the OS and copy over
> your /home directory.

And if you use it for anything beyond office/Web surfing, you configure the system for a few days afterwards... (Except if you have a professional setup with some configuration management behind it, which the target group of this proposal most probably doesn't have.)

> Windows even has a special "feature" called Windows Genuine Advantage
> that forces you to reinstall the OS when the hardware has changed. You
> *cannot* use your previous install.

OK, that shows that you are not a professional. This is bullshit, plain and simple: For private and SOHO users, WGA may trigger reactivation, but no reinstall. (Enterprise-class users use deployment tools anyhow and do not come in such a situation.)

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 19:04 UTC (Thu) by cmccabe (guest, #60281) [Link] (1 responses)

> OK, that shows that you are not a professional. This is bullshit, plain
> and simple: For private and SOHO users, WGA may trigger reactivation, but
> no reinstall. (Enterprise-class users use deployment tools anyhow and do
> not come in such a situation.)

Thank you for the correction. I do not use Windows at work. It's not even installed on my work machine. So I'm not familiar with enterprise deployment tools for Windows. I wasn't trying to spread FUD-- just genuinely did not know there was a way around WGA in this case.

However, the point I was trying to make is that most home users expect that new computer == new OS install. Some people in this thread have been claiming that Linux distributions need to support moving a hard disk between 32 and 64 bit machines in order to be a serious contender as a desktop operating system. (And they're unhappy with the obvious solution of using 32-bit everywhere.)

I do not think that most home users, especially nontechnical ones, are aware that this is even possible with Windows. I certainly don't think they would view it as a reason not to switch.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 19:50 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

It is much simpler than that: Very few people do move disks from one computer to the next. And those who do have the technical savvy to handle any resulting mess.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 23:29 UTC (Wed) by dlang (guest, #313) [Link] (19 responses)

you can happily run 32 bit userspace on a 64 bit kernel; you already don't have to care about this.

as for transitioning, install a 64 bit system and 32 bit binaries; as long as you have the libraries on the system they will work. fatelf doesn't help you here (it may help if your libraries were all fat, but I fail to see how that's really much better than having /lib32 and /lib64; your hard drive may be large enough to double the size of everything stored on it, but mine sure isn't)

distros would still have to compile and test all the different copies of their software for all the different arches they support; they would just combine them together before shipping (at which point they would have to ship more CDs/DVDs and/or pay higher bandwidth charges to get people copies of binaries that don't do them any good)

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 0:37 UTC (Thu) by drag (guest, #31333) [Link] (3 responses)

> you can happily run 32 bit userspace on a 64 bit kernel; you already don't have to care about this.

I do have to care about it if, in the future, I want to run an application that benefits from 64bit-ness.

Some operations are faster in 64bit and many applications, such as games, already benefit from the larger address space.

> (it may help if your libraries were all fat, but I fail to see how that's really much better than having /lib32 and /lib64; your hard drive may be large enough to double the size of everything stored on it, but mine sure isn't)

Yes. That is what I am talking about. Getting rid of architecture-specific directories and going with FatElf for everything.

You're wrong in thinking that having 64bit and 32bit support in a binary means that you're doubling your system's footprint. Generally speaking, the architecture-specific files in a software package are small compared to the overall size of the application. Most ELF files are only a few K big; only rarely do they get past half a dozen MB.

My user directory is about 4.1GB. Adding FatELF support for 32bit/64bit applications would probably only plump it up by 400-600 MB or so.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:56 UTC (Thu) by dlang (guest, #313) [Link] (2 responses)

If it really is such low overhead and as useful as you say, why don't you (alone or with help from others who believe this) demonstrate it for us on a real distro?

take some distro (say ubuntu, since it supports multiple architectures), download the repository (when I did this a couple of years ago it was 600G; nowadays it's probably larger, so it may take $150 or so to buy a 2TB USB drive, and it will take you a while to download everything), then create a unified version of the distro, making all the binaries and libraries 'fat', and advertise the result. I'm willing to bet that if you did this as a plain repackaging of ubuntu with no changes you would even be able to get people to host it for you (you may even be able to get Canonical to host it if your repackaging script is simple enough)

I expect that the size difference is going to be larger than you think (especially if you include every architecture that ubuntu supports, not just i486 and AMD64), and this size will end up costing performance as well as having effects like making it hard to create an install CD etc.

I may be wrong and it works cleanly, in which case there will be real ammunition to go back to the kernel developers with (although you do need to show why you couldn't just use additional ELF sections with a custom loader instead as was asked elsewhere)

If you could do this and make a CD like the ubuntu install CD, but one that would work on multiple architectures (say i486, AMD64, PowerPC), that would get people's attention. (but just making a single disk that does this without having the rest of a distro to back it up won't raise nearly the interest that you will get if you can script doing this to an entire distro)

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 12:12 UTC (Thu) by nye (subscriber, #51576) [Link] (1 responses)

>If it really is such low overhead and as useful as you say, why don't you (alone or with help from others who believe this) demonstrate it for us on a real distro?

Because the subject of this article already did that: http://icculus.org/fatelf/vm/

It's not as well polished as it could be - I got the impression that he didn't see much point in improving it after it was dismissed out of hand.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 20:47 UTC (Thu) by MisterIO (guest, #36192) [Link]

IMO the problem with FatELF isn't that nobody proved that it was doable (because they did that), but that nobody really acknowledged that there's any real problem with the current situation.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 24, 2010 1:24 UTC (Thu) by cesarb (subscriber, #6266) [Link] (14 responses)

> your hard drive may be large enough to double the size of everything stored on it, but mine sure isn't

And some are very small indeed. One of my machines has only a 4 gigabyte "hard disk" (gigabyte, not terabyte). It is an EeePC 701 4G. (And it is in fact a small SSD, thus the quotes.)

There are also the Live CDs/DVDs, which are limited to a fixed size. Fedora is moving to use LZMA to cram even more stuff into its live images (https://fedoraproject.org/wiki/Features/LZMA_for_Live_Images). Note also that installing from a live image, at least on Fedora and IIRC on Ubuntu, is done by simply copying the whole live image to the target disk, so the size limitations of live images directly influence what is installed by default.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 24, 2010 9:06 UTC (Thu) by ncm (guest, #165) [Link] (2 responses)

This, very incidentally, is one of the reasons I object to Gnome forcing a dependency on Nautilus into gnome-session. In practice, Gnome works fine without Nautilus, once you jimmy the gnome-session package install and poke exactly one gconf entry. That saves 60M on disk, and a gratifying amount of RAM/swap. It's only arrogance and contempt that makes upstream keep the dependency.

Disconnecting nautilus from gnome session

Posted Jun 24, 2010 20:56 UTC (Thu) by speedster1 (guest, #8143) [Link] (1 responses)

I know this is off-topic... but would you mind giving a little more detail on how to remove the nautilus dependency?

Disconnecting nautilus from gnome session

Posted Jun 26, 2010 9:56 UTC (Sat) by ncm (guest, #165) [Link]

In gconf-editor, go to desktop/gnome/session/, and change required_components_list to "[windowmanager,panel]".

While we're way, way off topic, you might also want to go to desktop/gnome/interface and change gtk_key_theme to "Emacs" so that the text edit box keybindings (except in Epiphany, grr) are Emacs-style.

Contempt, thy name is Gnome.

Getting back on topic, fat binaries make perfect sense for shared libraries, so they can all go in /lib and /usr/lib. However, there's no reason to think anybody would force them on you for an EeePC install.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 24, 2010 12:30 UTC (Thu) by Cato (guest, #7643) [Link]

Nobody is suggesting that everyone should have to double the size of their binaries - most distros would use single architecture binaries. FatELF is a handy feature for many special cases, that's all.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 24, 2010 17:03 UTC (Thu) by chad.netzer (subscriber, #4257) [Link] (9 responses)

Which reminds me, why aren't loadable binaries compressed on disk, and uncompressed on the fly? Surely any decompression overhead is lower than a rotating storage disk seek, and common uncompressed binaries would get cached anyways. I suppose it's because it conflicts with mmap() or something.
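
Roughly, the mmap() conflict is this: the ELF loader maps executable segments directly from the file, so each page can be faulted in from disk (and later discarded and re-read) on its own, which whole-file compression would defeat. A small userspace sketch of the same kind of mapping, purely illustrative and not the loader's actual code:

    /* map_demo.c - maps a file the way text segments are mapped:
     * demand-paged, read-only/executable, backed by the file itself. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Pages are read from disk only when touched; clean pages can be
         * dropped under memory pressure and re-read later.  A compressed
         * on-disk image loses this one-to-one page <-> file-offset mapping. */
        void *p = mmap(NULL, st.st_size, PROT_READ | PROT_EXEC, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }

Compressed filesystems (squashfs on live CDs, for instance) sidestep this at a different layer by decompressing fixed-size blocks into the page cache.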

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 24, 2010 19:40 UTC (Thu) by dlang (guest, #313) [Link] (8 responses)

it all depends on your system.

do you have a SSD?

are you memory constrained (decompressing requires that you have more space than the uncompressed image)

do you page out parts of the code and want to read in just that page later (if so, you would have to uncompress the entire binary to find the appropriate page)

what compression algorithm do you use? many binaries don't actually compress that well, and some decompression algorithms (bzip2 for example) are significantly slower than just reading the raw data.

I actually test this fairly frequently when processing log data. in some conditions having the data compressed and uncompressing it when you access it is a win; in other cases it isn't.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 25, 2010 0:14 UTC (Fri) by chad.netzer (subscriber, #4257) [Link] (7 responses)

Yeah, I made reference to some of the gotchas (spindles, mmap/paging). Actually, it sounds like the kind of thing that, should you care about it, is better handled by a compressed filesystem mounted onto the bin directories, rather than some program loader hackery.

Still, why the heck must my /bin/true executable take 30K on disk? And /bin/false is a separate executable that takes *another* 30K, even though they are both dynamically linked to libc??? Time to move to busybox on the desktop...

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 25, 2010 0:38 UTC (Fri) by dlang (guest, #313) [Link] (4 responses)

re: size of binaries

http://www.muppetlabs.com/~breadbox/software/tiny/teensy....

A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 25, 2010 2:41 UTC (Fri) by chad.netzer (subscriber, #4257) [Link] (3 responses)

Yes, I'm familiar with that old bit of cleverness. :) Note that the GNU coreutils stripped /bin/true and /bin/false executables are more than an order of magnitude larger than the *starting* binary that is whittled down in that demonstration. Now, *that* is code bloat.

To be fair getting your executable much smaller than the minimal disk block size is just a fun exercise. Whereas coreutils /bin/true may actually benefit from an extent based filesystem. :) Anyway, it's just a silly complaint I'm making, though it has always annoyed me a tiny bit.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 25, 2010 12:25 UTC (Fri) by dark (guest, #8483) [Link] (2 responses)

Yes, but GNU true does so much more! It supports --version, which tells you all about who wrote it and about the GPL and the FSF. It also supports --help, which explains true's command-line options (--version and --help). Then there is the i18n support, so that people from all over the world can learn about --help and --version. You just don't get all that with a minimalist ELF binary.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 25, 2010 15:38 UTC (Fri) by intgr (subscriber, #39733) [Link] (1 responses)

Indeed, I use those features every day! ;)

PS: Shells like zsh actually ship builtin "true" and "false" commands

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 29, 2010 23:03 UTC (Tue) by peter-b (guest, #66996) [Link]

So does POSIX sh. The following command is equivalent to true:

:

The following command is equivalent to false:

! :

I regularly use both when writing shell scripts.

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 27, 2010 16:42 UTC (Sun) by nix (subscriber, #2304) [Link]

There are two separate binaries because the GNU Project thinks it is confusing to have single binaries whose behaviour changes depending on what name they are run as, even though this is ancient hoary Unix tradition. Apparently people might go renaming the binaries and then get confused when they don't work the same. Because we do that all the time, y'know.

(I think this rule makes more sense on non-GNU platforms, where it is common to rename *everything* via --program-prefix=g or something similar, to prevent conflicts with the native tools. But why should those of us using the GNU toolchain everywhere be penalized for this?)
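
For reference, a sketch of the argv[0] dispatch being described (mine, not GNU's or busybox's actual source): one binary installed under several names, picking its behaviour from the name it was invoked as.

    /* name_dispatch.c - link the same binary in as both "true" and "false". */
    #include <string.h>

    int main(int argc, char **argv)
    {
        (void)argc;
        const char *slash = strrchr(argv[0], '/');
        const char *name = slash ? slash + 1 : argv[0];

        /* Invoked as "false": exit non-zero; any other name behaves as true. */
        return strcmp(name, "false") == 0 ? 1 : 0;
    }

busybox uses the same trick for its entire applet set, which is what the earlier "busybox on the desktop" quip relies on.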

Disk space (was: SELF: Anatomy of an (alleged) failure)

Posted Jun 27, 2010 16:46 UTC (Sun) by nix (subscriber, #2304) [Link]

The size, btw, is probably because the gnulib folks have found bugs in printf which the glibc folks refuse to fix (they only cause buffer overruns or bus errors on invalid input, after all, how problematic could that be?) so GNU software that uses gnulib will automatically replace glibc's printf with gnulib's at configure time. (That this happens even for things like /bin/true, which will never print the class of things that triggers the glibc printf bugs, is a flaw, but not a huge one.)

And gnulib, because it has no stable API or ABI, is always statically linked to its users.

26KB for a printf implementation isn't bad.

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 12:08 UTC (Sun) by nix (subscriber, #2304) [Link]

> * Currently, if you do not need 64bit compatibility you will probably want to install only 32bit binaries. However, if in the future you run into software that requires 64bit compatibility, the status quo would require you to re-install the OS.
So, because some distribution's biarch support sucks enough that it can't install a bunch of 64-bit dependencies into /lib64 and /usr/lib64 when you install a 64-bit binary, we need a kernel hack?

Please. There are good arguments for FatELF, but this is not one of them.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 23:34 UTC (Wed) by cortana (subscriber, #24596) [Link] (11 responses)

> why would you need a fat binary for an AMD64 system? if you care, you just use the 32-bit binaries everywhere.

So I could use Flash.

So I could buy a commercial Linux game and run it without having to waste time setting up an i386 chroot or similar.

Both areas that contribute to the continuing success of Windows and Mac OS X on the desktop.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 2:27 UTC (Thu) by BenHutchings (subscriber, #37955) [Link] (9 responses)

FatELF might make it somewhat easier for Adobe or the game developer to distribute x86-64 binaries to those that can use them, but if they don't intend to build and support x86-64 binaries then it doesn't solve your problem.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 9:10 UTC (Thu) by cortana (subscriber, #24596) [Link] (8 responses)

FatELF would have made it easier for distributors to ship a combined i386/amd64 distro. This would have made it possible for them to ship i386 libraries that are required to support i386 web browsers, for Flash, and i386 games and other proprietary software.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 10:43 UTC (Thu) by michich (guest, #17902) [Link] (7 responses)

But what you describe already works today and FatELF is not needed for it. It's called multilib.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 11:00 UTC (Thu) by cortana (subscriber, #24596) [Link] (6 responses)

But it doesn't work today, at least on any Debian-derived distribution. You have to rely on the ia32-libs conglomeration package to have picked up the right version of the library you want when it was last updated, which is not a regular occurrence (bugs asking for the libraries needed by Flash 10 are still open 2 years later).

Even if Debian did have an automatic setup for compiling all library packages with both architectures, you are then screwed because they put the amd64 libraries in /lib (with a symlink at /lib64) and the i386 libraries in /lib32. So your proprietary i386 software that tries to dlopen files in /lib fails because they are of the wrong architecture.

You could argue that these are Debian-specific problems. You might be right. But they are roadblocks to greater adoption of Linux on the desktop, and now that the FatELF way out is gone, we're back to the previous situation: waiting for the 'multiarch' fix (think FatELF but with all libraries in /lib/$(arch-triplet)/libfoo.so rather than the code for several architectures in a FatELF-style, single /lib/libfoo.so), which has failed to materialise in the 6 years since I first saw it mentioned. And which still won't fix proprietary software that expects to find its own architecture's files at /lib.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 17:44 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

That multilib doesn't work on Debian is squarely Debian's fault (my Fedora here is still not completely 32-bit free, but getting there). No need to burden the kernel for that.

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 12:31 UTC (Sun) by nix (subscriber, #2304) [Link] (4 responses)

> Even if Debian did have an automatic setup for compiling all library packages with both architectures, you are then screwed because they put the amd64 libraries in /lib (with a symlink at /lib64) and the i386 libraries in /lib32. So your proprietary i386 software that tries to dlopen files in /lib fails because they are of the wrong architecture.
I've run LFS systems with the /lib / /lib32 layout for many years (because I consider /lib64 inelegant on a principally 64-bit system). You know how many things I've had to fix because they had lib hardwired into them? *Three*. And two of those were -config scripts (which says how old they are right then and there; modern stuff would use pkg-config). Not one was a dlopen(): they all seem to be using $libdir as they should.

This simply is not a significant problem.

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 13:14 UTC (Sun) by cortana (subscriber, #24596) [Link] (1 responses)

I'm very happy that you did not run into this problem, but I have. IIRC it was with Google Earth. strace clearly showed it trying to dlopen some DRI-related library, followed by it complaining about 'wrong ELF class' and quitting.
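
For anyone curious what that looks like from the program's side, a small sketch (the path is a placeholder, not what Google Earth actually opens): when a 32-bit process dlopen()s a 64-bit shared object, glibc's loader rejects it with an ELF class error rather than "not found".

    /* dlopen_probe.c - build 32-bit, e.g. gcc -m32 dlopen_probe.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/usr/lib/libGL.so.1";  /* placeholder path */

        void *handle = dlopen(path, RTLD_NOW);
        if (!handle) {
            /* On a system whose /usr/lib is 64-bit, this typically prints
             * something like "... wrong ELF class: ELFCLASS64". */
            fprintf(stderr, "dlopen(%s) failed: %s\n", path, dlerror());
            return 1;
        }
        dlclose(handle);
        return 0;
    }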

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 17:48 UTC (Sun) by nix (subscriber, #2304) [Link]

Well, DRI is a whole different kettle of worms. I suspect a problem with your OpenGL implementation, unless Google Earth has a statically linked one (ew).

(Words cannot express how much I don't care about statically linked apps.)

SELF: Anatomy of an (alleged) failure

Posted Jul 10, 2010 12:31 UTC (Sat) by makomk (guest, #51493) [Link] (1 responses)

Yeah, dlopen() problems with not finding libraries in /lib32 don't tend to happen, mostly because it's just easier to do it the right way from the start and let dlopen() search the appropriate directories. (Even on pure 32-bit systems, some libraries are in /lib on some systems, /usr/lib on others, and perhaps even in /usr/local/lib or $HOME/lib if they've been manually installed.)

SELF: Anatomy of an (alleged) failure

Posted Jul 10, 2010 20:24 UTC (Sat) by nix (subscriber, #2304) [Link]

dlopen() doesn't search directories for you, does it? Programs generally want to look in a subdirectory of the libdir, anyway. Nonetheless they almost all look in the right place.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 18:43 UTC (Thu) by Spudd86 (subscriber, #51683) [Link]

FatELF won't have any effect on the Flash situation at all. It has nothing to do with shipping one or two binaries; Adobe just doesn't care enough about 64-bit Linux to ship Flash for it, that's it, and FatELF won't change that. It won't magically make 32<->64 dynamic linking work either; they are different ABIs and you'd still need a shim layer.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:55 UTC (Thu) by jengelh (guest, #33263) [Link] (2 responses)

>but unless a single application uses more than 3G of ram it usually won't matter much to the app if it's 32 bit or 64 bit.

Hell it will. Unless the program in question directly uses hand-tuned assembler, the 32-bit one will usually not run with SSE2, just the olde x87, which is slower, as will be any computations involving larger-than-32-bit integers.
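
An easy way to see both effects is to compile a toy function once with -m32 and once with -m64 and compare the generated assembler; a sketch (illustrative only):

    /* widemath.c
     *   gcc -m32 -O2 -S widemath.c   64-bit multiply built from 32-bit ops,
     *                                floating point through x87 unless
     *                                -msse2 -mfpmath=sse is added
     *   gcc -m64 -O2 -S widemath.c   one 64-bit multiply, SSE2 FP by default */
    #include <stdint.h>
    #include <stdio.h>

    uint64_t mul64(uint64_t a, uint64_t b) { return a * b; }
    double   axpy(double a, double x, double y) { return a * x + y; }

    int main(void)
    {
        printf("%llu %f\n",
               (unsigned long long)mul64(3000000000ULL, 5),
               axpy(2.0, 3.0, 4.0));
        return 0;
    }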

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 18:08 UTC (Thu) by pkern (subscriber, #32883) [Link] (1 responses)

Which is only partly true. Look into (/usr)?/lib/i686 and you'll see libs that will be loaded by the linker in preference to the plain ia32 ones if the hardware supports more than the least common denominator. It even works with /usr/lib/sse2 here on Debian if the package has support for it (see ATLAS or speex).

But of course, normally you don't rely on newer features everywhere, breaking support for older machines. Ubuntu goes i686 now, Fedora's already there, I think; and if you want more optimization I guess Gentoo is the way to go because you don't have to think portable. ;-)

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 18:25 UTC (Thu) by jengelh (guest, #33263) [Link]

Indeed, but that is for libraries only; it does not catch code inside programs or dlopened plugins.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:43 UTC (Thu) by tzafrir (subscriber, #11501) [Link]

If I make a multi-arch CD (i386+powerpc, for instance) I already have to work around a number of issues. The ability to use standard binaries from packages, rather than rebuilding my own as fat ones, is a Good Thing. I have to mess with a unionfs anyway, for the writable file system.

