> FatELF seems unnecessary. Why not just put your 32-bit binaries in one filesystem and your 64-bit ones in another, then use UnionFS to merge one or the other into your rootfs, depending on which architecture you're on? No need for a big new chunk of potentially insecure and buggy kernel code.
The point of it is to make things easier for users to deal with... forcing them to deal with UnionFS (especially when it's not part of the kernel and seems unlikely ever to be merged) and layering file systems by default on every Linux install sounds like a huge PITA.
Having 'fat' binaries is really the best solution for OSes that want to support multiple arches in the easiest and most user-friendly way possible (especially on x86-64, where 32-bit and 64-bit code can run side by side).
It's not just a matter of supporting Adobe Flash or the like; it's a superior technical solution at every level, from both a user's and a system administrator's perspective.
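The idea is simple enough to sketch: a fat binary is just a container with a small index mapping each architecture to an embedded per-arch executable, and the loader picks the record matching the host. The toy format below (magic, record count, then machine/offset/size records) is my own illustration for this comment, not the actual FatELF on-disk layout:

```python
import struct

MAGIC = b"FAT1"  # toy magic; the real FatELF format uses its own magic/fields
RECORD_FMT = "<HII"  # machine id (u16), payload offset (u32), payload size (u32)

def pack_fat(parts):
    """parts: list of (machine_id, payload_bytes). Returns one fat blob."""
    header = MAGIC + struct.pack("<B", len(parts))
    rsize = struct.calcsize(RECORD_FMT)
    offset = len(header) + len(parts) * rsize  # payloads follow the index
    records, payloads = b"", b""
    for machine, payload in parts:
        records += struct.pack(RECORD_FMT, machine, offset, len(payload))
        payloads += payload
        offset += len(payload)
    return header + records + payloads

def extract(blob, machine):
    """What a loader would do: find the record for the host architecture."""
    assert blob[:4] == MAGIC, "not a fat binary"
    count = blob[4]
    rsize = struct.calcsize(RECORD_FMT)
    for i in range(count):
        m, off, size = struct.unpack_from(RECORD_FMT, blob, 5 + i * rsize)
        if m == machine:
            return blob[off:off + size]
    raise KeyError("no executable for machine %d" % machine)

# ELF e_machine values for i386 and x86-64 (from the ELF spec)
EM_386, EM_X86_64 = 3, 62
fat = pack_fat([(EM_386, b"\x7fELF32..."), (EM_X86_64, b"\x7fELF64...")])
print(extract(fat, EM_X86_64))  # a 64-bit kernel's loader would pick this one
```

The whole point is that the user only ever sees one file; the per-arch selection happens in the loader, not in the filesystem layout.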
> The reason why Apple invented FAT binaries is because they were interested in maintaining extensive binary compatibility with their old systems. Linux has never had this policy. Binaries that worked great on Fedora Core 9 probably won't work on Fedora Core 12, or Ubuntu 9.04, or whatever.
Actually, Apple is pretty average when it comes to backwards compatibility; they're certainly no Microsoft. The point of fat binaries is just to make things easier for users and developers... which is the entire point of having an operating system in the first place.
Some Linux kernel developers like to maintain a stable ABI for userland and brag that software written for Linux in the 2.0 era will still work on 2.6. In fact, maintaining the userspace ABI/API seems to be a high priority for them (much higher than for the typical userland developer, anyway; application libraries are usually a bigger compatibility problem than anything in the kernel).