
FatELF: universal binaries for Linux

October 28, 2009

This article was contributed by Koen Vervloesem

One interesting feature of Mac OS X is the concept of a Universal Binary, a single binary file that runs natively on both PowerPC and Intel platforms. Professional game porter Ryan Gordon got sick of Mac developers pointing out that Linux doesn't have anything like that, so he did something about it and wrote FatELF. FatELF brings the idea of single binaries supporting multiple architectures to Linux.

Universal binaries in Mac OS X

Apple introduced the Universal Binary file format in 2005 to ease the transition of the Mac platform from the PowerPC architecture to the Intel architecture. The solution was to include both PowerPC and x86 versions of an application in one "fat binary". When a universal binary is run, Mac OS X executes the appropriate section for the architecture in use. The big advantage was that Mac developers could distribute a single executable of their software, so end users wouldn't have to worry about which version to download. Later, Apple went even further and allowed four-architecture binaries: 32-bit and 64-bit versions for both Intel and PowerPC.

This was not the first time Apple performed such a trick: in 1994 the company transitioned from Motorola 68k processors to PowerPC and introduced a "fat binary" which included executable code for both platforms. Moreover, NeXTSTEP, the predecessor of Mac OS X, had a fat binary file format (called "Multi-Architecture Binaries") which supported Motorola 68k, Intel x86, Sun SPARC, and HP PA-RISC. So Apple knew what needed to be done when they chose Intel as their new Mac platform. In fact, the Universal Binary format in Mac OS X is essentially the same as NeXTSTEP's Multi-Architecture Binaries. This was possible because Apple uses NeXTSTEP's Mach-O as the native object file format in Mac OS X.

A fat elf for Linux

Ryan Gordon is a well-known game porter: he has created ports of commercial games and other software to Linux and Mac OS X. Notable examples of his work are the Linux ports of the Unreal Tournament series, some of the Serious Sam series, the Postal series, Devastation, and Prey, as well as non-gaming software such as Google Earth and Second Life. With this experience, he knows a lot about both Mac OS X and Linux, so Ryan is well suited to bring the Mac OS X universal binary functionality to Linux.

His FatELF file format embeds multiple Linux binaries for different architectures in a single file. FatELF is actually a simple container format: it adds some accounting information at the start of the file and then appends all the ELF (Executable and Linkable Format) binaries after it, with padding for alignment. FatELF can be used for both executable files and shared libraries (.so files).
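
To give a feel for the layout, here is a minimal sketch in C of such a container. The field names are illustrative only, not the authoritative FatELF specification (which is available on the project's website); the sketch assumes a magic number to distinguish the container from a plain ELF file, followed by one record per embedded object:

    /* A rough sketch of a FatELF-style container, using hypothetical
     * field names; see Ryan's specification for the authoritative
     * layout. A magic number distinguishes the container from a plain
     * ELF file (whose own magic is "\x7fELF"), and each record locates
     * one embedded ELF object. */
    #include <stdint.h>

    struct fat_record {
        uint16_t machine;       /* ELF e_machine, e.g. EM_X86_64 */
        uint8_t  osabi;         /* ELF OSABI, e.g. ELFOSABI_LINUX */
        uint8_t  osabi_version;
        uint8_t  word_size;     /* 32- or 64-bit */
        uint8_t  byte_order;    /* little- or big-endian */
        uint16_t reserved;
        uint64_t offset;        /* where this ELF object starts */
        uint64_t size;          /* its length in bytes */
    };

    struct fat_header {
        uint32_t magic;         /* marks the file as a fat container */
        uint16_t version;       /* format revision */
        uint8_t  num_records;   /* number of embedded ELF objects */
        uint8_t  reserved;
        /* followed by num_records fat_record entries, then the
         * padded ELF objects themselves */
    };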

An obvious downside of FatELF is that an executable's size gets multiplied by the number of embedded ELF architectures. However, this only holds for the executable files and libraries; common non-executable resources such as images and data files are shipped as they are, outside of FatELF. A game that ships with hundreds of megabytes of data, for example, grows only slightly in relative terms.

Moreover, a FatELF binary doesn't require more RAM to run than a regular ELF binary, because the operating system decides which chunk of the file is needed to run on the current system and ignores the ELF objects of the other architectures. This also means that the entire FatELF file does not have to be read (except for kernel modules), so the disk bandwidth overhead is minimal.
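
Conceptually, the loader's work is then just a linear scan over the container's records: find the one that matches the running system and hand that byte range to the ordinary ELF loader. A sketch, reusing the hypothetical structures above (a real loader would take the system's values from its architecture definition, not from parameters):

    /* Pick the record matching the running system; the rest of the
     * file is simply never read. Reuses the hypothetical structures
     * from the sketch above. */
    #include <stddef.h>
    #include <stdint.h>

    struct fat_record *find_native_record(const struct fat_header *hdr,
                                          struct fat_record *recs,
                                          uint16_t machine,
                                          uint8_t osabi,
                                          uint8_t word_size,
                                          uint8_t byte_order)
    {
        for (size_t i = 0; i < hdr->num_records; i++) {
            if (recs[i].machine == machine &&
                recs[i].osabi == osabi &&
                recs[i].word_size == word_size &&
                recs[i].byte_order == byte_order)
                return &recs[i]; /* load size bytes at offset */
        }
        return NULL;             /* no usable object: fail with ENOEXEC */
    }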

On the project's website, Ryan lists a lot of reasons why someone would use FatELF. Some of them are rather far-fetched, such as:

Distributions no longer need to have separate downloads for various platforms. Given enough disc space, there's no reason you couldn't have one DVD ISO file that installs an x86-64, x86, PowerPC, SPARC, and MIPS system, doing the right thing at boot time. You can remove all the confusing text from your website about "which installer is right for me?"

Another benefit in the same vein is that third-party vendors no longer have to publish separate packages for different architectures. An obvious objection is that this multiplies the required disk space and bandwidth if FatELF is used systematically.

However, there is something to be said for FatELF as a means to abstract away architecture differences for end users. For example, install scripts for proprietary Linux software that select which binary to install based on the detected architecture, such as the scripts for the AMD and Nvidia graphics drivers, could be replaced by FatELF binaries. That seems cleaner than each software vendor implementing its own scripts and flaky detection logic. Web browser plugins are another type of binary that could be an interesting match for FatELF. Ryan admits that he has made this kind of flaky shell-script error himself in the past:

Many years ago, I shipped a game that ran on i686 and PowerPC Linux. I could not have predicted that one day people would be running x86_64 systems that would be able to run the i686 version, so doing something like: exec $(uname -m)/mygame would fail, and there's really no good way to future-proof that sort of thing. As that game now fails to start on x86_64 systems, it would have been better to just ship for i686 and not try to select a CPU arch.
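
The failure mode is easy to demonstrate: uname(2) reports the kernel's machine string, not the set of architectures the processor can actually execute, so a launcher keyed on that string misses compatible binaries:

    /* Why `exec $(uname -m)/mygame` is not future-proof: uname(2)
     * reports the kernel's machine string, which on an x86_64 kernel
     * is "x86_64" even though the CPU runs i686 binaries just fine,
     * so a launcher that only looks for a "$(uname -m)/" directory
     * never tries the i686 build it shipped with. */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;

        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        /* Prints "x86_64" on a 64-bit kernel, "i686" or similar on a
         * 32-bit one -- and something new again on whatever
         * architecture comes next. */
        printf("machine: %s\n", u.machine);
        return 0;
    }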

Another use for FatELF is what Apple used its universal binaries for: a transition to a new architecture. The 32-bit to 64-bit transition comes to mind; FatELF would remove the need for separate /lib, /lib32, and /lib64 trees. It would also make it possible to get rid of IA-32 compatibility libraries: to run a couple of 32-bit applications on a 64-bit system, you would only need FatELF versions of the handful of packages they depend on. More exotic transitions are possible as well, for example when the ELF OSABI (Operating System Application Binary Interface) used by the system changes, or for CPUs that can handle different byte orders.

Status

At the moment, Ryan has written a file format specification and documentation for FatELF. To make the fat binary concept work on Linux, he created patches for the Linux kernel to load FatELF files, and he also adapted the file command to recognize FatELF files, the binutils tools so that GCC can link against FatELF shared libraries, and gdb to debug FatELF binaries. The patches are stored in a Mercurial repository "until they have been merged into the upstream project". The repository also hosts some tools to manipulate FatELF binaries, which are zlib-licensed.

One of the FatELF tools is fatelf-extract, which lets the user extract a specific ELF binary from a FatELF file, e.g. the x86_64 one. The fatelf-split command extracts all embedded ELF binaries, producing files like my_fatelf_binary-i386 and my_fatelf_binary-x86_64. The fatelf-info command reports information about a FatELF file. The tool for developers is fatelf-glue, which glues ELF binaries together, because GCC currently can't build FatELF binaries directly: you build each ELF binary separately, then combine them into a FatELF file.
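
The gluing operation itself is conceptually simple. The sketch below is not fatelf-glue's actual source, just an illustration of the core step under assumed parameters: appending each ELF object at an aligned offset, as the container format requires:

    /* The heart of a glue step, sketched -- not fatelf-glue's actual
     * source. Record bookkeeping (filling in machine, OSABI, offset,
     * and size for each object's record) and error handling are
     * omitted; the alignment value is an assumption. */
    #include <stdio.h>
    #include <stdint.h>

    #define FAT_ALIGN 4096      /* assumed: page alignment, so the
                                   loader can map objects directly */

    /* Append src to dst, which is positioned at byte offset pos, after
     * zero-padding up to the next alignment boundary. Returns the
     * offset at which the object begins (for its record); the caller
     * advances pos by the padding plus the object's size. */
    static uint64_t append_aligned(FILE *dst, FILE *src, uint64_t pos)
    {
        char buf[4096];
        size_t n;
        uint64_t start;

        while (pos % FAT_ALIGN) {
            fputc(0, dst);
            pos++;
        }
        start = pos;
        while ((n = fread(buf, 1, sizeof(buf), src)) > 0)
            fwrite(buf, 1, n, dst);
        return start;
    }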

As a proof of concept, Ryan created a VMware virtual machine image of Ubuntu 9.04 in which almost every binary and library is a FatELF file with x86 and x86_64 support. The image can be downloaded and run in VMware Workstation or VMware Player to try out the FatELF functionality. An all-FatELF system is not the expected use case, though: in practice, FatELF would probably be used for only a handful of applications. FatELF files also coexist fine with ELF binaries: a FatELF binary can load ELF shared libraries and vice versa.

Relatively simple implementation

Ryan recalls the real point of inspiration for FatELF, a thread on the mailing list of the installer program MojoSetup. On May 20, 2007, he wrote on that list:

I'd love someone to extend the ELF format so that it supports "fat" binaries, like Apple's Mach-O format does for the PowerPC/Intel "Universal" binaries...but that would require coordination and support at several points in the system software stack.

Two years later, Ryan has implemented this idea:

I have a long list of things that Linux should blatantly steal from Mac OS X, and given infinite time, I'll implement them all. FatELF happens to be something on that list that is directly useful to my work as a game developer that also happens to be a simple project. I think the changes required to the system are pretty small for what could be good benefits to Unix as a whole.

So after a few weeks of work in his spare time, Ryan had a working fat binary implementation for Linux. Building the virtual machine proof of concept, by contrast, literally took days, because automating it required a lot of work. Ryan also spent a lot of time preparing to post the kernel patches:

I was so intimidated by the kernel mailing list, that I spent a disproportionate amount of time researching etiquette, culture, procedure. I didn't want to offend anyone or waste their time.

Reception

Overall, the patch that allows the Linux kernel to load a FatELF file was received quite positively, but with some questions. For example, Jeremy Fitzhardinge asked why Ryan made it ELF-specific:

The idea seem interesting, but does it need to be ELF-specific? What about making the executable a simple archive file format (possibly just an "ar" archive?) which contains other executables. The archive file format would be implemented as its own binfmt, and the internal executables could be arbitrary other executables. The outer loader would just try executing each executable until one works (or it runs out).

Later in the discussion, Jeremy added that a generic approach would allow the last executable in the file to be a shell script; if no other format were supported, that script would be executed, doing something like displaying a useful message. Ryan seemed unsure that the added flexibility is worth the extra complication, although he admitted that he would have chosen this route if other executable formats like a.out files "were still in widespread use and actively competed with ELF for mindshare." He also thinks it should be possible to support other executable formats within the existing FatELF format.

Some reactions to the patch that allows kernel modules to be FatELF binaries were less positive. Jeremy objected that it would only encourage more binary-only modules. Ryan understood the concern, but answered: "I worry about refusing to take steps that would aid free software developers in case it might help the closed-source people, too." Jeremy didn't see it that way, casting doubt on the use case for FatELF kernel modules:

Any open source driver should be encouraged to be merged with mainline Linux so there's no need to distribute them separately. With the staging/ tree, that's easier than ever.

I don't see much upside in making it "easier" to distribute binary-only open source drivers separately. (It wouldn't help that much, in the end; the modules would still be compiled for some finite set of kernels, and if the user wants to use something else they're still stuck.)

Moreover, even for proprietary kernel modules the use case is not that compelling. Companies like Nvidia have to distribute modules for multiple kernel versions, but FatELF distinguishes its embedded objects by architecture and OSABI, not by kernel version; unless the OSABI version changes, it cannot pack drivers for several kernel versions into one file. So, all in all, FatELF support for kernel modules seems a bit dubious.

In another discussion, Rayson Ho found that Apple (NeXT, actually) has patented the technology behind universal binaries, as a "method and apparatus for architecture independent executable files" (#5432937 and #5604905). Rayson thinks the mixing of 32-bit and 64-bit object files in a single archive on AIX might count as prior art. David Miller added another candidate: TILO, a variant of the SPARC SILO boot loader, which packs 32-bit and 64-bit Linux kernels into one file and figures out which one to actually boot depending on the machine it is running on. Rayson doubts this counts, though, because that project was started in 1995 or 1996, while NeXT's patent filing dates from 1993. Ryan also entered the discussion and clarified that FatELF has a few fields that Apple's format doesn't, so the flow chart in the patent isn't the same. It is not yet clear whether Ryan should be concerned and, if so, which changes he should make to work around the patents.

The future

There are still a lot of things to do. Patches for module-init-tools, glibc (for loading FatELF shared libraries), and elfutils still have to be written, and the patches for binutils and gdb still have to be submitted, Ryan said:

I've only submitted the kernel patches. If the kernel community is ultimately uninterested, there's not much point in bothering the binutils people. The patches for all the other parts are sitting in my Mercurial repository. If FatELF makes it into Linus's mainline, several other mailing lists will get patches sent to them right away.

Ryan is even thinking about embedding binaries from other Unix systems in a FatELF file; he mentions FreeBSD, OpenBSD, NetBSD, and OpenSolaris. In principle, any operating system that uses ELF for its binaries could be supported, which also includes DragonFly BSD, IRIX, HP-UX, Haiku, and Syllable. The implementations should not be difficult, according to Ryan:

You have to touch several parts of the system, but the changes you have to make to them are reasonably straightforward, so you'll probably spend more time getting comfortable with their code than patching it. And then twice as long trying to figure out how to boot a custom kernel and libc.

Support for other operating systems would make it possible to ship one file that works across Linux and FreeBSD, for example, without a platform compatibility layer. This could also be an interesting feature for hybrid Debian GNU/Linux and Debian GNU/kFreeBSD binaries.

The biggest hurdle FatELF faces now is adoption, Ryan explains:

If Linus applies it in the 2.6.33 merge window and every other project puts the patches into revision control, too, we're looking at maybe 6 to 12 months before distributions pick it up and some time later before you can count on people running those distributions.

Another disadvantage is the difficulty of creating fat binaries with existing build systems; Erik de Castro Lopo, for example, writes about this on his blog. According to Ryan, making build systems handle this situation cleanly still needs some work. He expects the most popular way to build FatELF files will be to do two totally independent builds and glue them together, rather than rethinking autoconf and friends.

Conclusion

While a universal binary seems much less interesting for Linux than for Mac OS X, because most Linux software is installed through a package manager that knows the architecture, the concept is interesting for proprietary Linux software such as games. For a non-expert user, it's not evident whether their processor is 32-bit or 64-bit; a FatELF download embedding both the x86 and x86_64 binaries may be a good solution to that problem. And if ARM-based smartbooks become more popular, an x86/x86_64/ARM FatELF binary may be the perfect way to distribute a binary that works on 32-bit Intel Atom netbooks, 64-bit Intel computers, and ARM smartbooks.



FatELF: universal binaries for Linux

Posted Oct 29, 2009 2:30 UTC (Thu) by joey (subscriber, #328) [Link]

> On the project's website, Ryan lists a lot of reasons why someone would
> use FatELF. Some of them are rather far-fetched, such as:

>> Distributions no longer need to have separate downloads for various
>> platforms. Given enough disc space, there's no reason you couldn't have
>> one DVD ISO file that installs an x86-64, x86, PowerPC, SPARC, and MIPS
>> system, doing the right thing at boot time. You can remove all the
>> confusing text from your website about "which installer is right for
>> me?"

Multi-arch CDs are not farfetched. Making something bootable on more than 3
or 4 arches is, due to boot sector clashes. (x86/x86-64/powerpc is doable;
so is alpha/hppa/ia64).

But to boot a single installer image that uses FatELF is farfetched because

a) It runs from memory, so fat executables waste memory, which does matter
on at least some of the arches.

b) The installer boot process tends to be significantly different for
different arches. Often the first point of difference is how the ramdisk is
piggybacked onto the kernel (or otherwise loaded).

liberate /lib

Posted Oct 29, 2009 3:06 UTC (Thu) by ncm (subscriber, #165) [Link]

This looks immediately practical as an alternative to multilib ia32 and amd64. Instead of /lib and /lib64, each library package would contain a .so that goes in /lib, as usual, and the right part of it gets used at load time. To me that's enough reason to go ahead with it. Anything else that might (or might not) be made to work, someday, is extra.

liberate /lib

Posted Oct 29, 2009 13:29 UTC (Thu) by nix (subscriber, #2304) [Link]

Yes indeed. Multiarch works, but requires support from the build system (and some don't support relocation of the libdir) and even from applications, if they load plugins (and glib and gtk at least hardwire module loading from subdirectories of "lib", so distros have to patch this away today).

It looks like fat binaries instead of multiarch would pretty much Just Work without requiring such a degree of thought (not that it's *much* thought, but it seems to be too much to expect from e.g. the gtk devs).

liberate /lib

Posted Oct 30, 2009 0:48 UTC (Fri) by ikm (subscriber, #493) [Link]

Yes. I actually think it's the only real use for this. I used to hate the idea of FatELF before, but I'd really love to have this kind of lib on my system.

Prior art (FatELF: universal binaries for Linux)

Posted Oct 29, 2009 10:49 UTC (Thu) by eru (subscriber, #2753) [Link]

I have a dim recollection that Apollo DomainOS also implemented the fat binary idea, very long ago (no personal experience, but 20 years ago, there was an Apollo workstation in the company, and this feature was mentioned by the guy who used it). However, Google does not offer confirmation, apart from this mailing list mention: http://gcc.gnu.org/ml/gcc/1999-04n/msg00688.html

Prior art (FatELF: universal binaries for Linux)

Posted Oct 29, 2009 13:32 UTC (Thu) by nix (subscriber, #2304) [Link]

DomainOS didn't use this feature for multiple architectures, as far as I know: this was in the BSD/SysV war days, and they had multiple user-switchable 'universes', so apps could be marked as being BSD or SysV-specific, you could have distinct libraries with apparently identical names for each universe, and you could switch from BSD to SysV at any time. You could even reference paths in the other universe via //$UNIVERSE/... (where $UNIVERSE is the name of the universe, of course).

POSIX still contains a special case allowing // at the root to mean something different from / (in all other cases, strings of consecutive /s in pathnames are collapsed to /). Samba of course benefits from this.

Prior art (FatELF: universal binaries for Linux)

Posted Nov 7, 2009 18:30 UTC (Sat) by dfa (✭ supporter ✭, #6767) [Link]

Domain/OS got fat binaries, very much as described in this article, with
the advent of the RISC based DN10000s, follow-on to the Motorola 680x0s.

It was very common to provide locally needed binaries on local disk,
then access the rest across the (ring) network. An administrator could
opt to load the single architectures onto separate shared directories
or could load the fat versions for universal use into a single directory.
This feature made diskless boot support painless.

The accommodation of three different OS conventions (Domain, BSD, and SysV)
was handled in filename space, as described, using environmental variables
which the filesystem used in creating the actual names accessed. It was
very cool, and extremely convenient for setting up personal/group/corporate
tailorings.

The DomainOS "//" convention was used to access the local machine's
"network root", the set of host names known to the local host, a very
neat network naming space for files was the result.

Prior art (FatELF: universal binaries for Linux)

Posted Oct 29, 2009 13:36 UTC (Thu) by clugstj (subscriber, #4020) [Link]

What a stupid idea! We already have a file system that stores, wait for it..., FILES. Why create a new format that stores files within files? I can quite easily reproduce this "feature" with a two-line shell script:

#!/bin/sh
exec $0.`uname -m`

Put the binaries for each architecture in the same directory (with the arch as a filename suffix), link this script to the name of each binary (without the suffix) and you are done.

Or, you could easily add a feature to the packaging system to install the proper binary for the correct architecture and not waste disk space on other unused arch binaries.

Prior art (FatELF: universal binaries for Linux)

Posted Oct 29, 2009 14:14 UTC (Thu) by slougi (subscriber, #58033) [Link]

Read those e-mails.

Your script would fail in certain scenarios. For example, running an x86 binary on an amd64 system.

Prior art (FatELF: universal binaries for Linux)

Posted Oct 29, 2009 14:51 UTC (Thu) by dtlin (✭ supporter ✭, #36537) [Link]

$ uname -m
x86_64
$ setarch i386 uname -m
i686
I don't see what the problem is.

Prior art (FatELF: universal binaries for Linux)

Posted Oct 30, 2009 21:30 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

My initial reaction was the same: that we already have multi-file packages, so isn't it more natural just to have a binary for each architecture?

But when I thought about the real complaints (above) about the difficulty of living with /lib and /lib64, I realized this: which binary is required is a characteristic of facilities under user space -- kernel and/or machine. So placing the burden of choosing one on user space is wrong. And files are user space things; the kernel should not navigate directories.

Now, where having multiple architecture binaries in a single system (filesystem) isn't useful, I would prefer a package with multiple binaries, where the installer installs in /lib the relevant one.

Prior art (FatELF: universal binaries for Linux)

Posted Nov 1, 2009 3:50 UTC (Sun) by elanthis (guest, #6227) [Link]

Oddly enough, you can't link against a shell script...

Multi-arch binaries are not tremendously useful. Multi-arch libraries are very useful. Yes, directories once again could be used, but various "standards" groups have already agreed on a de facto lib vs lib64 multi-arch setup which totally falls apart in the face of anything besides a single pair of 32-bit and 64-bit architectures. I'd much rather have just seen the platform encoded in the library sonames and filenames (e.g. libfoo.so.linux.x86_64.1.2.0 vs libfoo.so.bsd.ppc64.1.2.0), but alas it wasn't up to me to make the call.

Oddly enough, though, multi-arch executables are actually a better solution than directories, because the question comes down to which directory to search for executables. We could have /bin broken into /bin/i386, /bin/i586, /bin/x86_64, /bin/ppc, etc. with the PATH environment variable used to select which to search... but it'd be ugly and force changes on every installer, package set, and so on. Granted, I don't find multi-arch binaries particularly useful, so I have no problem with packages or installers just figuring out which binary to install.

However, people who use NFS-mounted root directories across a variety of systems could get a big boost out of something like fatELF. A single root directory tree could theoretically serve thin clients running native i386, x86_64, or ppc code. Less maintenance and all that jazz.

All at the cost of a little extra disk space on a server and slightly bigger packages to download on the 50Mbps pipes you can get for cheap these days.

That said, for the purpose fatELF is ostensibly being designed for (commercial games), fatELF is just silly. The installer can just install the proper binaries based on the target architecture. An installer shell script can pick which installer binary to run (or better yet, the Linux software distribution scene could get its head out of its ass and supply a standard distro-neutral installer framework that's installed in the base package set for every distro like how it should've been done 15 years ago).

What's the problem again?

Posted Oct 29, 2009 14:31 UTC (Thu) by alex (subscriber, #1355) [Link]

I struggle to understand the use case, especially for software I get from my distro repo. I know the disk usage isn't a major component but it still seems wasteful to have a bunch of unused code lying around on those platters.

However a Fat${PKG} format might be more in line with the implied use case of distributing 3rd party applications.

What's the problem again?

Posted Oct 29, 2009 17:14 UTC (Thu) by mrshiny (subscriber, #4266) [Link]

I agree. It seems to me that many use-cases for this technology fail on a typical Linux system because of the way software is normally distributed.

However, for multi-arch systems this might be useful, and I suppose that in the days when I had a mix of architectures it would have been nice to be able to install FatElf software to a shared network drive and have it just work. Plus for installers this sort of thing could be handy. But since most software that I install comes from a repository, a fat package would work just as well as fat binaries.

FatELF: universal binaries for Linux

Posted Oct 30, 2009 4:31 UTC (Fri) by pj (subscriber, #4506) [Link]

He totally overlooked the killer app for FatELF: cross-compiling.

One of the major pains of cross-compiling is getting all the library paths sorted out: which ones do your tools use, versus which ones will the things you build with those tools use, etc. With FatELF, cross-platform development just got a *lot* simpler, because the paths all stay the same, and the tools look in the standard place for the library... and inside the library for the arch they want to link against.

Relatedly, if compilers and linkers are FatELF-aware, they can build fatELFs a bit faster because they only have to do the parsing once, and can then generate object code for all the platforms from the same resultant parse tree.

So the people he should really be appealing to are embedded developers who deal with all this pain _all the time_. The OpenWRT guys and others will be ecstatic, if only to be saved from LD_FLAG hell.

FatELF: universal binaries for Linux

Posted Oct 30, 2009 6:14 UTC (Fri) by ringerc (subscriber, #3071) [Link]

"Relatedly, if compilers and linkers are FatELF-aware, they can build fatELFs a bit faster because they only have to do the parsing once, and then object code for all the platforms from the same resultant parse tree."

Unfortunately, that's not true in any language where the use of a preprocessor is standard and commonplace, like C or C++. The preprocessor is a step run before parsing, and the work the preprocessor does may change what the result of parsing is by adding/removing/changing parts of the program text. It's very common in C/C++ to use the preprocessor to handle things like type selection, type size and byte order issues - exactly the sorts of things that'll change depending on target arch.

FatELF: universal binaries for Linux

Posted Nov 4, 2009 16:00 UTC (Wed) by pj (subscriber, #4506) [Link]

Good point. I still think the win as a solution to arch-based-path hell for cross-compiles might be worth it.

FatELF: universal binaries for Linux

Posted Oct 30, 2009 19:30 UTC (Fri) by clugstj (subscriber, #4020) [Link]

"The biggest hurdle that FatELF is facing now are adoption pains"

No, the biggest problem is that this "solution" will require all of the development tools to be redesigned for dubious benefit.

FatELF: universal binaries for Linux

Posted Oct 30, 2009 21:11 UTC (Fri) by nix (subscriber, #2304) [Link]

No more than separated debug info did. You build several times and then do
some sort of objcopy dance to merge the distinct binaries into FatELFs
(and probably automatically diff the rest to ensure it's identical before
zapping all but one copy of it).

Trivial, and only objcopy needs extension (as it has been, IIRC).

Update: FatELF dead

Posted Nov 4, 2009 2:14 UTC (Wed) by leoc (subscriber, #39773) [Link]

FatELF: universal binaries for Linux

Posted Nov 4, 2009 16:20 UTC (Wed) by Darkmere (subscriber, #53695) [Link]

For me, I do not think FatELF would solve the problems with regard to optimizations. Let us take the more "modern" architectures (and disregard the need for "i686 without CMOV"): we have things like Atom(ipia) vs. i686, where the two have vastly different behaviour wrt. performance. Right now we don't even know which way will be dominant in the future, but a fair guess would be that Atom-like architectures become more common, and I don't think the dynamic linker is a good place for this kind of logic.

Then again, I also believe in link time optimization for gcc and the tooth fairy.

FatELF: universal binaries for Linux

Posted Nov 5, 2009 18:16 UTC (Thu) by nix (subscriber, #2304) [Link]

Support for -ftooth-fairy was added to GCC a few weeks ago.

FatELF: universal binaries for Linux

Posted Nov 6, 2009 20:02 UTC (Fri) by Darkmere (subscriber, #53695) [Link]

Only things left are full system profile guided optimizations (Buildsystem issue I suspect) and working link time optimization (For things like firefox. -fwhole-program and similar are... interesting.)

FatELF: universal binaries for Linux

Posted Nov 6, 2009 20:36 UTC (Fri) by nix (subscriber, #2304) [Link]

Well, the point of -fwhopr is to make LTO optimizations practical even for
very large programs, without needing insane amounts of memory. So that's
been added too.

FatELF: universal binaries for Linux

Posted Nov 7, 2009 2:28 UTC (Sat) by Darkmere (subscriber, #53695) [Link]

Oh, seems they really did add my -ponies then!
