SELF: Anatomy of an (alleged) failure
Posted Jun 23, 2010 20:35 UTC (Wed) by dlang (✭ supporter ✭, #313)
Even within a single distro you have different versions using different libraries, depending on what features are compiled in (does it use MySQL or SQLite for a database, for example).
Posted Jun 23, 2010 20:56 UTC (Wed) by aliguori (subscriber, #30636)
Posted Jun 24, 2010 7:26 UTC (Thu) by tzafrir (subscriber, #11501)
BTW: At least on Debian Squeeze / Sid, you just need to use:
aptitude install qemu-user-static binfmt-support
The rest of the setup is done automagically. But then again, if you want to use this for non-static binaries, you need to set up a chroot.
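For the curious, the "automagic" part is that binfmt-support registers qemu as the interpreter for foreign-architecture ELF binaries with the kernel's binfmt_misc mechanism. The record it writes looks roughly like the following (format is `:name:type:offset:magic:mask:interpreter:flags`; the exact magic and mask bytes come from qemu's packaging scripts, so treat this as an illustrative sketch rather than the literal Debian configuration):

```
# Written to /proc/sys/fs/binfmt_misc/register to make the kernel hand
# 32-bit ARM ELF binaries (ELF magic plus machine type 0x28) to qemu:
:qemu-arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:
```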
Posted Jun 23, 2010 21:45 UTC (Wed) by xav (subscriber, #18536)
Posted Jun 24, 2010 1:08 UTC (Thu) by cesarb (subscriber, #6266)
This way the same "executable" would run on ARM, x86-32, x86-64...
Posted Jun 24, 2010 9:50 UTC (Thu) by tzafrir (subscriber, #11501)
Posted Jun 26, 2010 0:11 UTC (Sat) by waucka (subscriber, #63097)
Posted Jun 24, 2010 14:09 UTC (Thu) by pcampe (guest, #28223)
We have something definitely better: source code.
Allowing the distribution of a (more or less) universally runnable, self-sustaining and self-contained component will ultimately make it easier to distribute closed-source programs, which is something we should oppose.
Posted Jun 24, 2010 16:20 UTC (Thu) by Spudd86 (guest, #51683)
Also people are going to want to use pre-compiled code, and most people don't really want to learn how to package their stuff for every distro ever, let alone actually compile it 20 or 30 times.
Posted Jun 24, 2010 16:13 UTC (Thu) by Spudd86 (guest, #51683)
The papers talk about profiling and optimizing the IR and writing that back to the binary, so you get a binary optimized for your workload.
This still has the issue of library incompatibilities across distributions (even within the same distro), since the library may not have all the same options compiled in, or may export a slightly different set of symbols, or all kinds of other things...
Posted Jun 27, 2010 16:03 UTC (Sun) by nix (subscriber, #2304)
Posted Jun 24, 2010 21:20 UTC (Thu) by dw (subscriber, #12017)
I believe ANDF was the basis for some failed UNIX standard in the early 90s, but that's long before my time.
There's at least one more recent attempt along the same lines (I've forgotten its name).
Posted Jun 28, 2010 16:13 UTC (Mon) by salimma (subscriber, #34460)
Posted Jul 1, 2010 7:45 UTC (Thu) by eduperez (guest, #11232)
You mean, like... Java?
Posted May 4, 2012 19:10 UTC (Fri) by ShannonG (guest, #84474)
Posted Jun 24, 2010 2:18 UTC (Thu) by PaulWay (✭ supporter ✭, #45600)
Obviously installing a system with every binary containing code for every possible architecture is going to be horribly large. But that's not what you use FatELF for.
Imagine, however, a boot CD or USB key that can boot and run on many architectures. That would be a case where the extra space used would be compensated by its universality. A live or install CD could then drag architecture-specific packages from the relevant distribution. A system rescue CD would work anywhere. You wouldn't worry about the overhead because the benefit would be one medium that would work (just about) everywhere. Likewise, an application installer could provide an initial FatELF loader that would then choose from the many supplied architecture-specific binaries to install.
In these circumstances I think FatELF makes a lot of sense. And, as Apple seems to be proving, the overhead is something that people don't notice (or, at least, are willing to cope with).
Posted Jun 24, 2010 20:44 UTC (Thu) by vonbrand (subscriber, #4458)
If it really was for "many architectures" (How many do you even see in a normal week? For me it's 3: x86_64, i386, SPARC64; very rarely a PPC Mac. And of those I'd futz around with x86_64 and i386 almost exclusively.) it would be at most some 100MiB for each on a normal CD. Better get USB thumbdrives for each (or carry 5 or 6 CDs around).
Posted Jun 25, 2010 1:49 UTC (Fri) by dvdeug (subscriber, #10998)
I'm having a lot of trouble finding any case where FatELF can't be replaced by shell scripts and symlinks. You want to support both ix86 and AMD64 with your game? Make the executable name a shell script that runs the right binary.
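As a sketch of the shell-script alternative (the `game.*` binary names and `bin/` layout are hypothetical), the dispatch logic is a few lines:

```shell
#!/bin/sh
# Hypothetical launcher shipped as the "executable": pick the binary
# matching the host architecture reported by uname -m.
select_binary() {
    case "$1" in
        x86_64) echo "./bin/game.x86_64" ;;
        i?86)   echo "./bin/game.i386" ;;
        *)      return 1 ;;                # unsupported architecture
    esac
}

# The launcher itself would then do something like:
#   exec "$(select_binary "$(uname -m)")" "$@"
```

The `i?86` glob catches i386 through i686, so one branch covers the whole 32-bit x86 family.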
Posted Jun 24, 2010 7:34 UTC (Thu) by michaeljt (subscriber, #39183)
Posted Jun 24, 2010 16:35 UTC (Thu) by Spudd86 (guest, #51683)
These days you mostly have to worry about making sure you compile on a system with old enough versions of everything, so that you're not using newer versions of stuff than your users will have. For example, use g++ 4.3 so that Gentoo users on the stable compiler don't have to install a newer gcc and mess about with LD_LIBRARY_PATH so your app can find the newer libstdc++. It's nice that g++ 4.4's and 4.5's libstdc++ is backwards ABI compatible with all the older ones (4.0 and later; 3.x is a separate issue, but you can have both available at once, so you just need a compat package). You don't even need to statically link many things, unless you have reason to believe they will not be packaged by your users' distro.
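One hedged way to check what you actually shipped is to list the versioned libstdc++ symbols your binary imports; each GLIBCXX version corresponds to a gcc release, so this shows the oldest libstdc++ that can run it. This sketch assumes binutils is installed (`objdump`); the `./mygame` path is hypothetical:

```shell
# Read `objdump -T <binary>` output on stdin and print the unique
# GLIBCXX symbol versions the binary depends on.
glibcxx_versions() {
    grep -o 'GLIBCXX_[0-9.]*' | sort -u
}

# Usage: objdump -T ./mygame | glibcxx_versions
```

If the highest version printed is one your oldest target distro's libstdc++ provides, the binary should load there.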
Posted Jun 24, 2010 18:08 UTC (Thu) by jwakely (subscriber, #60262)
3.4 and later
Posted Jun 24, 2010 7:49 UTC (Thu) by jengelh (subscriber, #33263)
Posted Jun 24, 2010 8:33 UTC (Thu) by dlang (✭ supporter ✭, #313)
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds