SELF: Anatomy of an (alleged) failure
Posted Jun 23, 2010 19:19 UTC (Wed) by Kamilion (subscriber, #42576)
Parent article: SELF: Anatomy of an (alleged) failure
Storage is cheap -- time is not always.
Don't give up, Ryan! Perseverance and patience are rewarded eventually.
Posted Jun 23, 2010 20:01 UTC (Wed) by Lovechild (guest, #3592)
Posted Jun 23, 2010 20:35 UTC (Wed) by dlang (guest, #313)
Even within a single distro you have different versions using different libraries, depending on which features are compiled in (does it use MySQL or SQLite for its database, for example?).
Posted Jun 23, 2010 20:56 UTC (Wed) by aliguori (subscriber, #30636)
Posted Jun 24, 2010 7:26 UTC (Thu) by tzafrir (subscriber, #11501)
BTW: At least on Debian Squeeze / Sid, you just need to use:
aptitude install qemu-user-static binfmt-support
The rest of the setup is done automagically. But then again, if you want to use this for non-static binaries, you need to set up a chroot.
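For the non-static case, the chroot setup might look something like the following sketch. The target architecture, Debian suite, mirror, and paths are all illustrative assumptions, not details from the comment above:

```shell
# Illustrative only: build an armel chroot that runs through qemu user
# emulation. Run as root; suite/mirror/paths are placeholders.
aptitude install qemu-user-static binfmt-support debootstrap

# First stage: unpack the foreign-architecture base system natively.
debootstrap --arch=armel --foreign squeeze /srv/armel-chroot \
    http://ftp.debian.org/debian

# Copy the static emulator into the chroot so binfmt_misc can find it
# once we chroot in.
cp /usr/bin/qemu-arm-static /srv/armel-chroot/usr/bin/

# Second stage runs the chroot's own (emulated) ARM binaries.
chroot /srv/armel-chroot /debootstrap/debootstrap --second-stage
```

After this, dynamically linked ARM binaries inside the chroot run through the emulator transparently.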
Posted Jun 23, 2010 21:45 UTC (Wed) by xav (guest, #18536)
Posted Jun 24, 2010 1:08 UTC (Thu) by cesarb (subscriber, #6266)
This way the same "executable" would run on ARM, x86-32, x86-64...
Posted Jun 24, 2010 9:50 UTC (Thu) by tzafrir (subscriber, #11501)
Posted Jun 26, 2010 0:11 UTC (Sat) by waucka (guest, #63097)
Posted Jun 24, 2010 14:09 UTC (Thu) by pcampe (guest, #28223)
We have something definitely better: source code.
Allowing the distribution of a (more or less) universally runnable, self-sustained and self-contained component will ultimately make it easier to distribute closed-source programs, which is something we should oppose.
Posted Jun 24, 2010 16:20 UTC (Thu) by Spudd86 (subscriber, #51683)
Also, people are going to want to use pre-compiled code, and most people don't really want to learn how to package their software for every distribution out there, let alone actually compile it 20 or 30 times.
Posted Jun 24, 2010 16:13 UTC (Thu) by Spudd86 (subscriber, #51683)
The papers talk about profiling and optimizing the IR and writing that back to the binary, so you get a binary optimized for your workload.
This still has the issue of library incompatibilities across architectures (even within the same distro), since a library may not have all the same options compiled in, or may export a slightly different set of symbols, or differ in all kinds of other ways...
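That kind of drift is easy to make visible. A sketch, with made-up library paths standing in for two distros' builds of the "same" library: dump the dynamic symbols each one exports and compare.

```shell
# Hypothetical comparison of two builds of the "same" shared library;
# the .so paths are placeholders, not real files.
nm -D --defined-only /distroA/libfoo.so.1 | awk '{ print $NF }' | sort > a.syms
nm -D --defined-only /distroB/libfoo.so.1 | awk '{ print $NF }' | sort > b.syms

# Anything comm prints is exported by only one of the two builds --
# exactly the per-distro variation that breaks a shipped binary.
comm -3 a.syms b.syms
```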
Posted Jun 27, 2010 16:03 UTC (Sun) by nix (subscriber, #2304)
Posted Jun 24, 2010 21:20 UTC (Thu) by dw (subscriber, #12017)
I believe ANDF was the basis for some failed UNIX standard in the early 90s, but that's long before my time.
There's at least one more recent attempt along the same lines (forgotten its name).
Posted Jun 28, 2010 16:13 UTC (Mon) by salimma (subscriber, #34460)
Posted Jul 1, 2010 7:45 UTC (Thu) by eduperez (guest, #11232)
You mean, like... Java?
Posted May 4, 2012 19:10 UTC (Fri) by ShannonG (guest, #84474)
Posted Jun 24, 2010 2:18 UTC (Thu) by PaulWay (guest, #45600)
Obviously installing a system with every binary containing code for every possible architecture is going to be horribly large. But that's not what you use FatELF for.
Imagine, however, a boot CD or USB key that can boot and run on many architectures. That would be a case where the extra space used would be compensated by its universality. A live or install CD could then drag architecture-specific packages from the relevant distribution. A system rescue CD would work anywhere. You wouldn't worry about the overhead because the benefit would be one medium that would work (just about) everywhere. Likewise, an application installer could provide an initial FatELF loader that would then choose from the many supplied architecture-specific binaries to install.
In these circumstances I think FatELF makes a lot of sense. And, as Apple seems to be proving, the overhead is something that people don't notice (or, at least, are willing to cope with).
Have fun,
Paul
Posted Jun 24, 2010 20:44 UTC (Thu) by vonbrand (subscriber, #4458)
If it really were for "many architectures" (how many do you even see in a normal week? For me it's three: x86_64, i386, SPARC64; very rarely a PPC Mac. And of those I'd futz around with x86_64 and i386 almost exclusively), it would be at most some 100MiB for each on a normal CD. Better to get a USB thumb drive for each (or carry 5 or 6 CDs around).
Posted Jun 25, 2010 1:49 UTC (Fri) by dvdeug (guest, #10998)
I'm having a lot of trouble finding any case where FatELF can't be replaced by shell scripts and symlinks. You want to support both ix86 and AMD64 with your game? Make the executable name a shell script that runs the right binary.
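A minimal sketch of such a wrapper; the "mygame.*" binary names are invented for illustration, and a real install would just ship this script under the game's name next to the per-architecture binaries.

```shell
#!/bin/sh
# Illustrative arch-dispatch wrapper; "mygame.*" names are placeholders.

# Map the machine type reported by uname -m to a shipped binary name.
pick_binary() {
    case "$1" in
        x86_64) echo "mygame.x86_64" ;;
        i?86)   echo "mygame.i386" ;;
        *)      return 1 ;;
    esac
}

run() {
    dir=$(dirname "$0")
    bin=$(pick_binary "$(uname -m)") || {
        echo "mygame: unsupported architecture: $(uname -m)" >&2
        return 1
    }
    exec "$dir/$bin" "$@"
}

# In the installed script the last line would simply be:  run "$@"
```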
Posted Jun 24, 2010 7:34 UTC (Thu) by mjthayer (guest, #39183)
Posted Jun 24, 2010 16:35 UTC (Thu) by Spudd86 (subscriber, #51683)
These days you mostly have to worry about making sure you compile on a system with old enough versions of everything, so that you're not using newer versions of things than your users will have. For example, use g++ 4.3 so that Gentoo users on the stable compiler don't have to install a newer GCC and mess about with LD_LIBRARY_PATH so your app can find the newer libstdc++. It helps that the libstdc++ shipped with g++ 4.4 and 4.5 is backwards ABI-compatible with all the older ones (4.0 and later; 3.x is a separate issue, but you can have both available at once, so you just need a compat package). You don't even need to statically link many things, unless you have reason to believe they will not be packaged by your users' distro.
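A quick way to check what a build actually demands of the user's system (the binary name here is a placeholder): list the versioned symbol references and look at the newest tag.

```shell
# Hypothetical check on a shipped binary; "./mygame" is a placeholder.
# Every GLIBCXX_x.y.z tag is a symbol version the binary requires from
# libstdc++; if the newest one is newer than what the user's libstdc++
# provides, the dynamic linker will refuse to start the program.
objdump -T ./mygame | grep -o 'GLIBCXX_[0-9.]*' | sort -u -V | tail -n 1
```

Running the same check over libgcc (GCC_*) and glibc (GLIBC_*) version tags catches the analogous problems with those libraries.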
Posted Jun 24, 2010 18:08 UTC (Thu) by jwakely (subscriber, #60262)
3.4 and later
Posted Jun 24, 2010 7:49 UTC (Thu) by jengelh (guest, #33263)
Posted Jun 24, 2010 8:33 UTC (Thu) by dlang (guest, #313)
Posted Jun 23, 2010 20:10 UTC (Wed) by gnb (subscriber, #5132)
Posted Jun 24, 2010 15:03 UTC (Thu) by flewellyn (subscriber, #5047)
Posted Jun 24, 2010 16:39 UTC (Thu) by Spudd86 (subscriber, #51683)
Posted Jun 24, 2010 19:05 UTC (Thu) by gnb (subscriber, #5132)
And then there are more differences than simply architecture differences. What's the use of a Fedora ARM binary, or an Ubuntu Itanium one? If the matching distro doesn't exist, it's a waste.
> This way the same "executable" would run on ARM, x86-32, x86-64...
Speaking from personal experience, binary compatibility is a lot easier than most Linux people think; I suspect their focus on source makes them terrified of binary issues. (Source is good, of course, but that doesn't mean binary is always bad.) I help maintain a rather complex piece of software for which we provide (among other options) a universal binary installer. It took a bit of work at the start to figure out which issues we had to solve (the autopackage web page is a very good resource here, whatever you may think of their package format), but once we got past that we have had very few issues over many years.