
SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 19:19 UTC (Wed) by Kamilion (subscriber, #42576)
Parent article: SELF: Anatomy of an (alleged) failure

It's a shame, because I really would have liked to have the FatELF system. I have a large USB flash drive that I move between several systems. Right now, I'm stuck with a 32-bit Lucid simply because I can't boot 64-bit on every system. It would have been nice to build in PPC and ARM support too.

Storage is cheap -- time is not always.

Don't give up, Ryan! Perseverance and patience are rewarded eventually.



SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 20:01 UTC (Wed) by Lovechild (guest, #3592) [Link] (23 responses)

I remain very excited about FatELF, and I hope that the general idea of a universal binary will still arrive at some point in the near future. It is a really useful feature for reaching beyond where we are now and making things just work.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 20:35 UTC (Wed) by dlang (guest, #313) [Link]

Before you worry about getting a binary that will work on ARM, i386, PowerPC, AMD64, Itanium, etc., a bigger issue is that even if you stick with a single architecture, you end up with different libraries on different systems, so you frequently need different binaries to support the different options they use.

Even within a single distro you have different versions using different libraries, depending on what features are compiled in (does it use MySQL or SQLite for a database, for example).

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 20:56 UTC (Wed) by aliguori (subscriber, #30636) [Link] (1 responses)

If you install qemu's linux-user mode, then you can run non-native Linux binaries just as you run native Linux binaries. This works by installing a binfmt_misc hook and requires no kernel changes. That said, we can't even get a single binary that works universally across Linux distributions on a single architecture, so I'm not sure that multi-arch binaries really make that much sense.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:26 UTC (Thu) by tzafrir (subscriber, #11501) [Link]

What about /lib/ld-linux.so.2 ?

BTW: At least on Debian Squeeze / Sid, you just need to use:

aptitude install qemu-user-static binfmt-support

The rest of the setup is done automagically. But then again, if you want to use this for non-static binaries, you need to set up a chroot.
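For reference, the usual Squeeze-era recipe for such a chroot looks something like this (a sketch: needs root, and the target directory and mirror are just examples):

```shell
# First stage: unpack an armel root filesystem from the mirror.
debootstrap --arch=armel --foreign squeeze /srv/arm-root http://ftp.debian.org/debian
# Put the static qemu inside the chroot so the binfmt_misc rule can
# still resolve its interpreter once we are chrooted.
cp /usr/bin/qemu-arm-static /srv/arm-root/usr/bin/
# Second stage runs the armel binaries under qemu, then gives a shell.
chroot /srv/arm-root /debootstrap/debootstrap --second-stage
chroot /srv/arm-root /bin/bash
```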

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 21:45 UTC (Wed) by xav (guest, #18536) [Link] (17 responses)

FatELF is a stillborn project. Nobody wants to pay the price of compiling an executable as many times as there are architectures.
And there are more differences than just architecture. What's the use of a Fedora ARM binary, or an Ubuntu Itanium one? If the matching distro doesn't exist, it's a waste.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 1:08 UTC (Thu) by cesarb (subscriber, #6266) [Link] (10 responses)

What would be interesting would be a bytecode architecture, like some sort of LLVM IR, compiled via a JIT at run time.

This way the same "executable" would run on ARM, x86-32, x86-64...

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 9:50 UTC (Thu) by tzafrir (subscriber, #11501) [Link] (1 responses)

Someone already has. It's called qemu.

SELF: Anatomy of an (alleged) failure

Posted Jun 26, 2010 0:11 UTC (Sat) by waucka (guest, #63097) [Link]

If you want to do JIT, you should do it on an intermediate representation (IR) designed for the purpose. Deliberately using x86 (or any native code, really) for that purpose is ridiculous. Besides, we wouldn't necessarily have to do JIT all the time. With a good IR, we could have live CDs and USB sticks use JIT and convert to native code at install-time.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 14:09 UTC (Thu) by pcampe (guest, #28223) [Link] (1 responses)

>What would be interesting would be a bytecode architecture [...]

We have something definitely better: source code.

Allowing for the distribution of a (more or less) universally runnable, self-contained component would ultimately make it easier to distribute closed-source programs, which is something we should resist.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 16:20 UTC (Thu) by Spudd86 (subscriber, #51683) [Link]

Yes, but unless you run a source distro like Gentoo you may not have the dev files for everything on your system, and lots of users are fairly averse to compiling things themselves. Besides, a source distro can hit problems that DO NOT EVER hit binary distros, including misbehaving build systems and automagic dependencies (whether a package ends up depending on another changes based on whether that other package is installed at build time, with no way to turn this behavior off).

Also people are going to want to use pre-compiled code, and most people don't really want to learn how to package their stuff for every distro ever, let alone actually compile it 20 or 30 times.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 16:13 UTC (Thu) by Spudd86 (subscriber, #51683) [Link] (1 responses)

If you go look at some of the older LLVM papers, they pretty much describe doing this... (I don't know if anyone implemented it, but given that they DO have a JIT compiler for LLVM IR already, I think you could probably already do this in a limited form; see http://llvm.org/cmds/lli.html for lli, the current llvm command that will run an LLVM IR bytecode file with the LLVM JIT.)

The papers talk about profiling and optimizing the IR and writing that back to the binary, so you get a binary optimized for your workload.

This still has the issue of library incompatibilities across architectures (even within the same distro), since the library may not have all the same options compiled in, or may export a slightly different set of symbols, or all kinds of other things...
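For anyone who wants to try that limited form today, a minimal sketch (assuming clang and the llvm tools, which provide lli, are installed; the file names are made up):

```shell
# Compile C to LLVM bitcode once...
clang -O2 -emit-llvm -c hello.c -o hello.bc
# ...then JIT-compile and run that same bitcode wherever lli exists.
lli hello.bc
```

Note that the bitcode is not truly target-independent: it bakes in the ABI of whatever machine clang targeted, which is one reason a portable IR is harder than it looks.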

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 16:03 UTC (Sun) by nix (subscriber, #2304) [Link]

IIRC this is currently being done by ClamAV (using LLVM, natch).

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 21:20 UTC (Thu) by dw (subscriber, #12017) [Link] (1 responses)

This has been tried many times, including (but not limited to) TDF (http://en.wikipedia.org/wiki/TenDRA_Distribution_Format) and ANDF (http://en.wikipedia.org/wiki/Architecture_Neutral_Distrib...).

I believe ANDF was the basis for some failed UNIX standard in the early 90s, but that's long before my time.

There's at least one more recent attempt along the same lines (forgotten its name).

SELF: Anatomy of an (alleged) failure

Posted Jun 28, 2010 16:13 UTC (Mon) by salimma (subscriber, #34460) [Link]

GNUstep? also, the ROX (RISC OS on X) Desktop.

SELF: Anatomy of an (alleged) failure

Posted Jul 1, 2010 7:45 UTC (Thu) by eduperez (guest, #11232) [Link]

> What would be interesting would be a bytecode architecture, like some sort of LLVM IR, compiled via a JIT at run time.
> This way the same "executable" would run on ARM, x86-32, x86-64...

You mean, like... Java?

SELF: Anatomy of an (alleged) failure

Posted May 4, 2012 19:10 UTC (Fri) by ShannonG (guest, #84474) [Link]

This is why the kernel mailing list is hostile.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 2:18 UTC (Thu) by PaulWay (guest, #45600) [Link] (2 responses)

I think you're seeing it as a solve-everything idea, where really it's a solve-specific-things idea.

Obviously installing a system with every binary containing code for every possible architecture is going to be horribly large. But that's not what you use FatELF for.

Imagine, however, a boot CD or USB key that can boot and run on many architectures. That would be a case where the extra space used would be compensated by its universality. A live or install CD could then drag architecture-specific packages from the relevant distribution. A system rescue CD would work anywhere. You wouldn't worry about the overhead because the benefit would be one medium that would work (just about) everywhere. Likewise, an application installer could provide an initial FatELF loader that would then choose from the many supplied architecture-specific binaries to install.

In these circumstances I think FatELF makes a lot of sense. And, as Apple seems to be proving, the overhead is something that people don't notice (or, at least, are willing to cope with).

Have fun,

Paul

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 20:44 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

If it really were for "many architectures" (how many do you even see in a normal week? For me it's three: x86_64, i386, SPARC64; very rarely a PPC Mac. And of those I'd futz around with x86_64 and i386 almost exclusively), it would be at most some 100MiB for each on a normal CD. Better to get a USB thumbdrive for each (or carry 5 or 6 CDs around).

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 1:49 UTC (Fri) by dvdeug (guest, #10998) [Link]

Can you even get one CD to boot on both ix86 and ARM or PowerPC? Even if you can, along with getting the right kernel to boot up, you should be able to symlink in the correct binary directories for the architecture.

I'm having a lot of trouble finding any case where FatELF can't be replaced by shell scripts and symlinks. You want to support both ix86 and AMD-64 with your game; have the executable name run a shell script that runs the right binaries.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:34 UTC (Thu) by mjthayer (guest, #39183) [Link] (2 responses)

> What's the use of a Fedora ARM binary, or an Ubuntu Itanium one ? If the matching distro doesn't exists, it's a waste.
Speaking from personal experience, binary compatibility is a lot easier than most Linux people think. I think that the focus they have on source makes them terrified of binary issues. (Source is good of course, but that doesn't mean that binary is always bad.) I help maintain a rather complex piece of software for which we have (among other options) a universal binary installer. It was a bit of work to start with to work out what issues we had to solve (the autopackage webpage is a very good resource here, whatever you may think of their package format), but once we got past that we have had very few issues over many years.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 16:35 UTC (Thu) by Spudd86 (subscriber, #51683) [Link] (1 responses)

Yes, binary compatibility is easier than most people seem to think, but it is also very frequently done wrong (including by Mr. Gordon: I generally have to make stuff he's packaged stop using at least 2 or 3 bundled libraries before it works, if I'm not using the Gentoo ebuild of it). He tends to bundle libstdc++ and libgcc, as well as SDL. All of these have had a stable ABI for a while now, and if the ABI changes, so does the soname, so distros can ship a compat package (which they generally do); there's no need to distribute them. The only people who benefit are people with old versions that are missing newer API the game uses. It's irritating: I bought the Humble Bundle, and all of the games that weren't Flash-based failed to start because the bundled libraries caused breakage.

These days you mostly have to worry about making sure you compile on a system with old enough versions of everything, so that you're not using newer versions of things than your users will have (e.g. use g++ 4.3 so that Gentoo users on the stable compiler don't have to install a newer gcc and mess about with LD_LIBRARY_PATH so your app can find the newer libstdc++). It helps that the libstdc++ in g++ 4.4 and 4.5 is backwards ABI-compatible with all the older ones (4.0 and later; 3.x is a separate issue, but you can have both available at once, so you just need a compat package). You don't even need to statically link many things, unless you have reason to believe they will not be packaged by your users' distro.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 18:08 UTC (Thu) by jwakely (subscriber, #60262) [Link]

> it's nice since g++ 4.4 and 4.5's libstdc++ is backwards ABI compatible with all the older ones (4.0 and later, ...

3.4 and later

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:49 UTC (Thu) by jengelh (guest, #33263) [Link] (1 responses)

FatELF is an excuse to provide source code.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 8:33 UTC (Thu) by dlang (guest, #313) [Link]

I think you mean that it's an excuse to NOT provide source code.

SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 20:10 UTC (Wed) by gnb (subscriber, #5132) [Link] (3 responses)

One thing I may have missed in the original lkml discussion of FatELF is why this required any significant kernel work. ELF allows a file to contain plenty of sections, the section naming is flexible, and the file can specify its own interpreter. So what does a new file format achieve that can't be done with an ELF program loader that knows how to find the sections for the correct arch, plus a standard ELF file that includes section data for all the supported arches and a .interp section naming said loader as its interpreter?

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 15:03 UTC (Thu) by flewellyn (subscriber, #5047) [Link]

Or, even if a new format IS necessary, using binfmt_misc to load its custom dynamic linker that chooses the correct sections for the architecture? That strikes me as the obvious solution.
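As a sketch of that approach (needs root; the fatelf-loader path and the extension match are hypothetical, since a real rule would presumably match on magic bytes rather than a file name):

```shell
# Register a binfmt_misc rule: any executable whose name ends in
# ".fatelf" is handed to a user-space loader, which can then pick the
# right per-arch image and exec it.  'E' means match on extension.
echo ':fatelf:E::fatelf::/usr/local/bin/fatelf-loader:' \
    > /proc/sys/fs/binfmt_misc/register
```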

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 16:39 UTC (Thu) by Spudd86 (subscriber, #51683) [Link] (1 responses)

FatELF isn't a new format; it's just ELF with a special section setup for multi-arch binaries. The kernel needs to understand this so it can load them properly.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 19:05 UTC (Thu) by gnb (subscriber, #5132) [Link]

That's the bit I don't understand. Why can't it just hand the thing off to a custom program loader, either by specifying one in the .interp section or, as others have said, by using binfmt_misc?


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds