
SELF: Anatomy of an (alleged) failure

Posted Jun 23, 2010 21:45 UTC (Wed) by xav (guest, #18536)
In reply to: SELF: Anatomy of an (alleged) failure by Lovechild
Parent article: SELF: Anatomy of an (alleged) failure

FatELF is a dead-on-arrival project. Nobody wants to pay the price of compiling an executable as many times as there are target architectures.
And there are more differences than just the architecture. What's the use of a Fedora ARM binary, or an Ubuntu Itanium one? If the matching distro doesn't exist, it's a waste.



SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 1:08 UTC (Thu) by cesarb (subscriber, #6266) [Link] (10 responses)

What would be interesting would be a bytecode architecture, like some sort of LLVM IR, compiled via a JIT at run time.

This way the same "executable" would run on ARM, x86-32, x86-64...

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 9:50 UTC (Thu) by tzafrir (subscriber, #11501) [Link] (1 responses)

Someone already has. It's called qemu.

SELF: Anatomy of an (alleged) failure

Posted Jun 26, 2010 0:11 UTC (Sat) by waucka (guest, #63097) [Link]

If you want to do JIT, you should do it on an intermediate representation (IR) designed for the purpose. Deliberately using x86 (or any native code, really) for that purpose is ridiculous. Besides, we wouldn't necessarily have to do JIT all the time. With a good IR, we could have live CDs and USB sticks use JIT and convert to native code at install-time.
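A rough sketch of that install-time step, assuming the stock LLVM tools are available ("app.bc" is a hypothetical shipped bitcode file):

    llc -O2 app.bc -o app.s   # translate the distributed LLVM bitcode to native assembly once, at install time
    cc app.s -o app           # assemble and link it into an ordinary native executable

After that, later launches run plain native code with no JIT involved.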

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 14:09 UTC (Thu) by pcampe (guest, #28223) [Link] (1 responses)

>What would be interesting would be a bytecode architecture [...]

We have something definitely better: source code.

Allowing the distribution of a (more or less) universally runnable, self-contained component would ultimately make it easier to distribute closed-source programs, which is something we should oppose.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 16:20 UTC (Thu) by Spudd86 (subscriber, #51683) [Link]

Yes, but unless you run a source distro like Gentoo you may not have the dev files for everything on your system, and lots of users are fairly averse to compiling things themselves. A source distro can also hit problems that binary distros never see: misbehaving build systems, and "automagic" deps (whether a package ends up depending on another changes based on whether the other package happens to be installed at build time, with no way to turn that behaviour off).

Also people are going to want to use pre-compiled code, and most people don't really want to learn how to package their stuff for every distro ever, let alone actually compile it 20 or 30 times.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 16:13 UTC (Thu) by Spudd86 (subscriber, #51683) [Link] (1 responses)

If you go look at some of the older LLVM papers, they pretty much describe doing this. I don't know if anyone implemented it fully, but given that a JIT compiler for LLVM IR already exists, you can probably do it in a limited form today: see http://llvm.org/cmds/lli.html for lli, the LLVM command that runs an LLVM IR bitcode file under the JIT.
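A minimal sketch of that limited form, assuming clang and the LLVM tools are installed ("hello.c" is a placeholder source file):

    clang -emit-llvm -c hello.c -o hello.bc   # emit LLVM IR bitcode instead of a native object file
    lli hello.bc                              # JIT-compile and run the bitcode on whatever CPU this host has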

The papers talk about profiling and optimizing the IR and writing that back to the binary, so you get a binary optimized for your workload.

This still has the issue of library incompatibilities across architectures (even within the same distro), since a library may not have all the same options compiled in, or may export a slightly different set of symbols, or differ in all kinds of other ways...

SELF: Anatomy of an (alleged) failure

Posted Jun 27, 2010 16:03 UTC (Sun) by nix (subscriber, #2304) [Link]

IIRC this is currently being done by ClamAV (using LLVM, natch).

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 21:20 UTC (Thu) by dw (subscriber, #12017) [Link] (1 responses)

This has been tried many times, including (but not limited to) TDF (http://en.wikipedia.org/wiki/TenDRA_Distribution_Format) and ANDF (http://en.wikipedia.org/wiki/Architecture_Neutral_Distrib...).

I believe ANDF was the basis for some failed UNIX standard in the early 90s, but that's long before my time.

There's at least one more recent attempt along the same lines (forgotten its name).

SELF: Anatomy of an (alleged) failure

Posted Jun 28, 2010 16:13 UTC (Mon) by salimma (subscriber, #34460) [Link]

GNUstep? also, the ROX (RISC OS on X) Desktop.

SELF: Anatomy of an (alleged) failure

Posted Jul 1, 2010 7:45 UTC (Thu) by eduperez (guest, #11232) [Link]

> What would be interesting would be a bytecode architecture, like some sort of LLVM IR, compiled via a JIT at run time.
> This way the same "executable" would run on ARM, x86-32, x86-64...

You mean, like... Java?

SELF: Anatomy of an (alleged) failure

Posted May 4, 2012 19:10 UTC (Fri) by ShannonG (guest, #84474) [Link]

This is why the kernel mailing list is hostile.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 2:18 UTC (Thu) by PaulWay (guest, #45600) [Link] (2 responses)

I think you're seeing it as a solve-everything idea, where really it's a solve-specific-things idea.

Obviously installing a system with every binary containing code for every possible architecture is going to be horribly large. But that's not what you use FatELF for.

Imagine, however, a boot CD or USB key that can boot and run on many architectures. That would be a case where the extra space used would be compensated by its universality. A live or install CD could then drag architecture-specific packages from the relevant distribution. A system rescue CD would work anywhere. You wouldn't worry about the overhead because the benefit would be one medium that would work (just about) everywhere. Likewise, an application installer could provide an initial FatELF loader that would then choose from the many supplied architecture-specific binaries to install.

In these circumstances I think FatELF makes a lot of sense. And, as Apple seems to be proving, the overhead is something that people don't notice (or, at least, are willing to cope with).

Have fun,

Paul

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 20:44 UTC (Thu) by vonbrand (subscriber, #4458) [Link]

If it really were for "many architectures" (how many do you even see in a normal week? For me it's three: x86_64, i386, SPARC64, and very rarely a PPC Mac; of those I'd futz around with x86_64 and i386 almost exclusively), that would be at most some 100MiB per architecture on a normal CD. Better to get a USB thumbdrive for each (or carry 5 or 6 CDs around).

SELF: Anatomy of an (alleged) failure

Posted Jun 25, 2010 1:49 UTC (Fri) by dvdeug (guest, #10998) [Link]

Can you even get one CD to boot on both ix86 and ARM or PowerPC? Even if you can, along with getting the right kernel to boot, you should be able to symlink in the correct binary directories for the architecture.

I'm having a lot of trouble finding any case where FatELF can't be replaced by shell scripts and symlinks. You want to support both ix86 and AMD64 with your game? Make the executable a shell script that runs the right binary for the architecture.
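Something along these lines, as a minimal sketch (the "mygame" per-architecture binaries and their names are hypothetical):

    #!/bin/sh
    # pick the per-architecture binary shipped alongside this wrapper script
    dir=$(dirname "$0")
    case "$(uname -m)" in
        i?86)   exec "$dir/mygame.x86" "$@" ;;
        x86_64) exec "$dir/mygame.amd64" "$@" ;;
        *)      echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
    esac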

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 7:34 UTC (Thu) by mjthayer (guest, #39183) [Link] (2 responses)

> What's the use of a Fedora ARM binary, or an Ubuntu Itanium one ? If the matching distro doesn't exists, it's a waste.
Speaking from personal experience, binary compatibility is a lot easier than most Linux people think. I think that the focus they have on source makes them terrified of binary issues. (Source is good of course, but that doesn't mean that binary is always bad.) I help maintain a rather complex piece of software for which we have (among other options) a universal binary installer. It was a bit of work to start with to work out what issues we had to solve (the autopackage webpage is a very good resource here, whatever you may think of their package format), but once we got past that we have had very few issues over many years.

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 16:35 UTC (Thu) by Spudd86 (subscriber, #51683) [Link] (1 responses)

Yes, binary compatibility is easier than most people seem to think, but it is also very frequently done wrong, including by Mr. Gordon. I generally have to make stuff he has packaged stop using at least 2 or 3 bundled libraries before it works (unless I'm using the Gentoo ebuild of it). He tends to bundle libstdc++ and libgcc, as well as SDL. All of these have had a stable ABI for a while now, and if the ABI changes, so does the soname, so distros can ship a compat package (which they generally do); there's no need to distribute them. The only people who benefit are people with old versions that are missing newer API the game uses. It's irritating: I bought the Humble Bundle and all of the games that weren't Flash-based failed to start because the bundled libraries caused breakage.

These days you mostly have to worry about making sure you compile on a system with old enough versions of everything, so that you aren't using newer versions of things than your users will have (e.g. use g++ 4.3 so that Gentoo users on the stable compiler don't have to install a newer gcc and mess about with LD_LIBRARY_PATH so your app can find the newer libstdc++). That's manageable, since the libstdc++ from g++ 4.4 and 4.5 is backwards ABI compatible with all the older ones (4.0 and later; 3.x is a separate ABI, but you can have both available at once, so you just need a compat package). You don't even need to statically link many things, unless you have reason to believe they won't be packaged by your users' distros.
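One rough way to check that, assuming a GNU toolchain (the "mygame" binary name is hypothetical): list the versioned symbols the executable actually requires and compare them against the oldest distro you intend to support.

    # anything newer than your baseline (e.g. GLIBCXX_3.4.10 comes from gcc 4.3)
    # will fail to load on users' older systems
    objdump -T mygame | grep -o 'GLIBCXX_[0-9.]*' | sort -Vu
    objdump -T mygame | grep -o 'GLIBC_[0-9.]*'   | sort -Vu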

SELF: Anatomy of an (alleged) failure

Posted Jun 24, 2010 18:08 UTC (Thu) by jwakely (subscriber, #60262) [Link]

> it's nice since g++ 4.4 and 4.5's libstdc++ is backwards ABI compatible with all the older ones (4.0 and later, ...

3.4 and later

