Prior art (FatELF: universal binaries for Linux)

Posted Oct 29, 2009 10:49 UTC (Thu) by eru (subscriber, #2753)
Parent article: FatELF: universal binaries for Linux

I have a dim recollection that Apollo DomainOS also implemented the fat binary idea, very long ago (no personal experience, but 20 years ago, there was an Apollo workstation in the company, and this feature was mentioned by the guy who used it). However, Google does not offer confirmation, apart from this mailing list mention: http://gcc.gnu.org/ml/gcc/1999-04n/msg00688.html


Prior art (FatELF: universal binaries for Linux)

Posted Oct 29, 2009 13:32 UTC (Thu) by nix (subscriber, #2304) [Link]

DomainOS didn't use this feature for multiple architectures, as far as I know. This was in the BSD/SysV war days, and it had multiple user-switchable 'universes': apps could be marked as BSD- or SysV-specific, you could have distinct libraries with apparently identical names in each universe, and you could switch from BSD to SysV at any time. You could even reference paths in the other universe via //$UNIVERSE/... (where $UNIVERSE is the name of the universe, of course).

POSIX still contains a special case allowing // at the root to mean something different from / (in all other cases, strings of consecutive /s in pathnames are collapsed to /). Samba of course benefits from this.

Prior art (FatELF: universal binaries for Linux)

Posted Nov 7, 2009 18:30 UTC (Sat) by dfa (✭ supporter ✭, #6767) [Link]

Domain/OS got fat binaries, very much as described in this article, with
the advent of the RISC-based DN10000s, the follow-on to the Motorola 680x0 systems.

It was very common to keep locally needed binaries on local disk and
access the rest across the (ring) network. An administrator could opt
to load the single-architecture binaries into separate shared directories,
or load the fat versions into a single directory for universal use.
This feature made diskless boot support painless.

The accommodation of three different OS conventions (Domain, BSD, and SysV)
was handled in filename space, as described, using environment variables
that the filesystem used when constructing the actual names accessed. It was
very cool, and extremely convenient for setting up personal/group/corporate
tailorings.

The Domain/OS "//" convention was used to access the local machine's
"network root", the set of host names known to the local host; the result
was a very neat network naming space for files.

Prior art (FatELF: universal binaries for Linux)

Posted Oct 29, 2009 13:36 UTC (Thu) by clugstj (subscriber, #4020) [Link]

What a stupid idea! We already have a file system that stores, wait for it..., FILES. Why create a new format that stores files within files? I can quite easily reproduce this "feature" with a two-line shell script:

#!/bin/sh
exec "$0.$(uname -m)"

Put the binaries for each architecture in the same directory (with the arch as a filename suffix), link this script to the name of each binary (without the suffix) and you are done.
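
For example, assuming a hypothetical program called frobnicate and the wrapper saved as choose-arch (both names made up for illustration), the setup could look like this:

# hypothetical example: per-arch binaries plus the wrapper script above
install -m 755 frobnicate.x86_64 frobnicate.i686 /usr/local/bin/
install -m 755 choose-arch /usr/local/bin/
# running "frobnicate" now execs /usr/local/bin/frobnicate.$(uname -m)
ln -s choose-arch /usr/local/bin/frobnicate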

Or, you could easily add a feature to the packaging system to install the proper binary for the correct architecture and not waste disk space on other unused arch binaries.

Prior art (FatELF: universal binaries for Linux)

Posted Oct 29, 2009 14:14 UTC (Thu) by louai (subscriber, #58033) [Link]

Read those e-mails.

Your script would fail in certain scenarios, for example when running an x86 binary on an amd64 system.

Prior art (FatELF: universal binaries for Linux)

Posted Oct 29, 2009 14:51 UTC (Thu) by dtlin (✭ supporter ✭, #36537) [Link]

$ uname -m
x86_64
$ setarch i386 uname -m
i686

I don't see what the problem is.

Prior art (FatELF: universal binaries for Linux)

Posted Oct 30, 2009 21:30 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

My initial reaction was the same: that we already have multi-file packages, so isn't it more natural just to have a binary for each architecture?

But when I thought about the real complaints (above) about the difficulty of living with /lib and /lib64, I realized this: which binary is required is determined by facilities below user space -- the kernel and/or the machine. So placing the burden of choosing one on user space is wrong. And files are user-space things; the kernel should not navigate directories.

Now, where having binaries for multiple architectures in a single system (filesystem) isn't useful, I would prefer a package with multiple binaries, where the installer installs the relevant one in /lib.

Prior art (FatELF: universal binaries for Linux)

Posted Nov 1, 2009 3:50 UTC (Sun) by elanthis (guest, #6227) [Link]

Oddly enough, you can't link against a shell script...

Multi-arch binaries are not tremendously useful. Multi-arch libraries are very useful. Yes, directories once again could be used, but various "standards" groups have already agreed on a de facto lib vs lib64 multi-arch setup which totally falls apart in the face of anything besides a single pair of 32-bit and 64-bit architectures. I'd much rather have seen the platform encoded in the library sonames and filenames (e.g. libfoo.so.linux.x86_64.1.2.0 vs libfoo.so.bsd.ppc64.1.2.0), but alas it wasn't up to me to make the call.
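
Purely as a sketch of what that naming could look like (libfoo, the version numbers, and the exact soname layout here are invented for illustration, not an existing standard), such a library could be built with a platform-qualified soname:

# illustrative only: encode OS and architecture in the soname and filename
gcc -shared -fPIC foo.c \
    -Wl,-soname,libfoo.so.linux.x86_64.1 \
    -o libfoo.so.linux.x86_64.1.2.0
# the usual soname symlink, which ldconfig would otherwise create
ln -s libfoo.so.linux.x86_64.1.2.0 libfoo.so.linux.x86_64.1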

Oddly enough, though, multi-arch executables are actually a better solution than directories, because the question comes down to which directory to search for executables. We could have /bin broken into /bin/i386, /bin/i586, /bin/x86_64, /bin/ppc, etc. with the PATH environment variable used to select which to search... but it'd be ugly and force changes on every installer, package set, and so on. Granted, I don't find multi-arch binaries particularly useful, so I have no problem with packages or installers just figuring out which binary to install.
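
For comparison, the PATH-based alternative described above might look roughly like this (the per-architecture directory layout is hypothetical):

# hypothetical per-architecture bin directories selected via PATH
ARCH=$(uname -m)              # e.g. x86_64, i686, ppc
export PATH="/usr/bin/$ARCH:/bin/$ARCH:$PATH"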

However, people who use NFS-mounted root directories across a variety of systems could get a big boost out of something like FatELF. A single root directory tree could theoretically serve thin clients running native i386, x86_64, or ppc code. Less maintenance and all that jazz.

All at the cost of a little extra disk space on a server and slightly bigger packages to download over the 50Mbps pipes you can get cheaply these days.

That said, for the purpose FatELF is ostensibly being designed for (commercial games), FatELF is just silly. The installer can just install the proper binaries based on the target architecture. An installer shell script can pick which installer binary to run (or better yet, the Linux software distribution scene could get its head out of its ass and supply a standard distro-neutral installer framework that's installed in the base package set for every distro, like it should've been done 15 years ago).

