SELF: Anatomy of an (alleged) failure
Posted Jun 23, 2010 20:38 UTC (Wed) by drag (guest, #31333)
In reply to: SELF: Anatomy of an (alleged) failure by cmccabe
Parent article: SELF: Anatomy of an (alleged) failure
The point of it is to make things easier for users to deal with... forcing them to deal with UnionFS (especially when it's not part of the kernel and does not seem ever likely to be incorporated) and using layered file systems by default on every Linux install sounds like a huge PITA to deal with.
Having 'Fat' binaries is really the best solution for OSes that want to support multiple arches in the easiest and most user-friendly way possible (especially on x86-64, where it can run 32bit and 64bit code side by side).
It's not just a matter of supporting Adobe Flash or something like that; it's just a superior technical solution at all levels, from a user's and a system administrator's perspective.
> The reason why Apple invented FAT binaries is because they were interested in maintaining extensive binary compatibility with their old systems. Linux has never had this policy. Binaries that worked great on Fedora Core 9 probably won't work on Fedora Core 12, or Ubuntu 9.04, or whatever.
Actually Apple is very average when it comes to backwards compatibility. They certainly are no Microsoft. The point of fat binaries is just to make things easier for users and developers... which is exactly the entire point of having an operating system in the first place.
Some Linux kernel developers like to maintain that they support a stable ABI for userland and brag that software written for Linux in the 2.0 era will still work in 2.6. In fact it seems that maintaining the userspace ABI/API is a high priority for them. (Much higher than for the typical userland developer, anyway. Application libraries are usually a bigger problem than anything in the kernel in terms of compatibility issues.)
Posted Jun 23, 2010 21:40 UTC (Wed)
by dlang (guest, #313)
[Link] (48 responses)
using a 64 bit kernel makes a huge difference in a system, but unless a single application uses more than 3G of ram it usually won't matter much to the app if it's 32 bit or 64 bit. there are some apps where it will matter, but those are special cases and probably not where a universal binary would be applicable.
Posted Jun 23, 2010 22:10 UTC (Wed)
by drag (guest, #31333)
[Link] (32 responses)
I do actually use a 64bit kernel with 32bit userland. With Fat binaries I would not have to give a shit one way or the other.
> but unless a single application uses more than 3G of ram it usually won't matter much to the app if it's 32 bit or 64 bit. there are some apps where it will matter, but those are special cases and probably not where a universal binary would be applicable.
Here are some issues:
* The fat binary solves the problems you run into during the transition to a 64bit system. This makes it easier for users and Linux distribution developers to cover all the multitude of corner cases. For example: installing 'Pure 64' versions of Linux for a period of time meant that you had to give up the ability to run OpenOffice.org. This is solved now, but it's certainly not an isolated issue.
* People who actually need to run 64bit software for performance enhancements or memory requirements will have their applications 'just work' (regardless of whether they were 32bit or 64bit) with no requirements for complicated multi-lib setups, chroots, and other games that users have to deal with. They just install it and it will 'just work'.
* Currently, if you do not need 64bit compatibility you will probably want to install only 32bit binaries. However, if in the future you run into software that requires 64bit support, the status quo would require you to re-install the OS.
* Distributions would not have to supply multiple copies of the same software packages in order to support the arches they need to support.
* Application developers (both OSS and otherwise) can devote their time more efficiently to meet the needs of their users and can treat 64bit compatibility as an optional feature that they can support when it's appropriate for them, rather than being forced to move to 64bit as dictated by Linux OS design limitations.
Yeah, FAT binaries only really solve 'special case' issues with supporting multiple arches, but those special cases are actually numerous and diverse. When you examine the business market, where everybody uses custom in-house software, the special cases are even more numerous than the typical problems you run into with home users.
Sure, it's not absolutely required and there are lots of workarounds for each issue you run into. On a scale of 1-10 in terms of importance (where 10 is most important and 1 is least) it ranks about a 3 or a 4. But the point is that FAT binaries are simply a superior technical solution to what we have right now, would solve a lot of usability issues, and the proposal comes from an application developer who has to deal with _real_world_ issues caused by the lack of fat binaries, working on software that is really desirable for a significant number of potential Linux users.
He would not have spent all this time and effort implementing FatELF if it did not solve a severe issue for him.
Posted Jun 23, 2010 22:52 UTC (Wed)
by cmccabe (guest, #60281)
[Link] (10 responses)
When you get a new computer, normally you reinstall the OS and copy over your /home directory. For all but a few highly technical users, this is the norm. Windows even has a special "feature" called Windows Genuine Advantage that forces you to reinstall the OS when the hardware has changed. You *cannot* use your previous install.
Anyway, running a Linux installer and then doing some apt-get only takes an hour or two.
> * Application developers (both OSS and otherwise) can devote their time
> more efficiently to meet the needs of their users and can treat 64bit
> compatibility as an optional feature that they can support when it's
> appropriate for them, rather than being forced to move to 64bit as
> dictated by Linux OS design limitations.
FATELF has nothing to do with whether software is 64-bit clean. If some doofus is assuming that sizeof(long) == 4, FATELF is not going to ride to the rescue. (Full disclosure: sometimes that doofus has been me in the past.)
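To make that point concrete, here is a small illustrative C fragment (not from the article or from the FatELF patches, just an invented example) of the kind of code that is not 64-bit clean no matter how the binary is packaged:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* On ILP32 (32-bit x86) sizeof(long) is 4; on LP64 (x86-64 Linux) it is 8. */
    printf("sizeof(long) = %zu, sizeof(void *) = %zu\n",
           sizeof(long), sizeof(void *));

    /* The classic bug: stuffing a pointer into a 32-bit int.  This compiles,
     * but silently truncates the pointer on LP64. */
    int bad = (int)(intptr_t)&bad;
    (void)bad;

    /* The portable spelling uses an integer type as wide as a pointer. */
    intptr_t good = (intptr_t)&good;
    (void)good;
    return 0;
}

Packaging 32-bit and 64-bit images side by side does nothing for code like the 'bad' line; that has to be fixed in the source.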
> He would not have spent all this time and effort implementing FatELF if
> it did not solve a severe issue for him.
I can't think of even a single issue that FATELF "solves," except maybe to allow people distributing closed-source binaries to have one download link rather than two. In another 3 or 4 years, 32-bit desktop systems will be a historical curiosity, like dot-matrix printers or commodore 64s, and we will be glad we didn't put some kind of confusing and complicated binary-level compatibility system into the kernel.
Posted Jun 24, 2010 0:25 UTC (Thu)
by drag (guest, #31333)
[Link]
Windows sucks in a lot of ways, but Windows sucking has nothing to do with Linux sucking also. You can improve Linux and make it more easy to use without giving a crap what anybody in Redmond is doing.
If I am your plumber and you pay me money to fix your plumbing and I do a really shitty job of fixing it... and you complain to me about it... does it comfort you when I tell you that whenever your neighbor washes his dishes, his basement floods? Does it make your plumbing better knowing that somebody else has it worse than you?
Posted Jun 24, 2010 12:07 UTC (Thu)
by nye (subscriber, #51576)
[Link] (4 responses)
I know FUD is the order of the day here at LWN, but this has gone beyond that point and I feel the need to call it:
You are a liar.
Posted Jun 25, 2010 8:26 UTC (Fri)
by k8to (guest, #15413)
[Link]
Posted Jun 27, 2010 12:12 UTC (Sun)
by nix (subscriber, #2304)
[Link] (2 responses)
(Certainly when WGA fires, it does make it *appear* that you have to reinstall the OS, because it demands that you pay MS a sum of money equivalent to a new OS install. But, no, they don't give you a new OS for that. You pay piles of cash and get a key back instead, which makes your OS work again -- until you have the temerity to change too much hardware at once; the scoring system used to determine which hardware is 'too much' is documented, but not by Microsoft.)
Posted Jun 28, 2010 10:03 UTC (Mon)
by nye (subscriber, #51576)
[Link] (1 responses)
I've never actually *seen* WGA complain about a hardware change; the only times I've ever seen it are when reinstalling on exactly the same hardware (e.g. 3 times in a row because of a problem with slipstreaming drivers).
In principle though, if you change more than a few items of hardware at once (obviously this would include transplanting the disk into another machine), or whenever you reinstall, Windows will ask to be reactivated. If you reactivate too many times over a short period, it will demand that you call the phone number to use automated phone activation. At some point it will escalate to non-automated phone activation where you actually speak to a person. This is the furthest I've ever seen it go, though I believe there's a further level where you speak to the person and have to give them a plausible reason for why you've installed the same copy of Windows two dozen times in the last week. If you then can't persuade them, this would be the point where you have to pay for a new license.
This is obnoxious and hateful, to be sure, but it is entirely unlike the behaviour described. The half-truths and outright untruths directed at Windows from some parts of the open source community make it hard to maintain credibility when describing legitimate grievances or technical problems, and this undermines us all.
Posted Jun 28, 2010 13:25 UTC (Mon)
by nix (subscriber, #2304)
[Link]
I suspect that WGA's behaviour (always ill-documented) has shifted over time, and that as soon as you hit humans on phone lines you become vulnerable to the varying behaviour of those humans. I suspect all the variability can be explained away that way.
Still, give me free software any day. No irritating license enforcer and hackability both.
Posted Jun 24, 2010 12:28 UTC (Thu)
by Cato (guest, #7643)
[Link]
Linux is much better at this generally, but this ability is not unique to Linux.
Posted Jun 24, 2010 17:26 UTC (Thu)
by jschrod (subscriber, #1646)
[Link] (2 responses)
And if you use it for anything beyond office/Web surfing, you configure the system for a few days afterwards... (Except if you have a professional setup with some configuration management behind it, which the target group of this proposal most probably doesn't have.)
> Windows even has a special "feature" called Windows Genuine Advantage
> that forces you to reinstall the OS when the hardware has changed. You
> *cannot* use your previous install.
OK, that shows that you are not a professional. This is bullshit, plain and simple: For private and SOHO users, WGA may trigger reactivation, but no reinstall. (Enterprise-class users use deployment tools anyhow and do not end up in such a situation.)
Posted Jun 24, 2010 19:04 UTC (Thu)
by cmccabe (guest, #60281)
[Link] (1 responses)
Thank you for the correction. I do not use Windows at work. It's not even installed on my work machine. So I'm not familiar with enterprise deployment tools for Windows. I wasn't trying to spread FUD-- just genuinely did not know there was a way around WGA in this case.
However, the point I was trying to make is that most home users expect that new computer == new OS install. Some people in this thread have been claiming that Linux distributions need to support moving a hard disk between 32 and 64 bit machines in order to be a serious contender as a desktop operating system. (And they're unhappy with the obvious solution of using 32-bit everywhere.)
I do not think that most home users, especially nontechnical ones, are aware that this is even possible with Windows. I certainly don't think they would view it as a reason not to switch.
Posted Jun 24, 2010 19:50 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link]
It is much simpler than that: Very few people do move disks from one computer to the next. And those who do have the technical savvy to handle any resulting mess.
Posted Jun 23, 2010 23:29 UTC (Wed)
by dlang (guest, #313)
[Link] (19 responses)
as for transitioning, install a 64 bit system and 32 bit binaries; as long as you have the libraries on the system they will work. fatelf doesn't help you here (it may help if your libraries were all fat, but I fail to see how that's really much better than having /lib32 and /lib64; your hard drive may be large enough to double the size of everything stored on it, but mine sure isn't)
distros would still have to compile and test all the different copies of their software for all the different arches they support, they would just combine them together before shipping them (at which point they would have to ship more CDs/DVDs and/or pay higher bandwidth charges to get people copies of the binaries that don't do them any good)
Posted Jun 24, 2010 0:37 UTC (Thu)
by drag (guest, #31333)
[Link] (3 responses)
I do have to care about it if, in the future, I want to run an application that benefits from 64bit-ness.
Some operations are faster in 64bit and many applications, such as games, already benefit from the larger address space.
> (it may help if your libraries were all fat, but I fail to see how that's really much better than having /lib32 /lib64 (your hard drive may be large enough to double the size of everything stored on it, but mine sure isn't)
Yes. That is what I am talking about. Getting rid of architecture-specific directories and going with FatElf for everything.
You're wrong in thinking that having 64bit and 32bit support in a binary means that you're doubling your system's footprint. Generally speaking, the architecture-specific files in a software package are small compared to the overall size of the application. Most ELF files are only a few KB big. Only rarely do they get up past half a dozen MB.
My user directory is about 4.1GB. Adding FatELF support for 32bit/64bit applications would probably only plump it up by 400-600 MB or so.
Posted Jun 24, 2010 7:56 UTC (Thu)
by dlang (guest, #313)
[Link] (2 responses)
take some distro (say Ubuntu, since it supports multiple architectures), download the repository (when I did this a couple years ago it was 600G; nowadays it's probably larger, so it may take $150 or so to buy a USB 2TB drive, and it will take you a while to download everything), then create a unified version of the distro, making all the binaries and libraries 'fat', and advertise the result. I'm willing to bet that if you did this as a plain repackaging of Ubuntu with no changes you would even be able to get people to host it for you (you may even be able to get Canonical to host it if your repackaging script is simple enough)
I expect that the size difference is going to be larger than you think (especially if you include every architecture that Ubuntu supports, not just i486 and AMD64), and this size will end up costing performance as well as having effects like making it hard to create an install CD, etc.
I may be wrong and it works cleanly, in which case there will be real ammunition to go back to the kernel developers with (although you do need to show why you couldn't just use additional ELF sections with a custom loader instead as was asked elsewhere)
If you could do this and make a CD like the ubuntu install CD, but one that would work on multiple architectures (say i486, AMD64, powerPC) that would get people's attention. (but just making a single disk that does this without having the rest of a distro to back it up won't raise nearly the interest that you will get if you can script doing this to an entire distro)
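For readers unfamiliar with what a fat container involves, here is a rough C sketch of the general idea. The structure and field names are invented for illustration; this is not the actual FatELF on-disk format (see icculus.org/fatelf for the real specification):

#include <stdint.h>

/* Illustrative only: a container header listing one embedded ELF image
 * per architecture, followed by the images themselves. */
struct fat_record {
    uint16_t machine;      /* ELF e_machine value, e.g. EM_386 or EM_X86_64 */
    uint8_t  word_size;    /* 32 or 64 */
    uint8_t  byte_order;   /* little- or big-endian */
    uint64_t offset;       /* where this architecture's ELF image starts */
    uint64_t size;         /* length of that image in bytes */
};

struct fat_header {
    uint32_t magic;        /* identifies the container format */
    uint16_t version;
    uint16_t num_records;
    struct fat_record records[];
};

/* The loader (a kernel binfmt handler or a userspace shim) simply picks the
 * record matching the running system and hands that slice to the normal
 * ELF loader. */
static const struct fat_record *
pick_record(const struct fat_header *hdr, uint16_t machine, uint8_t word_size)
{
    for (uint16_t i = 0; i < hdr->num_records; i++) {
        const struct fat_record *r = &hdr->records[i];
        if (r->machine == machine && r->word_size == word_size)
            return r;
    }
    return NULL;  /* no image for this architecture */
}

The size argument in this thread is about how much those per-architecture images add up to once every binary and library on the system carries more than one of them.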
Posted Jun 24, 2010 12:12 UTC (Thu)
by nye (subscriber, #51576)
[Link] (1 responses)
Because the subject of this article already did that: http://icculus.org/fatelf/vm/
It's not as well polished as it could be - I got the impression that he didn't see much point in improving it after it was dismissed out of hand.
Posted Jun 24, 2010 20:47 UTC (Thu)
by MisterIO (guest, #36192)
[Link]
Posted Jun 24, 2010 1:24 UTC (Thu)
by cesarb (subscriber, #6266)
[Link] (14 responses)
And some are very small indeed. One of my machines has only a 4 gigabyte "hard disk" (gigabyte, not terabyte). It is an EeePC 701 4G. (And it is in fact a small SSD, thus the quotes.)
There are also the live CDs/DVDs, which are limited to a fixed size. Fedora is moving to use LZMA to cram even more stuff into its live images (https://fedoraproject.org/wiki/Features/LZMA_for_Live_Images). Note also that installing from a live image, at least on Fedora and IIRC on Ubuntu, is done by simply copying the whole live image to the target disk, so the size limitations of live images directly influence what is installed by default.
Posted Jun 24, 2010 9:06 UTC (Thu)
by ncm (guest, #165)
[Link] (2 responses)
Posted Jun 24, 2010 20:56 UTC (Thu)
by speedster1 (guest, #8143)
[Link] (1 responses)
Posted Jun 26, 2010 9:56 UTC (Sat)
by ncm (guest, #165)
[Link]
While we're way, way off topic, you might also want to go to desktop/gnome/interface and change gtk_key_theme to "Emacs" so that the text edit box keybindings (except in Epiphany, grr) are Emacs-style.
Contempt, thy name is Gnome.
Getting back on topic, fat binaries make perfect sense for shared libraries, so they can all go in /lib and /usr/lib. However, there's no reason to think anybody would force them on you for an Eee install.
Posted Jun 24, 2010 12:30 UTC (Thu)
by Cato (guest, #7643)
[Link]
Posted Jun 24, 2010 17:03 UTC (Thu)
by chad.netzer (subscriber, #4257)
[Link] (9 responses)
Posted Jun 24, 2010 19:40 UTC (Thu)
by dlang (guest, #313)
[Link] (8 responses)
do you have a SSD?
are you memory constrained? (decompressing requires that you have more space than just the uncompressed image)
do you page out parts of the code and want to read in just that page later? (if so, you would have to uncompress the entire binary to find the appropriate page)
what compression algorithm do you use? many binaries don't actually compress that well, and some decompression algorithms (bzip2 for example) are significantly slower than just reading the raw data.
I actually test this fairly frequently in dealing with processing log data. in some conditions having the data compressed and uncompressing it when you access it is a win, in other cases it isn't.
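A rough sketch of that kind of test, for anyone who wants to try it themselves (the file names are made up, this is not dlang's actual harness, and results depend heavily on disk speed, CPU, the page cache, and how well the data compresses; build with -lz):

#include <stdio.h>
#include <time.h>
#include <zlib.h>

#define BUF_SZ (1 << 20)
static char buf[BUF_SZ];

static double seconds_between(struct timespec a, struct timespec b)
{
    return (double)(b.tv_sec - a.tv_sec) + (double)(b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec t0, t1;

    /* Pass 1: read the uncompressed copy of the log. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    FILE *raw = fopen("access.log", "rb");          /* hypothetical file name */
    if (raw) {
        while (fread(buf, 1, BUF_SZ, raw) > 0)
            ;
        fclose(raw);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("raw read:    %.2f s\n", seconds_between(t0, t1));

    /* Pass 2: the same data, gzip-compressed, decompressed as it is read. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    gzFile gz = gzopen("access.log.gz", "rb");      /* hypothetical file name */
    if (gz) {
        while (gzread(gz, buf, BUF_SZ) > 0)
            ;
        gzclose(gz);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("gunzip read: %.2f s\n", seconds_between(t0, t1));

    return 0;
}

To get meaningful numbers you would want to drop the page cache (or use files larger than RAM) between the two passes.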
Posted Jun 25, 2010 0:14 UTC (Fri)
by chad.netzer (subscriber, #4257)
[Link] (7 responses)
Still, why the heck must my /bin/true executable take 30K on disk? And /bin/false is a separate executable that takes *another* 30K, even though they are both dynamically linked to libc??? Time to move to busybox on the desktop...
Posted Jun 25, 2010 0:38 UTC (Fri)
by dlang (guest, #313)
[Link] (4 responses)
http://www.muppetlabs.com/~breadbox/software/tiny/teensy....
A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux
Posted Jun 25, 2010 2:41 UTC (Fri)
by chad.netzer (subscriber, #4257)
[Link] (3 responses)
To be fair getting your executable much smaller than the minimal disk block size is just a fun exercise. Whereas coreutils /bin/true may actually benefit from an extent based filesystem. :) Anyway, it's just a silly complaint I'm making, though it has always annoyed me a tiny bit.
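For comparison, the entire job of true fits in one line of C. This is just the obvious from-scratch version, not what coreutils ships:

/* true.c: exit successfully, nothing else. */
int main(void) { return 0; }

/* false.c would be identical except for "return 1;".  Even built with
 * "gcc -Os -s true.c -o true" this still comes out at a few KB, because of
 * ELF headers, section alignment and the dynamic-linking scaffolding; the
 * muppetlabs tutorial linked above shows how far that can be shrunk by
 * writing the ELF headers by hand. */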
Posted Jun 25, 2010 12:25 UTC (Fri)
by dark (guest, #8483)
[Link] (2 responses)
Yes, but GNU true does so much more! It supports --version, which tells you all about who wrote it and about the GPL and the FSF. It also supports --help, which explains true's command-line options (--version and --help). Then there is the i18n support, so that people from all over the world can learn about --help and --version. You just don't get all that with a minimalist ELF binary.
Posted Jun 25, 2010 15:38 UTC (Fri)
by intgr (subscriber, #39733)
[Link] (1 responses)
PS: Shells like zsh actually ship builtin "true" and "false" commands
Posted Jun 29, 2010 23:03 UTC (Tue)
by peter-b (guest, #66996)
[Link]
The following command is equivalent to true:
:
The following command is equivalent to false:
! :
I regularly use both when writing shell scripts.
Posted Jun 27, 2010 16:42 UTC (Sun)
by nix (subscriber, #2304)
[Link]
(I think this rule makes more sense on non-GNU platforms, where it is common to rename *everything* via --program-prefix=g or something similar, to prevent conflicts with the native tools. But why should those of us using the GNU toolchain everywhere be penalized for this?)
Posted Jun 27, 2010 16:46 UTC (Sun)
by nix (subscriber, #2304)
[Link]
And gnulib, because it has no stable API or ABI, is always statically linked to its users.
26Kb for a printf implementation isn't bad.
Posted Jun 27, 2010 12:08 UTC (Sun)
by nix (subscriber, #2304)
[Link]
Please. There are good arguments for FatELF, but this is not one of them.
Posted Jun 23, 2010 23:34 UTC (Wed)
by cortana (subscriber, #24596)
[Link] (11 responses)
So I could use Flash.
So I could buy a commercial Linux game and run it without having to waste time setting up an i386 chroot or similar.
Both areas that contribute to the continuing success of Windows and Mac OS X on the desktop.
Posted Jun 24, 2010 2:27 UTC (Thu)
by BenHutchings (subscriber, #37955)
[Link] (9 responses)
> * Currently, if you do not need 64bit compatibility you will probably want to install only 32bit binaries. However, if in the future you run into software that requires 64bit support, the status quo would require you to re-install the OS.
So, because some distribution's biarch support sucks enough that it can't install a bunch of 64-bit dependencies into /lib64 and /usr/lib64 when you install a 64-bit binary, we need a kernel hack?
Posted Jun 24, 2010 9:10 UTC (Thu)
by cortana (subscriber, #24596)
[Link] (8 responses)
Posted Jun 24, 2010 10:43 UTC (Thu)
by michich (guest, #17902)
[Link] (7 responses)
But what you describe already works today and FatELF is not needed for it. It's called multilib.
Posted Jun 24, 2010 11:00 UTC (Thu)
by cortana (subscriber, #24596)
[Link] (6 responses)
Even if Debian did have an automatic setup for compiling all library packages with both architectures, you are then screwed because they put the amd64 libraries in /lib (with a symlink at /lib64) and the i386 libraries in /lib32. So your proprietary i386 software that tries to dlopen files in /lib fails because they are of the wrong architecture.
You could argue that these are Debian-specific problems. You might be right. But they are roadblocks to greater adoption of Linux on the desktop, and now that the FatELF way out is gone, we're back to the previous situation: waiting for the 'multiarch' fix (think FatELF but with all libraries in /lib/$(arch-triplet)/libfoo.so rather than the code for several architectures in a FatELF-style, single /lib/libfoo.so), which has failed to materialise in the 6 years since I first saw it mentioned. And which still won't fix proprietary software that expects to find its own architecture's files at /lib.
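To illustrate the dlopen() problem being described, here is a hedged sketch; libfoo.so and the LIBDIR macro are made-up stand-ins, and the point is only the difference between hardwiring /lib and using a configured libdir (or a bare soname). Build with -ldl:

#include <dlfcn.h>
#include <stdio.h>

/* In a well-behaved build, LIBDIR comes from the build system
 * (e.g. -DLIBDIR='"/usr/lib64"') so it follows the distribution's layout. */
#ifndef LIBDIR
#define LIBDIR "/usr/lib"              /* placeholder for this sketch */
#endif

int main(void)
{
    /* Hardwiring "/lib/libfoo.so" is what breaks on biarch layouts where
     * /lib holds the other architecture.  Using the configured libdir, or
     * just a bare soname so that ld.so searches its own paths, keeps working. */
    void *handle = dlopen(LIBDIR "/libfoo.so", RTLD_NOW);
    if (!handle)
        handle = dlopen("libfoo.so", RTLD_NOW);   /* fall back to ld.so search */
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    dlclose(handle);
    return 0;
}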
Posted Jun 24, 2010 17:44 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link]
That multilib doesn't work on Debian is squarely Debian's fault (my Fedora here is still not completely 32-bit free, but getting there). No need to burden the kernel for that.
Posted Jun 27, 2010 12:31 UTC (Sun)
by nix (subscriber, #2304)
[Link] (4 responses)
> Even if Debian did have an automatic setup for compiling all library packages with both architectures, you are then screwed because they put the amd64 libraries in /lib (with a symlink at /lib64) and the i386 libraries in /lib32. So your proprietary i386 software that tries to dlopen files in /lib fails because they are of the wrong architecture.
I've run LFS systems with the /lib + /lib32 layout for many years (because I consider /lib64 inelegant on a principally 64-bit system). You know how many things I've had to fix because they had lib hardwired into them? *Three*. And two of those were -config scripts (which says how old they are right then and there; modern stuff would use pkg-config). Not one was a dlopen(): they all seem to be using $libdir as they should.
This simply is not a significant problem.
Posted Jun 27, 2010 13:14 UTC (Sun)
by cortana (subscriber, #24596)
[Link] (1 responses)
Posted Jun 27, 2010 17:48 UTC (Sun)
by nix (subscriber, #2304)
[Link]
(Words cannot express how much I don't care about statically linked apps.)
Posted Jul 10, 2010 12:31 UTC (Sat)
by makomk (guest, #51493)
[Link] (1 responses)
Posted Jul 10, 2010 20:24 UTC (Sat)
by nix (subscriber, #2304)
[Link]
Posted Jun 24, 2010 18:43 UTC (Thu)
by Spudd86 (subscriber, #51683)
[Link]
Posted Jun 24, 2010 7:55 UTC (Thu)
by jengelh (guest, #33263)
[Link] (2 responses)
Hell it will. Unless the program in question directly uses hand-tuned assembler, the 32-bit one will usually not use SSE2, just the olde x87, which is slower; the same goes for any computations involving larger-than-32-bit integers.
Posted Jun 24, 2010 18:08 UTC (Thu)
by pkern (subscriber, #32883)
[Link] (1 responses)
Which is only partly true. Look into (/usr)?/lib/i686 and you'll see libs that will be loaded by the linker in preference to the plain ia32 ones if the hardware supports more than the least common denominator. It even works with /usr/lib/sse2 here on Debian if the package has support for it (see ATLAS or speex). But of course, normally you don't rely on newer features everywhere, breaking support for older machines. Ubuntu goes i686 now, Fedora's already there, I think; and if you want more optimization I guess Gentoo is the way to go because you don't have to think portable. ;-)
Posted Jun 24, 2010 18:25 UTC (Thu)
by jengelh (guest, #33263)
[Link]
Posted Jun 24, 2010 7:43 UTC (Thu)
by tzafrir (subscriber, #11501)
[Link]