
Linker limitations on 32-bit architectures

Posted Aug 27, 2019 14:38 UTC (Tue) by patrakov (subscriber, #97174)
In reply to: Linker limitations on 32-bit architectures by ju3Ceemi
Parent article: Linker limitations on 32-bit architectures

The build systems of too many packages don't handle cross-compilation properly. E.g., when you compile the PHP interpreter, you run ./configure, make, make test, make install (well, oversimplifying here). But at the ./configure stage, it checks whether getaddrinfo() actually works, and it does so by compiling and running a test program. If it cannot run that test program (e.g. when cross-compiling), it assumes that getaddrinfo() does not work and disables the code that uses this function - even though it might, in fact, work just fine.

https://github.com/php/php-src/blob/452356af2aa66493daf8f...
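
For illustration, here is roughly the kind of probe such a check compiles and then tries to run - not PHP's exact test program, just a sketch of the pattern. When cross-compiling, the resulting binary cannot be executed on the build machine, so configure falls back to its pessimistic default:

    /* Sketch of a configure-time run test for a working getaddrinfo().
     * The exit status is only meaningful if the binary can actually be
     * executed, which is exactly what cross-compilation prevents. */
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void)
    {
        struct addrinfo hints, *ai;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        /* Succeed (exit 0) only if the resolver call works at run time. */
        if (getaddrinfo("localhost", NULL, &hints, &ai) != 0)
            return 1;
        freeaddrinfo(ai);
        return 0;
    }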

Another problem is that many packages, by mistake, check properties of the host system, not the target. E.g., alsa-lib calls the "python-config" script to get the necessary includes and libs - but that script describes the host, not the target!

https://git.alsa-project.org/?p=alsa-lib.git;a=blob;f=con...

Buildroot and Yocto carry a ton of hacks and workarounds for these classes of problems. Debian avoids them by building natively.



Linker limitations on 32-bit architectures

Posted Aug 28, 2019 2:03 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (7 responses)

One should be able to preload the cached result for that check somehow. However, having worked with build systems a lot (I work on CMake): compile tests are bad, but run tests are worse. They break cross compilation, are really slow (generally), and should be done as preprocessor or runtime checks if possible. All sizeof(builtin_type) things have preprocessor definitions available, and broken-platform checks should just be done once and statically placed in the code (how much energy has been wasted checking whether "send" is a valid function? Or getting sizeof(float)?). Library viability checks are more problematic, but should be handled with version checks via the preprocessor. But bad habits persist :( .
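
For instance, a minimal sketch of doing the sizeof/limits class of checks at compile time instead of by running a probe program (the HAVE_64BIT_POINTERS macro is an illustrative name, not from any particular project):

    /* Standard C (C11) headers already expose these properties to the
     * preprocessor and to static assertions; no probe program needed. */
    #include <assert.h>
    #include <limits.h>
    #include <stdint.h>

    /* Refuse to build on platforms that violate the assumptions... */
    static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
    static_assert(sizeof(float) == 4, "this code assumes 32-bit float");

    /* ...or pick a code path from what the preprocessor already knows. */
    #if UINTPTR_MAX > 0xFFFFFFFFu
    #  define HAVE_64BIT_POINTERS 1
    #else
    #  define HAVE_64BIT_POINTERS 0
    #endif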

Basically: send a patch to PHP to stop doing such dumb things. Find out which platforms have a busted getaddrinfo and just #ifdef it in the code. They're not likely to be fixed any time soon anyways, and when they are, someone will be throwing parties about it finally getting some love.
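
A minimal sketch of that #ifdef approach, with a hypothetical platform macro (FOO_OS) and a stubbed fallback; the point is that the list of known-broken platforms lives in the source, so no configure-time run test is needed:

    /* Hypothetical example: pretend getaddrinfo() is known to be broken
     * on a platform that defines FOO_OS; everyone else gets the real call. */
    #include <stddef.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    #if defined(FOO_OS)
    #  define HAVE_WORKING_GETADDRINFO 0
    #else
    #  define HAVE_WORKING_GETADDRINFO 1
    #endif

    int resolve_host(const char *host, struct addrinfo **res)
    {
    #if HAVE_WORKING_GETADDRINFO
        struct addrinfo hints = { .ai_family = AF_UNSPEC,
                                  .ai_socktype = SOCK_STREAM };
        return getaddrinfo(host, NULL, &hints, res);
    #else
        /* On the known-broken platforms, fall back to older resolver
         * code or report failure; shown here as a stub. */
        (void)host;
        *res = NULL;
        return EAI_FAIL;
    #endif
    }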

Linker limitations on 32-bit architectures

Posted Aug 28, 2019 13:30 UTC (Wed) by Sesse (subscriber, #53779) [Link] (6 responses)

That kind of “table-driven” configure was attempted during the 80s. It's a massive pain to maintain, which led directly to GNU autoconf.

Linker limitations on 32-bit architectures

Posted Aug 28, 2019 13:51 UTC (Wed) by pizza (subscriber, #46) [Link] (4 responses)

Autoconf's insanity stems directly from the fact that it relies on the least common denominator for, well, everything. It can't even assume the presence of a shell that supports function definitions.

But one can make a case for revisiting some of those assumptions -- after all, "Unix-ish" systems are far more heterogeneous than they used to be. Does software produced today need to care about supporting ancient SunOS, IRIX, or HP-UX systems? Or pre-glibc2 Linux? Or <32-bit systems?

Linker limitations on 32-bit architectures

Posted Aug 28, 2019 15:34 UTC (Wed) by halla (subscriber, #14185) [Link] (2 responses)

I think you mean homogenous?

Linker limitations on 32-bit architectures

Posted Aug 29, 2019 22:55 UTC (Thu) by antiphase (subscriber, #111993) [Link]

Did you mean homogeneous?

Linker limitations on 32-bit architectures

Posted Aug 30, 2019 3:28 UTC (Fri) by pizza (subscriber, #46) [Link]

You are correct; I doublethunk myself into the wrong term.

Linker limitations on 32-bit architectures

Posted Aug 28, 2019 17:15 UTC (Wed) by madscientist (subscriber, #16861) [Link]

Just FYI, there already was a first step towards modernizing what autoconf can support... for example, configure scripts generated by autoconf these days definitely DO use shell functions. That's been true for >10 years, since autoconf 2.63.

As far as supporting older systems, some of that depends on the software. Some GNU facilities make a very conscious effort to support VERY old systems; this is particularly true for "bootstrap" software. Others simply make assumptions instead, and don't add checks for those facilities into their configure.ac. It's not really up to autoconf what these packages decide to check (or not).

Also, much modern GNU software takes advantage of gnulib, which provides portable versions of less-than-portable facilities... sometimes it's not a matter of whether a particular system call is supported, but that it works differently (sometimes subtly differently) on different systems. That's still true today on systems like Solaris, various BSD variants, etc., even without considering IRIX.

Linker limitations on 32-bit architectures

Posted Aug 28, 2019 14:20 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Toolchains and platforms are much more uniform these days.

- Any significant platform differences usually need conditional codepaths *anyways* (think epoll vs. kqueue; see the sketch after this list)
- POSIX exists and has been effective at the core functionality (see the above for non-POSIX platforms)
- Broken platforms should fix their shit (your test suite should point this stuff out), but workarounds can be placed behind an #ifdef to handle such brokenness (with a version constraint once it is fixed)
- Compilers are much more uniform because new compilers have to emulate one of GCC, Clang, or MSVC at the start to show that they actually work with existing codebases
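
A minimal sketch of such a conditional codepath, assuming Linux means epoll and the BSDs/macOS mean kqueue (error handling omitted, and the two function names are illustrative; this only shows the shape of the code):

    #if defined(__linux__)
    #  include <sys/epoll.h>
    #else
    #  include <sys/types.h>
    #  include <sys/event.h>
    #  include <sys/time.h>
    #endif

    /* Create the platform's readiness-notification object. */
    int make_poller(void)
    {
    #if defined(__linux__)
        return epoll_create1(0);
    #else
        return kqueue();
    #endif
    }

    /* Register a file descriptor for "readable" events. */
    int watch_readable(int poller, int fd)
    {
    #if defined(__linux__)
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
        return epoll_ctl(poller, EPOLL_CTL_ADD, fd, &ev);
    #else
        struct kevent ev;
        EV_SET(&ev, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
        return kevent(poller, &ev, 1, NULL, 0, NULL);
    #endif
    }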

