LWN: Comments on "Linker limitations on 32-bit architectures" https://lwn.net/Articles/797303/ This is a special feed containing comments posted to the individual LWN article titled "Linker limitations on 32-bit architectures". en-us Thu, 25 Sep 2025 14:14:36 +0000 Thu, 25 Sep 2025 14:14:36 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Linker limitations on 32-bit architectures https://lwn.net/Articles/804449/ https://lwn.net/Articles/804449/ mcfrisk <div class="FormattedComment"> Chromium on Debian has not been compiling on i686 HW since 2014 <a rel="nofollow" href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=765169">https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=765169</a> so this isn't a new problem. The solution approved by the release and other teams was that i686 was compiled on amd64 machines and kernels in a 32-bit chroot. Solutions would have been nice back then but very few people cared enough. Heck, even on amd64 with modern C++ and tens of gigabytes of physical RAM, memory often runs out and the OOM killer wrecks builds. 
Finding reliable parallel build options is still a black art since tools only count threads and cores, not available memory.<br> Just ask anyone who bitbakes a lot :)<br> </div> Tue, 12 Nov 2019 19:19:54 +0000 ecological concerns and old hardware https://lwn.net/Articles/800766/ https://lwn.net/Articles/800766/ sammythesnake <div class="FormattedComment"> I just upgraded from 8GiB to 24 specifically because my web browser was constantly at 20GiB+ of virtual memory.<br> <p> 8GiB for a single web page is plenty, but even with discarded tabs, it's not enough for my usage patterns...<br> </div> Sat, 28 Sep 2019 07:56:48 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/799132/ https://lwn.net/Articles/799132/ marcH <div class="FormattedComment"> Going back I see I missed one of the comment threads above, sorry for the noise.<br> </div> Fri, 13 Sep 2019 15:41:35 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/799130/ https://lwn.net/Articles/799130/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; However, cross-compilation (in the sense used by these projects) invariably means the inability to run compiled code.</font><br> <p> So, developers of embedded systems don't regularly run tests on real hardware? 
That doesn't seem to make sense...<br> <br> </div> Fri, 13 Sep 2019 15:37:32 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/799071/ https://lwn.net/Articles/799071/ frostsnow <div class="FormattedComment"> This reminds me of the effort I had to put into the mfgtools' uuu flasher in order to flash the Librem5 from my 32-bit ARM device (the flasher was trying to mmap a &gt;3GB file): <a href="https://www.frostsnow.net/blog/2019-08-04.html">https://www.frostsnow.net/blog/2019-08-04.html</a><br> </div> Thu, 12 Sep 2019 22:31:42 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/798538/ https://lwn.net/Articles/798538/ flussence <div class="FormattedComment"> My Atom netbook gets used every day. Software is easy to deal with - either it works on my machine, or I throw it away and find another (and as I'm the technical one in most of my friend groups, that usually causes ripple effects that push things like Electron-based IM programs out).<br> </div> Thu, 05 Sep 2019 22:45:28 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/798500/ https://lwn.net/Articles/798500/ unprinted <div class="FormattedComment"> All the netbooks here are 32-bit only: Pentium-M or early Atom.<br> <p> They're not used much, but they're paid for and very portable when needed.<br> <p> The older Raspberry Pis - the Pi 2 A and B - are only four years old.<br> <p> <p> <p> </div> Thu, 05 Sep 2019 16:04:04 +0000 ecological concerns and old hardware https://lwn.net/Articles/798443/ https://lwn.net/Articles/798443/ HelloWorld <div class="FormattedComment"> 8 GB is plenty for a web browser even nowadays. Most phones have less than that and are perfectly capable of displaying most websites.<br> <p> I was using a machine with 8 GB until recently and it was perfectly capable of running KDE Plasma, Firefox and IntelliJ at the same time. It wasn't fast (though usable), but that was due to the slow CPU, not lack of RAM. 
I'd still be using it if it weren't for the fact that I had an opportunity to get a faster (used) machine for free.<br> </div> Thu, 05 Sep 2019 12:42:49 +0000 ecological concerns and old hardware https://lwn.net/Articles/798435/ https://lwn.net/Articles/798435/ davidgerard <p>I just had to replace my otherwise perfectly good 2013 desktop at the office with a new box, because the old box can't take more than 8 GB RAM. <p>I blame this <i>entirely</i> on web page bloat, where what was once HTML is now a virtual machine running several megabytes of JavaScript to lovingly render a few kilobytes of text. <p>I expect another raft of obsolescence when current CPUs start hitting the 38-bit address limit. Thu, 05 Sep 2019 10:25:32 +0000 ecological concerns and old hardware https://lwn.net/Articles/798233/ https://lwn.net/Articles/798233/ mstone_ <div class="FormattedComment"> The argument for keeping old hardware running fails to address the fact that far newer hardware is already being thrown out regardless of what any other individual is doing with their own hardware. A person can upgrade by simply replacing a 15 year old machine with a 5 year old machine heading to the dump, and take advantage of the improved power consumption, speed, and memory, without a net increase in the number of machines in the dump. <br> </div> Tue, 03 Sep 2019 15:11:42 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/798189/ https://lwn.net/Articles/798189/ dave4444 <div class="FormattedComment"> <p> Use -Wl,--hash-size=xxxx. That's the knob GNU ld has to adjust the memory/performance ratio. High hash-size results in a larger memory footprint for ld and faster link times for large executables. 
Low hash-size results in a lower memory footprint but longer link times.<br> <p> Having done large links for 32-bit builds, this makes quite a difference for link time (minutes), but if you've got to fit into a 2GB or 3GB virtual address space it can also help (at a longer link time).<br> </div> Tue, 03 Sep 2019 13:27:39 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797884/ https://lwn.net/Articles/797884/ pizza <div class="FormattedComment"> You are correct; I doublethunk myself into the wrong term.<br> </div> Fri, 30 Aug 2019 03:28:29 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797877/ https://lwn.net/Articles/797877/ antiphase <div class="FormattedComment"> Did you mean homogeneous?<br> </div> Thu, 29 Aug 2019 22:55:12 +0000 ecological concerns and old hardware https://lwn.net/Articles/797709/ https://lwn.net/Articles/797709/ eru <div class="FormattedComment"> This is how most old people I know use a computer. They turn it on for some specific tasks, usually for electronic banking in addition to the email, then turn it off and do something in the real world. And are very annoyed or even scared when the bank web site starts complaining the browser is too old, or certificates have expired and scary-looking warnings pop up. Fixing this may then require upgrading the browser, which may require upgrading the OS, which may require a new computer...<br> <p> </div> Thu, 29 Aug 2019 05:55:49 +0000 ecological concerns and old hardware https://lwn.net/Articles/797665/ https://lwn.net/Articles/797665/ arnd <div class="FormattedComment"> If you only turn it on once a week to check for email, keeping the old Pentium 4 makes sense ecologically and economically. 
For any daily use, it does not.<br> </div> Wed, 28 Aug 2019 20:44:48 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797654/ https://lwn.net/Articles/797654/ arnd <div class="FormattedComment"> I tried finding anything for sale recently, with no luck.<br> <p> <a href="https://www.mouser.de/Semiconductors/Embedded-Processors-Controllers/CPU-Central-Processing-Units/_/N-ba96s">https://www.mouser.de/Semiconductors/Embedded-Processors-...</a> still lists Quark SoCs, and while those can run embedded Linux, normal distros like Debian typically won't work. Lots of Atom chips and boards are still being sold, but they seem to all be 64-bit in practice. <a href="https://ark.intel.com/content/www/us/en/ark/products/codename/37567/tunnel-creek.html">https://ark.intel.com/content/www/us/en/ark/products/code...</a> lists some embedded Atoms from 2010 that were 32-bit only and are not officially discontinued, but are basically nowhere in stock as far as I can tell, neither chips nor boards.<br> <p> For non-Intel parts it looks even worse:<br> <p> <a href="https://www.heise.de/preisvergleich/?cat=mbson">https://www.heise.de/preisvergleich/?cat=mbson</a> lists two mainboards with 32-bit VIA CPUs (no Intel or AMD), but those are new old stock sold at 10x the price it was at 10 years ago. 
Zhaoxin took over VIA's x86 line, but they are all 64-bit now.<br> <p> DM&amp;P Vortex86DX and a few others are theoretically still around, but equally outdated and hard to find for sale anywhere, it's easier to find an ARMv4 or SH3 based system.<br> </div> Wed, 28 Aug 2019 20:39:26 +0000 ecological concerns and old hardware https://lwn.net/Articles/797658/ https://lwn.net/Articles/797658/ k8to <div class="FormattedComment"> A goal that requires people to stop caring about new features as a driver for adoption and payment seems doomed to failure.<br> <p> Sadly.<br> </div> Wed, 28 Aug 2019 20:10:49 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797650/ https://lwn.net/Articles/797650/ k8to <div class="FormattedComment"> Are you sure? I thought the last atom chips were discontinued. I'm sure there's someone making a SoC still, but I thought physical 32bit x86 was pretty dead.<br> <p> Of course service life for some systems is pretty long.<br> </div> Wed, 28 Aug 2019 19:56:51 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797643/ https://lwn.net/Articles/797643/ nix <div class="FormattedComment"> It could, in theory, iff people were happy with just type info (and, soon enough, backtrace info -- enough to do a bt and chase down argument types and inspect the args in the debugger). However, there is a tradeoff here: it is harder to link a CTF section than to link most other sections because we don't just append them to each other, we merge them together type-by-type and will soon deduplicate the types as we go. This consumes memory, though I hope to make it less memory to link than would be required to load all the CTF at once and then concatenate it. At the moment, we have no deduplicator to speak of, so it actually needs *more* memory than a concatenator would because of internal hashes for name lookup etc on top of the raw file data. 
This is a worst case and things will improve very soon.<br> <p> Essentially we choose to trade off memory in favour of disk space. This is the same decision that most parts of the toolchain take (it takes much more memory to compile or link a program than the size of the resulting binary) -- but I too sometimes build on small machines, and will try not to make the situation too much worse!<br> <p> I certainly hope to make linking CTF use much less memory than linking DWARF -- but that is mostly because DWARF is usually bigger, not because linking CTF is especially memory-efficient. However, this won't really help, since I expect that distros that adopt CTF will usually build with both CTF *and* DWARF, stripping the DWARF out to debuginfo packages and keeping the CTF around so that simpler stuff works without needing to install debuginfo packages. I can't imagine general-purpose distros abandoning DWARF generation entirely, so adding an extra format isn't going to reduce linker memory usage for them.<br> </div> Wed, 28 Aug 2019 19:33:55 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797640/ https://lwn.net/Articles/797640/ nix <blockquote> 32-bit x86 hardware has been completely irrelevant from a commercial point of view for a while now </blockquote> What? Intel is still making and selling 32-bit CPUs. Perhaps not at very large scale, but it seems unlikely that none of those machines are running Linux. (I'd bet that <i>most</i> of them are.) Wed, 28 Aug 2019 19:20:05 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797619/ https://lwn.net/Articles/797619/ madscientist <div class="FormattedComment"> Just FYI there already was a first step towards modernizing what autoconf can support... for example configure scripts generated by autoconf these days definitely DO use shell functions. 
That's been true for &gt;10 years, since autoconf 2.63.<br> <p> As far as supporting older systems, some of that depends on the software. Some GNU facilities make a very conscious effort to support VERY old systems; this is particularly true for "bootstrap" software. Others simply make assumptions instead, and don't add checks for those facilities into their configure.ac. It's not really up to autoconf what these packages decide to check (or not).<br> <p> Also, much modern GNU software takes advantage of gnulib which provides portable versions of less-than-portable facilities... sometimes it's not a matter of whether a particular system call is supported, but that it works differently (sometimes subtly differently) on different systems. That's still true today on systems like Solaris, various BSD variants, etc. even without considering IRIX.<br> </div> Wed, 28 Aug 2019 17:15:18 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797602/ https://lwn.net/Articles/797602/ halla <div class="FormattedComment"> I think you mean homogenous?<br> </div> Wed, 28 Aug 2019 15:34:54 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797556/ https://lwn.net/Articles/797556/ mathstuf <div class="FormattedComment"> Toolchains and platforms are much more uniform these days.<br> <p> - Any significant platform differences usually need conditional codepaths *anyways* (think epoll vs. 
kqueue)<br> - POSIX exists and has been effective at the core functionality (see the above for non-POSIX platforms)<br> - Broken platforms should fix their shit (your test suite should point this stuff out), but workarounds can be placed behind #ifdef for handling such brokenness (with a version constraint when it is fixed)<br> - Compilers are much more uniform because new compilers have to emulate one of GCC, Clang, or MSVC at the start to show that they are actually working with existing codebases<br> </div> Wed, 28 Aug 2019 14:20:01 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797552/ https://lwn.net/Articles/797552/ pizza <div class="FormattedComment"> Autoconf's insanity stems directly from the fact that it relies on the least-common denominator for, well, everything. It can't even assume the presence of a shell that supports function definitions.<br> <p> But one can make a case for revisiting some of those assumptions -- After all, "Unix-ish" systems are far more hetrogenous than they used to be. Does software produced today need to care about supporting ancient SunOS, IRIX or HPUX systems? Or pre-glibc2 Linux? Or &lt;32-bit systems?<br> </div> Wed, 28 Aug 2019 13:51:55 +0000 ecological concerns and old hardware https://lwn.net/Articles/797548/ https://lwn.net/Articles/797548/ eru <p> This argument may make sense for old desktop PCs, but not so much for laptops and other portable devices that had low power requirements to start with. <p> Asking a user to ditch a perfectly working computer just because of bloated new software just feels wrong. In many cases the advances in the new software are marginal. You can say the user should then stick to old versions, but this is usually not sustainable for other reasons, like no more security fixes for old software, or a network protocol change makes it not interoperable. 
(Of course the resulting upgrade cycle is what keeps the computer industry humming, so I really should keep my mouth shut). <p> It may also be the user would prefer to spend his dollars or euros on something else. But in modern societies, one is almost forced to have a computing device that can access the net. And the web pages keep bloating too, and adopting features supported only on the newer browsers, thus contributing to the upgrade treadmill. <p> It would be interesting to see estimates about the energy needed to make a laptop, and how it compares to its lifetime power usage. Computing devices contain some extremely refined raw materials, and etching and packaging the chips also takes energy. <p> &lt;/rant&gt; Wed, 28 Aug 2019 13:47:24 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797550/ https://lwn.net/Articles/797550/ Sesse <div class="FormattedComment"> That kind of “table-driven” configure was attempted during the 80s. It's a massive pain to maintain, which led directly to GNU autoconf.<br> </div> Wed, 28 Aug 2019 13:30:16 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797546/ https://lwn.net/Articles/797546/ mebrown <div class="FormattedComment"> "People" might be generous. There's a guy. 
:)<br> </div> Wed, 28 Aug 2019 13:04:48 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797543/ https://lwn.net/Articles/797543/ pizza <div class="FormattedComment"> Nobody is talking about running "modern" applications (much less locally compiling them) on those 16-bit x86 CPUs.<br> <p> And elks is "Linux-like", not "Linux".<br> <p> </div> Wed, 28 Aug 2019 12:51:24 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797539/ https://lwn.net/Articles/797539/ roc <div class="FormattedComment"> That's a good point.<br> <p> Note, however, that x86 can check alignment via the Alignment Check flag: <a href="https://stackoverflow.com/questions/1929588/how-to-catch-data-alignment-faults-on-x86-aka-sigbus-on-sparc">https://stackoverflow.com/questions/1929588/how-to-catch-...</a><br> so it might be possible to emulate alignment faults with no overhead.<br> </div> Wed, 28 Aug 2019 11:01:21 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797533/ https://lwn.net/Articles/797533/ chris_se <div class="FormattedComment"> <font class="QuotedText">&gt; The probability of a user-space test accidentally passing due to a QEMU bug has to be very remote.</font><br> <p> That's not quite true - for example, if you try to emulate a platform that faults on unaligned memory accesses on another platform that doesn't do that (or at least doesn't do it for all cases where the former faults), then the emulated version will typically not fault, for performance reasons.<br> <p> For example, arm64 typically supports unaligned access, but most arm32 systems don't, so running arm32 in an emulator on either e.g. 
arm64 or x86_64 will typically not catch these kinds of bugs.<br> <p> And that's just one example of a type of bug that qemu isn't able to catch.<br> </div> Wed, 28 Aug 2019 08:31:54 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797534/ https://lwn.net/Articles/797534/ Sesse <div class="FormattedComment"> <font class="QuotedText">&gt; If Debian stops supporting i386, someone better gift me new hardware.</font><br> <p> Why is it anyone else's responsibility to make sure you can run Debian?<br> <p> <font class="QuotedText">&gt; If Debian stops supporting m68k, I’ll be *SERIOUSLY* pissed because I invested about three years of my life into resurrecting it.</font><br> <p> You choosing to spend your time on m68k doesn't mean anyone else is obliged to.<br> </div> Wed, 28 Aug 2019 08:24:36 +0000 ecological concerns and old hardware https://lwn.net/Articles/797531/ https://lwn.net/Articles/797531/ vadim <div class="FormattedComment"> Yup. Modern hardware can pack into 10W what an old machine couldn't into 200W.<br> <p> There's no point in keeping that old Pentium around. Get yourself a Pi 4 or an Atom which will be much faster, have much more memory, support virtualization, and be supported by current software just fine all while having a much smaller power bill. Even a modern, desktop CPU is probably better power-wise due to all the advances in power saving.<br> </div> Wed, 28 Aug 2019 07:38:41 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797519/ https://lwn.net/Articles/797519/ mathstuf <div class="FormattedComment"> One should be able to preload the cache result for that check somehow. However, having worked with build systems a lot (I work on CMake), compile tests are bad, but run tests are worse. They break cross compilation, are really slow (generally) and should be done as preprocessor or runtime checks if possible. 
All sizeof(builtin_type) things have preprocessor definitions available; broken platform checks should just be done once and statically placed in the code (how much energy has been wasted seeing if "send" is a valid function? Or getting sizeof(float)?). Library viability checks are more problematic, but should be handled with version checks via the preprocessor. But bad habits persist :( .<br> <p> Basically: send a patch to PHP to stop doing such dumb things. Find out which platforms have a busted getaddrinfo and just #ifdef it in the code. They're not likely to be fixed any time soon anyways and when they do, someone will be throwing parties about it finally getting some love.<br> </div> Wed, 28 Aug 2019 02:03:08 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797518/ https://lwn.net/Articles/797518/ roc <div class="FormattedComment"> Interesting, thanks. Sounds like it would be fairly easy to fix with an option.<br> </div> Wed, 28 Aug 2019 00:34:12 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797512/ https://lwn.net/Articles/797512/ atai <div class="FormattedComment"> 32-bit limitations? There are people running Linux on 16-bit x86<br> <a href="https://github.com/jbruchon/elks">https://github.com/jbruchon/elks</a><br> </div> Tue, 27 Aug 2019 23:36:47 +0000 ecological concerns and old hardware https://lwn.net/Articles/797508/ https://lwn.net/Articles/797508/ JoeBuck <div class="FormattedComment"> It is sometimes argued that we should keep using old machines out of concern for the environment, but this ignores the high power consumption (both for direct power consumption and cooling) that is often required for a meager return in compute power. 
By continuing to use the old machine we don't have e-waste to dispose of, which is a good thing, but the electric bill and the waste heat could wind up being vastly more than a newer low-end machine would require.<br> </div> Tue, 27 Aug 2019 22:38:10 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797504/ https://lwn.net/Articles/797504/ foom <div class="FormattedComment"> Qemu has "fails to crash" bugs. They tend to increase emulation speed, by not checking unimportant edge cases.<br> <p> The most annoying one to me is that it doesn't check pointer alignment for load/store instructions which fail when misaligned on real hardware. E.g. ldm and ldrd on armv7 require 4-byte alignment (even though ldr does not), but qemu does not check this.<br> <p> It's unfortunately pretty easy to screw up alignment in C code, and cause your program to only crash on real hardware...<br> </div> Tue, 27 Aug 2019 22:28:11 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797507/ https://lwn.net/Articles/797507/ roc <div class="FormattedComment"> <font class="QuotedText">&gt; But the process-level emulation also has way more opportunities for bugs and weird behaviour.</font><br> <p> Actually I would guess full-system emulation (i.e. running the native kernel) would be more likely to show bugs and weird behavior. The CPU+hardware behavior exposed to the kernel is a lot more complicated and less well tested in general than that exposed to user-space.<br> <p> <font class="QuotedText">&gt; For example, if a shell script run under qemu-arm-static runs `cat /proc/cpuinfo` to check for some CPU capabilities, what will happen? Does QEMU have the smarts to notice that a read() system call is being made for /proc/cpuinfo and substitute some emulated values for some ARM CPU, or will it just read the /proc/cpuinfo from the host machine resulting in values for some x86 cpu?</font><br> <p> That is a good question. 
Issues like that could be fixed outside QEMU by running the process in a chroot environment. You may be doing that for cross-compilation anyway.<br> </div> Tue, 27 Aug 2019 22:24:16 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797506/ https://lwn.net/Articles/797506/ roc <div class="FormattedComment"> Yes, I agree that in some cases QEMU being slow would also be a bad argument.<br> </div> Tue, 27 Aug 2019 22:20:43 +0000 Linker limitations on 32-bit architectures https://lwn.net/Articles/797497/ https://lwn.net/Articles/797497/ dezgeg <div class="FormattedComment"> I wonder what the author meant by "QEMU-based emulation", does it potentially include full system emulation (qemu-system-FOO) or was it specific to process-level emulation (qemu-FOO-static). Because from the build system point of view, the process-level emulation is especially tempting (just replace call to 'make check' with something like 'qemu-FOO-static make check' ) than full system emulation (now you also need a kernel image, a root filesystem image, a way to copy the build tree to the VM etc.). But the process-level emulation also has way more opportunities for bugs and weird behaviour.<br> <p> For example, if a shell script run under qemu-arm-static runs `cat /proc/cpuinfo` to check for some CPU capabilities, what will happen? Does QEMU have the smarts to notice that a read() system call is being made for /proc/cpuinfo and substitute some emulated values for some ARM CPU, or will it just read the /proc/cpuinfo from the host machine resulting in values for some x86 cpu?<br> </div> Tue, 27 Aug 2019 22:16:12 +0000
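[Editor's note] The memory-aware parallelism mcfrisk asks for at the top of the thread ("tools only count threads and cores, not available memory") can be sketched as a small heuristic on Linux: cap the `make -j` job count by both CPU count and `MemAvailable` from /proc/meminfo. This is an illustrative sketch only, not part of any comment above; the 2 GiB-per-job estimate is a made-up placeholder that would need tuning per project (linking large C++ objects can need far more).

```python
# Sketch: choose a "make -j N" value bounded by available memory, not just CPUs.
# Assumption: each compile/link job needs roughly mem_per_job_gib of RAM.
import os

def build_jobs(mem_per_job_gib=2.0):
    cpus = os.cpu_count() or 1
    # MemAvailable is the kernel's estimate of memory usable without swapping.
    with open("/proc/meminfo") as f:
        meminfo = dict(line.split(":", 1) for line in f)
    avail_kib = int(meminfo["MemAvailable"].split()[0])
    # Convert the per-job estimate to KiB and see how many jobs fit.
    mem_jobs = max(1, int(avail_kib / (mem_per_job_gib * 1024 * 1024)))
    return min(cpus, mem_jobs)

print(build_jobs())
```

Something like `make -j"$(python3 jobs.py)"` then trades idle CPUs for not having the OOM killer wreck the build; bitbake and ninja users would apply the same number to their respective parallelism knobs.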