
No more 32-bit Firefox support

Mozilla has announced that support for the Firefox browser on 32-bit systems ends with version 144. "For users who cannot transition immediately, Firefox ESR 140 will remain available — including 32-bit builds — and will continue to receive security updates until at least September 2026."


Missing the reason

Posted Sep 6, 2025 7:15 UTC (Sat) by kalvdans (subscriber, #82065) [Link] (62 responses)

A program written in portable C++ and Rust should be agnostic to the hardware architecture. Is it the SpiderMonkey JavaScript JIT that's dropping 32-bit targets? I wish their announcement had more details and links to where we could help out.

Missing the reason

Posted Sep 6, 2025 9:54 UTC (Sat) by kanru (subscriber, #63577) [Link] (13 responses)

Most often it just means resources are not going to be spent on running CI, producing release artifacts, and having people employed to fix the quirks of the architecture.

FF 32-bit support's probably going to be like its (direct) ALSA support, community driven

Posted Sep 6, 2025 16:36 UTC (Sat) by Duncan (guest, #6647) [Link] (12 responses)

I remember some years ago when FF dropped direct ALSA support in favor of pulse, which did indeed simply mean no upstream resources were being dedicated to continued support any longer. In that case the patches continued to be supported by the community, and Gentoo (at least) continued to offer it as a build option (USE flag) for the firefox package. While the Gentoo-packaged upstream-built firefox-bin package of course lost /direct/ ALSA support, Gentoo's alsa-only users who lacked the significant resources required to build FF themselves, or who otherwise wanted/needed to use the upstream binary, continued to be supported there via the apulse shim-lib.

The going was a bit rough for a few firefox releases (and here I was on chromium for a year or so), but eventually both firefox-bin via apulse and the firefox source package's direct ALSA support, now fully community driven, stabilized.

I was an amd64 (somewhat) early adopter (my first 64-bit was a dual socket original 3-digit model number AMD Opteron) and have been off 32-bit for over two decades now, but it's still a bit weird to think about 32-bit support actually dropping, remembering how we struggled with 32-bit assumptions back in the day...

Anyway, yeah, as with ALSA support, this probably means 32-bit firefox will still be available, but it'll be on the community to keep it that way and ensure stability. It is worth noting, however, that realistically, 32-bit builds have only been doable on 64-bit hosts for some time, due to the 32-bit 4-gig memory ceiling. That's really not enough for a full firefox build, these days, tho as I said I've been 64-bit-only myself for decades now (with even 32-bit multi-lib turned off for near that long), so I'm not entirely sure if it's still /technically/ doable on 32-bit (--jobs=1 build, without -pipe and avoiding a tmpfs build to minimize memory usage, likely 24 hours or longer) or not.

Tho with many distros dropping 32-bit as well, I wonder how long the community support will be viable, at least at production-level stability... Just because it can still be built and can still be started without an immediate crash, doesn't mean it is still stable and secure enough to use for banking, browsing at-all questionable websites, or even full media sites such as youtube. How long until 32-bit is stuck on links/lynx, etc.?

FF 32-bit support's probably going to be like its (direct) ALSA support, community driven

Posted Sep 6, 2025 20:08 UTC (Sat) by josh (subscriber, #17465) [Link] (11 responses)

What is the use case for continuing to use direct ALSA?

These days, pulse is largely dead in favor of pipewire, but I'm curious what leads people to want direct ALSA instead.

FF 32-bit support's probably going to be like its (direct) ALSA support, community driven

Posted Sep 7, 2025 5:00 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (9 responses)

Uhm, pulseaudio and pipewire both also just use ALSA underneath, so why not skip the ever-changing middleman? Worse, what if you want to use JACK atop ALSA instead, with latency-critical applications?

(Firefox works fine with just ALSA on trixie, FWIW.)

FF 32-bit support's probably going to be like its (direct) ALSA support, community driven

Posted Sep 7, 2025 5:14 UTC (Sun) by josh (subscriber, #17465) [Link] (8 responses)

> why not skip the ever-changing middleman?

Because pipewire makes things like screen sharing work reliably; because when an audio server is running, *not* talking to the audio server means going through an extra layer; because using pipewire makes it easier to control where different audio goes (or comes from)...
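For instance (a sketch, assuming a WirePlumber-based pipewire setup and its wpctl tool; the sink ID here is invented):

    wpctl status                                # list devices, sinks, and streams with their IDs
    wpctl set-default 55                        # route new streams to sink 55
    wpctl set-volume @DEFAULT_AUDIO_SINK@ 0.8   # set the default sink's volume to 80%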

> Worse, what if you want to use JACK atop ALSA instead, with latency-critical applications?

That wasn't the use case I was asking about. I was asking for the use case for directly using ALSA.

FF 32-bit support's probably going to be like its (direct) ALSA support, community driven

Posted Sep 7, 2025 10:06 UTC (Sun) by malmedal (subscriber, #56172) [Link] (2 responses)

> use case for directly using ALSA.

Simply that the sound plays without any distortion or skipping when I'm watching a video while running something heavy on the machine. With pulseaudio, I never set out to remove it from the start, only as a reaction to it messing up the sound, which would happen within a day or two after I set up a new machine. Pipewire has improved things a lot; it took three or four months before I noticed skipping and uninstalled it.

FF 32-bit support's probably going to be like its (direct) ALSA support, community driven

Posted Sep 7, 2025 11:55 UTC (Sun) by josh (subscriber, #17465) [Link] (1 responses)

Interesting. I've never observed that, but I'm sorry to hear that you did. Did you have rtkit installed, which pipewire uses to avoid that?

FF 32-bit support's probably going to be like its (direct) ALSA support, community driven

Posted Sep 7, 2025 13:04 UTC (Sun) by malmedal (subscriber, #56172) [Link]

Yes, I assume rtkit is the reason Pipewire worked until I ran something extra heavy. I had actually used ptrace to mlock the video file to avoid video stutters.

FF (direct) ALSA support

Posted Sep 9, 2025 5:25 UTC (Tue) by Duncan (guest, #6647) [Link] (4 responses)

>> why not skip the ever-changing middleman?

Indeed. Additionally, there are reasons specific to me, tho they don't apply to most Linux users, at least not to the extent they apply here, particularly since most install from distro (or flatpak/snap) binary packages.

As already mentioned I'm a Gentooer. While they do have prebuilt binaries available these days, like (I suspect) most Gentooers I prefer my own customized builds.

But one thing a gentooer building and importantly _upgrading_ packages themselves *quickly* learns is that having unused packages installed isn't just bad security practice, when you're building the upgrades, it's *expensive* in time and compute resources. There's a very high incentive to just uninstall those things you don't /really/ need, or even the ones you might need occasionally, but find yourself upgrading more often than you actually run them. And more to the point, that goes for the build options (gentoo USE flags) that control optional features on your leaf packages but pull in additional dependencies to supply those features, as well.

Put simply, having a package installed really is a different expense proposition when you're building it, /and/ /building/ /its/ /upgrades/, yourself, than if you're simply installing prebuilt binaries.

But that's the general case, which I could have replied with directly above. Here there are more specific points:

> Because pipewire makes things like screen sharing work reliably

It's not just screen sharing. In fact, here screen sharing would be a net negative, as the only conceivable screen sharing would be malware, so better not to have it available to be abused.

But the same mechanism is used (on wayland; X of course lets any app have at it) for screenshotting, and, at least on plasma, for the plasmashell taskmanager plasmoid's thumbnailing. For the latter, fortunately, kwin's taskswitcher plugins aren't affected, since kwin is the plasma wayland compositor. And with a near 25-MegaPx desktop (3 4k bigscreen TVs as computer monitors) and multiple desktops, window-stacking is often avoided, so focus-follows-mouse and the keyboard-triggered kwin task-switchers are enough, and I don't need nor have configured any plasmashell taskbars or the like. Screenshotting would be more useful, particularly for filing kde-related bugs, since I track live-git (via the gentoo/kde project overlay packages available for that purpose) for most of my kde stuff, but spectacle, plasma's screenshotter, requires pipewire.

But it's not worth it just for that. I can describe the problem in words in the bugs I file, and if they aren't immediately fixed (as bugs against live-git often are), someone else generally comes along eventually and provides screenshots where I can't, due to missing pipewire.

> because with an audio server running *not* talking to the audio server means going through an extra layer, because using pipewire makes it easier to control where different audio goes (or comes from)...

But there's no audio server here to talk to, so it's not running, and talking to alsa directly isn't an extra layer for the gentoo distro firefox. For firefox-bin, the gentoo-packaged mozilla-built binary (or for the mozilla-packaged user-installed binary), there's the apulse shim library as an extra layer, providing the pulse/pipewire api to the prebuilt binary, but that's still thinner than the non-shim full version of either would be.

And with only a single active audio device to talk to, there's no /need/ for pulse/pipewire device-routing.

Multi-stream audio merging, assuming your hardware can't handle it (my current hardware doesn't but my old hardware did), can still be useful, but alsa can software-merge streams via dmix if properly configured. I've had mixed results trying to get it working (it seems to work sometimes but not reliably), probably because the configuration and documentation for it are a bit arcane and I'm still doing something not quite right, but in theory it's available from alsa.
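(For reference, a minimal sketch of the sort of config I mean, assuming card 0, device 0; the PCM name and ipc_key are arbitrary, and I make no promises it's not the very config I keep getting wrong:

    cat >> ~/.asoundrc <<'EOF'
    # software-mix all default-device streams through dmix
    pcm.!default {
        type plug
        slave.pcm "dmixer"
    }
    pcm.dmixer {
        type dmix
        ipc_key 1025      # any integer unique among dmix users
        slave {
            pcm "hw:0,0"  # the real hardware device; adjust card/device
            rate 48000
        }
    }
    EOF
)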

Turns out, however, that most of the time I actually prefer my primary audio stream uninterrupted. This actually kind of surprised me when I switched to less capable hardware that didn't handle that itself, but I found most of the time I prefer /not/ having sound effects and even second "feature" streams interfering with my enjoyment of whatever I'm already playing!

So it turns out that's not something I'd want to install pulse/pipewire for either, both because alsa can at least theoretically handle it if properly configured, and because it turns out I don't actually want multiple streams most of the time anyway.

Now one thing I /did/ find worth digging into the alsa documentation and text-editing the alsa config for was the alsaequal 10-band equalizer. I believe a pulse-based equalizer is significantly closer to plug-and-play, but after setting up a bunch of presets and a script to activate them, plus adding another menu to my existing custom hotkey-activated menu setup so activating a new eq preset is a two-key sequence (using an "extra" key on my logitech keyboard as the first one), I'm quite happy indeed with how that turned out. =:^) (Tho FWIW I did run a firefox eq extension before that, primarily for youtube.)
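(Again a sketch, roughly following the alsaequal README and assuming card 0, device 0:

    cat >> ~/.asoundrc <<'EOF'
    # route the default device through the alsaequal LADSPA equalizer
    ctl.equal { type equal; }
    pcm.plugequal {
        type equal
        slave.pcm "plughw:0,0"   # adjust card/device
    }
    pcm.!default {
        type plug
        slave.pcm plugequal
    }
    EOF
    alsamixer -D equal   # adjust the ten bands; my preset script wraps this

The presets and hotkey menu are my own additions on top of that.)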

All that said, reports suggest pipewire's a more stable solution than pulse was, and definitely, pipewire's window snapshotting and thumbnailing functionality make it more practically useful than the audio-only pulse, so I'll probably switch to it "someday", maybe when/if I ever get that threadripper hardware upgrade I want and will be rebuilding for the new hardware anyway... But that could be a few years...

FF (direct) ALSA support

Posted Sep 9, 2025 9:39 UTC (Tue) by taladar (subscriber, #68407) [Link]

> But one thing a gentooer building and importantly _upgrading_ packages themselves *quickly* learns is that having unused packages installed isn't just bad security practice, when you're building the upgrades, it's *expensive* in time and compute resources. There's a very high incentive to just uninstall those things you don't /really/ need, or even the ones you might need occasionally, but find yourself upgrading more often than you actually run them.

As a fellow Gentoo user of many years I can't confirm that at all. The number of packages is basically meaningless for upgrade times; what does matter is the few really, really large packages that take ages, lots of disk space, and lots of RAM to build (looking at you, any fork of webkit/blink/chrome/...). Small packages in particular often take more time to unpack than to build, and even as binary packages they would not take significantly less time to install.

FF (direct) ALSA support

Posted Sep 9, 2025 12:21 UTC (Tue) by pizza (subscriber, #46) [Link] (2 responses)

> And with only a single active audio device to talk to, there's no /need/ for pulse/pipewire device-routing.

Good for you. But I hope you understand you are in a *tiny* minority.

The "normal" use case these days involves on-the-fly switching between built-in speakers and bluetooth. Or increasingly, bluetooth only. (And BT is a real PITA to use if you want to interact with ALSA directly. It exposes a lot of application bugs too, I might add. The same sort of bugs that were blamed on pulseaudio back in the day...)

FF (direct) ALSA support

Posted Sep 9, 2025 22:33 UTC (Tue) by mirabilos (subscriber, #84359) [Link]

Nah, Bluetooth is pretty much unusable, as it’s also on the 2.4 GHz band, which is already congested with microwaves, way too many WLANs from all the neighbours in the same house and in adjacent and opposite houses, and whatever that neighbour to one side is occasionally doing.

I just recently had my new employer give me a decent wired headset.

FF (direct) ALSA support

Posted Sep 9, 2025 22:59 UTC (Tue) by Duncan (guest, #6647) [Link]

>> And with only a single active audio device to talk to, there's no /need/ for pulse/pipewire device-routing.

> Good for you. But I hope you understand you are in a *tiny* minority.

Well, I did say this was for me, and that it didn't really apply to most Linux users (tho as edited -- believe it or not, my original was much longer than that long thing above -- that disclaimer came out rather weaker than in my original).

Of course for many of us that's what makes Linux so great, that it's open and not forcing a single-size-fits-all solution on everyone. All those "tiny minorities" together may not make a majority (in Linux scope I guess that'd be Android), but together they're anyway a significant minority, and the larger Linux community would be missing something quite valuable without them.

FF 32-bit support's probably going to be like its (direct) ALSA support, community driven

Posted Sep 18, 2025 14:35 UTC (Thu) by millihertz (guest, #175019) [Link]

I think you've answered your own question. Pipewire is unlikely to be the last word in desktop-level Linux audio, but ALSA clearly isn't ever going away (even if someone backs a Brinks Mat truck up to an enchanted well...) so once you manage to chance upon a stable ALSA config, it's likely to work forever.

Missing the reason

Posted Sep 6, 2025 16:20 UTC (Sat) by Nahor (subscriber, #51583) [Link]

> A program written in portable C++ and rust should be agnostic to the hardware architecture.

True in theory, false in practice.

Things like image/video decoding, encryption/decryption, ... are still dependent on the data width. You can write a portable version of those, but it won't be as performant as one coded with the data width in mind.
Similarly, 64-bit platforms also come with newer instructions and more registers, which can make the 64-bit implementation even more different from a 32-bit one.

I don't know if that affects Firefox (it's quite possible it does, for things like sandboxing/security), but a 64-bit pointer has some unused bits that can be used for other things (like tagging what a specific memory block is for). 32-bit pointers don't have that luxury and so require other workarounds, which are likely less performant and/or less secure. So Firefox would either need two different implementations, or it would use only one and make the 64-bit binary worse than it could be.

And 32-bit applications are limited to <4G of RAM. This might force Firefox to implement tricks to reduce the memory usage of today's unoptimized, ad-laden HTML pages.

And of course, as kanru mentioned, even if the code were 100% identical, there would still be a cost in having extra binaries to test and maintain.

Missing the reason

Posted Sep 6, 2025 19:04 UTC (Sat) by dilinger (subscriber, #2867) [Link] (42 responses)

It is surprising that they're not planning to continue support for 32-bit ARM.

Chromium dropped i386 (aka 32-bit x86) support a while back, but they still maintain 32-bit ARM support. That makes it pretty trivial to re-add i386 support with just a single small patch. However, completely dropping ALL 32-bit support is another matter entirely...

Missing the reason

Posted Sep 6, 2025 19:45 UTC (Sat) by amacater (subscriber, #790) [Link] (38 responses)

Support on 32-bit ARM? It's hard to "build" Firefox on memory-constrained hardware, so it's hardly likely to be a priority.

Missing the reason

Posted Sep 6, 2025 20:09 UTC (Sat) by josh (subscriber, #17465) [Link] (37 responses)

Support for 32-bit ARM isn't predicated on being able to *build* on 32-bit ARM; it could be cross-compiled.

That said, there are support and simplification advantages to ruling out 32-bit across the board.

Missing the reason

Posted Sep 6, 2025 20:25 UTC (Sat) by dilinger (subscriber, #2867) [Link] (3 responses)

Also for the record, debian continues to build both chromium and firefox on 32-bit arm(hf). It's done in virtualized environments on arm64 hosts, but it's still not cross-compiled. For chromium, we do end up needing to disable link-time optimization (LTO), even ThinLTO, in order to fit into the smaller memory.

Missing the reason

Posted Sep 6, 2025 21:46 UTC (Sat) by josh (subscriber, #17465) [Link] (2 responses)

I'm aware. If Debian keeps 32-bit architectures, it would probably be beneficial to create an infrastructure for cross-compiling Debian packages (and then separately testing them natively, if they have tests). Leaving aside software that just won't link in less than 4 GB, those systems need all the performance they can get, and things like LTO and BOLT and PGO would help with that.
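One-off cross builds are already possible with stock tooling, of course; a sketch (assuming an armhf target and the standard cross/nocheck build profiles, run from an unpacked source package):

    sudo apt install crossbuild-essential-armhf
    dpkg-buildpackage -a armhf -Pcross,nocheck -us -uc   # cross-build for armhf, skipping tests and signing

The missing piece is doing that systematically on the build infrastructure, plus the separate native test runs afterward.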

Missing the reason

Posted Sep 8, 2025 9:54 UTC (Mon) by Lennie (subscriber, #49641) [Link] (1 responses)

"If Debian keeps 32-bit architectures"

Well... funny you should mention that:

https://www.debian.org/releases/stable/release-notes/issu...

Missing the reason

Posted Sep 9, 2025 7:33 UTC (Tue) by jmalcolm (subscriber, #8876) [Link]

i386 in Debian was already not "386". In fact, Debian i386 would not even run on a Pentium, even in Debian 12. It needed a Pentium Pro (i686) at a minimum.

It should also be noted that AntiX is apparently continuing 32-bit support. Presumably AntiX-based distros like MX Linux and DSL will as well. If you are a fan of 32-bit Debian, this is a way to keep getting it.

Missing the reason

Posted Sep 7, 2025 5:01 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (32 responses)

Downsides as well, though.

I used to run i386 binaries of Firefox on x32/amd64 systems, to have a natural upper bound on its memory usage. Can’t eat up all system memory if it can address less than 4 GiB :þ

Worked very well.

Missing the reason

Posted Sep 7, 2025 5:17 UTC (Sun) by josh (subscriber, #17465) [Link] (9 responses)

Then it would just crash or fail, making it less reliable. Also, it has multiple processes these days, each of which has its own address space.

Missing the reason

Posted Sep 7, 2025 6:45 UTC (Sun) by Wol (subscriber, #4433) [Link] (2 responses)

You're missing the point. Those extra processes ARE the problem, because they're what's eating the memory.

And so what if firefox is less reliable, when the alternative is the system being less reliable? What's the point of firefox being better, if it takes down the system it's running on?

Cheers,
Wol

Missing the reason

Posted Sep 7, 2025 12:01 UTC (Sun) by josh (subscriber, #17465) [Link] (1 responses)

> You're missing the point. Those extra processes ARE the problem, because they're what's eating the memory.

I was responding to someone suggesting that they wanted 32-bit builds to limit memory usage. That won't help if the browser has multiple processes each using 4GB.

And multiple processes solve different problems that people care about: security and isolation. Those multiple processes help avoid the problem that one tab crashing or using too much memory means the *whole browser* gets killed.

Memory limits

Posted Sep 8, 2025 7:50 UTC (Mon) by arnd (subscriber, #8866) [Link]

Modern Firefox is likely to hit both the physical (installed) memory limits and the virtual addressing limits of 32-bit machines, and it really only gets worse over time:
  • On arm32 and x86-32 systems running a kernel config with HIGHMEM and VMSPLIT_3G, the per process address limit is 3GB, as the 32-bit address space is shared with the kernel's lowmem (768MB) and vmalloc (256MB) areas.
  • Physical memory on x86-32 hardware is theoretically limited to 4GB with CONFIG_HIGHMEM_4G, but in practice the chipsets have a 3GB limit at best. 32-bit arm systems can theoretically go up to 16GB with HIGHMEM and LPAE, but most SoCs cannot connect more than 2GB (four 256Mb x16 DDR3 chips).
  • The 3GB virtual address limit will typically get lowered to 1.75GB or 2GB once the kernel loses support for highmem, because one has to use CONFIG_VMSPLIT_2G to keep accessing 2GB of physical memory (lowmem). 32-bit systems with 3GB or more RAM at that point lose both physical memory and virtual address space.
  • On 64-bit systems, the virtual addressing can go up to 4GB for 32-bit compat tasks, so there may be cases where a particular 32-bit browser binary works fine when tested on 64-bit systems, but is unable to fit within the virtual limits on 32-bit kernels with the same amount of physical memory.
  • Both browser implementations and popular websites tend to use more memory every year, from a combination of added features and developers caring less about low-end devices. 3GB of physical memory is already tight for opening multiple tabs, while 3GB of address space may not be enough for a process showing a website with a lot of complex Javascript.
  • Mozilla is pushing integrated AI features into their browsers, which will likely push a lot of workloads over one of the above limits, making them completely unusable.

Missing the reason

Posted Sep 7, 2025 16:48 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (5 responses)

> Then it would just crash or fail

But that’s *precisely* what I want! The browser can always be restarted, using less memory than before and not loading possibly problematic tabs (or I can switch to lynx), but the rest of the system is not under memory pressure from a boring web browser.

> it has multiple processes these days, each of which has its own address space.

Yes, that’s why I used past tense when I said I used to use i386 builds.

I’d still prefer that these days…

Missing the reason

Posted Sep 7, 2025 18:02 UTC (Sun) by josh (subscriber, #17465) [Link] (4 responses)

For me, if my *entire* browser crashes, I lose most of the applications I have running, other than those running in a terminal. The move towards tab isolation (as well as general improvements in browser stability) has been a huge improvement.

Missing the reason

Posted Sep 7, 2025 18:27 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (2 responses)

Eh, Firefox reopens tabs on start after a crash. I often `xkill(1)` it to reclaim memory, even, and generally exit it when not in use.

Most of the programs I use run in a terminal, so…

Firefox memory leaks: alternative to a full browser kill and restart to recover the leaked memory

Posted Sep 10, 2025 0:23 UTC (Wed) by Duncan (guest, #6647) [Link] (1 responses)

I generally exit firefox (and anything else) when not in use as well, but I'm using a bit different technique for firefox memory management.

First detailing the memory issue I see: For me it's generally youtube, but I expect the same happens on other "infinite scroll" sites, particularly image-heavy ones (thumbnails, for youtube). My particular issue may be exacerbated by extensions: likely either HoverZoom+ when actively hovering over many thumbnails over time (thus assuming the memory for those is leaked), or (someone else's theory I read) uBlockOrigin's blocked connections somehow not fully closing and thus leaking memory over hours of blocking youtube ads, etc.

Regardless, using a single tab for hours on youtube, either searching (with perhaps a thousand hover-zoomed thumbnails over several hours), or playing multiple videos (tho fewer longer videos don't seem to leak as much as many short ones, could be from either many more blocked ads or many more thumbnails loaded, tho with fewer zoomed than in the first scenario), eventually uses multiple gigabytes of RAM, even sending me gigs deep into swap on a 16-gig-RAM system if I let it go long enough. (Tho I've never had it actually crash, likely because I have an equal 16-gig swap partition on each of four SSDs (so I can stripe them using equal swap-priority values for all four), thus 64 gig of swap in addition to 16 gig of RAM, and I've never let it get beyond single-digit gigs into swap.)

In the search scenario, if I middle-click to launch the actual videos in new tabs, then close those tabs in a relatively short time to go back and search further from the main search tab, the newer tabs don't seem to accumulate as much, and closing them doesn't seem to release much either -- the big memory usage is all in the main search tab that has stayed open. And simply refreshing the page doesn't reclaim the lost memory, nor does deleting the cache (which is on tmpfs so does use some memory), so the culprit isn't the tmpfs cache. And FWIW, forcing garbage collection from about:memory doesn't help either -- whatever's leaking, firefox doesn't consider it garbage to be collected!

*BUT*, either duplicating the tab (along with its session history) and closing the old one, or closing the offending tab and immediately doing a reopen-closed-tab (with another tab open -- say a new-tab on its default page, or one of the separate-tabbed video pages -- so the entire browser doesn't close), keeps the tab's session history but releases all that leaked memory, sometimes double-digit gigs of it!

So I don't close/kill the entire browser; I simply ensure at least one additional firefox tab is open so the browser itself doesn't close, then either duplicate the leaking tab and close the old one, or close and reopen the leaker. That gets my RAM back without killing the full browser.

Perhaps you'll find that useful as well, tho admittedly, in some cases it's likely less hassle to just do the kill and restart thing. (Much like after upgrading various in-use libraries and apps: reverting to a text-mode login, running something to tell you what's still using deleted/stale files and restarting it, plus possibly having systemd reload itself too if it's on the list, can usually avoid a full reboot, but it's often easier to just do the reboot and get everything fresh all at once, rather than doing it one app at a time.)

Firefox memory leaks: alternative to a full browser kill and restart to recover the leaked memory

Posted Sep 10, 2025 0:49 UTC (Wed) by mirabilos (subscriber, #84359) [Link]

Good point, I’ve used that as well.

Sometimes there are tabs that just use up lots of resources (RAM, but mostly one full CPU) in the background, despite disabling service workers, so I still do that.

Missing the reason

Posted Sep 18, 2025 14:44 UTC (Thu) by millihertz (guest, #175019) [Link]

My experience - of running 64-bit Firefox on an x64 system with 2-8GB RAM and no swap - is that while it doesn't crash, it does occasionally decide to try and use all of the 28GB it has allocated, which rapidly brings the entire system to a halt as executable pages are crowded out by dirty data pages with nowhere to go. This has been depressingly repeatable across every system I've tried it on, and is why I switched back to using 32-bit Firefox in the first place.

Call me old-fashioned, but I would rather have the browser segfault than DoS the entire system it's running on... which, as I understand it, isn't fantastic for security either, to address another point raised in these comments.

Missing the reason

Posted Sep 7, 2025 6:43 UTC (Sun) by hsivonen (subscriber, #91034) [Link] (20 responses)

Running a 32-bit build when you could run a 64-bit build, since it prevents the use of the larger address space for security: https://github.com/mozilla-firefox/firefox/blob/c79acad61...

With multi-process browsers, 64-bit is more about address space than about memory usage.

Missing the reason

Posted Sep 7, 2025 6:54 UTC (Sun) by hsivonen (subscriber, #91034) [Link] (2 responses)

Oops, bad editing. I meant to say running a 32-bit build is a bad idea when you could run a 64-bit build.

Costs and benefits are multi-dimensional

Posted Sep 7, 2025 9:42 UTC (Sun) by jreiser (subscriber, #11027) [Link] (1 responses)

A 64-bit build is not necessarily better in every dimension of cost than a 32-bit build, and users may regard different dimensions as more important. For instance, the hardware memory cache might have a lower miss ratio on a 32-bit build because a pointer occupies only half as much space. Also, using the "x32" code model on x86_64, which restricts process address space to 4GiB but still enables using 16 CPU registers, might compensate for some of the advantages of full 64-bit. Even pointer tagging can be done on 32-bit pointers, by segregating allocations that have different tag values into aligned 32KiB or 64KiB subspaces and associating the tag with the subspace instead of storing it directly in the pointer.

Costs and benefits are multi-dimensional

Posted Sep 7, 2025 9:49 UTC (Sun) by wtarreau (subscriber, #51152) [Link]

> A 64-bit build is not necessarily better in every dimension of cost than a 32-bit build

I absolutely agree. For example, my build farm at home runs on cortex A72 cores. After long testing, it appears that 32-bit thumb2 binaries are up to ~20% faster than aarch64 ones. Needless to say, I've built my toolchains for that target! In all my tests, aarch64 code is systematically bigger and slower than thumb2; it's sad that they've not worked on a compact and efficient instruction set for aarch64.

Missing the reason

Posted Sep 7, 2025 16:52 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (16 responses)

I know.

But at that time (before it split itself into multiple processes) it was still a very useful way to save on RAM.

I’d love to still be able to limit that, at least on the “8 GiB is the maximum you can put in” laptop.

Missing the reason

Posted Sep 7, 2025 17:01 UTC (Sun) by mb (subscriber, #50428) [Link] (15 responses)

Using the 32-bit version of an application just to limit its memory use sounds like using a hammer for a problem where you'd actually need a screwdriver.

Have you looked into cgroups for limiting the memory consumption of applications?
That would even work with multi process applications.

Missing the reason

Posted Sep 7, 2025 18:06 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (14 responses)

No, I’ve had enough trouble getting libvirt and/or Docker working because they want to use cgroups, and I’ve never seen any useful docs for how to use them other than “just use systemd, man, it’ll do it automagically for you”, and then there’s the v1 vs. v2 issue…

Missing the reason

Posted Sep 7, 2025 18:49 UTC (Sun) by mb (subscriber, #50428) [Link]

Reminds me of this:
https://xkcd.com/1172/

Missing the reason

Posted Sep 11, 2025 10:48 UTC (Thu) by kpfleming (subscriber, #23250) [Link] (6 responses)

I believe 'systemd-run' can do this now - launch any executable, creating a cgroup to contain it, and set various limits, including memory usage. It may be worth a try.

Missing the reason

Posted Sep 11, 2025 13:00 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

`systemd-run` has been able to do this for a long time. I've used it to make sure a build doesn't require more than 4 GB per TU by running a non-parallel build under a strict memory limit. But, AFAIK, it does require using systemd as your init system (or, rather, cgroup manager), so perhaps systemd-run is the "just use systemd, man" solution alluded to here.

Missing the reason

Posted Sep 11, 2025 22:27 UTC (Thu) by mbunkus (subscriber, #87248) [Link]

It sure can. It's as easy as:

systemd-run --user --property=MemoryMax=16G firefox

Missing the reason

Posted Sep 11, 2025 22:49 UTC (Thu) by mirabilos (subscriber, #84359) [Link] (3 responses)

Note the part in the comment you replied to which says:

> never seen any useful docs for how to use them other than “just use systemd, man

You fulfilled the cliché beautifully, though, I have to admit.

(No, I don’t and won’t run systemd, period.)

Missing the reason

Posted Sep 12, 2025 6:08 UTC (Fri) by zdzichu (subscriber, #17118) [Link] (2 responses)

You have to drive a nail into wood. People say "just use a hammer, man". And you're like "never a hammer, I'll manage with this round rock somehow". It's your fault you ignore the tools, so stop complaining.

Missing the reason

Posted Sep 12, 2025 15:03 UTC (Fri) by mirabilos (subscriber, #84359) [Link] (1 responses)

Except systemd is not the hammer; systemd is a cheap Asian version of a Swiss army pocket knife, with too many functions bundled.

Missing the reason

Posted Sep 12, 2025 20:58 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Like, Linux itself?

Missing the reason

Posted Sep 11, 2025 13:49 UTC (Thu) by intelfx (guest, #130118) [Link] (2 responses)

> because they want to use cgroups, and I’ve never seen any useful docs for how to use them other than “just use systemd, man, it’ll do it automagically for you”

Well, cgmanager exists (or, at least, existed). In its time it was absolutely a viable alternative to systemd, and docker used to support both. Upstream development stopped in 2020, presumably because nobody wanted to do it anymore.

It’s hardly systemd’s fault that it turned out so good that nobody actually desired to continue developing the alternatives.

> and then there’s the v1 vs. v2 issue

There is no issue. One is obsolete, the other is actively supported.

Missing the reason

Posted Sep 11, 2025 22:52 UTC (Thu) by mirabilos (subscriber, #84359) [Link] (1 responses)

Hm, no cgmanager in Debian (bullseye, but likely not trixie either).

> > and then there’s the v1 vs. v2 issue
>
> There is no issue. One is obsolete, other is actively supported.

Who cares about supported?

The issue is that some software will only work with one of them. (I *think* I had to mkdir and mount cgroups v1 stuff in trixie to get… something… to work, Docker maybe or libvirt).

Missing the reason

Posted Sep 11, 2025 23:23 UTC (Thu) by intelfx (guest, #130118) [Link]

> Hm, no cgmanager in Debian (bullseye, but likely not trixie either).

Precisely. Case in point.

> Who cares about supported?

Those who write the code that you complain about.

Missing the reason

Posted Sep 13, 2025 17:11 UTC (Sat) by donald.buczek (subscriber, #112892) [Link] (2 responses)

> cgroups, and I’ve never seen any useful docs for how to use them other than “just use systemd, man, it’ll do it automagically for you”

You and most people will probably already know this, but of course you can also use cgroups without any special software by manipulating the files and directories in /sys/fs/cgroup (or wherever else you mount the cgroup2 filesystem).
Everything is documented at https://docs.kernel.org/admin-guide/cgroup-v2.html.
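For example, a minimal sketch of a memory-capped group, assuming a cgroup2 mount at /sys/fs/cgroup, the memory controller enabled in the parent's cgroup.subtree_control, sufficient privileges (or a delegated subtree), and an arbitrary group name of "browser":

    mkdir /sys/fs/cgroup/browser
    echo 4G > /sys/fs/cgroup/browser/memory.max    # hard memory cap for the group
    echo $$ > /sys/fs/cgroup/browser/cgroup.procs  # move this shell into the group
    firefox                                        # children inherit the cgroup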

However, I don't know whether this helps in the context of libvirt and Docker. I'm not familiar with them and don't know what their requirements are.

Missing the reason

Posted Sep 13, 2025 17:38 UTC (Sat) by mirabilos (subscriber, #84359) [Link] (1 responses)

I did in fact not know that; I hadn’t researched cgroups in much detail.

Much appreciated.

Missing the reason

Posted Sep 16, 2025 7:11 UTC (Tue) by taladar (subscriber, #68407) [Link]

You might want to have a look at the cgroups(7) manpage to get an introduction to cgroups in general.

Missing the reason

Posted Sep 18, 2025 14:37 UTC (Thu) by millihertz (guest, #175019) [Link]

I still do exactly this, and it works a treat. By and large, limiting the js heap to 512MB stops it from crashing too; some javascript might break, but that's fine.

Missing the reason

Posted Sep 7, 2025 6:52 UTC (Sun) by hsivonen (subscriber, #91034) [Link]

The announcement talks about 32-bit Linux. 32-bit x86 is the only 32-bit Linux that Mozilla has shipped binaries for.

The announcement does not say anything about 32-bit Windows or 32-bit Android, and reading it as saying anything about 32-bit ARM buildability for desktop Linux is probably reading too much into it.

Missing the reason

Posted Sep 7, 2025 16:50 UTC (Sun) by mirabilos (subscriber, #84359) [Link]

Not supporting something does not have to mean actively dropping the code, though; one can hope.

Missing the reason

Posted Sep 8, 2025 7:41 UTC (Mon) by sylvestre (subscriber, #57054) [Link]

32-bit will still be supported

Missing the reason

Posted Sep 7, 2025 0:21 UTC (Sun) by quotemstr (subscriber, #45331) [Link] (2 responses)

> A program written in portable C++ and rust should be agnostic to the hardware architecture. Is it SpiderMonkey Javascript JIT that's dropping 32-bit targets? I wished their announcement had more details and links to where we could help out.

On a 64-bit machine, you have a lot more flexibility to use address space tricks (e.g. growable but stable arrays) and extra pointer bits than you do on a 32-bit machine, so there's a class of program that legitimately won't work on 32-bit. Yes, yes, in "portable C++" you can't rely on any of these things, but we write programs for real computers, not a standard committee's idealization.

Missing the reason

Posted Sep 7, 2025 20:28 UTC (Sun) by ibukanov (subscriber, #3942) [Link] (1 responses)

Neither Intel nor Arm yet allows the use of addresses beyond 48 bits, even though the address word is formally 64-bit. That means SpiderMonkey (the JS engine in Firefox) can use the NaN-boxing technique even on 64-bit: https://searchfox.org/firefox-main/source/js/public/Value... . However, when future processors start to support a bigger address space, like 52 bits or even more, SpiderMonkey will need to be redesigned.

Missing the reason

Posted Sep 8, 2025 11:52 UTC (Mon) by tao (subscriber, #17563) [Link]

48 bits? That's, what, 256TB?

I think that once the memory usage of your browser starts approaching those numbers, even Chrome and Firefox would start looking for memory leaks...

Missing the reason

Posted Sep 8, 2025 7:40 UTC (Mon) by sylvestre (subscriber, #57054) [Link]

Upstream here.
We are only stopping generating Linux 32-bit builds.
32-bit will still be supported, at least for ARM.

"supported"

Posted Sep 11, 2025 23:47 UTC (Thu) by josh (subscriber, #17465) [Link] (1 responses)

Quite a few comments here have been about what is supported, and whether it matters what's supported.

Upstreams don't necessarily want to support every possible configuration, and with limited resources and a desire to keep things feasible and maintainable for contributors, they need to prioritize. People downstream get incredibly upset when they discover their configuration or use case is not a priority and upstream is not willing to do some or all of the work for them.

In particular, many people act as if present-but-unsupported configurations impose no cost, rather than imposing continual costs on those working on the codebase. Sometimes upstreams decide they're no longer willing to pay those costs for configurations used by very few people. And even if someone were willing to put in effort to maintain it, often that effort is in the form of occasional patches, not making up for the continuous costs the configuration imposes.

Effectively, obscure configurations become an externality, where people wanting them to be supported expect others to shoulder some of the costs of those configurations, and they often downplay those costs. And some projects have started to decide they're not willing to shoulder those costs, which produces friction. It especially produces friction if a project has not previously made support levels explicit.

For a configuration like "Linux on arm64", for most projects, there's enough critical mass where a tiny tiny fraction of the potential users, working on the project, can easily bear the costs of support. For more obscure configurations, the number of people with that configuration may not be enough that they can collectively sustain all the software they want to run on that configuration. And there's an incentive, rather than giving up and moving to something more supported, to try to collectively *push others* to care more about that configuration, to do some of the work, to bear the externality. And because this becomes an existential threat to the continued viability of the obscure configuration (e.g. as it shrinks), those pushes to others have a lot of emotional valence, often in the form of anger and self-righteousness. And without care, that anger can get concentrated into communities that center themselves around obscure configurations.

In short, there's a continuous cycle, where waning configurations, as the burden to support them becomes larger than the benefit of supporting them, become sources of heavy strife within communities and projects. And it's important for projects to recognize this, and make support levels explicit, and have a means of ending support for configurations very explicitly and cleanly, without a long period of "if people get angry enough and righteous enough can they force upstream to put work into supporting them past the point where supporting them is sustainable".

"supported"

Posted Sep 12, 2025 7:58 UTC (Fri) by taladar (subscriber, #68407) [Link]

There is also a related effect where communities that shrink because they are less and less viable will end up with a much higher concentration of people resistant to change almost by definition as they are the least likely to change.

Similar effects can be seen in the real world with e.g. towns that used to have a viable industry to support them but that industry died. Flexible people move away, people resistant to change stay.

