Missing the reason

Posted Sep 6, 2025 19:04 UTC (Sat) by dilinger (subscriber, #2867)
In reply to: Missing the reason by kalvdans
Parent article: No more 32-bit Firefox support

It is surprising that they're not planning to continue support for 32-bit ARM.

Chromium dropped i386 (aka 32-bit x86) support a while back, but they still maintain 32-bit ARM support. That makes it pretty trivial to re-add i386 support with just a single small patch. However, completely dropping ALL 32-bit support is another matter entirely.



Missing the reason

Posted Sep 6, 2025 19:45 UTC (Sat) by amacater (subscriber, #790)

Support on 32-bit ARM? It's hard to "build" Firefox on memory-constrained hardware, so it's hardly likely to be a priority.

Missing the reason

Posted Sep 6, 2025 20:09 UTC (Sat) by josh (subscriber, #17465)

Support for 32-bit ARM isn't predicated on being able to *build* on 32-bit ARM; it could be cross-compiled.

That said, there are support and simplification advantages to ruling out 32-bit across the board.

Missing the reason

Posted Sep 6, 2025 20:25 UTC (Sat) by dilinger (subscriber, #2867)

Also, for the record, Debian continues to build both chromium and firefox on 32-bit arm(hf). The builds are done in virtualized environments on arm64 hosts, but they're still not cross-compiled. For chromium, we do end up needing to disable link-time optimization (LTO), even ThinLTO, in order to fit the build into the limited memory.

Missing the reason

Posted Sep 6, 2025 21:46 UTC (Sat) by josh (subscriber, #17465)

I'm aware. If Debian keeps 32-bit architectures, it would probably be beneficial to create infrastructure for cross-compiling Debian packages (and then separately testing them natively, if they have tests). Leaving aside software that just won't link in less than 4 GB, those systems need all the performance they can get, and things like LTO, BOLT, and PGO would help with that.

Missing the reason

Posted Sep 8, 2025 9:54 UTC (Mon) by Lennie (subscriber, #49641)

"If Debian keeps 32-bit architectures"

Well... funny you should mention that:

https://www.debian.org/releases/stable/release-notes/issu...

Missing the reason

Posted Sep 9, 2025 7:33 UTC (Tue) by jmalcolm (subscriber, #8876)

i386 in Debian was already not "386". In fact, Debian i386 would not even run on a Pentium, even in Debian 12. It needed a Pentium Pro (i686) at a minimum.

It should also be noted that AntiX is apparently continuing 32-bit support. Presumably AntiX-based distros like MX Linux and DSL will as well. If you are a fan of 32-bit Debian, this is a way to keep getting it.

Missing the reason

Posted Sep 7, 2025 5:01 UTC (Sun) by mirabilos (subscriber, #84359)

Downsides as well, though.

I used to run i386 binaries of Firefox on x32/amd64 systems, to have a natural upper bound on its memory usage. Can’t eat up all system memory if it can address less than 4 GiB :þ

Worked very well.

Missing the reason

Posted Sep 7, 2025 5:17 UTC (Sun) by josh (subscriber, #17465)

Then it would just crash or fail, making it less reliable. Also, it has multiple processes these days, each of which has its own address space.

Missing the reason

Posted Sep 7, 2025 6:45 UTC (Sun) by Wol (subscriber, #4433)

You're missing the point. Those extra processes ARE the problem, because they're what's eating the memory.

And so what if Firefox is less reliable, when the alternative is the system being less reliable? What's the point of Firefox being better if it takes down the system it's running on?

Cheers,
Wol

Missing the reason

Posted Sep 7, 2025 12:01 UTC (Sun) by josh (subscriber, #17465)

> You're missing the point. Those extra processes ARE the problem, because they're what's eating the memory.

I was responding to someone suggesting that they wanted 32-bit builds to limit memory usage. That won't help if the browser has multiple processes each using 4GB.

And multiple processes solve different problems that people care about: security and isolation. Those multiple processes help avoid the problem that one tab crashing or using too much memory means the *whole browser* gets killed.

Memory limits

Posted Sep 8, 2025 7:50 UTC (Mon) by arnd (subscriber, #8866)

Modern Firefox is likely to hit both the physical (installed) memory limits and the virtual addressing limits of 32-bit machines, and it really only gets worse over time:
  • On arm32 and x86-32 systems running a kernel config with HIGHMEM and VMSPLIT_3G, the per-process address limit is 3GB, as the 32-bit address space is shared with the kernel's lowmem (768MB) and vmalloc (256MB) areas.
  • Physical memory on x86-32 hardware is theoretically limited to 4GB with CONFIG_HIGHMEM_4G, but in practice the chipsets have a 3GB limit at best. 32-bit ARM systems can theoretically go up to 16GB with HIGHMEM and LPAE, but most SoCs cannot connect more than 2GB (four 256M x16 DDR3 chips).
  • The 3GB virtual address limit will typically drop to 1.75GB or 2GB once the kernel loses support for highmem, because one then has to use CONFIG_VMSPLIT_2G to keep accessing 2GB of physical memory (lowmem). 32-bit systems with 3GB or more RAM at that point lose both physical memory and virtual address space.
  • On 64-bit systems, virtual addressing can go up to 4GB for 32-bit compat tasks, so there may be cases where a particular 32-bit browser binary works fine when tested on a 64-bit system, but is unable to fit within the virtual limits on a 32-bit kernel with the same amount of physical memory.
  • Both browser implementations and popular websites tend to use more memory every year, from a combination of added features and developers caring less about low-end devices. 3GB of physical memory is already tight for opening multiple tabs, while 3GB of address space may not be enough for a process showing a website with a lot of complex Javascript.
  • Mozilla is pushing integrated AI features into its browsers, which is likely to push a lot of workloads over one of the above limits, making the browser completely unusable on these machines.
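
As a rough way to see the address-space ceilings above in practice, the sketch below (added for illustration; it is not from the original comment) reserves PROT_NONE mappings until mmap() fails. No physical memory is touched, so the loop stops at the addressing limit rather than at OOM:

/* addrspace.c: probe how much virtual address space one process can reserve.
 * PROT_NONE + MAP_NORESERVE reserves address space without touching RAM.
 * (On 64-bit kernels, the vm.max_map_count sysctl usually ends the loop
 * long before the huge 64-bit address space is exhausted.) */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    const size_t chunk = 64UL << 20;   /* 64 MiB per reservation */
    size_t total = 0;

    while (mmap(NULL, chunk, PROT_NONE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0)
           != MAP_FAILED)
        total += chunk;

    printf("reserved ~%zu MiB of address space\n", total >> 20);
    return 0;
}

Built with -m32 and run as a compat task on a 64-bit kernel, this reports close to 4GiB; the same binary on a 32-bit VMSPLIT_3G kernel stops near 3GB, matching the limits above.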

Missing the reason

Posted Sep 7, 2025 16:48 UTC (Sun) by mirabilos (subscriber, #84359)

> Then it would just crash or fail

But that’s *precisely* what I want! The browser can always be restarted, using less memory than before and not loading possibly problematic tabs (or I can switch to lynx), but the rest of the system is not put under memory pressure by a boring web browser.

> it has multiple processes these days, each of which has its own address space.

Yes, that’s why I used past tense when I said I used to use i386 builds.

I’d still prefer that these days…

Missing the reason

Posted Sep 7, 2025 18:02 UTC (Sun) by josh (subscriber, #17465)

For me, if my *entire* browser crashes, I lose most of the applications I have running, other than those running in a terminal. The move towards tab isolation (as well as general improvements in browser stability) has been a huge improvement.

Missing the reason

Posted Sep 7, 2025 18:27 UTC (Sun) by mirabilos (subscriber, #84359)

Eh, Firefox reopens tabs on start after a crash. I even often `xkill(1)` it to reclaim memory, and I generally exit it when not in use.

Most of the programs I use run in a terminal, so…

Firefox memory leaks: alternative to a full browser kill and restart to recover the leaked memory

Posted Sep 10, 2025 0:23 UTC (Wed) by Duncan (guest, #6647)

I generally exit Firefox (and anything else) when not in use as well, but I use a somewhat different technique for Firefox memory management.

First, the memory issue I see: for me it's generally YouTube, but I expect the same happens on other "infinite scroll" sites, particularly image-heavy ones (thumbnails, in YouTube's case). My particular issue may be exacerbated by extensions: likely either HoverZoom+ when actively hovering over many thumbnails over time (assuming the memory for those is leaked), or, per someone else's theory I read, uBlock Origin's blocked connections somehow not fully closing and thus leaking memory over hours of blocking YouTube ads.

Regardless, using a single tab on YouTube for hours eventually uses multiple gigabytes of RAM, whether searching (with perhaps a thousand hover-zoomed thumbnails over several hours) or playing multiple videos (though a few longer videos don't seem to leak as much as many short ones, which could be from either many more blocked ads or many more thumbnails loaded). If I let it go long enough, it even sends me gigabytes deep into swap on a 16GB-RAM system. (I've never had it actually crash, though, likely because I have an equal 16GB swap partition on each of four SSDs, striped using equal swap-priority values, so 64GB of swap in addition to 16GB of RAM, and I've never let it get beyond single-digit gigabytes into swap.)

In the search scenario, if I middle-click to launch the actual videos in new tabs, then close those tabs relatively quickly to go back and search further from the main search tab, the newer tabs don't seem to accumulate much, and closing them doesn't seem to release much either -- the big memory usage is all in the main search tab that has stayed open. Simply refreshing the page doesn't reclaim the lost memory, nor does deleting the cache (which is on tmpfs, so it does use some memory), so the culprit isn't the tmpfs cache. And FWIW, forcing garbage collection from about:memory doesn't help either -- whatever's leaking, Firefox doesn't consider it garbage to be collected!

*BUT*: either duplicating the tab (along with its session history) and closing the old one, or closing the offending tab and immediately doing a reopen-closed-tab (with another tab open first, say a new tab on its default page or one of the separately tabbed video pages, so the entire browser doesn't close), keeps the tab's session history but releases all that leaked memory, sometimes double-digit gigabytes of it!

So I don't close/kill the entire browser; I simply ensure at least one additional Firefox tab is open so the browser itself doesn't close, then either duplicate the leaking tab and close the old one, or close and reopen the leaker. That gets my RAM back without killing the full browser.

Perhaps you'll find that useful as well, though admittedly, in some cases it's likely less hassle to just kill and restart. (Much as, after upgrading various in-use libraries and apps, a full reboot can usually be avoided by reverting to a text-mode login, running something to tell you what's still using deleted/stale files, restarting those programs, and possibly having systemd reload itself if it's on the list, it's often just easier to reboot and get everything fresh all at once rather than doing it one app at a time.)

Firefox memory leaks: alternative to a full browser kill and restart to recover the leaked memory

Posted Sep 10, 2025 0:49 UTC (Wed) by mirabilos (subscriber, #84359)

Good point, I’ve used that as well.

Sometimes there are tabs that just use up lots of resources (RAM, but mostly one full CPU) in the background, despite disabling service workers, so I still do that.

Missing the reason

Posted Sep 18, 2025 14:44 UTC (Thu) by millihertz (guest, #175019)

My experience, from running 64-bit Firefox on x64 systems with 2-8GB RAM and no swap, is that while it doesn't crash, it does occasionally decide to try to use all of the 28GB it has allocated, which rapidly brings the entire system to a halt as executable pages are crowded out by dirty data pages with nowhere to go. This has been depressingly repeatable on every system I've tried it on, and is why I switched back to using 32-bit Firefox in the first place.

Call me old-fashioned, but I would rather have the browser segfault than DoS the entire system it's running on... which, as I understand it, isn't fantastic for security either, to address another point raised in these comments.

Missing the reason

Posted Sep 7, 2025 6:43 UTC (Sun) by hsivonen (subscriber, #91034)

Running a 32-bit build when you could run a 64-bit build, since it prevents the use of the larger address space for security: https://github.com/mozilla-firefox/firefox/blob/c79acad61...

With multi-process browsers, 64-bit is more about address space than about memory usage.

Missing the reason

Posted Sep 7, 2025 6:54 UTC (Sun) by hsivonen (subscriber, #91034)

Oops, bad editing. I meant to say running a 32-bit build is a bad idea when you could run a 64-bit build.

Costs and benefits are multi-dimensional

Posted Sep 7, 2025 9:42 UTC (Sun) by jreiser (subscriber, #11027)

A 64-bit build is not necessarily better in every dimension of cost than a 32-bit build, and users may regard different dimensions as more important. For instance, the hardware memory cache might have a lower miss ratio on a 32-bit build because a pointer occupies only half as much space. Also, using the "x32" code model on x86_64, which restricts process address space to 4GiB but still enables using 16 CPU registers, might compensate for some of the advantages of full 64-bit. Even pointer tagging can be done on 32-bit pointers by segregating allocations that have different tag values into aligned 32KiB or 64KiB subspaces, and associating the tag with the subspace instead of directly in the pointer.
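
As an illustration of that last idea (a sketch of the general technique, not jreiser's actual code), allocations carrying the same tag can be served from 64KiB-aligned blocks, with the tag kept in a side table indexed by block number, so the pointer itself stays an ordinary 32-bit value:

/* Subspace tagging sketch; assumes a 32-bit address space (build with -m32),
 * so the side table needs only 2^16 one-byte entries (64 KiB). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SUBSPACE_SHIFT 16                        /* 64 KiB subspaces */
#define NSUBSPACES     (1u << (32 - SUBSPACE_SHIFT))

static uint8_t subspace_tag[NSUBSPACES];         /* tag per 64 KiB block */

/* One aligned block per call keeps the sketch short; a real allocator
 * would carve many same-tag objects out of each 64 KiB block. */
static void *alloc_tagged(size_t size, uint8_t tag)
{
    void *block;
    if (posix_memalign(&block, 1u << SUBSPACE_SHIFT, size))
        return NULL;
    subspace_tag[(uintptr_t)block >> SUBSPACE_SHIFT] = tag;
    return block;
}

/* Works for interior pointers too: any address in the block maps back. */
static uint8_t tag_of(const void *p)
{
    return subspace_tag[(uintptr_t)p >> SUBSPACE_SHIFT];
}

int main(void)
{
    char *a = alloc_tagged(100, 3);
    printf("tag(a)=%u tag(a+50)=%u\n", tag_of(a), tag_of(a + 50));
    free(a);
    return 0;
}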

Costs and benefits are multi-dimensional

Posted Sep 7, 2025 9:49 UTC (Sun) by wtarreau (subscriber, #51152)

> A 64-bit build is not necessarily better in every dimension of cost than a 32-bit build

I absolutely agree. For example, my build farm at home runs on Cortex-A72 cores. After long testing, it appears that 32-bit thumb2 binaries are up to ~20% faster than aarch64 ones. Needless to say, I've built my toolchains for that target! In all my tests, aarch64 code is systematically bigger and slower than thumb2; it's sad that they've not worked on a compact and efficient instruction set for aarch64.

Missing the reason

Posted Sep 7, 2025 16:52 UTC (Sun) by mirabilos (subscriber, #84359)

I know.

But at that time (before it split itself into multiple processes) it was still a very useful way to save on RAM.

I’d love to still be able to limit that, at least on the “8 GiB is the maximum you can put in” laptop.

Missing the reason

Posted Sep 7, 2025 17:01 UTC (Sun) by mb (subscriber, #50428)

Using the 32-bit version of an application just to limit its memory use sounds like using a hammer for a problem that actually calls for a screwdriver.

Have you looked into cgroups for limiting the memory consumption of applications?
That would even work with multi-process applications.

Missing the reason

Posted Sep 7, 2025 18:06 UTC (Sun) by mirabilos (subscriber, #84359)

No, I’ve had enough trouble getting libvirt and/or Docker working because they want to use cgroups, and I’ve never seen any useful docs for how to use them other than “just use systemd, man, it’ll do it automagically for you”, and then there’s the v1 vs. v2 issue…

Missing the reason

Posted Sep 7, 2025 18:49 UTC (Sun) by mb (subscriber, #50428)

Reminds me of this:
https://xkcd.com/1172/

Missing the reason

Posted Sep 11, 2025 10:48 UTC (Thu) by kpfleming (subscriber, #23250)

I believe 'systemd-run' can do this now: launch any executable, create a cgroup to contain it, and set various limits, including memory usage. It may be worth a try.

Missing the reason

Posted Sep 11, 2025 13:00 UTC (Thu) by mathstuf (subscriber, #69389)

`systemd-run` has been able to do this for a long time. I've used it to make sure a build doesn't require more than 4 GB per TU (translation unit) by running a non-parallel build under a strict memory limit. But, AFAIK, it does require using systemd as your init system (or, rather, as your cgroup manager), so perhaps systemd-run is the "just use systemd, man" solution alluded to here.

Missing the reason

Posted Sep 11, 2025 22:27 UTC (Thu) by mbunkus (subscriber, #87248)

It sure can. It's as easy as:

systemd-run --user --property=MemoryMax=16G firefox

Missing the reason

Posted Sep 11, 2025 22:49 UTC (Thu) by mirabilos (subscriber, #84359)

Note the part in the comment you replied to which says:

> never seen any useful docs for how to use them other than “just use systemd, man

You fulfilled the cliché beautifully, though, I have to admit.

(No, I don’t and won’t run systemd, period.)

Missing the reason

Posted Sep 12, 2025 6:08 UTC (Fri) by zdzichu (guest, #17118)

You have to drive a nail into wood. People say "just use a hammer, man". And you are like "never a hammer, I will manage with this round rock somehow". It's your fault if you ignore the tools, so stop complaining.

Missing the reason

Posted Sep 12, 2025 15:03 UTC (Fri) by mirabilos (subscriber, #84359)

Except systemd is not the hammer; systemd is a cheap Asian version of a Swiss Army pocket knife, with too many functions bundled.

Missing the reason

Posted Sep 12, 2025 20:58 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)

Like, Linux itself?

Missing the reason

Posted Sep 11, 2025 13:49 UTC (Thu) by intelfx (subscriber, #130118)

> because they want to use cgroups, and I’ve never seen any useful docs for how to use them other than “just use systemd, man, it’ll do it automagically for you”

Well, cgmanager exists (or, at least, existed). In its time it was absolutely a viable alternative to systemd, and Docker used to support both. Upstream development stopped in 2020, presumably because nobody wanted to do it anymore.

It’s hardly systemd’s fault that it turned out so good that nobody actually wanted to continue developing the alternatives.

> and then there’s the v1 vs. v2 issue

There is no issue. One is obsolete; the other is actively supported.

Missing the reason

Posted Sep 11, 2025 22:52 UTC (Thu) by mirabilos (subscriber, #84359)

Hm, no cgmanager in Debian (bullseye, but likely not trixie either).

> > and then there’s the v1 vs. v2 issue
>
> There is no issue. One is obsolete, other is actively supported.

Who cares about supported?

The issue is that some software will only work with one of them. (I *think* I had to mkdir and mount cgroups v1 stuff in trixie to get… something… to work, Docker maybe or libvirt).

Missing the reason

Posted Sep 11, 2025 23:23 UTC (Thu) by intelfx (subscriber, #130118)

> Hm, no cgmanager in Debian (bullseye, but likely not trixie either).

Precisely. Case in point.

> Who cares about supported?

Those who write the code that you complain about.

Missing the reason

Posted Sep 13, 2025 17:11 UTC (Sat) by donald.buczek (subscriber, #112892)

> cgroups, and I’ve never saw any useful docs for how to use them other than “just use systemd, man, it’ll do it automagically for you”

You and most people will probably already know this, but of course you can also use cgroups without any special software by manipulating the files and directories in /sys/fs/cgroup (or wherever else you mount the cgroup2 filesystem).
Everything is documented at https://docs.kernel.org/admin-guide/cgroup-v2.html.
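
For the "cap one application's memory" case discussed upthread, that amounts to a mkdir and two file writes. A minimal sketch under those assumptions (cgroup2 mounted at /sys/fs/cgroup, enough privilege to create a child group there; the group name "memlimit" and the 4G limit are invented for illustration):

/* cgrun.c: run a command under a memory cap using raw cgroup v2 files.
 * If memory.max does not appear in the new group, the memory controller
 * may first need enabling in the parent:
 *     echo +memory > /sys/fs/cgroup/cgroup.subtree_control */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f || fputs(val, f) < 0 || fclose(f) != 0) {
        perror(path);
        exit(1);
    }
}

int main(int argc, char **argv)
{
    char pid[32];

    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    mkdir("/sys/fs/cgroup/memlimit", 0755);                  /* new group; EEXIST on reruns is fine */
    write_file("/sys/fs/cgroup/memlimit/memory.max", "4G");  /* hard limit */
    snprintf(pid, sizeof(pid), "%d", (int)getpid());
    write_file("/sys/fs/cgroup/memlimit/cgroup.procs", pid); /* join the group */

    execvp(argv[1], argv + 1);   /* children inherit cgroup membership */
    perror("execvp");
    return 1;
}

Something like "./cgrun firefox" would then keep the whole multi-process browser under the cap, since forked children stay in the cgroup and the limit applies to the group as a whole.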

However, I don't know whether this helps in the context of libvirt and Docker. I'm not familiar with them and don't know what their requirements are.

Missing the reason

Posted Sep 13, 2025 17:38 UTC (Sat) by mirabilos (subscriber, #84359)

I did in fact not know that; I hadn’t researched cgroups in much detail.

Much appreciated.

Missing the reason

Posted Sep 16, 2025 7:11 UTC (Tue) by taladar (subscriber, #68407)

You might want to have a look at the cgroups(7) manpage to get an introduction to cgroups in general.

Missing the reason

Posted Sep 18, 2025 14:37 UTC (Thu) by millihertz (guest, #175019)

I still do exactly this, and it works a treat. By and large, limiting the JS heap to 512MB stops it from crashing too; some JavaScript might break, but that's fine.

Missing the reason

Posted Sep 7, 2025 6:52 UTC (Sun) by hsivonen (subscriber, #91034)

The announcement talks about 32-bit Linux. 32-bit x86 is the only 32-bit Linux that Mozilla has shipped binaries for.

The announcement does not say anything about 32-bit Windows or 32-bit Android, and reading it as saying anything about 32-bit ARM buildability for desktop Linux is probably reading too much into it.

Missing the reason

Posted Sep 7, 2025 16:50 UTC (Sun) by mirabilos (subscriber, #84359)

Not supporting does not have to mean actively dropping the code, though; one can hope.

Missing the reason

Posted Sep 8, 2025 7:41 UTC (Mon) by sylvestre (subscriber, #57054)

32-bit will still be supported.

