
The future of 32-bit support in the kernel

By Jonathan Corbet
September 1, 2025

OSS EU
Arnd Bergmann started his Open Source Summit Europe 2025 talk with a clear statement of position: 32-bit systems are obsolete when it comes to use in any sort of new products. The only reason to work with them at this point is when there is existing hardware and software to support. Since Bergmann is the overall maintainer for architecture support in the kernel, he is frequently asked whether 32-bit support can be removed. So, he concluded, the time has come to talk more about that possibility.

People naturally think about desktop machines first, he continued. If you were running Linux in the 1990s, you had a 32-bit desktop system. Unix systems, though, moved to 64-bit platforms around 30 years ago, and the Linux desktop made that move about 20 years ago. Even phones and related devices have been 64-bit for the last decade. If those systems were all that Linux had to support, 32-bit support would have long since been removed from the kernel. He summarized the situation with this slide, showing how the non-embedded architectures have transitioned to either 64-bit or nonexistence over time:

[History of non-embedded 32-bit platforms]

The world is not all desktops — or servers — though; embedded Linux exists as well. About 90% of those systems are running on Arm processors. The kernel has accumulated a lot of devicetree files describing those systems over the years; only in this last year has the number of devicetrees for armv8 (64-bit) systems exceeded the number for armv7 (32-bit) systems.

For Arm processors with pre-armv7 architectures, there are only three for which it is still possible to buy hardware, but a number are still supported by the kernel community:

[History of pre-armv7 platforms]

Many other pre-armv7 CPUs are out of production, but the kernel still has support for them. Of those, he said, there are about ten that could be removed now. It would be nice to be able to say that support for the others will be removed after a fixed period, ten years perhaps, but hardware support does not work that way. Instead, one has to think in terms of half-lives; every so often, it becomes possible to remove support for half of the platforms. It all depends on whether there are users for the processors in question.

The kernel is still adding support for some 32-bit boards, he said, but at least ten new 64-bit boards gain support for each 32-bit one.

There are a number of non-Arm 32-bit architectures that still have support in the kernel; these include arc, microblaze, nios2, openrisc, rv32, sparc/leon, and xtensa. All of them are being replaced by RISC-V processors in new products. RISC-V is what you use if you don't care about Arm compatibility, he said.

Then, there is the dusty corner where nommu (processors without a memory-management unit) live; these include armv7-m and the nommu variants of m68k, superh, and xtensa. Nobody is building anything with this kind of hardware now, and the only people who are working on them in any way are those who have to support existing systems. "Or to prove that it can be done."

There are still some people who need to run 32-bit applications that cannot be updated; the solution he has been pushing people toward is to run a 32-bit user space on a 64-bit kernel. This is a good solution for memory-constrained systems; switching to 32-bit halves the memory usage of the system. Since, on most systems, almost all memory is used by user space, running a 64-bit kernel has a relatively small cost. Please, he asked, do not run 32-bit kernels on 64-bit processors.
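
As a quick illustration (not from the talk): a program built as a 32-bit binary and run on a 64-bit kernel with CONFIG_COMPAT still sees 4-byte pointers, whatever kernel is underneath:

```c
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
	struct utsname u;

	uname(&u);
	/* The kernel may report a compat machine string (e.g. "armv8l")
	   to 32-bit processes; the release string is the 64-bit kernel's. */
	printf("kernel: %s %s (%s)\n", u.sysname, u.release, u.machine);
	/* A 32-bit process sees 4-byte pointers regardless of the kernel;
	   a 64-bit process sees 8-byte ones. */
	printf("pointer size: %zu bytes\n", sizeof(void *));
	return 0;
}
```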

There are some definite pain points that come with maintaining 32-bit support; most of the complaints, he said, come from developers in the memory-management subsystem. The biggest problem there is the need to support high memory; it is complex, and requires support throughout the kernel. High memory is needed when the kernel lacks the address space to map all of the installed physical memory; the limit tends to be at about 800MB on 32-bit systems. (See this article for more information about high memory.)
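
For the curious, here is a minimal sketch (mine, not from the talk) of the pattern that high memory forces on kernel code; a highmem page has no permanent kernel mapping, so it must be temporarily mapped before the CPU can touch it:

```c
#include <linux/highmem.h>
#include <linux/string.h>

/* Hypothetical helper: zero a page that may live in high memory.  On
   64-bit kernels, kmap_local_page() is essentially free because all
   physical memory is permanently mapped; on 32-bit kernels it must
   set up, and later tear down, a temporary mapping. */
static void zero_page_maybe_highmem(struct page *page)
{
	void *addr = kmap_local_page(page);

	memset(addr, 0, PAGE_SIZE);
	kunmap_local(addr);
}
```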

Currently the kernel is able to support 32-bit systems with up to 16GB of installed memory. Such systems are exceedingly rare, though, and support for them will be going away soon. There are a few 4GB systems out there, including some Chromebooks. Systems with 2GB are a bit more common. Even these systems, he said, are "a bit silly" since the memory costs more than the CPU does. There are some use cases for such systems, though. Most 32-bit systems now have less than 1GB of installed memory. The kernel, soon, will not try to support systems with more than 4GB.

There are some ideas out there for how to support the larger-memory 32-bit systems without needing the high-memory abstraction. Linus Walleij is working on entirely separating the kernel and user-space address spaces, giving each 4GB to work with; this is a variant of the "4G/4G" approach that has been tried occasionally over the years. It is difficult to make such a system work efficiently, so this effort may never succeed, Bergmann said.

Another approach is the proposed "densemem" memory model, which does some fancy remapping to close holes in the physical address space. Densemem can support up to 2GB and is needed to replace the SPARSEMEM memory model, the removal of which will eventually be necessary in any case. This work has to be completed before high memory can be removed; Bergmann said that he would be interested in hearing from potential users of the densemem approach.

One other possibility is to drop high memory, but allow the extra physical memory to be used as a zram swap device. That would not be as efficient as accessing the memory directly, but it is relatively simple and would make it possible to drop the complexity of high memory.

Then, there is the year-2038 problem, which he spent several years working on. The kernel-side work was finished in 2020; the musl C library was updated that same year, and the GNU C Library followed the year after. Some distributors have been faster than others to incorporate this work; Debian and Ubuntu have only become year-2038-safe this year.

The year-2038 problem is not yet completely solved, though; there are a lot of packages that have unfixed bugs in this area. Anything using futex(), he said, has about a 50% chance of getting time handling right. The legacy 32-bit system calls, which are not year-2038 safe, are still enabled in the kernel, but they will go away at some point, exposing more bugs. There are languages, including Python and Rust, that have a lot of broken language bindings. Overall, he said, he does not expect any 32-bit desktop system to survive the year-2038 apocalypse.
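
The core failure is easy to demonstrate with a minimal sketch (assuming glibc 2.34 or later for the _TIME_BITS option): built with a 32-bit time_t, the epoch counter runs out in January 2038; rebuilt with -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64, the same code keeps working:

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
	printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));

	/* 2^31 - 1 seconds: 2038-01-19 03:14:07 UTC, the last moment
	   representable in a signed 32-bit time_t. */
	time_t t = 2147483647;
	printf("last 32-bit second: %s", ctime(&t));
	return 0;
}
```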

A related problem is big-endian support; most big-endian embedded systems are 32-bit, and big-endian is equally obsolete. Its removal is blocked because IBM is still supporting big-endian mainframe and PowerPC systems; as long as that support continues, big-endian support will stay in the kernel.

A number of other types of support are under discussion. There were once 32-bit systems with more than eight CPUs, but nobody is using those machines anymore, so support could be removed. Support for armv4 processors, such as the DEC StrongARM CPU, should be removed. Support for early armv6 CPUs, including the omap2 and i.mx31, "complicates everything"; he would like to remove it, even though there are still some Nokia 770 systems in the wild. The time is coming for the removal of all non-devicetree board files. Removal of support for Cortex-M CPUs, which are nommu systems, is coming in a couple of years. Developers are eyeing i486 CPU support, but its removal is not imminent. Bergmann has sent patches to remove support for KVM on 32-bit CPUs, but there is still "one PowerPC user", so that support will be kept for now.

To summarize, he said, the kernel will have to retain support for armv7 systems for at least another ten years. Boards are still being produced with these CPUs, so even ten years may be optimistic for removal. Everything else, he said, will probably fade away sooner than that. The removal of high-memory support has been penciled in for sometime around 2027, and nommu support around 2028. There will, naturally, need to be more discussion before these removals can happen.

An audience member asked how developers know whether a processor is still in use or not; Bergmann acknowledged that it can be a hard question. For x86 support, he looked at a lot of old web pages to make a list of which systems existed, then showed that each of those systems was already broken in current kernels for other reasons; the lack of complaints showed that there were no users. For others, it is necessary to dig through the Git history, see what kinds of changes are being made, and ask the developers who have worked on the code; they are the ones who will know who is using that support.

Another person asked about whether the kernel would support big-endian RISC-V systems. Bergmann answered that those systems are not supported now, and he hoped that it would stay that way. "With RISC-V, anybody can do anything, so they do, but it is not always a good idea". The final question was about support for nommu esp32 CPUs; he answered that patches for those CPUs exist, but have not been sent upstream. Those processors are "a cool toy", but he does not see any practical application for them.

The slides for this talk are available. The curious may also want to look at Bergmann's 2020 take on this topic.

[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting my travel to this event.]


64-bit big-endian

Posted Sep 1, 2025 19:55 UTC (Mon) by jrtc27 (subscriber, #107748) [Link] (12 responses)

> A related problem is big-endian support, which is also 32-bit only, and also obsolete

I don't understand this point. 64-bit big-endian systems exist in the form of sparc64 and s390x (and powerpc64, though that's less of a thing given powerpc64le).

64-bit big-endian

Posted Sep 1, 2025 21:47 UTC (Mon) by mjw (subscriber, #16740) [Link] (10 responses)

That also confused me, but I think it is a typo in the article.
The slides don't say that big-endian is 32-bit only, but "Equally obsolete as 32-bit".

64-bit big-endian

Posted Sep 1, 2025 22:27 UTC (Mon) by sam_c (subscriber, #139836) [Link] (7 responses)

I think it's a bit of a strange take given s390x is alive-and-well. It seems well-maintained in the toolchain and broader userland as far as I've encountered too. I'd be interested in why they said that.

(Of course, BE isn't the future, I just don't think the platforms are all rotting near-universally as 32-bit ones are on average.)

64-bit big-endian

Posted Sep 2, 2025 8:07 UTC (Tue) by arnd (subscriber, #8866) [Link] (6 responses)

This was pretty much the point I was trying to make in the presentation: IBM's continued investment in Linux for z17+ and POWER11+ servers is what keeps big-endian embedded alive, the same way that all 32-bit support hinges on the continued need to support embedded ARMv7 systems.

For the moment, the problems are mainly in newly written code, where portability to both big-endian and 32-bit targets relies on someone actively testing those and submitting fixes.

This is in contrast to nommu and highmem systems that no longer have the same level of commercial backing and are much further into the inevitable bitrot.

[I believe the article citing "big-endian support, which is also 32-bit only" was a transcription error in an otherwise excellent report -- what I actually said is that the majority of big-endian (embedded) systems are 32-bit, while pointing to s390x/powerpc64 servers as the ones that keep them viable.]

64-bit big-endian

Posted Sep 9, 2025 20:58 UTC (Tue) by anton (subscriber, #25547) [Link]

IBM switched their Linux support for Power to little-endian when introducing OpenPower (Power 8 or Power 9). They completely dropped support for big-endian AFAICT, and other people seem to have picked up that attitude, too.

Long Term Support - one reason why

Posted Sep 13, 2025 18:02 UTC (Sat) by jjs (guest, #10315) [Link] (4 responses)

No idea how things are now, but around 25 years ago, I was working a job that included working with IBM mainframes. From attending conventions and meeting others who had IBM mainframes, some orgs were running 30+ year old binaries on their systems. Often, they had no access to the source code, but that was OK, it did the job, and they could surround it with new software to handle the new things. And if they needed help, they called IBM, who was prepared to support.

Why? Because it worked, and it made them money.

Also saw this with DEC VAX and HP MP/3000 systems. There was a reason these companies maintained such backward compatibility. $$$.

Again, been out of that arena for a long time, no idea current status (other than DEC is gone, HP is split up).

Long Term Support - one reason why

Posted Sep 15, 2025 11:41 UTC (Mon) by paulj (subscriber, #341) [Link]

My first proper job was in the contract support team at DEC. DEC were still supporting PDP 11's - that was maybe around '98 or '99. DEC were also supporting all kinds of weird old Unix boxes from a number of other vendors. I still remember getting a call about some kind of Siemens Unix box - first I knew Siemens ever had their own Unix boxes! Years later, I worked for HP, and... there was still a little group there supporting some app for some particular customer on PDP-11s. That was circa 2016 to 2018!

DEC^WCompaq^WHP enterprise support is a huge business, and some portion of that is charging a *lot* of money supporting weird old shit for a small set of customers who need it and can afford it.

Long Term Support - one reason why

Posted Sep 15, 2025 11:56 UTC (Mon) by farnz (subscriber, #17727) [Link] (2 responses)

There's still jobs out there based around supporting old binaries on PDP-11s, because updating your compliance certificate for a modification of the existing code on the existing hardware is a small thing, but updating it for new hardware or new code is a big thing.

The nuclear power industry has a chunk of this, for example - you know that the old hardware and software broadly does the right thing, but you need to make a slight change, and it's safer to slightly change the current software on old hardware designs than to rewrite the software or change the hardware significantly. And you paid DEC huge money for copies of PDP-11 schematics, PCB layouts etc, so that you can still manufacture new PDP-11s now that DEC isn't around.

Long Term Support - one reason why

Posted Sep 16, 2025 9:32 UTC (Tue) by paulj (subscriber, #341) [Link] (1 responses)

I bet you can /still/ pay DEC^WHPE large amounts of money and get PDP-11 support - as per a sibling comment, _for sure_ HP still had at least 1 specialised group doing PDP-11 support in 2016 - 2018.

DEC was still _manufacturing_ PDP-11 compatible machines up to around 1995!

Long Term Support - one reason why

Posted Sep 16, 2025 9:59 UTC (Tue) by Wol (subscriber, #4433) [Link]

https://en.wikipedia.org/wiki/PDP-11

The last design was released in 1990, production ceased (by DEC) in 1997. "Authorised clones" may have lasted slightly longer. Not bad for a design released in 1970.

Cheers,
Wol

64-bit big-endian

Posted Sep 2, 2025 7:05 UTC (Tue) by glaubitz (subscriber, #96452) [Link] (1 responses)

» The slides don't say that big-endian is 32-bit only, but "Equally obsolete as 32-bit".«

Does IBM already know that s390x is supposed to be obsolete now?

64-bit big-endian

Posted Sep 2, 2025 12:53 UTC (Tue) by maxfragg (guest, #122266) [Link]

isn't it sufficient, that everyone except IBM knows it?

64-bit big-endian

Posted Sep 5, 2025 11:22 UTC (Fri) by nowster (subscriber, #67) [Link]

I could have sworn I had some 64-bit big-endian MIPS patches accepted into the kernel about ten years ago...

Not all types are words / pointers

Posted Sep 1, 2025 19:58 UTC (Mon) by jrtc27 (subscriber, #107748) [Link] (4 responses)

> switching to 32-bit halves the memory usage of the system

That's not really true. Halving is the limit of what you could hope to achieve, if every data type in your system was a machine word or pointer. But char, short, int and long long don't change, (u)intNN_t don't change, and float/double don't change, so depending on what data you're actually operating on you can see anywhere between almost no change (e.g. some purely computational workload on large amounts of raw data) and halving the memory usage.
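
(A small illustration of this point, not part of the original comment: compile the same structures for ILP32 and LP64, and only the pointer-heavy one changes size.)

```c
#include <stdio.h>
#include <stdint.h>

struct node {			/* pointer-heavy: shrinks on 32-bit */
	struct node *next;
	struct node *prev;
	long key;
};

struct sample {			/* fixed-width data: identical everywhere */
	int32_t x, y;
	double value;
};

int main(void)
{
	/* Typically prints 24 and 16 on LP64, 12 and 16 on ILP32. */
	printf("node: %zu, sample: %zu\n",
	       sizeof(struct node), sizeof(struct sample));
	return 0;
}
```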

Not all types are words / pointers

Posted Sep 1, 2025 20:24 UTC (Mon) by arnd (subscriber, #8866) [Link] (3 responses)

This was based on actual measurements I did with very small workloads, trying to find the smallest configuration that could still boot and run a given Debian (armhf/arm64) rootfs with identical packages installed.

I had expected to see a much smaller difference between the two environments, like you describe, but the numbers I saw for anything I tried were always roughly 100% overhead for the 64-bit code compared to armhf.

I think this is a combination of multiple effects I measured in addition to the obvious sizeof(long) difference: arm64 has longer instruction words than armv7/thumb2, small malloc() calls are aligned to larger cache lines (128 vs 64 bytes), ELF sections have a larger default alignment (64K vs 4K), etc.

For many real workloads, a larger portion of RAM is of course going to be filled with the same user data (text, video, sensor input, network packets, ...) so the difference becomes smaller; for the most memory-constrained systems, 100% overhead is a surprisingly good estimate that holds true for both kernel data and userland.

Not all types are words / pointers

Posted Sep 2, 2025 5:59 UTC (Tue) by jrtc27 (subscriber, #107748) [Link] (2 responses)

For sure, a minimal system is going to be one of the worse cases, because there's no real data there, just data structures to track a whole lot of nothing, which means pointers, offsets and sizes galore.

Not all types are words / pointers

Posted Sep 5, 2025 13:40 UTC (Fri) by dave4444 (subscriber, #127523) [Link] (1 responses)

There are savings to be had, but not 50%. It depends on whether the application has pointer-heavy data structures or not. A 32-bit user space can be a life saver for memory-constrained 64-bit CPUs though; CONFIG_COMPAT is well worth it.
Good for:
* pointer-heavy data structures
* computation where _base_ data types and 32-bit registers are sufficient
Bad for:
* older compilers/toolchains that assume some instructions are not available and cannot optimize with them
* heavy use of int64 or double datatypes, as those require multiple instructions/registers

RAM usage will be reduced, but compiled code is bigger and performance can decrease. It's all a tradeoff.

Not all types are words / pointers

Posted Sep 5, 2025 16:52 UTC (Fri) by pizza (subscriber, #46) [Link]

> * computation where _base_ data types and 32bit registers are sufficient

It's worth noting that for Arm and x86, the 64-bit instruction sets more than doubled the number of general-purpose registers available to software, and that _usually_ more than offsets the performance hit from the increased pointer size.

Big-endian RISC-V

Posted Sep 1, 2025 20:03 UTC (Mon) by jrtc27 (subscriber, #107748) [Link] (14 responses)

> Another person asked about whether the kernel would support big-endian RISC-V systems. Bergmann answered that those systems are not supported now, and he hoped that it would stay that way.

Unfortunately people are building those systems and others are proposing supporting them[1]. It is a real shame this is the approach that's being taken rather than having dedicated byte-swapping load/store instructions to accelerate processing big-endian data structures like network packets, which is the main use case that I'm aware of, outside of being compatible with old software written for big-endian mainframes.

[1] https://lore.kernel.org/all/20250822165248.289802-1-ben.d...

Big-endian RISC-V

Posted Sep 2, 2025 7:06 UTC (Tue) by glaubitz (subscriber, #96452) [Link] (13 responses)

> Unfortunately people are building those systems and others are proposing supporting them[1].

Why is this unfortunate? I don't understand. Is Linux supposed to run only on architectures that a small group of people likes?

Big-endian RISC-V

Posted Sep 2, 2025 7:55 UTC (Tue) by jrtc27 (subscriber, #107748) [Link] (12 responses)

I don't have an inherent problem with big-endian; you know I've helped you fix endianness portability issues over the years.

What I have a problem with is needless fragmentation and needless complexity within one specific architecture, in this case RISC-V. Sometimes the right design choice is to say no to options. Nobody was building big-endian RISC-V hardware, nobody was seriously writing big-endian RISC-V software and everything was just fine, but now everyone is expected to support yet another variant of RISC-V that doesn't really achieve anything except create a whole lot of work. It's not some new and interesting architecture that approaches things differently that happens to be big-endian, it is just RISC-V but with big-endian because nobody was willing to say no.

Big-endian RISC-V

Posted Sep 2, 2025 8:06 UTC (Tue) by glaubitz (subscriber, #96452) [Link] (11 responses)

> I don't have an inherent problem with big-endian; you know I've helped you fix endianness portability issues over the years.

I wasn't actually looking at the username and only realized now it was you ;-).

> What I have a problem with is needless fragmentation and needless complexity within one specific architecture, in this case RISC-V.

I think this is a developer-centric perspective rather than a user-centric perspective. If there are legitimate use cases such as building open-source networking hardware based on RISC-V, then it should be legitimate to add big-endian support if someone is willing to maintain it.

Big-endian RISC-V

Posted Sep 2, 2025 8:43 UTC (Tue) by arnd (subscriber, #8866) [Link] (5 responses)

I can see why MIPS are building big-endian support into their RISC-V cores, and I think this does make a lot of sense for their designs, as it gives their customers a migration path for the non-Linux operating systems that they had on earlier MIPS cores, the same way that every single Arm Cortex-A core supports both little-endian and big-endian operating systems.

Adding big-endian Armv8 support to Linux made sense in 2013 as users were still porting software from big-endian MIPS/Octeon or PowerPC/QorIQ systems, but it ended up being a mistake for the same reason that Jessica explained about RISC-V: it's an extra ABI that requires testing resources in order to keep running, for very little practical use.

Now that all the Linux networking applications have moved to arm64le, the only remaining use case for arm64be is to have an easily available platform to find and fix endianness bugs, typically in a VM guest running on an LE host. This means we can't easily remove it from the arm64 kernel, but it would still be a mistake to add it to riscv64 Linux.

Big-endian RISC-V

Posted Sep 2, 2025 9:01 UTC (Tue) by glaubitz (subscriber, #96452) [Link] (3 responses)

> Now that all the Linux networking applications moved to arm64le

How do you know that?

Big-endian RISC-V

Posted Sep 2, 2025 9:14 UTC (Tue) by arnd (subscriber, #8866) [Link] (2 responses)

We previously had three companies that made networking SoCs advertising arm64be support as an optional feature, and paying Linaro to maintain that in mainline kernel and toolchains. All of them have gone from being actively interested in this to sending nonportable driver code for newer products in the same line, which was clearly never tested in big-endian mode.

Additionally, all newer networking SoCs come with SystemReady certification these days, but SystemReady requires booting in little-endian UEFI mode rather than the traditional Linux boot interface that supports both big-endian and little-endian kernels.

Big-endian RISC-V

Posted Sep 2, 2025 14:02 UTC (Tue) by hailfinger (subscriber, #76962) [Link] (1 responses)

I'm curious if that SystemReady certification also applies to the just-announced Realtek RTL9335. All previous Realtek ethernet switch SoCs were MIPS32, and that new SoC may also be a MIPS thing (no datasheet available yet).

Big-endian RISC-V

Posted Sep 2, 2025 16:10 UTC (Tue) by arnd (subscriber, #8866) [Link]

SystemReady only applies to 64-bit Arm CPUs, and usually only on the beefier side.

Realtek's other product lines like the rtl819x wifi line or the STB/NAS rtd1xxx chips already migrated from big-endian mips32 lexra to little-endian mips32 r24k/r34k/r74k, then little-endian arm32 and now little-endian arm64, so I would expect them to do a similar progression here, possibly skipping one or two steps.

Big-endian RISC-V

Posted Sep 2, 2025 10:19 UTC (Tue) by pabs (subscriber, #43278) [Link]

Surely networking devices use Linux instead of other OSes? I know my DSL router SoC is 32-bit big-endian MIPSr1 from Broadcom running a Linux install with busybox/etc. It is of course violating the BSD and GPL licenses, and running an ancient security-unsupported Linux kernel, with a terribly restricted build config. I used to be able to run a Debian chroot on it though, until they bumped the baseline past r1 and later dropped mips support entirely. I expect the various DSL SFP devices are running Linux too, probably with old MIPS too.

Big-endian RISC-V

Posted Sep 2, 2025 21:20 UTC (Tue) by jrtc27 (subscriber, #107748) [Link] (3 responses)

> I think this is a developer-centric perspective rather than a user-centric perspective. If there are legitimate use cases such as building open-source networking hardware based on RISC-V, then it should be legitimate to add big-endian support if someone is willing to maintain it.

But that's my point; you don't need all your normal system data to be big-endian there, all you need is for your *network packets* to be big-endian. I remain unconvinced that it would not be better to accelerate a traditionally little-endian architecture for networking by adding dedicated load/store-as-big-endian (i.e., byte swap to/from the cache) instructions on top of an otherwise little-endian system. That way there is no new ABI, no new incompatible ISA, nothing for developers to do except for networking software developers to make sure their code is written to take advantage of such instructions if available, but in reality you'd mostly expect a compiler to be able to fuse things (and maybe you'd add some special annotations on fields so they'd automatically be byte-swapped for you). PowerPC has just that in the form of l[hwd]brx - see https://godbolt.org/z/MvYz9sabG - and I can't help but feel this approach is being overlooked.
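
(An illustrative sketch of that approach, beyond what the comment itself shows: an explicit byte-swapping load that GCC and Clang can fuse into lwbrx, MOVBE, or a load plus REV, depending on the target.)

```c
#include <stdint.h>
#include <string.h>

/* Read a 32-bit big-endian field, e.g. from a network packet, on a
   little-endian machine.  Compilers recognize the memcpy() plus
   __builtin_bswap32() pair and emit a byte-reversing load where the
   ISA provides one. */
static inline uint32_t load_be32(const void *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));
	return __builtin_bswap32(v);
}
```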

Big-endian RISC-V

Posted Sep 3, 2025 8:16 UTC (Wed) by farnz (subscriber, #17727) [Link] (2 responses)

x86-64 has MOVBE which byte-swaps on loads.

AArch64 takes the other practical approach; it has REV, REV16, and REV32 instructions to accelerate byte-swapping after the load or before the store.

Big-endian RISC-V

Posted Sep 3, 2025 20:18 UTC (Wed) by jrtc27 (subscriber, #107748) [Link]

RISC-V has REV8 to byte-swap a whole register, and if you want to swap less than a machine word you then have to right-shift.

But I imagine that even load+reverse as two instructions on AArch64 makes some of the big-endian folks unhappy still and they really don't want it to have any more overhead than a normal load.

Good to know x86 has MOVBE, and that my general principle of "if you can think of a sensible instruction, x86 and/or PowerPC has probably already done it" holds even more strongly in this case :)

Big-endian RISC-V

Posted Sep 3, 2025 20:51 UTC (Wed) by excors (subscriber, #95769) [Link]

32-bit MOVBE decodes to 2-3 uops on Intel desktop cores [1], so I'm not sure it's any better than MOV+BSWAP. It originated on Atom where it's 1 uop (so it did have a performance benefit there, presumably with some dedicated silicon), and I guess its spread to desktop cores is just for software compatibility and not performance.

[1] https://uops.info/html-instr/MOVBE_R32_M32.html / https://uops.info/html-instr/MOVBE_M32_R32.html

Big-endian RISC-V

Posted Sep 4, 2025 6:36 UTC (Thu) by maskray (subscriber, #112608) [Link]

ELF's design emphasizes natural size and alignment guidelines for its control structures. This principle, outlined in _Proceedings of the Summer 1990 USENIX Conference, ELF: An Object File to Mitigate Mischievous Misoneism_ , promotes ease of random access for structures like program headers, section headers, and symbols.

> All data structures that the object file format defines follow the "natural" size and alignment guidelines for the relevant class. If necessary, data structures contain explicit padding to ensure 4-byte alignment for 4-byte objects, to force structure sizes to a multiple of four, etc. Data also have suitable alignment from the beginning of the file. Thus, for example, a structure containing an Elf32_Addr member will be aligned on a 4-byte boundary within the file. Other classes would have appropriately scaled definitions. To illustrate, the 64-bit class would define Elf64_Addr as an 8-byte object, aligned on an 8-byte boundary. Following the strictest alignment for each object allows the format to work on any machine in a class. That is, all ELF structures on all 32-bit machines have congruent templates. For portability, ELF uses neither bit-fields nor floating-point values, because their representations vary, even among processors with the same byte order. Of course the programs in an ELF file may use these types, but the format itself does not.

This consideration made sense in 1990 but now seems less relevant given the overwhelming prevalence of little-endian architectures.

Now big-endian support manifests either in code duplication (separate template instantiation) or runtime overhead.

```cpp
template void elf::doIcf<ELF32LE>(Ctx &);
template void elf::doIcf<ELF32BE>(Ctx &);
template void elf::doIcf<ELF64LE>(Ctx &);
template void elf::doIcf<ELF64BE>(Ctx &);
```

Today, lld's RISC-V support assumes little-endian. If we ever need to add big-endian support, we would either introduce template instantiation and make the executable larger, or make read/write calls slower for little-endian users.

To maintain confidence in refactoring, all new code requires testing. For big-endian we need to duplicate tests. Feature developers or reviewers would have to think about whether a change works on big-endian, or accept the imperfection.

longevity

Posted Sep 1, 2025 20:53 UTC (Mon) by ajb (subscriber, #9694) [Link] (8 responses)

There are some very long lived systems out there. I recently did some work on realtek RTL8380, which is used in commodity ethernet switches. It gained support in openwrt within the last year or so. The chip has been around since 2016, and it's based on a processor, the mips4Kec, which was announced in 2002. Needless to say, it's 32-bit. Despite this they will probably be shipping them indefinitely. The spec of such switches hasn't changed in 10 years, and it seems that 1Gb ethernet is good enough for many use cases for the foreseeable future. I expect they will continue to ship them through many generations of faster moving products.

There are also going to be many products where it doesn't make much difference whether an ancient processor or an up to date riscv is used, and manufacturers are very likely to continue to choose ones where the licence fee is already paid off.

longevity

Posted Sep 1, 2025 22:11 UTC (Mon) by smoogen (subscriber, #97) [Link] (6 responses)

That is the same with various 8-bit and 16-bit processors which have been used in embedded systems for decades now. Once something gets 'battle tested' in some scenario, changing to a new chipset is a giant battle against "well, how does it work? do you have as many hours of case reports as our version of a 6502 has?" An answer of "of course not" just leads to "well, if it ain't broke, don't fix it until after I retire".

longevity

Posted Sep 2, 2025 1:44 UTC (Tue) by willy (subscriber, #9762) [Link] (4 responses)

But will the kernel ever be updated on such systems? If not, then the fact that support has been dropped upstream makes no difference.

longevity

Posted Sep 2, 2025 4:56 UTC (Tue) by ajb (subscriber, #9694) [Link] (3 responses)

Cheap commodity vendors will never update of course, but openWRT will, and presumably some higher end vendors who actually care about security.

longevity

Posted Sep 2, 2025 9:56 UTC (Tue) by hailfinger (subscriber, #76962) [Link] (2 responses)

Oh, the CRA will force network switch vendors to care, at least if they want to sell in the EU. AFAIK all Realtek-based managed switches (from Gigabit Ethernet to the latest 10 Gigabit Ethernet reference designs) have a 32-bit MIPS CPU.

The three options for such vendors are:
1. maintain an older kernel indefinitely (good luck with that, especially now that older upstream-unsupported kernels won't get CVEs allocated for security issues)
2. work hard on keeping the affected chipsets supported by the mainline kernel
3. scrap their designs and start anew from another incompatible design (not worth the effort)

longevity

Posted Sep 2, 2025 13:16 UTC (Tue) by foom (subscriber, #14868) [Link] (1 responses)

> good luck with that, especially now that older upstream-unsupported kernels won't get CVEs allocated for security issues

Sounds like that'd make maintenance generally easier, not harder, if your goal is only to check a box for compliance.

If there are no security vulnerabilities reported which apply to your particular kernel, obviously that means it has no security issues and therefore requires no updates, so there's no work to do. Yay!

longevity

Posted Sep 10, 2025 3:19 UTC (Wed) by raven667 (subscriber, #5198) [Link]

I'm catching up very late, but as a practical matter, unless you are letting people run new custom code on the system and need to protect against attacks from local users, there are a lot of cases where security issues in the Linux kernel just don't matter. The kernel embedded inside an ethernet switch probably doesn't matter nearly as much as the implementation of ssh, ospf/bgp, arp, or a web interface; anything that gives shell access to the embedded system is a bigger problem than a local vulnerability, as that's not the security boundary the system is trying to enforce. Everything could be running as root with filesystem perms of 777 in some embedded use cases ;-).

longevity

Posted Sep 2, 2025 7:30 UTC (Tue) by taladar (subscriber, #68407) [Link]

The main difference is that most 8bit or 16bit embedded systems probably didn't run a generic Linux kernel in the first place but something written specifically for that device.

longevity

Posted Sep 1, 2025 22:54 UTC (Mon) by iabervon (subscriber, #722) [Link]

On the other hand, it doesn't need highmem, because it doesn't support more than 256M of RAM, and it has an MMU. I would guess that, if nommu and highmem went away, there wouldn't be any pressure to actually remove support for 32-bit systems that don't need anything special.

Embedded systems

Posted Sep 2, 2025 0:42 UTC (Tue) by wileypob (subscriber, #139361) [Link] (3 responses)

I work on fire detection systems, which we support for a long time. ARM 32-bit chips (of a class that can run Linux, like Cortex-A7) are still significantly cheaper than their 64-bit counterparts and still recommended for new designs by the vendor. I'm sympathetic to the support burden, but I hope 32-bit support can continue until the price of entry level 64-bit comes down.

Embedded systems

Posted Sep 3, 2025 4:27 UTC (Wed) by nhippi (subscriber, #34640) [Link] (1 responses)

Despite the large success, Linux remains very much a "scratch your own itch" system. Support for 32-bit ARM systems can remain as long as your company and your SoC vendor support the effort...

Embedded systems

Posted Sep 8, 2025 7:35 UTC (Mon) by marcH (subscriber, #57642) [Link]

That's not what the article seems to say. When 32-bit support goes entirely away, patches to "scratch your own 32-bit product" can go to /dev/null.

Embedded systems

Posted Sep 16, 2025 2:42 UTC (Tue) by ssmith32 (subscriber, #72404) [Link]

Is it the actual cost of hardware, or certification?

Years ago, I did a brief stint at a company that made fire panels, and was told the main reason not to upgrade was to avoid re-certification.

They actually preferred to attach a proprietary dongle they paid good money for (which talked to Honeywell's cloud, again, more costs) instead of touching anything - hardware or software - on the panel.

The architects and engineers on the hardware side knew it was not the best solution, but the certification costs were just too high.

I'm assuming a lot has changed technically - it's been a long time - but I'd imagine the regulatory costs have not gone down?

MIPS ?

Posted Sep 2, 2025 2:44 UTC (Tue) by wtarreau (subscriber, #51152) [Link] (5 responses)

I haven't seen mention of MIPS here (only loongson). I know it's aging, and many of the chips are maintained out of tree (like the venerable ath79 family that's mostly maintained by OpenWRT). They're still super common in cheap $20 WiFi gateways built around chips such as AR9331 and successors. I'm using that for development to validate that my code works fine on big endian, 32-bit, or on systems which do not support unaligned accesses (you have all 3 at once, that's cool ;-)).

MIPS ?

Posted Sep 2, 2025 6:06 UTC (Tue) by jrtc27 (subscriber, #107748) [Link] (1 responses)

Actually, counterintuitively, there are categories of issues that are more common on 64-bit big-endian than 32-bit big-endian. On 32-bit, int/long/size_t/ptrdiff_t/void */intptr_t are all the same format, so if you get them confused it often doesn't matter in practice. On 64-bit little-endian you can often get away with it, despite them not all being the same width, so long as the high bits are 0, since 4 bytes of data and 4 bytes of zeroes is the representation of an 8-byte value with the high 4 bytes zero. On 64-bit big-endian though, the high bytes are in the first bytes in memory, and so if you reinterpret an 8-byte value as a 4-byte one you will read the high bytes, and if you reinterpret a 4-byte value as an 8-byte one you will left-shift the value by 32. I've seen this problem more than once, whether in the form of pointer aliasing issues or union-based type punning, and so if you are serious about testing your code's portability you should be sure to test this point of the quadrant.
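
(A compact illustration, not part of the original comment: reinterpreting the first four bytes of an eight-byte value yields the low half on little-endian, but the high half on big-endian.)

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	uint64_t wide = 42;
	uint32_t narrow;

	/* Read only the first four bytes of the eight-byte value. */
	memcpy(&narrow, &wide, sizeof(narrow));

	/* Prints 42 on little-endian (low bytes come first in memory)
	   but 0 on big-endian (high bytes come first). */
	printf("%u\n", narrow);
	return 0;
}
```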

MIPS ?

Posted Sep 2, 2025 16:04 UTC (Tue) by wtarreau (subscriber, #51152) [Link]

> On 64-bit big-endian though, the high bytes are in the first bytes in memory (...)

Yes definitely, that's the principle of endianness. Similarly, when you read an int as a short you can get the wrong one, and so on. It's particularly common when dealing with port numbers for example.

For the 64-BE I'm using an old UltraSparc, it does the job pretty well and cannot stand unaligned accesses either. It's just used much less often as being more annoying to boot than a good old GL-iNet router that fits in your hand!

MIPS ?

Posted Sep 2, 2025 14:09 UTC (Tue) by arnd (subscriber, #8866) [Link] (2 responses)

The ath79 family is on the second chart linked in the article, along with 7 other mips32 targets that are still supported (I left off a couple of mips and armv5 targets that only support a single reference board but no actual products in mainline kernels, though I did miss the rtl83xx/93xx platform).

The chart is a little hard to read, but what it shows is that the oldest mainline-supported ath79 machine is from 2007, the replacement ipq40xx platform (armv7) was added in 2016, and at least one mips32 chip (i.e. qca9531) is still commercially available and continues to be sold alongside the armv7/v8 successors.

As far as I can tell, qca9531 and rtlx3xx are now the only big-endian-only 32-bit chips that are still actively marketed, as coldfire mcf5441x and ppc32 qoriq p4 are at the end of their official 15 year support life this year. The 64-bit qoriq txxxx on the other hand have extended support until 2035 according to NXP.

MIPS ?

Posted Sep 2, 2025 16:08 UTC (Tue) by wtarreau (subscriber, #51152) [Link]

> The ath79 family is on the second chart linked in the article, along with 7 other mips32

Ah indeed, sorry Arnd! I didn't notice them because the page was talking about Arm and most of the first ones were apparently Arm. I'm even seeing the ralink one.

Thanks for all the details BTW, these complete the table fairly well ;-)

MIPS ?

Posted Sep 3, 2025 20:23 UTC (Wed) by hailfinger (subscriber, #76962) [Link]

I really appreciate the excellent overview you gave in that presentation. It definitely helps to have such an overview, especially from a regulatory perspective, i.e. are there still any products to be introduced in the market based on a certain platform?

Industrial computers

Posted Sep 2, 2025 6:30 UTC (Tue) by andy_shev (subscriber, #75870) [Link] (6 responses)

Killing 32-bit support for the niche of industrial computers is not a wise move. As he pointed out with the HIGHMEM issue, the best approach is to fix that part and forget about the complaints for a while. Yet, this discussion should be had with industry representatives, where chips (or their analogues) must be provided for 25+ years from the start. So, what this presentation is missing is research in that area. And I will be in (continuing) opposition to 32-bit removal just because some maintainers of some subsystems complain.

Industrial computers

Posted Sep 2, 2025 7:33 UTC (Tue) by taladar (subscriber, #68407) [Link] (3 responses)

Shouldn't it be the other way around, with the representatives of the companies trying to profit off that support coming to the kernel community to advocate for its continued support, and providing the resources to do so, if they are the only beneficiaries of something that is painful for everyone else?

Industrial computers

Posted Sep 2, 2025 12:38 UTC (Tue) by andy_shev (subscriber, #75870) [Link] (2 responses)

Ideally it should be both ways. But if the Linux community won't support this side, why should the other side support Linux? Maybe this is a goal to get rid of Linux in the industrial world, dunno... In any case after year 2038 it will make no difference :-)

Industrial computers

Posted Sep 2, 2025 13:18 UTC (Tue) by pizza (subscriber, #46) [Link] (1 responses)

> In any case after year 2038 it will make no difference :-)

Uh, modern 32-bit userspace has had the ability to deal with post-2038 dates for a long time.

It's just that there's a lot of old binaries out there that won't ever be recompiled.

That said, even if you do recompile the old sources with modern libc and -D_TIME_BITS=64, you can still be bitten by binary representations (network protocols, file formats, etc) that directly embed a 32-bit time_t.
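
(A sketch of that failure mode, using a hypothetical record format rather than any real protocol: rebuilding with a 64-bit time_t does not help once a 32-bit field is baked into the on-disk layout.)

```c
#include <stdint.h>
#include <time.h>

/* Hypothetical on-disk record.  Rebuilding the program with
   -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 widens time_t itself, but
   this format still cannot represent times after 2038-01-19. */
struct record_v1 {
	uint32_t id;
	int32_t  mtime;		/* seconds since the epoch; overflows in 2038 */
};

static int32_t to_disk_time(time_t t)
{
	return (int32_t)t;	/* silent truncation: the actual bug */
}
```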

Industrial computers

Posted Sep 2, 2025 16:14 UTC (Tue) by wtarreau (subscriber, #51152) [Link]

> That said, even if you do recompile the old sources with modern libc and -D_TIME_BITS=64, you can still be bitten by binary representations (network protocols, file formats, etc) that directly embed a 32-bit time_t.

Absolutely! And even harder to spot are the ones using 64-bit on a 64-bit platform but where the value transits through a 32-bit storage at one point in the chain (nastiest ones possibly being ASCII representation of the date in seconds, that some intermediate processing can easily store as signed 32-bit int). I'm pretty sure we'll see some funny issues, like dates jumping instantly to 0xffffffff or 0 because dumped as too large a number for some atoi() or even where only the first digits are parsed and the parsing stops at the first digit that causes the result to overflow, resulting in a time that advances 10 times slower. There are so many possibilities...

Industrial computers

Posted Sep 2, 2025 15:51 UTC (Tue) by arnd (subscriber, #8866) [Link] (1 responses)

A lot of the embedded systems I mention in my talk are sold for long-term industrial applications, which is why they are still around, and will likely remain that way for the foreseeable future.

The specific processors that I mentioned that may be removed (Intel strongarm/armv4, omap2/armv6, imx31/armv6) are no longer used in any industrial machines as far as I can tell, only in early handhelds, and they do cause problems for maintainability.

They are easy to confuse with similar industrial products though: moxart (armv4), at91rm9200 (armv4t), ep93xx (armv4t), ixp4xx/pxa (armv5), imx35 (armv6k), omap3/am3 (armv7). These all have relatively sane CPUs and DT support, so they are easy enough to maintain for many more years.

Importantly, I don't expect we'll even consider dropping armv4 (fa926) and armv4t (arm720t/arm920t) CPU support as long as we support armv5 (arm926e/pj1/xscale) based SoCs like the recently added Microchip SAM9X7.

Industrial computers

Posted Sep 2, 2025 18:39 UTC (Tue) by andy_shev (subscriber, #75870) [Link]

> A lot of the embedded systems I mention in my talk are sold for long-term industrial applications, which is why they are still around, and will likely remain that way for the foreseeable future.

Heeh, the article doesn't mention that and unfortunately I was not able to attend the conference. But this is good to know!

M68k and SuperH are MMU ports!

Posted Sep 2, 2025 7:04 UTC (Tue) by glaubitz (subscriber, #96452) [Link] (8 responses)

»Then, there is the dusty corner where nommu (processors without a memory-management unit) live; these include armv7-m, m68k, superh, and xtensa. Nobody is building anything with this kind of hardware now, and the only people who are working on them in any way are those who have to support existing systems.«

Neither M68k nor SuperH are nommu only. That statement is simply incorrect. And people are actually building new m68k-based hardware with the help of FPGAs.

There are active retro communities around the Amiga, Atari, and Macintosh 68k, as well as SuperH-based video game consoles such as the Sega Saturn and especially the Sega Dreamcast. Not sure why this article is so much focused on commercial applications.

And I don't think big-endian targets are going to be obsolete any time soon unless someone convinces IBM to drop s390x.

M68k and SuperH are MMU ports!

Posted Sep 2, 2025 10:52 UTC (Tue) by DrMcCoy (subscriber, #86699) [Link] (6 responses)

"The retro community around the Amiga, Atari, Macintosh 68k as well as SuperH-based video game consoles such as the Sega Saturn and especially the Sega Dreamcast."

They're not running Linux on these systems, though, are they?

M68k and SuperH are MMU ports!

Posted Sep 2, 2025 12:40 UTC (Tue) by andy_shev (subscriber, #75870) [Link] (2 responses)

Even if they are, after this change they would most likely switch to *BSD (Open/Net/Free).

M68k and SuperH are MMU ports!

Posted Sep 2, 2025 23:16 UTC (Tue) by azz (subscriber, #371) [Link]

Or, for m68k, to FreeMINT, which runs on a surprising variety of systems now thanks to EmuTOS, and provides quite a decent Unix-ish environment that's better suited to machines of that vintage.

M68k and SuperH are MMU ports!

Posted Sep 10, 2025 3:38 UTC (Wed) by raven667 (subscriber, #5198) [Link]

Linux has tried, and succeeded, in being all things to all people on all systems, as much or more than NetBSD, but there is a good reason that different projects exist with different goals and focus. Linux is the result of the world's largest vendors' efforts, which are coordinated through the Linux Foundation trade consortium, but not every system *needs* to run Linux specifically; a subset of use cases could transition to NetBSD or another similar kernel, provide resources to that project, and have both projects be healthier as a result. Vendors with niche CPUs could rally around a system like NetBSD and make the changes they want with less concern that they are going to break some use case on billions of people's devices that they weren't even aware existed.

A world where Linux ran on only 95% of the types of systems it does now, and the other 5% moved to Free/Net/OpenBSD or another kernel, but each group could focus on making their part the best would probably be a good world.

M68k and SuperH are MMU ports!

Posted Sep 2, 2025 22:49 UTC (Tue) by neggles (subscriber, #153254) [Link]

There's still a (small) community of users running Debian on the Dreamcast - bootable live CDs are still generated regularly :)

M68k and SuperH are MMU ports!

Posted Sep 4, 2025 8:20 UTC (Thu) by glaubitz (subscriber, #96452) [Link] (1 responses)

> They're not running Linux on these systems, though, are they?

Some people do, yes. We're building Debian for m68k and sh4 and there is also T2DE Linux for both.

For sh2, there is the J2 Linux port which runs on a fully open source CPU.

M68k and SuperH are MMU ports!

Posted Sep 4, 2025 8:28 UTC (Thu) by DrMcCoy (subscriber, #86699) [Link]

Huh, interesting. Okay, I didn't expect that. :)

M68k and SuperH are MMU ports!

Posted Sep 4, 2025 8:17 UTC (Thu) by hrw (subscriber, #44826) [Link]

M68k cores done in FPGA (or emulated like in Pistorm) often do not have MMU.

PAE

Posted Sep 2, 2025 14:36 UTC (Tue) by milek7 (subscriber, #141321) [Link]

> The kernel, soon, will not try to support systems with more than 4GB.

This reminds me of Linus complaints about PAE (from 2007!): https://www.realworldtech.com/forum/?threadid=76912&c...

Why "how soon can we remove X" instead of "how long can we have X"?

Posted Sep 2, 2025 14:45 UTC (Tue) by urjaman (subscriber, #130506) [Link] (4 responses)

Reading this post on my 32-bit chromebook with 4GB of RAM (that is running linux 6.16.1, not some old chromeos kernel) somehow rubs me the wrong way, as if you're drooling at the idea of removing support for things.

(I'm not expecting actual answers, just expressing frustration at the whole premise...)

Why "how soon can we remove X" instead of "how long can we have X"?

Posted Sep 2, 2025 14:53 UTC (Tue) by corbet (editor, #1) [Link] (3 responses)

What is the source of your frustration? You have an old system that is nicely supported, and will continue to be supported — a gift you are getting for free from the people who do that work. Rather than condemn them for trying to figure out when they can stop doing work for platforms that are no longer in use, maybe you could try thanking them?

Sorry, but sometimes I get frustrated too...

Why "how soon can we remove X" instead of "how long can we have X"?

Posted Sep 2, 2025 16:18 UTC (Tue) by wtarreau (subscriber, #51152) [Link] (1 responses)

Among the criteria for prolonging support, such as "boards are no longer sold", there could be "lots of whiners among users" due to this or that cheap device having been quite popular in its time :-)

Why "how soon can we remove X" instead of "how long can we have X"?

Posted Sep 4, 2025 11:22 UTC (Thu) by hmh (subscriber, #3838) [Link]

Well, IMHO lots of "whiners" *is* a decent metric for the importance of the continued support for the device in the kernel (and userspace, etc): it actually means people who use said devices to run Linux and keep it up-to-date, and who *care* about it.

"Chipset not-yet-in-EOL" and "units sold" metrics often just mean "stock vendor firmware stuck on a kernel from the past decade + landfill"...

Why "how soon can we remove X" instead of "how long can we have X"?

Posted Sep 4, 2025 22:48 UTC (Thu) by urjaman (subscriber, #130506) [Link]

My most sincerest Thank Yous regarding support for this device do go to:

- Alyssa Rosenzweig, for the panfrost GPU driver (panfrost has had many other contributors too, but she in particular is who to thank for the T760 support existing at all, and I've thanked her in $ too back when she had a ... i forgot the name for that open source patreon clone.)
- various folks on IRC (#linux-rockchip) who have translated my bisections and problem reports into the kernel development workflow (that my mind has serious trouble with).

Official video is out now

Posted Sep 6, 2025 15:05 UTC (Sat) by arnd (subscriber, #8866) [Link]

https://www.youtube.com/watch?v=QiOMiyGCoTw

What does 32-bit support in the kernel mean?

Posted Sep 12, 2025 16:16 UTC (Fri) by jem (subscriber, #24231) [Link] (1 responses)

The main emphasis in Arnd Bergmann's presentation is on how feasible ending 32-bit support would be, based on how many 32-bit platforms are left to support. I would have liked to hear more info on what 32-bit support in the kernel *is* and what consequences the removal would have, from a technical point of view.

The presentation mentions high memory support and 64-bit time. I understand removing the high memory code would get rid of largely unused cruft, but what else would removing 32-bit support mean? Imagine a hypothetical 32-bit machine with an out-of-tree Linux kernel, with its own set of 32-bit machine specific drivers and core kernel code. How badly would a removal of 32-bit support in the Linux kernel generic ("infrastructure") code suddenly pull the rug from under the feet of the out-of-tree project?

What does 32-bit support in the kernel mean?

Posted Sep 15, 2025 11:28 UTC (Mon) by arnd (subscriber, #8866) [Link]

I think once it gets to removing ARMv7 support, and the rest of 32-bit with it, it's pretty much impossible to add back, as we'd start making assumptions about sizeof(long) and sizeof(void*) being 64-bit everywhere, with very little incentive to keep it portable. The main driver for that may be 128-bit CPU support, which will come eventually based on how address spaces keep growing by roughly the same number of bits every decade. Supporting three different word sizes is going to be much harder than two, so if Linux is going to run on 128-bit CPUs, the time may have come to stop running on 32-bit CPUs.

The reason I did not spend much time on that in the presentation is that this is too far in the future to make any specific predictions. We'll need at least ten more years of ARMv7 support just for the earliest machines that have a planned support life of 25 years like the i.MX6, and there are still new ARMv7 SoCs coming out that need to get updates for at least 10 years. Both of these are really lower bounds based on what I showed in the slide about older embedded targets. If some of the ARMv7 chips are used as long as the ppc8xx series, that would put the end well into the 2040s.

If there are machines that are not supported in mainline but are actively used, the best way to keep them usable in the future is to upstream all patches and keep reporting bugs when things break.

Tired of hostility towards anything that isn't current-gen

Posted Sep 13, 2025 1:52 UTC (Sat) by awilfox (guest, #124923) [Link] (2 responses)

Articles like this are why I'm still holding on to a patchset I wrote nine months ago that fixes KVM on 32-bit and 64-bit big endian Power. I've sent it to at least a dozen people by now, and all are very happy with it, and it applies to 5.15, 6.6, 6.12, and head. And I don't send it to lkml or kvmppc or linuxppc-dev because I am *terrified* at being the next to be yelled at for "using this old crap". The next to get piled on with hate comments. The next to be vilified by a seeming lynchmob of commenters, arm-chair quarterbacks, and actual kernel developers, all with the goal of driving out anything that cannot be purchased right now. God forbid LWN or Phoronix pick it up and I get actual targeted threats again, like when I made Firefox run on PPC64.

I wrote this KVM patch because it's not "old crap". It's the hardware I use to run Linux. It's the hardware I enjoy running Linux on. When I started using Linux in 1999, the barrier to entry was "do you have the hardware, did you test it, and is the code good quality". Now, the barrier to entry seems to be "is it still sold by the vendor". If I wanted that sort of experience, I would be running Windows.

I've also started working on making Nouveau work on big endian systems again. And the fd.o people were realistic, saying there was probably a lot of issues, and they don't want to spend their time helping me - but if I get somewhere, send them patches, and they'd still happily take them. Why not write articles like *that*? Why not say "we don't really want to deal with this any more, but if you do, take maintainership". Even the Itanium got a question of "does anyone want to take it up" before removal! Or is the idea of supporting big endian and/or 32-bit systems so offensive that even if offered maintainers, you wouldn't want them?

I know Arnd, and I don't think he is quite as hostile as the article comes across. In my experience, he has been lovely to work with. The rest of the kernel maintainers, not so much. And the Internet peanut gallery? The icing on the cake of why I can't bring myself to bother with the kernel any more.

Tired of hostility towards anything that isn't current-gen

Posted Sep 16, 2025 7:32 UTC (Tue) by taladar (subscriber, #68407) [Link]

Things like 32-bit and endianness aren't really the "take maintainership" kind of features though. You have to consider them throughout the whole code base, so it is hard to push maintainership of those features onto the community of people who care (quite independently of the question of whether that community even has enough volunteers to do the work if you could separate it out).

Also, both 32-bit and big-endian have been on a long decline, so this is hardly the first sign that they would eventually be removed.

Tired of hostility towards anything that isn't current-gen

Posted Sep 16, 2025 16:07 UTC (Tue) by mb (subscriber, #50428) [Link]

>we don't really want to deal with this any more, but if you do, take maintainership

The problem is that these people that take over maintainership basically don't exist. Even if they promise to do so, that rarely actually happens.
That means at the end maintainership for the "old stuff" is left to the normal maintainers.
And that's why they want to get rid of old stuff.

And it goes way beyond that: Keeping old things alive often causes overhead in completely different areas. Especially with things like big-endian support or general 32bit support. Putting maintenance burden on people who have never heard of the "old stuff" that some hobby enthusiast wants to keep going.

You see, if you want to keep ancient stuff running, you can just use an old kernel and/or an old distribution.


Copyright © 2025, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds