The future of 32-bit support in the kernel
People naturally think about desktop machines first, he continued. If you were running Linux in the 1990s, you had a 32-bit desktop system. Unix systems, though, moved to 64-bit platforms around 30 years ago, and the Linux desktop made that move about 20 years ago. Even phones and related devices have been 64-bit for the last decade. If those systems were all that Linux had to support, 32-bit support would have long since been removed from the kernel. He summarized the situation with a slide showing how the non-embedded architectures have transitioned to either 64-bit or nonexistence over time.
The world is not all desktops — or servers — though; embedded Linux exists as well. About 90% of those systems are running on Arm processors. The kernel has accumulated a lot of devicetree files describing those systems over the years; only in this last year has the number of devicetrees for armv8 (64-bit) systems exceeded the number for armv7 (32-bit) systems.
For Arm processors with pre-armv7 architectures, there are only three for which it is still possible to buy hardware, but a number are still supported by the kernel community.
Many other pre-armv7 CPUs are out of production, but the kernel still has support for them. Of those, he said, there are about ten that could be removed now. It would be nice to be able to say that support for the others will be removed after a fixed period, ten years perhaps, but hardware support does not work that way. Instead, one has to think in terms of half-lives; every so often, it becomes possible to remove support for half of the platforms. It all depends on whether there are users for the processors in question.
The kernel is still adding support for some 32-bit boards, he said, but at least ten new 64-bit boards gain support for each 32-bit one.
There are a number of non-Arm 32-bit architectures that still have support in the kernel; these include arc, microblaze, nios2, openrisc, rv32, sparc/leon, and xtensa. All of them are being replaced by RISC-V processors in new products. RISC-V is what you use if you don't care about Arm compatibility, he said.
Then, there is the dusty corner where nommu systems (processors without a memory-management unit) live; these include armv7-m, m68k, superh, and xtensa. Nobody is building anything with this kind of hardware now, and the only people who are working on them in any way are those who have to support existing systems. "Or to prove that it can be done."
There are still some people who need to run 32-bit applications that cannot be updated; the solution he has been pushing people toward is to run a 32-bit user space on a 64-bit kernel. This is a good solution for memory-constrained systems; switching to 32-bit halves the memory usage of the system. Since, on most systems, almost all memory is used by user space, running a 64-bit kernel has a relatively small cost. Please, he asked, do not run 32-bit kernels on 64-bit processors.
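As a rough illustration of that split (not something shown in the talk), a small program can compare its own pointer width with the kernel's reported machine type; built with a 32-bit toolchain and run on a 64-bit kernel, the two will usually differ:

```c
/* Sketch (not from the talk): compare a process's own word size with
 * the kernel's reported machine type. Built 32-bit (e.g. "gcc -m32",
 * assuming a multilib toolchain) and run on a 64-bit kernel, the
 * pointer stays four bytes while utsname.machine usually names the
 * 64-bit kernel; exact strings vary, and some kernels report a 32-bit
 * name to compat tasks via the personality mechanism. */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
	struct utsname u;

	if (uname(&u) != 0) {
		perror("uname");
		return 1;
	}
	printf("user space: %zu-bit\n", sizeof(void *) * 8);
	printf("kernel machine: %s\n", u.machine);
	return 0;
}
```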
There are some definite pain points that come with maintaining 32-bit support; most of the complaints, he said, come from developers in the memory-management subsystem. The biggest problem there is the need to support high memory; it is complex, and requires support throughout the kernel. High memory is needed when the kernel lacks the address space to map all of the installed physical memory; that tends to be at about 800MB on 32-bit systems. (See this article for more information about high memory.)
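To see why high memory is invasive, consider how kernel code touches page contents; a minimal sketch using the real kmap_local_page() interface (the zero_page_contents() helper name is made up here; the kernel's own clear_highpage() does the same job):

```c
/* Sketch of why high memory is invasive: a page that may live in the
 * highmem zone has no permanent kernel mapping, so it must be mapped
 * into a temporary slot before the CPU can touch it. On a 64-bit
 * kernel, where all of physical memory sits in the direct map, the
 * map/unmap pair collapses to simple address arithmetic. */
#include <linux/highmem.h>
#include <linux/string.h>

static void zero_page_contents(struct page *page)
{
	void *kaddr = kmap_local_page(page);	/* map (cheap no-op-like on 64-bit) */

	memset(kaddr, 0, PAGE_SIZE);
	kunmap_local(kaddr);			/* tear the temporary mapping down */
}
```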
Currently the kernel is able to support 32-bit systems with up to 16GB of installed memory. Such systems are exceedingly rare, though, and support for them will be going away soon. There are a few 4GB systems out there, including some Chromebooks. Systems with 2GB are a bit more common. Even these systems, he said, are "a bit silly" since the memory costs more than the CPU does. There are some use cases for such systems, though. Most 32-bit systems now have less than 1GB of installed memory. The kernel, soon, will not try to support systems with more than 4GB.
There are some ideas out there for how to support the larger-memory 32-bit systems without needing the high-memory abstraction. Linus Walleij is working on entirely separating the kernel and user-space address spaces, giving each 4GB to work with; this is a variant on the "4G/4G" approach that has occasionally been tried for many years. It is difficult to make such a system work efficiently, so this effort may never succeed, Bergmann said.
Another approach is the proposed "densemem" memory model, which does some fancy remapping to close holes in the physical address space. Densemem can support up to 2GB and is needed to replace the SPARSEMEM memory model, the removal of which will eventually be necessary in any case. This work has to be completed before high memory can be removed; Bergmann said that he would be interested in hearing from potential users of the densemem approach.
One other possibility is to drop high memory, but allow the extra physical memory to be used as a zram swap device. That would not be as efficient as accessing the memory directly, but it is relatively simple and would make it possible to drop the complexity of high memory.
Then, there is the year-2038 problem, which he spent several years working on. The kernel-side work was finished in 2020; the musl C library was updated that same year, and the GNU C Library followed the year after. Some distributors have been faster than others to incorporate this work; Debian and Ubuntu have only become year-2038-safe this year.
The year-2038 problem is not yet completely solved, though; there are a lot of packages that have unfixed bugs in this area. Anything using futex(), he said, has about a 50% chance of getting time handling right. The legacy 32-bit system calls, which are not year-2038 safe, are still enabled in the kernel, but they will go away at some point, exposing more bugs. There are languages, including Python and Rust, that have a lot of broken language bindings. Overall, he said, he does not expect any 32-bit desktop system to survive the year-2038 apocalypse.
A related problem is big-endian support, which is also 32-bit only, and also obsolete. Its removal is blocked because IBM is still supporting big-endian mainframe and PowerPC systems; as long as that support continues, big-endian support will stay in the kernel.
A number of other types of support are under discussion. There were once 32-bit systems with more than eight CPUs, but nobody is using those machines anymore, so support could be removed. Support for armv4 processors, such as the DEC StrongARM CPU, should be removed. Support for early armv6 CPUs, including the omap2 and i.mx31, "complicates everything"; he would like to remove it, even though there are still some Nokia 770 systems in the wild. The time is coming for the removal of all non-devicetree board files. Removal of support for Cortex-M CPUs, which are nommu systems, is coming in a couple of years. Developers are eyeing i486 CPU support, but that removal will not happen quite yet. Bergmann has sent patches to remove support for KVM on 32-bit CPUs, but there is still "one PowerPC user", so that support will be kept for now.
To summarize, he said, the kernel will have to retain support for armv7 systems for at least another ten years. Boards are still being produced with these CPUs, so even ten years may be optimistic for removal. Everything else, he said, will probably fade away sooner than that. The removal of high-memory support has been penciled in for sometime around 2027, and nommu support around 2028. There will, naturally, need to be more discussion before these removals can happen.
An audience member asked how developers know whether a processor is still in use or not; Bergmann acknowledged that it can be a hard question. For x86 support, he looked at a lot of old web pages to make a list of which systems existed, then showed that each of those systems was already broken in current kernels for other reasons; the lack of complaints showed that there were no users. For others, it is necessary to dig through the Git history, see what kinds of changes are being made, and ask the developers who have worked on the code; they are the ones who will know who is using that support.
Another person asked about whether the kernel would support big-endian RISC-V systems. Bergmann answered that those systems are not supported now, and he hoped that it would stay that way. "With RISC-V, anybody can do anything, so they do, but it is not always a good idea". The final question was about support for nommu esp32 CPUs; he answered that patches for those CPUs exist, but have not been sent upstream. Those processors are "a cool toy", but he does not see any practical application for them.
The slides for this talk are available. The curious may also want to look at Bergmann's 2020 take on this topic.
[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting my travel to this event.]
Index entries for this article:
Kernel: Architectures
Conference: Open Source Summit Europe/2025
Posted Sep 1, 2025 19:55 UTC (Mon)
by jrtc27 (subscriber, #107748)
[Link] (12 responses)
I don't understand this point. 64-bit big-endian systems exist in the form of sparc64 and s390x (and powerpc64, though that's less of a thing given powerpc64le).
Posted Sep 1, 2025 21:47 UTC (Mon)
by mjw (subscriber, #16740)
[Link] (10 responses)
Posted Sep 1, 2025 22:27 UTC (Mon)
by sam_c (subscriber, #139836)
[Link] (7 responses)
The slides don't say that big-endian is 32-bit only, but "Equally obsolete as 32-bit".
(Of course, BE isn't the future, I just don't think the platforms are all rotting near-universally as 32-bit ones are on average.)
Posted Sep 2, 2025 8:07 UTC (Tue)
by arnd (subscriber, #8866)
[Link] (6 responses)
For the moment, the problems are mainly in newly written code, where portability to both big-endian and 32-bit targets relies on someone actively testing those and submitting fixes.
This is in contrast to nommu and highmem systems that no longer have the same level of commercial backing and are much further into the inevitable bitrot.
[I believe the article citing "big-endian support, which is also 32-bit only" was a transcription error in an otherwise excellent report -- what I actually said is that the majority of big-endian (embedded) systems are 32-bit, while pointing to s390x/powerpc64 servers as the ones that keep them viable.]
Posted Sep 9, 2025 20:58 UTC (Tue)
by anton (subscriber, #25547)
[Link]
IBM switched their Linux support for Power to little-endian, when introducing OpenPower (Power 8 or Power 9). They completely dropped support for big-endian AFAICT, and other people seem to have picked up that attitude, too.
Posted Sep 13, 2025 18:02 UTC (Sat)
by jjs (guest, #10315)
[Link] (4 responses)
Why? Because it worked, and it made them money.
Also saw this with DEC VAX and HP MP/3000 systems. There was a reason these companies maintained such backward compatibility. $$$.
Again, been out of that arena for a long time, no idea current status (other than DEC is gone, HP is split up).
Posted Sep 15, 2025 11:41 UTC (Mon)
by paulj (subscriber, #341)
[Link]
DEC^WCompaq^WHP enterprise support is a huge business, and some portion of that is charging a *lot* of money supporting weird old shit for a small set of customers who need it and can afford it.
Posted Sep 15, 2025 11:56 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (2 responses)
There's still jobs out there based around supporting old binaries on PDP-11s, because updating your compliance certificate for a modification of the existing code on the existing hardware is a small thing, but updating it for new hardware or new code is a big thing.
The nuclear power industry has a chunk of this, for example - you know that the old hardware and software broadly does the right thing, but you need to make a slight change, and it's safer to slightly change the current software on old hardware designs than to rewrite the software or change the hardware significantly. And you paid DEC huge money for copies of PDP-11 schematics, PCB layouts etc, so that you can still manufacture new PDP-11s now that DEC isn't around.
Posted Sep 16, 2025 9:32 UTC (Tue)
by paulj (subscriber, #341)
[Link] (1 responses)
DEC was still _manufacturing_ PDP-11 compatible machines up to around 1995!
Posted Sep 16, 2025 9:59 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
The last design was released in 1990, production ceased (by DEC) in 1997. "Authorised clones" may have lasted slightly longer. Not bad for a design released in 1970.
Cheers,
Wol
Posted Sep 2, 2025 7:05 UTC (Tue)
by glaubitz (subscriber, #96452)
[Link] (1 responses)
Does IBM already know that s390x is supposed to be obsolete now?
Posted Sep 2, 2025 12:53 UTC (Tue)
by maxfragg (guest, #122266)
[Link]
Posted Sep 5, 2025 11:22 UTC (Fri)
by nowster (subscriber, #67)
[Link]
Posted Sep 1, 2025 19:58 UTC (Mon)
by jrtc27 (subscriber, #107748)
[Link] (4 responses)
That's not really true. Halving is the limit of what you could hope to achieve, if every data type in your system was a machine word or pointer. But char, short, int and long long don't change, (u)intNN_t don't change, and float/double don't change, so depending on what data you're actually operating on you can see anywhere between almost no change (e.g. some purely computational workload on large amounts of raw data) and halving the memory usage.
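To make that concrete, here is a small program whose output differs between a 32-bit and a 64-bit build only on the pointer and long lines (a sketch assuming the usual ILP32 and LP64 models):

```c
/* Sketch: compile once with -m32 and once without (assuming an x86
 * multilib toolchain). Only the pointer and long sizes change between
 * the ILP32 and LP64 builds; the fixed-width and floating-point types
 * are the same size in both. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	printf("void *  : %zu bytes\n", sizeof(void *));   /* 4 vs 8 */
	printf("long    : %zu bytes\n", sizeof(long));     /* 4 vs 8 */
	printf("int     : %zu bytes\n", sizeof(int));      /* 4 on both */
	printf("int64_t : %zu bytes\n", sizeof(int64_t));  /* 8 on both */
	printf("double  : %zu bytes\n", sizeof(double));   /* 8 on both */
	return 0;
}
```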
Posted Sep 1, 2025 20:24 UTC (Mon)
by arnd (subscriber, #8866)
[Link] (3 responses)
I had expected to see a much smaller difference between the two environments, like you describe, but the numbers I saw for anything I tried were always roughly 100% overhead for the 64-bit code compared to armhf.
I think this is a combination of multiple effects I measured in addition to the obvious sizeof(long) difference: arm64 has longer instruction words than armv7/thumb2, small malloc() calls are aligned to larger cache lines (128 vs 64 bytes), ELF sections have a larger default alignment (64K vs 4K), etc.
For many real workloads, a larger portion of RAM is of course going to be filled with the same user data (text, video, sensor input, network packets, ...) so the difference becomes smaller; for the most memory-constrained systems, 100% overhead is a surprisingly good estimate that holds true for both kernel data and userland.
Posted Sep 2, 2025 5:59 UTC (Tue)
by jrtc27 (subscriber, #107748)
[Link] (2 responses)
Posted Sep 5, 2025 13:40 UTC (Fri)
by dave4444 (subscriber, #127523)
[Link] (1 responses)
RAM usage will be reduced, but compiled code is bigger and performance can decrease. All a tradeoff
Posted Sep 5, 2025 16:52 UTC (Fri)
by pizza (subscriber, #46)
[Link]
Good for:
* pointer heavy data structures
* computation where _base_ data types and 32bit registers are sufficient

Bad for:
* older compilers/toolchains assume some instructions are not available and cannot optimize with them
* heavy use of int64 or double datatypes as those require multiple instructions/registers

It's worth noting that for Arm and x86, the 64-bit instruction sets more than doubled the number of general-purpose registers available to software, and that _usually_ more than offsets the performance hit from the increased pointer size.
Posted Sep 1, 2025 20:03 UTC (Mon)
by jrtc27 (subscriber, #107748)
[Link] (14 responses)
Unfortunately people are building those systems and others are proposing supporting them[1]. It is a real shame this is the approach that's being taken rather than having dedicated byte-swapping load/store instructions to accelerate processing big-endian data structures like network packets, which is the main use case that I'm aware of, outside of being compatible with old software written for big-endian mainframes.
[1] https://lore.kernel.org/all/20250822165248.289802-1-ben.d...
Posted Sep 2, 2025 7:06 UTC (Tue)
by glaubitz (subscriber, #96452)
[Link] (13 responses)
Why is this unfortunate? I don't understand. Is Linux supposed to run only on architectures that a small group of people likes?
Posted Sep 2, 2025 7:55 UTC (Tue)
by jrtc27 (subscriber, #107748)
[Link] (12 responses)
What I have a problem with is needless fragmentation and needless complexity within one specific architecture, in this case RISC-V. Sometimes the right design choice is to say no to options. Nobody was building big-endian RISC-V hardware, nobody was seriously writing big-endian RISC-V software and everything was just fine, but now everyone is expected to support yet another variant of RISC-V that doesn't really achieve anything except create a whole lot of work. It's not some new and interesting architecture that approaches things differently that happens to be big-endian, it is just RISC-V but with big-endian because nobody was willing to say no.
Posted Sep 2, 2025 8:06 UTC (Tue)
by glaubitz (subscriber, #96452)
[Link] (11 responses)
I wasn't actually looking at the username and only realized now it was you ;-).
> What I have a problem with is needless fragmentation and needless complexity within one specific architecture, in this case RISC-V.
I think this is a developer-centric perspective rather than a user-centric perspective. If there are legitimate use cases such as building open-source networking hardware based on RISC-V, then it should be legitimate to add big-endian support if someone is willing to maintain it.
Posted Sep 2, 2025 8:43 UTC (Tue)
by arnd (subscriber, #8866)
[Link] (5 responses)
Adding big-endian Armv8 support to Linux made sense in 2013 as users were still porting software from big-endian MIPS/Octeon or PowerPC/QorIQ systems, but it ended up being a mistake for the same reason that Jessica explained about RISC-V: it's an extra ABI that requires testing resources in order to keep running, for very little practical use.
Now that all the Linux networking applications moved to arm64le, the only remaining use case for arm64be is to have an easily available platform to find and fix endianness bugs, typically in a VM guest running on a LE host. This means we can't easily remove it from the arm64 kernel, but it would still be a mistake to add it to riscv64 Linux.
Posted Sep 2, 2025 9:01 UTC (Tue)
by glaubitz (subscriber, #96452)
[Link] (3 responses)
How do you know that?
Posted Sep 2, 2025 9:14 UTC (Tue)
by arnd (subscriber, #8866)
[Link] (2 responses)
Additionally, all newer networking SoCs come with SystemReady certification these days, but SystemReady requires booting in little-endian UEFI mode rather than the traditional Linux boot interface that supports both big-endian and little-endian kernels.
Posted Sep 2, 2025 14:02 UTC (Tue)
by hailfinger (subscriber, #76962)
[Link] (1 responses)
Posted Sep 2, 2025 16:10 UTC (Tue)
by arnd (subscriber, #8866)
[Link]
Realtek's other product lines like the rtl819x wifi line or the STB/NAS rtd1xxx chips already migrated from big-endian mips32 lexra to little-endian mips32 r24k/r34k/r74k, then little-endian arm32 and now little-endian arm64, so I would expect them to do a similar progression here, possibly skipping one or two steps.
Posted Sep 2, 2025 10:19 UTC (Tue)
by pabs (subscriber, #43278)
[Link]
Posted Sep 2, 2025 21:20 UTC (Tue)
by jrtc27 (subscriber, #107748)
[Link] (3 responses)
But that's my point; you don't need all your normal system data to be big-endian there, all you need is for your *network packets* to be big-endian. I remain unconvinced that it would not be better to accelerate a traditionally little-endian architecture for networking by adding dedicated load/store-as-big-endian (i.e., byte swap to/from the cache) instructions on top of an otherwise little-endian system. That way there is no new ABI, no new incompatible ISA, nothing for developers to do except for networking software developers to make sure their code is written to take advantage of such instructions if available, but in reality you'd mostly expect a compiler to be able to fuse things (and maybe you'd add some special annotations on fields so they'd automatically be byte-swapped for you). PowerPC has just that in the form of l[hwd]brx - see https://godbolt.org/z/MvYz9sabG - and I can't help but feel this approach is being overlooked.
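For illustration, here is roughly what that pattern looks like from C today (a sketch assuming the glibc <endian.h> helpers); this is the idiom that lwbrx, MOVBE, or a load-plus-REV sequence can implement in one or two instructions:

```c
/* Sketch: reading a big-endian wire-format field on a (typically
 * little-endian) host, using the glibc <endian.h> helpers. Compilers
 * can usually fuse the load and the swap into lwbrx on PowerPC or
 * MOVBE on x86, or emit a load followed by REV on AArch64. */
#include <stdint.h>
#include <string.h>
#include <endian.h>

static uint32_t read_be32(const unsigned char *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));	/* alignment-safe load */
	return be32toh(v);		/* swap on LE hosts, no-op on BE */
}
```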
Posted Sep 3, 2025 8:16 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (2 responses)
x86-64 has MOVBE which byte-swaps on loads. AArch64 takes the other practical approach; it has REV, REV16, and REV32 instructions to accelerate byte-swapping after the load or before the store.
Posted Sep 3, 2025 20:18 UTC (Wed)
by jrtc27 (subscriber, #107748)
[Link]
But I imagine that even load+reverse as two instructions on AArch64 makes some of the big-endian folks unhappy still and they really don't want it to have any more overhead than a normal load.
Good to know x86 has MOVBE, and that my general principle of "if you can think of a sensible instruction, x86 and/or PowerPC has probably already done it" holds even more strongly in this case :)
Posted Sep 3, 2025 20:51 UTC (Wed)
by excors (subscriber, #95769)
[Link]
[1] https://uops.info/html-instr/MOVBE_R32_M32.html / https://uops.info/html-instr/MOVBE_M32_R32.html
Posted Sep 4, 2025 6:36 UTC (Thu)
by maskray (subscriber, #112608)
[Link]
> All data structures that the object file format defines follow the "natural" size and alignment guidelines for the relevant class. If necessary, data structures contain explicit padding to ensure 4-byte alignment for 4-byte objects, to force structure sizes to a multiple of four, etc. Data also have suitable alignment from the beginning of the file. Thus, for example, a structure containing an Elf32_Addr member will be aligned on a 4-byte boundary within the file. Other classes would have appropriately scaled definitions. To illustrate, the 64-bit class would define Elf64 Addr as an 8-byte object, aligned on an 8-byte boundary. Following the strictest alignment for each object allows the format to work on any machine in a class. That is, all ELF structures on all 32-bit machines have congruent templates. For portability, ELF uses neither bit-fields nor floating-point values, because their representations vary, even among processors with the same byte order. Of course the programs in an ELF file may use these types, but the format itself does not.
This consideration made sense in 1990 but now seems less relevant given the overwhelming prevalence of little-endian architectures.
Now big-endian support manifests either in code duplication (separate template instantiation) or runtime overhead.
```cpp
template void elf::doIcf<ELF32LE>(Ctx &);
template void elf::doIcf<ELF32BE>(Ctx &);
template void elf::doIcf<ELF64LE>(Ctx &);
template void elf::doIcf<ELF64BE>(Ctx &);
```
Today, lld's RISC-V support assumes little-endian. If we ever need to add big-endian support, we would either introduce template instantiation and make the executable larger, or make read/write calls slower for little-endian users.
To maintain confidence in refactoring, all new code requires testing. For big-endian we need to duplicate tests. Feature developers or reviewers would have to think about whether a change would work on big-endian, or accept the imperfection.
Posted Sep 1, 2025 20:53 UTC (Mon)
by ajb (subscriber, #9694)
[Link] (8 responses)
There are also going to be many products where it doesn't make much difference whether an ancient processor or an up-to-date RISC-V is used, and manufacturers are very likely to continue to choose ones where the licence fee is already paid off.
Posted Sep 1, 2025 22:11 UTC (Mon)
by smoogen (subscriber, #97)
[Link] (6 responses)
Posted Sep 2, 2025 1:44 UTC (Tue)
by willy (subscriber, #9762)
[Link] (4 responses)
Posted Sep 2, 2025 4:56 UTC (Tue)
by ajb (subscriber, #9694)
[Link] (3 responses)
Posted Sep 2, 2025 9:56 UTC (Tue)
by hailfinger (subscriber, #76962)
[Link] (2 responses)
The three options for such vendors are:
1. maintain an older kernel indefinitely (good luck with that, especially now that older upstream-unsupported kernels won't get CVEs allocated for security issues)
2. work hard on keeping the affected chipsets supported by the mainline kernel
3. scrap their designs and start anew from another incompatible design (not worth the effort)
Posted Sep 2, 2025 13:16 UTC (Tue)
by foom (subscriber, #14868)
[Link] (1 responses)
Sounds like that'd make maintenance generally easier, not harder, if your goal is only to check a box for compliance.
If there are no security vulnerabilities reported which apply to your particular kernel, obviously that means it has no security issues and therefore requires no updates, so there's no work to do. Yay!
Posted Sep 10, 2025 3:19 UTC (Wed)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 2, 2025 7:30 UTC (Tue)
by taladar (subscriber, #68407)
[Link]
Posted Sep 1, 2025 22:54 UTC (Mon)
by iabervon (subscriber, #722)
[Link]
Posted Sep 2, 2025 0:42 UTC (Tue)
by wileypob (subscriber, #139361)
[Link] (3 responses)
Posted Sep 3, 2025 4:27 UTC (Wed)
by nhippi (subscriber, #34640)
[Link] (1 responses)
Posted Sep 8, 2025 7:35 UTC (Mon)
by marcH (subscriber, #57642)
[Link]
Posted Sep 16, 2025 2:42 UTC (Tue)
by ssmith32 (subscriber, #72404)
[Link]
Years ago, I did a brief stint at a company that made fire panels, and was told the main reason not to upgrade was to avoid re-certification.
They actually preferred to attach a proprietary dongle they paid good money for (which talked to Honeywell's cloud, again, more costs) instead of touching anything - hardware or software - on the panel.
The architects and engineers on the hardware side knew it was not the best solution, but the certification costs were just too high.
I'm assuming a lot has changed technically - it's been a long time - but I'd imagine the regulatory costs have not gone down?
Posted Sep 2, 2025 2:44 UTC (Tue)
by wtarreau (subscriber, #51152)
[Link] (5 responses)
Posted Sep 2, 2025 6:06 UTC (Tue)
by jrtc27 (subscriber, #107748)
[Link] (1 responses)
Posted Sep 2, 2025 16:04 UTC (Tue)
by wtarreau (subscriber, #51152)
[Link]
Yes definitely, that's the principle of endianness. Similarly, when you read an int as a short you can get the wrong one, and so on. It's particularly common when dealing with port numbers, for example.
For 64-bit BE I'm using an old UltraSparc; it does the job pretty well and cannot stand unaligned accesses either. It's just used much less often, as it is more annoying to boot than a good old GL-iNet router that fits in your hand!
Posted Sep 2, 2025 14:09 UTC (Tue)
by arnd (subscriber, #8866)
[Link] (2 responses)
The ath79 family is on the second chart linked in the article, along with 7 other mips32 targets that are still supported (I left off a couple of mips and armv5 targets that only support a single reference board but no actual products in mainline kernels, though I did miss the rtl83xx/93xx platform).
The chart is a little hard to read, but what it shows is that the oldest mainline-supported ath79 machine is from 2007, the replacement ipq40xx platform (armv7) was added in 2016, and at least one mips32 chip (i.e. qca9531) is still commercially available and continues to be sold alongside the armv7/v8 successors.
As far as I can tell, qca9531 and rtlx3xx are now the only big-endian-only 32-bit chips that are still actively marketed, as coldfire mcf5441x and ppc32 qoriq p4 are at the end of their official 15-year support life this year. The 64-bit qoriq txxxx parts, on the other hand, have extended support until 2035 according to NXP.
Posted Sep 2, 2025 16:08 UTC (Tue)
by wtarreau (subscriber, #51152)
[Link]
Ah indeed, sorry Arnd! I didn't notice them because the page was talking about Arm and most of the first ones were apparently Arm. I'm even seeing the ralink one.
Thanks for all the details BTW, these complete the table fairly well ;-)
Posted Sep 3, 2025 20:23 UTC (Wed)
by hailfinger (subscriber, #76962)
[Link]
Posted Sep 2, 2025 6:30 UTC (Tue)
by andy_shev (subscriber, #75870)
[Link] (6 responses)
Posted Sep 2, 2025 7:33 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (3 responses)
Posted Sep 2, 2025 12:38 UTC (Tue)
by andy_shev (subscriber, #75870)
[Link] (2 responses)
Posted Sep 2, 2025 13:18 UTC (Tue)
by pizza (subscriber, #46)
[Link] (1 responses)
Uh, modern 32-bit userspace has had the ability to deal with post-2038 dates for a long time.
It's just that there's a lot of old binaries out there that won't ever be recompiled.
That said, even if you do recompile the old sources with modern libc and -D_TIME_BITS=64, you can still be bitten by binary representations (network protocols, file formats, etc) that directly embed a 32-bit time_t.
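A quick way to check which mode a build is in (a sketch assuming glibc, where _TIME_BITS=64 also requires _FILE_OFFSET_BITS=64):

```c
/* Sketch (glibc): build with
 *   gcc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 ...
 * on a 32-bit target to get a 64-bit time_t; without those flags,
 * this prints 4 on i386/armhf and time would wrap in January 2038. */
#include <stdio.h>
#include <time.h>

int main(void)
{
	time_t last32 = 0x7fffffff;	/* 2038-01-19 03:14:07 UTC */

	printf("sizeof(time_t) = %zu\n", sizeof(time_t));
	printf("last 32-bit second: %s", ctime(&last32));
	return 0;
}
```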
Posted Sep 2, 2025 16:14 UTC (Tue)
by wtarreau (subscriber, #51152)
[Link]
Absolutely! And even harder to spot are the ones using 64 bits on a 64-bit platform, but where the value transits through a 32-bit storage at one point in the chain (the nastiest ones possibly being ASCII representations of the date in seconds, which some intermediate processing can easily store as a signed 32-bit int). I'm pretty sure we'll see some funny issues, like dates jumping instantly to 0xffffffff or 0 because they were dumped as too large a number for some atoi(), or cases where only the first digits are parsed because parsing stops at the first digit that would cause the result to overflow, resulting in a time that advances 10 times slower. There are so many possibilities...
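That failure mode is easy to demonstrate (a sketch; strtol() is shown because atoi()'s behavior on overflow is undefined):

```c
/* Sketch of the parsing failure mode: a post-2038 timestamp in ASCII
 * overflows a 32-bit long. strtol() clamps to LONG_MAX (0x7fffffff
 * when long is 32 bits) and sets errno to ERANGE, so the parsed
 * "date" silently sticks at the last representable second. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *stamp = "2147483648";	/* one past 2038-01-19 03:14:07 */
	long t;

	errno = 0;
	t = strtol(stamp, NULL, 10);
	printf("parsed %ld%s\n", t,
	       errno == ERANGE ? " (clamped: ERANGE)" : "");
	return 0;
}
```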
Posted Sep 2, 2025 15:51 UTC (Tue)
by arnd (subscriber, #8866)
[Link] (1 responses)
The specific processors that I mentioned that may be removed (Intel strongarm/armv4, omap2/armv6, imx31/armv6) are no longer used in any industrial machines as far as I can tell, only in early handhelds, and they do cause problems for maintainability.
They are easy to confuse with similar industrial products though: moxart (armv4), at91rm9200 (armv4t), ep93xx (armv4t), ixp4xx/pxa (armv5), imx35 (armv6k), omap3/am3 (armv7). These all have relatively sane CPUs and DT support, so they are easy enough to maintain for many more years.
Importantly, I don't expect we'll even consider dropping armv4 (fa926) and armv4t (arm720t/arm920t) CPU support as long as we support armv5 (arm926e/pj1/xscale) based SoCs like the recently added Microchip SAM9X7.
Posted Sep 2, 2025 18:39 UTC (Tue)
by andy_shev (subscriber, #75870)
[Link]
> A lot of the embedded systems I mention in my talk are sold for long-term industrial applications, which is why they are still around, and will likely remain that way for the foreseeable future.
Heeh, the article doesn't mention that and unfortunately I was not able to attend the conference. But this is good to know!
Posted Sep 2, 2025 7:04 UTC (Tue)
by glaubitz (subscriber, #96452)
[Link] (8 responses)
Neither M68k nor SuperH are nommu only. That statement is simply incorrect. And people are actually building new m68k-based hardware with the help of FPGAs.
The retro community is active around the Amiga, Atari, and Macintosh 68k, as well as SuperH-based video game consoles such as the Sega Saturn and especially the Sega Dreamcast. Not sure why this article is so much focused on commercial applications.
And I don't think big-endian targets are going to be obsolete any time soon unless someone convinces IBM to drop s390x.
Posted Sep 2, 2025 10:52 UTC (Tue)
by DrMcCoy (subscriber, #86699)
[Link] (6 responses)
They're not running Linux on these systems, though, are they?
Posted Sep 2, 2025 12:40 UTC (Tue)
by andy_shev (subscriber, #75870)
[Link] (2 responses)
Posted Sep 2, 2025 23:16 UTC (Tue)
by azz (subscriber, #371)
[Link]
Posted Sep 10, 2025 3:38 UTC (Wed)
by raven667 (subscriber, #5198)
[Link]
A world where Linux ran on only 95% of the types of systems it does now, and the other 5% moved to Free/Net/OpenBSD or another kernel, but each group could focus on making their part the best would probably be a good world.
Posted Sep 2, 2025 22:49 UTC (Tue)
by neggles (subscriber, #153254)
[Link]
Posted Sep 4, 2025 8:20 UTC (Thu)
by glaubitz (subscriber, #96452)
[Link] (1 responses)
Some people do, yes. We're building Debian for m68k and sh4 and there is also T2DE Linux for both.
For sh2, there is the J2 Linux port which runs on a fully open source CPU.
Posted Sep 4, 2025 8:28 UTC (Thu)
by DrMcCoy (subscriber, #86699)
[Link]
Posted Sep 4, 2025 8:17 UTC (Thu)
by hrw (subscriber, #44826)
[Link]
Posted Sep 2, 2025 14:36 UTC (Tue)
by milek7 (subscriber, #141321)
[Link]
This reminds me of Linus complaints about PAE (from 2007!): https://www.realworldtech.com/forum/?threadid=76912&c...
Posted Sep 2, 2025 14:45 UTC (Tue)
by urjaman (subscriber, #130506)
[Link] (4 responses)
(I'm not expecting actual answers, just expressing frustration at the whole premise...)
Posted Sep 2, 2025 14:53 UTC (Tue)
by corbet (editor, #1)
[Link] (3 responses)
What is the source of your frustration? You have an old system that is nicely supported, and will continue to be supported — a gift you are getting for free from the people who do that work. Rather than condemn them for trying to figure out when they can stop doing work for platforms that are no longer in use, maybe you could try thanking them?
Sorry, but sometimes I get frustrated too...
Posted Sep 2, 2025 16:18 UTC (Tue)
by wtarreau (subscriber, #51152)
[Link] (1 responses)
Posted Sep 4, 2025 11:22 UTC (Thu)
by hmh (subscriber, #3838)
[Link]
"Chipset not-yet-in-EOL" and "units sold" metrics often just mean "stock vendor firmware stuck on a kernel from the past decade + landfill"...
Posted Sep 4, 2025 22:48 UTC (Thu)
by urjaman (subscriber, #130506)
[Link]
- Alyssa Rozensweig, for the panfrost GPU driver (panfrost has had many other contributors too, but she in particular is who to thank for the T760 support existing at all, and I've thanked her in $ too back when she had a ... i forgot the name for that open source patreon clone.)
- various folks on IRC (#linux-rockchip) who have translated my bisections and problem reports into the kernel development workflow (that my mind has serious trouble with).
Posted Sep 6, 2025 15:05 UTC (Sat)
by arnd (subscriber, #8866)
[Link]
https://www.youtube.com/watch?v=QiOMiyGCoTw
Posted Sep 12, 2025 16:16 UTC (Fri)
by jem (subscriber, #24231)
[Link] (1 responses)
The presentation mentions high memory support and 64-bit time. I understand removing the high memory code would get rid of largely unused cruft, but what else would removing 32-bit support mean? Imagine a hypothetical 32-bit machine with an out-of-tree Linux kernel, with its own set of 32-bit machine specific drivers and core kernel code. How badly would a removal of 32-bit support in the Linux kernel generic ("infrastructure") code suddenly pull the rug from under the feet of the out-of-tree project?
Posted Sep 15, 2025 11:28 UTC (Mon)
by arnd (subscriber, #8866)
[Link]
The reason I did not spend much time on that in the presentation is that this is too far in the future to make any specific predictions. We'll need at least ten more years of ARMv7 support just for the earliest machines that have a planned support life of 25 years like the i.MX6, and there are still new ARMv7 SoCs coming out that need to get updates for at least 10 years. Both of these are really lower bounds based on what I showed in the slide about older embedded targets. If some of the ARMv7 chips are used as long as the ppc8xx series, that would put the end well into the 2040s.
If there are machines that are not supported in mainline but are actively used, the best way to keep them usable in the future is to upstream all patches and keep reporting bugs when things break.
Posted Sep 13, 2025 1:52 UTC (Sat)
by awilfox (guest, #124923)
[Link] (2 responses)
I wrote this KVM patch because it's not "old crap". It's the hardware I use to run Linux. It's the hardware I enjoy running Linux on. When I started using Linux in 1999, the barrier to entry was "do you have the hardware, did you test it, and is the code good quality". Now, the barrier to entry seems to be "is it still sold by the vendor". If I wanted that sort of experience, I would be running Windows.
I've also started working on making Nouveau work on big endian systems again. And the fd.o people were realistic, saying there was probably a lot of issues, and they don't want to spend their time helping me - but if I get somewhere, send them patches, and they'd still happily take them. Why not write articles like *that*? Why not say "we don't really want to deal with this any more, but if you do, take maintainership". Even the Itanium got a question of "does anyone want to take it up" before removal! Or is the idea of supporting big endian and/or 32-bit systems so offensive that even if offered maintainers, you wouldn't want them?
I know Arnd, and I don't think he is quite as hostile as the article comes across. In my experience, he has been lovely to work with. The rest of the kernel maintainers, not so much. And the Internet peanut gallery? The icing on the cake of why I can't bring myself to bother with the kernel any more.
Posted Sep 16, 2025 7:32 UTC (Tue)
by taladar (subscriber, #68407)
[Link]
Also, both of 32bit and big endian have been on a long decline, so this is hardly the first sign that they would eventually be removed.
Posted Sep 16, 2025 16:07 UTC (Tue)
by mb (subscriber, #50428)
[Link]
The problem is that these people that take over maintainership basically don't exist. Even if they promise to do so, that rarely actually happens.
That means at the end maintainership for the "old stuff" is left to the normal maintainers.
And that's why they want to get rid of old stuff.
And it goes way beyond that: Keeping old things alive often causes overhead in completely different areas. Especially with things like big-endian support or general 32bit support. Putting maintenance burden on people who have never heard of the "old stuff" that some hobby enthusiast wants to keep going.
You see, if you want to keep ancient stuff running, you can just use an old kernel and/or an old distribution.