
Debian debates amending architecture support strategy

By Daroc Alden
November 19, 2025

The Linux kernel supports a large number of architectures. Not all of those are supported by Linux distributions, but Debian does support many of them, officially or unofficially. On October 26, Bastian Blank opened a discussion about the minimum version of these architectures that Debian should support: in particular, raising the de-facto minimum versions in the next Debian release ("forky"). Thread participants were generally in favor of keeping support for older architecture variants, but didn't reach a firm conclusion.

Blank noted that Debian doesn't currently have a policy for which architecture versions should be supported. Three architectures do specify a minimum architecture version, by way of the GCC flags used to build packages: Arm CPUs must be armv7-a+fp or newer, 32-bit x86 CPUs must be i686 or newer, and s390 CPUs must be z196 or newer. (Version z196 is followed, naturally, by z114, then by zEC12, zBC12, and z13. Since z13, IBM has continued counting with the same numbers that everyone else uses.) These architecture versions are all fairly old; ARMv7, for example, dates from 2005, although ARMv7 CPUs are still being sold.

Starting in Debian 14 ("forky"), Blank proposed that 64-bit x86 CPUs should require version x86-64-v2, PowerPC CPUs should require power9, and s390x CPUs should require z15. x86-64-v2, in particular, features wider atomic compare-and-exchange operations and more single-instruction multiple-data (SIMD) instructions.

As a general guidance I would like to aim for a ten to 15 years support range at release time. The cutoff in respect to the expected 2027 release date of Forky would therefor be 2012 to 2017. More time is given for widely used architectures, less for more specialized ones.

This recommendation was seemingly motivated by matching the composition of Debian's build infrastructure, which doesn't have any PowerPC machines older than power9, or s390x machines older than z16. If Blank's proposal were adopted, the distribution could potentially drop support for older architecture versions in 2030, which would be the end of Debian 13 ("trixie") long-term support.
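The x86-64 level names in the proposal come from the x86-64 psABI, which defines each level as a cumulative set of CPU features. As a rough sketch — using a simplified subset of the real feature lists, not the full psABI definitions — classifying a CPU looks like this:

```c
/* Sketch: classify a CPU into an x86-64 psABI microarchitecture level
 * from a bitmask of feature flags. This uses a simplified subset of the
 * real requirements; the authoritative lists are in the x86-64 psABI. */
#include <assert.h>

enum {
    F_CX16    = 1 << 0,  /* CMPXCHG16B: wider atomic compare-and-exchange */
    F_POPCNT  = 1 << 1,
    F_SSE42   = 1 << 2,
    F_AVX2    = 1 << 3,
    F_FMA     = 1 << 4,
    F_BMI2    = 1 << 5,
    F_AVX512F = 1 << 6,
};

/* Each level requires everything from the previous level plus its own flags. */
static int isa_level(unsigned flags)
{
    const unsigned v2 = F_CX16 | F_POPCNT | F_SSE42;
    const unsigned v3 = v2 | F_AVX2 | F_FMA | F_BMI2;
    const unsigned v4 = v3 | F_AVX512F;

    if ((flags & v4) == v4) return 4;
    if ((flags & v3) == v3) return 3;
    if ((flags & v2) == v2) return 2;
    return 1;  /* plain x86-64 */
}
```

A Nehalem-era (2008) CPU, for example, has CMPXCHG16B, POPCNT, and SSE4.2, so it classifies as level 2 here; that is why an x86-64-v2 baseline still covers hardware from that era.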

Marco d'Itri thought that retiring architecture variants 15 years after release was reasonable: "Debian's purpose is not to enable retrocomputing projects." That didn't sit well with all members of the list, however. Jan-Daniel Kaplanski objected, noting that the ability to use Debian with a wide variety of hardware was one of the appealing properties of the distribution. Even if a 15-year rule were adopted, he argued, it should count from the last year the CPU was available to buy: PowerPC version 8 CPUs were still being sold last year, he said, so he would want them supported through 2039. Romain Dolbeau didn't think that was enough:

Debian is (at least up to now) the Linux distribution one could rely on to support hardware as long as its actual life without forcing users to upgrade. I have ~2007 (Penryn) systems still in use that were deployed with Debian when new, and I'm sure I'm not the only one. If it ain't broke, don't fix it, just upgrade Debian :-)

Marc Haber pointed out that Debian depends on kernel and toolchain support for its different architectures; he then clarified that his concern was about upstream support timelines. If the kernel or the toolchain did drop support for an older version of an architecture, it might do so with relatively little notice compared to Debian's long-term support timelines. He thought that Blank had a point in wanting to establish a policy years in advance of any potential deprecation, so that people could plan around it. Diederik de Haas opined that the kernel would not drop support without an announcement far in advance, and said that all of the concerns that had been raised in favor of Blank's proposal so far were hypothetical.

Simon Richter thought that the decision should be based on potential performance improvements, not just how old an architecture is. SIMD instructions (and other specialized instructions) can be noticeably faster for some programs, although they don't help with programs that spend most of their time waiting on I/O. He also suggested that it would be nice to have a way to automatically migrate users to an unofficial Debian port if the official releases raise their minimum supported versions.

John Klos asked why any change was needed — if it was intended to reduce the amount of work for Debian maintainers, what specific work would be avoided? Peter Green replied that architecture improvements sometimes "eliminate a significant source of bullshit", and gave the example of moving from x87 to sse2 floating point instructions on x86_64. On the other hand, he didn't think that Blank's proposed updates really gained that much: the newer versions that the proposal would mandate don't actually contain significant architectural changes.

Ultimately, the discussion did not result in any specific decision being made. It seems likely that Debian will maintain its current level of architecture support for forky, but in the absence of a specific policy, the debate will certainly arise again. The Linux community is, in some ways, a tug of war between different interests: some people want the maximum performance possible from their software, some people want to continue using the hardware they have available, and everyone has problems with getting enough time and energy to work on these things. How those competing interests eventually shake out — and whether people are eventually able to find a compromise position — comes down to who shows up to do the work.




ARMv7 dates are wrong

Posted Nov 19, 2025 17:17 UTC (Wed) by pkubaj (guest, #175765) [Link] (12 responses)

You probably confused ARM7 with ARMv7. ARM7 is actually ARMv3. ARMv7 was launched in 2005 and there are still new boards being released that use ARMv7.

ARMv7 dates are wrong

Posted Nov 19, 2025 17:30 UTC (Wed) by daroc (editor, #160859) [Link] (11 responses)

... curses. I probably did get those confused, yes. I've gone ahead and made a correction.

ARMv7 dates are wrong

Posted Nov 19, 2025 17:50 UTC (Wed) by farnz (subscriber, #17727) [Link] (10 responses)

ARM is lovely and confusing, because while ARM1 is ARMv1 and ARM2 is ARMv2, ARM3 is also ARMv2; ARM4 and ARM5 never existed in the wild (and I've never been able to get a straight answer from anyone at Acorn or ARM about whether they existed at all, and if so, which architecture version they tried to implement - or whether they never made it into the wild because they were failed attempts at defining a useful ARMv3 architecture), and ARM6 is the first ARMv3 core. Early ARM7 cores are ARMv3, then there's a bunch of ARM7s that are ARMv4, and a few late ARM7 family cores are ARMv5. ARM8 is ARMv4, ARM9 is mostly ARMv4 except when it's ARMv5. ARM10 is ARMv5, while ARM11 is ARMv6.

And then they replaced the numbering with "Cortex", which is usually ARMv7 or later, except for some of the Cortex-M cores, which are ARMv6. But at least it's clear that a Cortex-M3 is not related to ARMv3, whereas having to know that ARM710T is ARMv4, ARM7EJ-S is ARMv5 and ARM710 is ARMv3 is "fun" to explain to newcomers to the ARM naming mess. Similar applies to knowing that the ARM920T is an earlier ARM architecture (v4) than ARM7EJ (v5).

ARMv7 dates are wrong

Posted Nov 19, 2025 18:01 UTC (Wed) by pizza (subscriber, #46) [Link] (3 responses)

> And then they replaced the numbering with "Cortex", which is usually ARMv7 or later, except for some of the Cortex-M cores, which are ARMv6. But at least it's clear that a Cortex-M3 is not related to ARMv3,

Minor correction: Cortex-M are armv7-m (or more recently, armv8-m for the dual-digit models) except for the Cortex-M0 which is armv6-m.

And after things finally calmed down for the Cortex naming convention they decided to copy Intel's naming strategy.

ARMv7 dates are wrong

Posted Nov 19, 2025 18:06 UTC (Wed) by farnz (subscriber, #17727) [Link] (2 responses)

Cortex-M1 and Cortex-M0+ are also ARMv6.

And I'm just waiting for ARMv9-M to make it a complete mess; hopefully they use 3 digit numbers for ARMv9-M Cortex-Ms, but there are no guarantees here.

ARMv7 dates are wrong

Posted Nov 19, 2025 18:40 UTC (Wed) by pm215 (subscriber, #98099) [Link] (1 responses)

In the spirit of this being a "gosh, this naming is confusing" thread:

M1 and M0+ are not ARMv6: they're ARMv6M, which is notably different to ARMv6. v6M came *after* v7M and I find it most useful to think of it as "a cutdown version of v7M". It's definitely not a simple "v6M is to v6 as v7M is to v7" relationship.

In v8M this was sorted out by having everything be v8M: the cutdown cores don't implement the "Main Extension" and the fullfat ones do. (This distinction is also sometimes referred to as "Baseline" vs "Mainline".) v6M is effectively "v7M without all the stuff that v8M classifies as the Main Extension".

More generally, M profile is so different from A/R profile that it forms its own architecture lineage, a sibling of A profile. (R profile is less weird, as it's more or less "A profile but with an MPU rather than an MMU".)

ARMv7 dates are wrong

Posted Nov 19, 2025 19:00 UTC (Wed) by khim (subscriber, #9252) [Link]

> More generally, M profile is so different from A/R profile that it forms its own architecture lineage, a sibling of A profile.

One example is the implementation of div: in M profile it may trigger a division-by-zero exception, but not in A or R… do you have any idea who needed that exception, or why? I couldn't imagine why they did it that way…

ARMv7 dates are wrong

Posted Nov 19, 2025 19:52 UTC (Wed) by iabervon (subscriber, #722) [Link] (1 responses)

Part of the confusion is that there were cores designed by other people that licensed the ARM ISA. It's less weird that ARM8 cores are ARMv4 when there are Xscale cores that are also ARMv4. It really only seems like the core and ISA versions should match when looking back from the present where everyone uses cores designed by Arm rather than designing their own cores on an ISA license.

ARM is responsible for most of the confusion

Posted Nov 20, 2025 9:31 UTC (Thu) by farnz (subscriber, #17727) [Link]

The first cores designed by companies other than ARM are contemporaries of the ARM810, and are ARMv4 cores. And it seemed weird back when the Acorn Risc PC first came out that the ARM610 in that machine was already being described as an ARMv3 core - where did ARM4 and ARM5 go, and why wasn't it ARMv6 to match the core?

All of the confusion up until and including ARMv4 and ARM810 is entirely of ARM's making, since they controlled all the cores and all the architecture versions up until then; and a decent amount of it after that is also of ARM's making (all the ARM9 through ARM11 confusion), since they chose to keep the old numbering scheme after it became clear that architecture versions were going to be in common use.

ARMv7 dates are wrong

Posted Nov 19, 2025 23:25 UTC (Wed) by JohnMoon (subscriber, #157189) [Link]

Nick Desaulniers has a good blog post about this topic: https://nickdesaulniers.github.io/blog/2023/03/10/disambi...

ARMv7 dates are wrong

Posted Nov 20, 2025 1:08 UTC (Thu) by ejr (subscriber, #51652) [Link]

And let's not even delve into the mish-mash of PCIe features supported, unsupported, or claimed-to-be-supported around each variation on each chipset. While I very much love some non-mainstream directions, they all fall down on the external interfaces. If they're single-vendor (e.g. picking up their own implementation of a full stack), it's a nightmare supporting every possibility for general use.

Looking at you, my old friend RISC-V. ARM as well, and they're showing the way of "let the big folks deal with that and we'll just take the residuals."

ARMv7 dates are wrong

Posted Nov 22, 2025 20:29 UTC (Sat) by xnox (subscriber, #63320) [Link] (1 responses)

ARM4 and ARM5 are building names on their Cambridge Campus. We had mini debconf in ARM4.

ARMv7 dates are wrong

Posted Nov 23, 2025 20:30 UTC (Sun) by erincandescent (subscriber, #141058) [Link]

Unless it's changed since I worked there, there are no ARM4 or ARM5 buildings, since the names match the processors:

ARM1 is the central building, straight ahead entering the complex
ARM2 was behind the Cambridge Water offices, now knocked down
ARM3 is to the left
ARM6 is to the right
They did some expansion circa 2018, and to my knowledge the new buildings were named A/B/C/D (boring :()

There never were ARM4 or ARM5 architectures or buildings. As the story was told to me (by the people who were there!), they skipped 4 & 5 to one-up the Pentium.

Please don't abandon old hardware

Posted Nov 19, 2025 17:40 UTC (Wed) by rrolls (subscriber, #151126) [Link] (76 responses)

I cannot echo this loudly enough:

> Debian is (at least up to now) the Linux distribution one could rely on to support hardware as long as its actual life without forcing users to upgrade. I have ~2007 (Penryn) systems still in use that were deployed with Debian when new, and I'm sure I'm not the only one. If it ain't broke, don't fix it, just upgrade Debian :-)

I too have perfectly functioning hardware running Debian - and the entire chain of events that led to me getting back into Debian in general was because I first tried putting Rocky Linux on them only to be met with nonsensical errors that, after doing some research, turned out to be because the CPUs were too old for Rocky.

If Debian scraps this support, where do I turn next? DuskOS? (Yes, I've been genuinely wanting to experiment with DuskOS, but these are production systems, so in this context this is sarcastic.)

The world has plenty of Linux distros that "move fast and break things", with frequent releases and just as frequent support cutoffs. We don't need Debian to become one of those. We need Debian to keep being Debian: super stable and not changing for the sake of it.

Please don't abandon old hardware

Posted Nov 19, 2025 18:34 UTC (Wed) by amacater (subscriber, #790) [Link] (7 responses)

I have two HP Microservers, both Gen 8. Fairly common and won't run Rocky or Red Hat. They will run Debian 13 Trixie - by the time Forky is released, they'll likely be 14 years old. As reluctant as I am to throw away good quality working hardware, a 15 year old system is likely to be much less power efficient than something newer.

Even time must have a stop eventually.

Please don't abandon old hardware

Posted Nov 19, 2025 21:02 UTC (Wed) by Lionel_Debroux (subscriber, #30014) [Link]

Agreed... there's a point where old hardware should be retired from production use, for both power efficiency and security reasons. This proposal is only about x86-64-v2, which keeps compatibility with '2011 Bulldozer and '2008 Nehalem... still leaving ample room for backwards compatibility into the late 2020s - early 2030s, IMO :)

A x86-64-v3 baseline, the same as RHEL 10, would kill compatibility with Bulldozer and Piledriver, which are unencumbered by the Platform Security Processor, and represent a relatively cheap way to build 32C/64T configuration, albeit with very poor power efficiency. And as far as security is concerned, even a x86-64-v4 baseline wouldn't be enough anyway, as Skylake has been EOSL for two years IIRC, even Cascade Lake is now EOSL...

A fresh experience of mine from several weeks ago, comparing a high-frequency Broadwell-EP (E5-2667v4, 8 x 3.2/3.3/3.5 GHz) in a HP DL380 Gen9 with a mid-range Sandy Bridge-EP (E5-2670, 8 x 2.6/3.0/3.3 GHz) in a HP DL380p Gen8, is as follows:
* power efficiency: according to the output of `sensors`, the DL380 Gen9 consistently consumes 50-100W less than the DL380p Gen8 does, but both servers have 128 GB of heatsink-less registered RAM, a 4 x GbE NIC, 2 x 128 GB SSDs, and 6 x 900 GB 10K RPM HDDs (different models but with pretty much the same consumption, I've just checked while writing this reply), and the DL380 Gen9 has an additional 2 x 10 GbE NIC, too;
* instruction set: the E5-2667v4 slightly exceeds the x86-64-v3 baseline set by the prior Haswell-EP generation (AVX2, F16C, FMA3, MOVBE, BMI, etc.), whereas the E5-2670 only provides base AVX;
* security: the E5-2670 is only equipped with PCID; it's missing both SMEP (Ivy Bridge) and SMAP (Broadwell), which raise the bar for exploits (long after PaX's original software KERNEXEC and MEMORY_UDEREF protections), and also missing INVPCID (Haswell), which lowers the cost of the mitigation of Meltdown...

As a consequence, unsurprisingly, that DL380 Gen9 I recently picked up has quickly replaced the DL380p Gen8 as a virtualization host, which is now powered down, and shall mostly remain that way. Maybe I could even wind down a second DL380p Gen8, since my workloads are typically not CPU-bound, and that DL380 Gen9 has a 16-way RAID controller, so I'd "just" have to buy a used 8 x SFF bay + backplane + power cable (which is surprisingly expensive).

Please don't abandon old hardware

Posted Nov 19, 2025 21:43 UTC (Wed) by wtarreau (subscriber, #51152) [Link]

> a 15 year old system is likely to be much less power efficient than something newer.

Not always. I still have some PCEngines ALIX machines that have been running 24x7 for 21 years now. They're unbreakable, consume roughly 1.5W in idle (i.e. all the time), and peak at 7W under full load, which never happens. Even though I'm progressively replacing them with newer and more capable Armv8 boards, the Arm ones draw more power in idle! It just happens that nowadays chips are designed to focus on around 2 GHz frequency and to be efficient at that frequency. But chips back then were made for 400-500 MHz and drew way less power because they were just refreshed and overclocked 486s in practice, probably having 100 times fewer transistors.

For many use cases, such old hardware remains pretty sufficient, and requiring users to trash them and replace them with power-hungry devices, often coming with a fan is just a waste of hardware, energy and money.

I agree that in the server world however, running an old XeonIII or a first-gen Core i5 is just wasting a lot of power.

But on the low scale there are really good things. I even remember my LinuxStamp2 made around a SAM9G20 from Atmel (ARMv5, that venerable ARM926EJ-S core at 360 MHz). That board drew so little power that it was impossible to tell if it was on or off by touching the chip. Yet it was performant! I used it for a few projects. It could have replaced one of my proxies but I didn't have an enclosure for it back then, and porting full OS to Arm was less easy than it is now. Nowadays this would be replaced by a Cortex A5 or A35 presumably, but these ones seem to be less ubiquitous.

Please don't abandon old hardware

Posted Nov 21, 2025 19:15 UTC (Fri) by anton (subscriber, #25547) [Link] (3 responses)

> a 15 year old system is likely to be much less power efficient than something newer
My old laptop (Lenovo E130, introduced 2012) consumes a few W of power when idle (which is its usual state). I don't think newer laptops have gotten significantly better. And even if they have, if you consider the amount of energy spent on producing such a laptop (let's call it L), one would have to use it for a century or so before it amortizes the energy used in its production. But after 15 years at the latest someone would come around and suggest that L should be replaced with a newer laptop because of "power efficiency", and software support for L should be eliminated. So L can never amortize the power that went into its production, making it less power efficient overall. Therefore, in terms of power consumption, it may not be a good idea to retire old hardware in favour of newer hardware.

Another case in point is my old desktop from 2015 (Z170 board), which idled at 22W (and much of the time, it idled). My new desktop from 2024 idles at 25W. Admittedly, my old desktop will only be 15 years old in 2030, but the way things are going, I doubt that the hardware in 2030 will be more power efficient than my 2015 desktop.

Please don't abandon old hardware

Posted Nov 24, 2025 9:51 UTC (Mon) by taladar (subscriber, #68407) [Link] (2 responses)

On the other hand, if you consider power, compiling all of Debian for one additional architecture for old systems is probably not insignificant either, especially once the number of users of that architecture is low.

Please don't abandon old hardware

Posted Dec 2, 2025 13:05 UTC (Tue) by anton (subscriber, #25547) [Link] (1 responses)

When the discussion is about requiring x86-64-v2 or x86-64-v3 instead of x86-64, the number of systems that are still in use and that would no longer be able to run the distribution would be on the order of millions (probably hundreds of millions if the requirement is increased to x86-64-v3). Of course, one has to consider how many of those systems use the distribution under consideration, but in the case of Debian and Ubuntu I expect that it is still a substantial number, and avoiding forcing those users to buy new hardware avoids expending a lot more energy than building some packages (or even an entire distribution) for several submodels.

When the discussion is about Alpha or m68k, the point of supporting that is for cultural preservation reasons; we expend energy in preserving various cultural artifacts even if there is no net energy benefit. Some people don't like that, and various cultural artifacts have been destroyed throughout history, and sometimes even energy expended on that (and you all probably can think of examples closer to home), so obviously not everybody agrees that cultural artifacts are worth preserving. If you think they are not, we have to agree to disagree. If you think that they are, the question is if this preservation is worth the resources (including the energy) spent on preserving them.

Please don't abandon old hardware

Posted Dec 2, 2025 18:23 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

> When the discussion is about Alpha or m68k, the point of supporting that is for cultural preservation reasons

Not really. It's more like trying to use the Colosseum for modern Olympic games.

Please don't abandon old hardware

Posted Nov 28, 2025 1:51 UTC (Fri) by jmalcolm (subscriber, #8876) [Link]

> HP Microservers, both Gen 8. Fairly common and won't run Rocky or Red Hat

To be fair, those will run AlmaLinux which is also a RHEL work-alike.

And they will run Arch and most other Linux distros just fine as well. I am sure they would run Chimera Linux.

Debian is not unique in its support of older hardware. In fact, it has been one of the more aggressive distros to move away from 32 bit x86.

Debian runs ancient software but really does nothing special to run it on ancient hardware. At least, that is my view.

Debian does have an impressive bench of semi-supported architectures though. More than most.

Please don't abandon old hardware

Posted Nov 19, 2025 18:43 UTC (Wed) by tao (subscriber, #17563) [Link] (31 responses)

Meanwhile some of us other Debian users would very much like it if the kernel and user space took some benefit from the improvements that have been made to CPUs the last 20 years...

Please don't abandon old hardware

Posted Nov 19, 2025 21:25 UTC (Wed) by sthibaul (✭ supporter ✭, #54477) [Link] (19 responses)

They do: both the kernel and libc know to switch to variants of memcpy etc.

Please don't abandon old hardware

Posted Nov 22, 2025 18:26 UTC (Sat) by ralfj (subscriber, #172874) [Link] (18 responses)

Right, so a small handful of libraries go through the significant trouble required to dispatch to a suitably optimized version of the code at runtime. If AVX was part of the baseline, the autovectorizer could start using it *everywhere*, and library authors could start using it without having to worry about runtime dispatch.

So, it's correct that current Debian is taking *some* benefit of "recent" CPU improvements. But it could get so much more benefit if it was in the baseline.

Please don't abandon old hardware

Posted Nov 22, 2025 20:38 UTC (Sat) by sthibaul (✭ supporter ✭, #54477) [Link] (17 responses)

> But it could get so much more benefit if it was in the baseline.

Would it, *really*?

The parts which can really benefit from newer instruction sets are usually the most computationally intensive ones. And most often this is delegated to specialized libraries, which do use a set of processor-specific implementations.

> the significant trouble required to dispatch to a suitably optimized version of the code at runtime.

Which has been made *very* easy with the ifunc functionality.
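As a concrete illustration, here is a minimal sketch of ifunc dispatch (a GCC/glibc extension for ELF targets; the `sum` function and its "AVX2" variant are made up for the example, and both compute the same result):

```c
/* Sketch of ifunc-based runtime dispatch (GNU C extension, ELF only).
 * The resolver runs once, at load time, and picks an implementation. */
#include <assert.h>

static int sum_scalar(const int *a, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Stand-in for a hand-vectorized AVX2 version; same result either way. */
static int sum_avx2(const int *a, int n)
{
    return sum_scalar(a, n);
}

/* Called by the dynamic linker to choose which implementation
 * the 'sum' symbol will resolve to. */
static int (*resolve_sum(void))(const int *, int)
{
    __builtin_cpu_init();
    return __builtin_cpu_supports("avx2") ? sum_avx2 : sum_scalar;
}

int sum(const int *a, int n) __attribute__((ifunc("resolve_sum")));
```

Note that callers only ever see the `sum` symbol, so the compiler cannot inline the selected implementation into them — the limitation raised in the replies below.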

Please don't abandon old hardware

Posted Nov 22, 2025 22:28 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (16 responses)

ifunc still means that you can't use inlining. This is especially important if you have C++ code, where a part of the algorithm might be in a C++ template.

Please don't abandon old hardware

Posted Nov 22, 2025 22:42 UTC (Sat) by sthibaul (✭ supporter ✭, #54477) [Link] (15 responses)

You can put the ifunc selection on the parent function.

Please don't abandon old hardware

Posted Nov 22, 2025 23:22 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (14 responses)

Sure, but then it becomes tricky with C++. You'll need to create different specializations for multiple templates if you want to use the fast function in (for example) a comparator.

Please don't abandon old hardware

Posted Nov 22, 2025 23:58 UTC (Sat) by sthibaul (✭ supporter ✭, #54477) [Link] (13 responses)

I don't really see a fundamental problem for C++ to support ifunc with different specializations.

And if your code is really complex, you will probably not actually get some benefit from newer instruction sets. Again, only actual profiling will tell you where you lose time, and whether newer instruction sets can really help you with it.

Please don't abandon old hardware

Posted Nov 23, 2025 3:00 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (12 responses)

> I don't really see a fundamental problem for c++ to support ifunc with different specializations.

There are no theoretical problems, but the fact is that there are no existing C++ template libraries that actually do that. This is a very real problem if you are using something like CGAL.

> And if your code is really complex, you will probably not actually get some benefit from newer instruction sets. Again, only actual profiling will tell you where you lose time, and whether newer instruction sets can really help you with it.

One very real advantage of using newer instruction set is that you get random performance boosts everywhere in your code. Just compiling with -mnative resulted in 5-10% performance boost for some of my apps.

Certainly nothing groundbreaking, but definitely noticeable.

Please don't abandon old hardware

Posted Nov 23, 2025 17:51 UTC (Sun) by sthibaul (✭ supporter ✭, #54477) [Link] (11 responses)

> the fact is that there are no existing C++ template libraries that actually do that.

Then this should be worked on; it will benefit various situations.

> One very real advantage of using newer instruction set is that you get random performance boosts everywhere in your code. Just compiling with -mnative resulted in 5-10% performance boost for some of my apps.

Have you tried -mtune=native?

> Certainly nothing groundbreaking

Yes, so I would not really trade this with throwing away some not-so-old machines.

Raising the bar with just -mtune would avoid having to trade.

Please don't abandon old hardware

Posted Nov 23, 2025 18:37 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (10 responses)

The fundamental issue is that things like popcnt or vectorized instructions often help a bit in scattered locations. Each of them is not a hotspot that would merit doing a separate ifunc, but all of them do add up.

> Have you tried -mtune=native?

I tried that, and I rarely saw any difference over the defaults.

> Yes, so I would not really trade this with throwing away some not-so-old machines.

It's more complicated. A better -march default can help literally millions of people to get additional performance, but it will lock out maybe thousands of people with retro-computing CPUs.

Is it a fair trade-off? I think so. Perhaps people with old machines can help maintain a separate port with older defaults. It's not an easy task because regular Linux distros can't really do sub-architectures. But it might be a good idea for something like nix or guix.

Another option is to have fat binaries that have multiple versions of the code. There was an attempt to do this: FatELF ( https://icculus.org/fatelf/ ) but its author got flamed and trolled until he quit.

Or maybe even a separate distro, perhaps "Oldbian"?

Please don't abandon old hardware

Posted Nov 23, 2025 18:55 UTC (Sun) by sthibaul (✭ supporter ✭, #54477) [Link] (8 responses)

> with retro-computing CPUs.

We are not talking about just retro-computing here, but actual hardware in production. E.g.

https://debconf25.debconf.org/talks/129-approaching-the-s...

One of the reasons they chose debian is precisely for the longer support of hardware which is *hard* to replace with newer arch.

Please don't abandon old hardware

Posted Nov 24, 2025 21:33 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (7 responses)

Uhm... Your link doesn't show what is the production equipment?

I don't think there are any production x86 CPUs that don't have at least Haswell extensions.

Please don't abandon old hardware

Posted Nov 24, 2025 21:40 UTC (Mon) by sthibaul (✭ supporter ✭, #54477) [Link] (5 responses)

I googled a bit and found some slides from the team:

https://indico.cern.ch/event/1518977/contributions/666409...

notably slide 11:

Accelerators rely on custom-built Front-end Computers (FECs)
↳ In-house built hardware on x86_64-v1 and x86_64-v2 microarchitecture baseline
↳ These baselines are deprecated with RHEL9 and RHEL10!
↳ Not feasible to replace (time & cost)

Please don't abandon old hardware

Posted Nov 25, 2025 0:46 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

So it's likely that they're using a stockpile of older CPUs for custom hardware. This makes more sense. A bit more details: https://indico.cern.ch/event/1277467/contributions/540171...

They already have a deeply customized system, so they can continue customizing it. And if they're making hardware, they can spare a few grad students to build a local OS for the FECs.

Please don't abandon old hardware

Posted Nov 25, 2025 8:40 UTC (Tue) by sthibaul (✭ supporter ✭, #54477) [Link] (3 responses)

> They already have a deeply customized system, so they can continue customizing it.
> if they're making hardware, they can spare a few grad students to build a local OS for the FECs.

OS customization, as in selecting the pieces you want and so on, is way less involved than downgrading the compilation baseline. An immediate example is:

https://salsa.debian.org/pkg-llvm-team/llvm-toolchain/-/c...

This example is relatively obvious to notice since it's the compilation toolchain, but you'd have to track down such changes in *all* software you want to compile, since as soon as the baseline is officially bumped, *any* package can include such a change without any warning, and your software starts SIGILL-ing in unexpected places. You don't want that in production.

If you're that badly into gaining 5-10% on the software that matters for you, you can as well recompile it yourself with -march=native, instead of asking for putting more work on others?

Please don't abandon old hardware

Posted Nov 25, 2025 8:59 UTC (Tue) by Wol (subscriber, #4433) [Link]

While it might be a pain, this sounds like a great argument in favour of gentoo, or the binary derivative(s).

It already has support for build options, so use the "bin" server, and upload your own "downgraded" baseline binaries. So your binaries get built once, uploaded, and downloaded everywhere else as required. You might have to create your own distro server, but you're running the mainline distro just on a lower baseline.

And I've heard various stories of people running Gentoo, sync'ing their dev machine against upstream, QA'ing what's changed, then rolling it out downstream to whoever they're responsible to. So it can't be that hard (until somebody makes a change that can't be compiled by your lower baseline).

Problem solved (for the moment) until upstream rips out support in the code for old chipsets, but even then you can just patch that back in ...

Cheers,
Wol

Please don't abandon old hardware

Posted Nov 25, 2025 20:10 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

> If you care that much about gaining 5-10% on the software that matters to you, you may as well recompile it yourself with -march=native, instead of asking to put more work on others.

It doesn't matter for me, per se. But is it the better option to waste energy on millions of computers so that a hundred or so people can use stock Debian in a highly specialized application?

Please don't abandon old hardware

Posted Nov 25, 2025 20:18 UTC (Tue) by sthibaul (✭ supporter ✭, #54477) [Link]

I gave you just one example off the top of my head, and you are now assuming that this is the only example...

And, yes, wasting a bit of energy on millions of computers can be less of a problem globally than wasting time & energy tracking binary-compatibility issues on thousands of others.

Please don't abandon old hardware

Posted Nov 25, 2025 10:37 UTC (Tue) by farnz (subscriber, #17727) [Link]

There's a chunk of Atoms still in production and recommended for new designs that are x86-64-v2 at most - no AVX2 or other Haswell extensions to the ISA. Parker Ridge and Snow Ridge, for example.

Fortunately, the days of recommended-for-new-designs Intel CPUs without SSE4.2 are gone, so there are no longer any new CPUs that don't meet at least x86-64-v2. But, while it no longer recommends them for new designs, Intel still manufactures and sells older Atoms that only meet x86-64-v1; if you have an older design built around an embedded Atom CPU, you'll still be buying x86-64-v1 CPUs from Intel.

Please don't abandon old hardware

Posted Dec 19, 2025 9:41 UTC (Fri) by sammythesnake (guest, #17693) [Link]

Or "Deadbian" if you're feeling less charitable :-P

Please don't abandon old hardware

Posted Nov 19, 2025 21:58 UTC (Wed) by arnd (subscriber, #8866) [Link] (1 responses)

I think in this case, Ubuntu actually got it right by adding additional libraries for x86-64-v3, according to their recent announcement at https://lwn.net/Articles/1044383/. This gets most of the performance benefits, but still allows all of the older machines to keep working.

Please don't abandon old hardware

Posted Nov 20, 2025 7:39 UTC (Thu) by glaubitz (subscriber, #96452) [Link]

> I think in this case, Ubuntu actually got it right by adding additional libraries for x86-64-v3, according to their recent announcement at https://lwn.net/Articles/1044383/. This gets most of the performance benefits, but still allows all of the older machines to keep working.

For applications such as number crunching, where every percent of performance actually matters, end users will recompile the packages in question with "-march=native" anyway.

Please don't abandon old hardware

Posted Nov 19, 2025 23:58 UTC (Wed) by pabs (subscriber, #43278) [Link] (1 responses)

Has anyone measured how much improvement newer CPUs get from baseline changes? And whether it is very different from runtime instruction-selection mechanisms?

https://wiki.debian.org/InstructionSelection

Please don't abandon old hardware

Posted Nov 22, 2025 16:40 UTC (Sat) by hsivonen (subscriber, #91034) [Link]

Recompiling software only measures what compilers already know how to do (but don't, due to the target being old), or measures the absence of the overhead from cpuid-based dispatch for code that's already manually written.

It can't measure the effect on what software would get written if the baseline, as perceived by software developers, were higher: both the compiler transformations that compiler developers would have more motivation to write if those transformations were used by default, and what application developers would write if they could assume the absence of run-time dispatch overhead and of the build-system and code-organization trouble of having multiple variants of the "same" code.

Please don't abandon old hardware

Posted Nov 20, 2025 7:38 UTC (Thu) by glaubitz (subscriber, #96452) [Link] (6 responses)

> Meanwhile some of us other Debian users would very much like it if the kernel and user space took some benefit from the improvements that have been made to CPUs the last 20 years...

What huge benefit are you getting by switching the baseline to x86_64-v2? Can you name anything that actually makes a difference?

Please don't abandon old hardware

Posted Nov 20, 2025 14:57 UTC (Thu) by andresfreund (subscriber, #69562) [Link] (5 responses)

> What huge benefit are you getting by switching the baseline to x86_64-v2. Can you name anything that actually makes a difference?

On v1 there's no cmpxchg16b, which means there are lots of lock-free algorithms that can't be implemented. Lack of popcnt makes plenty of bit-manipulation algorithms more expensive. Lack of pcmpestri means many optimized string algorithms won't work.

Yes, for some of these you can do CPU capability tests, but the dispatching implied by that is often too expensive. And it means having to implement fallback algorithms for things that almost all hardware has.

Please don't abandon old hardware

Posted Nov 21, 2025 9:13 UTC (Fri) by ballombe (subscriber, #9523) [Link] (4 responses)

I tested POPCNT on various CPUs and it was significantly slower than Knuth's algorithm using shifts and logical ANDs.

Please don't abandon old hardware

Posted Nov 21, 2025 13:47 UTC (Fri) by khim (subscriber, #9252) [Link] (2 responses)

That's very strange. The first-generation Zen should do it in one tick, and even Sandy Bridge does it in one tick, too.

Have you tried to do something else in parallel with Knuth's algorithm? popcnt occupies one ALU for one tick and leaves three others free for other things, while Knuth's algorithm would occupy all four ALUs. It can be faster that way if your program does nothing but popcnt in a loop… it loses if your program actually uses popcnt for anything useful.

Please don't abandon old hardware

Posted Nov 21, 2025 18:03 UTC (Fri) by ballombe (subscriber, #9523) [Link] (1 responses)

I need to popcnt very large vectors, so I sum thousands of popcnt in a row.

The timing difference of basic operations like popcnt and bfffo between CPUs (even between Intel CPUs) means that distro binaries will never be optimal no matter what.

Please don't abandon old hardware

Posted Nov 21, 2025 18:10 UTC (Fri) by khim (subscriber, #9252) [Link]

> The timing difference of basic operations like popcnt and bfffo between CPUs (even between Intel CPUs) means that distro binaries will never be optimal no matter what.

That's not the goal, anyway. The goal is to cover typical code. There popcnt is a win 99.99% of the time.

> I need to popcnt very large vectors, so I sum thousands of popcnt in a row.

Here you win not because popcnt is slow, but because SIMD is fast. It needs to be compared to vpopcnt, not to scalar popcnt. And where it's available, it's faster than Knuth's algorithm.

Optimizing vectorizable algorithms is the next step after enabling x86-64-v3 (or at least x86-64-v2) for normal code. Intel made an insane mess with vector instructions, unfortunately.

Please don't abandon old hardware

Posted Nov 21, 2025 14:32 UTC (Fri) by excors (subscriber, #95769) [Link]

64-bit POPCNT reportedly has a throughput of 1 cycle per instruction on most Intel CPUs, and 0.25-0.33 cycles per instruction on recent AMD (https://uops.info/table.html?search=popcnt&cb_lat=on&...), so it seems strange that you could beat that with a multi-instruction algorithm.

Apparently Intel pre-Cannon Lake had a false dependency which might reduce performance by a factor of 3, which you need to be careful of when writing assembly, though GCC/Clang will handle it correctly when using intrinsics (https://github.com/komrad36/popcount), so that shouldn't be a big problem.

Please don't abandon old hardware

Posted Nov 19, 2025 18:52 UTC (Wed) by ballombe (subscriber, #9523) [Link] (3 responses)

Something missing from the analysis is the popularity of the various subarchitectures.
For example, if there are many more users of POWER7 than of POWER9, it does not make sense to remove POWER7 to optimize POWER9.

Please don't abandon old hardware

Posted Nov 20, 2025 9:16 UTC (Thu) by taladar (subscriber, #68407) [Link] (2 responses)

That is not necessarily true. Without knowing specifics about your example the use of the older architecture might just be because it gets better support and the new one is unpopular because it isn't giving any benefits while everything is compiled for an older architecture anyway.

The phrase "Don't judge demand for a bridge by counting people swimming across the river" comes to mind.

Please don't abandon old hardware

Posted Nov 21, 2025 9:36 UTC (Fri) by geert (subscriber, #98403) [Link] (1 responses)

Don't replace the H₂O in the river with HF now that the bridge (which might come with a hefty toll) is finished?

Please don't abandon old hardware

Posted Nov 21, 2025 13:08 UTC (Fri) by Wol (subscriber, #4433) [Link]

That's (sort of) what they've done in London!

Downstream of Tower Bridge they've built (so far) one new tunnel. In order to pay for it, both the new tunnel and the old one next to it have been made toll crossings. The tunnel upstream has now had size and weight restrictions applied (to be honest, it needed them) of 6'6" and 2 tonnes. And the free ferry downstream now only operates during the hours of the tolls.

So anything bigger than a car has no option other than to use a toll crossing, use the ferry which gets bunged up very quickly, go through Central London, or go the long way round. The one big plus point is the new tunnel has no height or weight restrictions, so it gives those vehicles an alternative to the ferry.

(There are three more tunnels planned - I hope ours is next!)

Cheers,
Wol

Please don't abandon old hardware

Posted Nov 19, 2025 21:03 UTC (Wed) by ju3Ceemi (subscriber, #102464) [Link] (27 responses)

What is the use of pre-2007 hardware that runs 2025 software?

Please don't abandon old hardware

Posted Nov 20, 2025 1:12 UTC (Thu) by ejr (subscriber, #51652) [Link]

In some markets focused on making legacy things run faster, like POWER, many of the issues lurk around regulatory and licensing compliance. Forcing a customer to run through those hoops again means losing the customer. Hardware that shows it complies, in a way that does not change external behavior, may keep all the customers.

Please don't abandon old hardware

Posted Nov 20, 2025 7:12 UTC (Thu) by rrolls (subscriber, #151126) [Link] (23 responses)

What's wrong with running 2025 software on pre-2007 hardware? (In my case it is more like 2013-2014 or something, but I digress.)

When the existing load is light enough that the machine spec still runs it perfectly fine, why should we replace perfectly good working hardware for no real-world benefit, when this job takes time and money and creates e-waste and risk? An upgrade might mean some process completes 100 times faster, but if response times are already good enough and the load average is already significantly less than 1 for the vast majority of the time, then it wasn't necessary.

I have never understood this modern thinking of "this thing's a few years old, so it must be useless". My logic is the opposite: "this thing's a few years old and it's still working fine, so it's stood the test of time, so I can trust it much more than something brand new". If it ain't broke, don't fix it.

This is, again, why I use Debian in the first place, because it's about the only distro that, so far, has preferred to choose "things that stood the test of time" rather than the "bleeding edge" that everyone else chooses.

Please don't abandon old hardware

Posted Nov 20, 2025 9:23 UTC (Thu) by taladar (subscriber, #68407) [Link]

"Old hardware" is really the wrong way to phrase it to understand why support is often cut off after a while.

The key combination that gets support cut off is "unpopular hardware" combined with "hardware that takes a lot of effort to support because it differs from other hardware we need to support".

That combination makes it more likely that the ratio of contributions from that user base to the effort required to keep it running is bad.

Not to mention, of course, that many types of effort manifest in complexity that affects everyone working on the project, even people who might never have heard of the specific type of unpopular hardware.

Please don't abandon old hardware

Posted Nov 20, 2025 12:52 UTC (Thu) by NAR (subscriber, #1313) [Link] (6 responses)

Is there an actual use case for running 2025 software on pre-2007 hardware? Not some generic "don't increase the landfill" argument, but an actual, real life use case?

Please don't abandon old hardware

Posted Nov 20, 2025 13:35 UTC (Thu) by pizza (subscriber, #46) [Link] (5 responses)

> Is there an actual use case for running 2025 software on pre-2007 hardware?

Why is "It's what I have" not an acceptable answer?

Please don't abandon old hardware

Posted Nov 20, 2025 13:44 UTC (Thu) by ju3Ceemi (subscriber, #102464) [Link] (4 responses)

While I do understand why running 2007 hardware might be a good idea (as you say: "it's what I have"), I do not understand how running software from 2025 would be interesting

I mean, if you run that kind of old hardware, I'd guess the actual workload has seen little evolution in the last decade.
So it's more of a retro thing than a business-critical thing (if you get my meaning)

Please don't abandon old hardware

Posted Nov 20, 2025 13:55 UTC (Thu) by pizza (subscriber, #46) [Link] (1 responses)

> I do not understand how running software from 2025 would be interesting

Putting aside the minor detail that "2007 software" that interacts with/over the internet most likely will not work at all, and even if it does, you're probably going to get pwn3d within the hour.

> I mean, if you run that kind of old hardware, I'd guess the actual workload had few evolution in the last decade
> So, more like a retro-thing that a business-critical thing (if you get my meaning)

...You're missing out on the detail that 2007-era hardware is often 100% capable of running (if not actually 100% compatible with) modern software [1]. It's just slower (and more power hungry) than something newer.

[1] i.e. "x86_64-v1" in general-purpose computers.

Please don't abandon old hardware

Posted Nov 20, 2025 15:41 UTC (Thu) by paulj (subscriber, #341) [Link]

It need not be more power-hungry, in absolute terms. Further, even if modern hardware can get more MIPS out of each Watt, that need not be relevant for applications that are spending most of their time waiting for some event. Further, while modern CPUs may well have much better efficiency in terms of peak MIPS/Watt, they also tend to have worse current leakage at idle.

If an old computer is handling some task with very little CPU use, it is not a given that a fast new computer will be a win in terms of power.

Please don't abandon old hardware

Posted Nov 21, 2025 18:28 UTC (Fri) by anton (subscriber, #25547) [Link]

We have a bunch of machines of all ages that I use for measuring microarchitectural differences. Many of them are running a recent Debian, because it's much easier to clone an existing working installation and put the SSD in the target machine (it only needs a SATA port) and then individualize the machine (a few minutes of work) than to install some old distribution. And it also makes it easier to run the same binaries on all these machines (usually a binary built on a new distribution does not work on an old distribution).

Please don't abandon old hardware

Posted Nov 22, 2025 16:52 UTC (Sat) by hsivonen (subscriber, #91034) [Link]

> I do not understand how running software from 2025 would be interesting

Running 2025 software is interesting if you want to interact with the world in a way that a) works and b) isn't full of known vulnerabilities.

There are folks who are in a financial situation where it makes sense not to pay for new hardware but still makes sense to run an up-to-date Web browser.

Last year, I gave away 3 Penryn Macs with after-market SSD and Linux set up, and the interest in the first two was so intense that I donated the third one to an intermediary so as to not have to deal with messages of interest. (In Helsinki, Finland, in case that's of interest for context.)

Nouveau works well with Nvidia GPUs of that vintage and can run Firefox with WebRender on GNOME in Wayland mode surprisingly well.

Please don't abandon old hardware

Posted Nov 20, 2025 14:07 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (6 responses)

> If it ain't broke, don't fix it.

While I engage in procrastinating myself, the direct consequence of this strategy is “it will break at the most inconvenient time and cost a lot more to fix than being proactive”.

Please don't abandon old hardware

Posted Nov 20, 2025 14:17 UTC (Thu) by pizza (subscriber, #46) [Link] (4 responses)

> While I engage in procrastinating myself, the direct consequence of this strategy is “it will break at the most inconvenient time and cost a lot more to fix than being proactive”.

That's true whether the hardware is brand new or decades old, and the "solution" is the same either way -- you keep a proportionally-sized (to the cost of downtime) inventory of spare parts.

Please don't abandon old hardware

Posted Nov 21, 2025 10:27 UTC (Fri) by taladar (subscriber, #68407) [Link] (3 responses)

Breaking doesn't always mean hardware failures. It can also mean lack of compliance with some new regulation, lack of implementations for some new network protocol or crypto algorithm,...

Please don't abandon old hardware

Posted Nov 28, 2025 3:04 UTC (Fri) by jmalcolm (subscriber, #8876) [Link] (2 responses)

> some new network protocol or crypto algorithm

You just made the case for using up-to-date software. It is not a strike against older hardware though.

I have a 2009 Macbook Pro that has Linux kernel 6.17 on it.
- It uses bcachefs with lz4 compression on the primary partition.
- OpenSSL is 3.6.0.
- libnl is 3.11.0
- OpenSSH is 10.2
- StrongSwan is 6.0.2
- It supports OpenGL 4.5 and Vulkan 1.4.334

What network protocol or crypto algorithm does this system not support?

Please don't abandon old hardware

Posted Nov 28, 2025 7:07 UTC (Fri) by mb (subscriber, #50428) [Link] (1 responses)

See?
Isn't it nice that the people writing these software packages still support your ancient hardware?

This is not automatic or free, though. It causes work and constraints for the developers and maintainers. Especially in low level components like the kernel or algorithm implementations in assembly code.

If a software once supported your hardware 20 years ago, then that does *not* mean it will automatically support it in the current version. This takes effort. This costs time and money.

If new software that didn't exist 20 years ago still runs on your old hardware, that is usually due to a combination of luck (the developer never tested it on your hardware) and the toolchains still supporting the ancient hardware. Which means time and money was spent.

There is a limit to how old the hardware is that projects are willing to support.

Please don't abandon old hardware

Posted Nov 28, 2025 10:43 UTC (Fri) by farnz (subscriber, #17727) [Link]

But note that the limit to how old the hardware a project will support partially depends on the developers available to that project, and what they choose to care about. If everyone working on a project only cares about AWS Graviton4 instances, then it's just luck if it works on something else; if you really want that project to work on your hardware (older, or current), you need to step up and do the work that they're not doing, so that your hardware is supported.

And once your hardware gets old enough, that work can include not just coding, but also supplying build machines, remote access to test boxes, and so on - it can get harder and harder to sustain older hardware.

Please don't abandon old hardware

Posted Nov 28, 2025 2:49 UTC (Fri) by jmalcolm (subscriber, #8876) [Link]

Much of my older hardware is much less likely to break than my newer hardware. A lot of my older hardware was very premium when it was new and it carries that level of quality into the present. This often makes it a lot nicer to use with things like great keyboards and attractive screens. It also makes it less likely to break.

I have 2009 era laptops in much better shape than 4 year old laptops of lower quality. Cracked cases, worn out hinges, broken keys, and flaky charge ports are all examples of things that happen on my newer hardware but not my older stuff.

In fact, I would say that build quality is one of the reasons that I use my older hardware more than my new stuff.

Please don't abandon old hardware

Posted Nov 20, 2025 14:14 UTC (Thu) by josh (subscriber, #17465) [Link] (7 responses)

> What's wrong with running 2025 software on pre-2007 hardware?

Demanding that the people working on 2025 software support your pre-2007 hardware.

Please don't abandon old hardware

Posted Nov 20, 2025 14:56 UTC (Thu) by farnz (subscriber, #17727) [Link] (6 responses)

Note, though, that there's a huge difference between demanding that people working on 2025 software support your pre-2007 hardware, and doing the work needed to ensure that someone working on 2025 software for 2025 hardware doesn't break your pre-2007 hardware in the process.

The former is a lot more visible, but it's just contributing noise. The latter is hard work (you could, for example, end up maintaining a compiler backend, kernel support, and a Debian port with an associated build farm for your pre-2007 hardware), but in the worst case it doesn't slow down the people working on 2025 software, and in the best case it helps them, because you find bugs that would eventually show up on 2025 hardware before they hurt.

Please don't abandon old hardware

Posted Nov 21, 2025 10:33 UTC (Fri) by taladar (subscriber, #68407) [Link] (3 responses)

I disagree that in the worst case it doesn't slow other people down. There is an inherent complexity to solving every single problem for, say, 20 architectures instead of just 10, and that cannot be split up to be handled by a separate maintainer. Critical software-architecture decisions need to be made in a way that works on all hardware architectures, and often cannot just be split up into a maintainer per hardware architecture.

Please don't abandon old hardware

Posted Nov 21, 2025 11:55 UTC (Fri) by farnz (subscriber, #17727) [Link] (2 responses)

IME, that's not actually an issue. Architectures tend to fall into groups that are "the same" for any given problem, and thus the only time there's a new issue from a new architecture is when it doesn't fall into an existing group. And my experience is that once you have two distinct architectures, you have all the groups you need; lots of variance exists, but it's usually in places like atomics where the compiler will exploit it if the architecture doesn't, so you need it fixed anyway.

Please don't abandon old hardware

Posted Nov 21, 2025 14:19 UTC (Fri) by smcv (subscriber, #53363) [Link]

> Architectures tend to fall into groups that are "the same" for any given problem

A corollary of this is that the strongest push to stop supporting older systems tends to be where the size of the group is small (perhaps only one architecture) or where the total user-base of several affected architectures is small, meaning that effort spent on supporting a group of problematic architectures only benefits a small number of users: in that situation, it's easy to reach a situation where the effort being spent is out of proportion to the benefit.

Examples in Debian include raising the i386 baseline from i686 to i686+MMX+SSE+SSE2 (eliminating the use of the i387 FPU, which means i386 stops having its own unique floating-point quirks), and raising the 32-bit ARM baseline from ARMv5 to ARMv7 (eliminating armel, which was the only release architecture without atomic intrinsics). Both of those reduce the effort required to support the remaining machines of the relevant architecture, which makes it more likely for the effort/result ratio to remain favourable.

Please don't abandon old hardware

Posted Nov 21, 2025 14:30 UTC (Fri) by smcv (subscriber, #53363) [Link]

> my experience is that once you have two distinct architectures, you have all the groups you need

That has not been my experience. Some of the more obvious ways to divide them are: endianness (big/little), word size (ILP32/LP64), signedness of char, older i386's unique excess-precision FPU, and older ARM's unique floating-point memory layout.

For example ppc64el and arm64 are in the same equivalence class for all of those, and so is x86_64 for everything except char signedness.

And that's before we get into more subtle points like presence or absence of atomic intrinsics, page size, unaligned access behaviour (minor slowdown/major slowdown/crash/silently wrong answers), whether specific non-portable runtime components like JavaScript engines' JIT compilers have been ported or not, whether specific toolchain components like LLVM and rustc have been ported or not, and whether there is a working software graphics driver suitable for running the unit tests of GUI software.

Please don't abandon old hardware

Posted Nov 22, 2025 17:09 UTC (Sat) by hsivonen (subscriber, #91034) [Link] (1 responses)

> Note, though, that there's a huge difference between demanding that people working on 2025 software support your pre-2007 hardware, and doing the work needed to ensure that someone working on 2025 software for 2025 hardware doesn't break your pre-2007 hardware in the process.

I just made (in Nightly, not yet in release) the HTML serializer used for the innerHTML getter and the HTML parser in Firefox SIMD-accelerated on aarch64 (unconditionally) and on x86_64 when BMI1 (and, by implication, AVX1) is available (popcount and count trailing zeros pair nicely with _mm_movemask_epi8).

Making the same x86_64 binaries deal with both the AVX+BMI-available and -unavailable situations was annoying, and it seems obvious to me that current x86_64 hardware, or even x86_64 hardware from a decade ago, isn't used to its full potential due to this added hassle.

Also, in this process I happened to learn that all our x86_64 CI runners have AVX and BMI, except our Android x86_64 CI runs on such an old version of QEMU that it doesn't have these, so the AVX and BMI unavailable case got tested in CI only by accident. Firefox is intended to run on the x86_64 baseline on desktop Linux (Windows, Mac, and Android themselves require more than the baseline), but we don't actually test that.

My code required SSSE3 to compile, but the generated code got better when compiling it for SSE 4.2, AVX, or AVX+BMI. I made AVX+BMI the requirement without which scalar-only code runs, since that makes the best use of most users' hardware and we don't even have ways to systematically assess the benefit of lower instruction set levels on older CPUs anymore.

Please don't abandon old hardware

Posted Nov 24, 2025 11:45 UTC (Mon) by farnz (subscriber, #17727) [Link]

The difference I was calling out is the difference between someone telling you that you must account for x86-64 machines without BMI1, both when you write code and when Mozilla builds official binaries, and someone following your BMI1-requiring changes and doing the work to allow someone building Firefox for x86-64 to build and run on a machine without BMI1.

As you've found, if you must account for lack of BMI1 yourself, and in your official binaries, it's a huge pain; you gain a lot from being able to simply ignore systems that do not have AVX and BMI1 (which includes some relatively recent Atom, Celeron and Pentium processors from Intel, because they used ISA differences as market segmentation). Making you care about systems without it makes the software worse for everyone.

But someone who's willing to come through after you've made the official builds and CIs depend on a modern instruction set, and do the work to stand up an unofficial build and CI that doesn't depend on AVX and BMI1, runs on older CPUs, and doesn't impose a burden on you working on things for modern hardware, is a very different situation. There, you get to ignore the existence of systems without (say) AVX2 and BMI2 and nobody complains if you break them; instead, someone comes through after you, and carefully changes things so that your changes still benefit people with modern CPUs, but they can still build and run on older CPUs.

FWIW, I've done the thing of changing the default build from -march=x86-64 -mtune=$more_modern_cpu to -march=$more_modern_cpu at two different jobs, and observed that you get a lot of code that doesn't improve (measured both by joules consumed and by time taken), small speedups (1% to 5% in time, 1% to 10% in joules consumed) all over the place with no particularly obvious cause, and a few big improvements (over 25% in either time or joules consumed) where autovectorization can do more than before.

Can't afford modern hardware

Posted Nov 20, 2025 22:12 UTC (Thu) by pabs (subscriber, #43278) [Link]

Not quite the same year, but I run 2013 hardware that I found in a dumpster, and I can't afford to replace it. I'm still checking dumpsters but haven't found anything newer yet.

Please don't abandon old hardware

Posted Nov 28, 2025 2:37 UTC (Fri) by jmalcolm (subscriber, #8876) [Link]

> What is the use of pre-2007 hardware that runs 2025 software

I confess, I do not understand this question. The answer would seem to be the same as "what is the use of 2025 software" and I cannot imagine that as a real question.

I can only imagine you are actually asking "why not use newer than 2006 hardware to run your 2025 software"? That seems like a reasonable question. But here are some answers.

- you only have pre-2007 hardware available
- your newer hardware is being used for other things (and so you only have pre-2007 hardware available)

And of course my answer which is, for A LOT of 2025 software older hardware runs it just fine. Not only runs it well enough to be useful but well enough to not make me want anything better.

I have a multi-story home. The best computer hardware is at the top (where I work). The hardware available on lower levels is older. Very often, I start working on older hardware and find that it works well enough that I am not motivated to go up one floor to work on something newer.

When you are using web apps, remotely accessing other servers, deploying to the cloud, using LLMs either in a browser or at the command line, or just doing basic office productivity stuff like reading or creating basic docs, checking email, or even video conferencing, you can get completely lost in the flow of getting stuff done and forget that you are on an older machine. I am typing this right now on a 2012 Macbook Pro and it is not the oldest machine I use more or less daily. Not by a long shot.

There is certainly nothing about what I am doing right now that makes me think I need a faster machine.

However, if I had to use older software for any of the things above, I would immediately run into limitations that would drive me crazy. What I want is exactly 2025 software on all of my machines.

Please don't abandon old hardware

Posted Nov 21, 2025 12:26 UTC (Fri) by LionsPhil (subscriber, #121073) [Link] (1 responses)

This. It's so much more true this side of the millennium where the hardware upgrade treadmill has eased off. I don't expect to run anything but period OSes on my hot old Athlon XP powerhouse-of-its-day, but the 15-year-old ThinkPad x201t has a multi-core multi-GHz 64-bit CPU and enough RAM to handle a webpage...and to do that safely it needs an OS stack and browser without period CVEs. :)

Please don't abandon old hardware

Posted Nov 21, 2025 14:19 UTC (Fri) by amacater (subscriber, #790) [Link]

As someone who spent last weekend in a room with the Debian CD/images release team: if your Thinkpad X201 is still working, you are *very* lucky and should expect hardware to break soon. Even with relatively light use, we found that another X201 had died this time. WiFi dying, fans, power supplies, or machines just dying outright.

We're considering raising the baseline on release testing weekends to more modern Thinkpads because we have to. My bag last week contained a T420, X260, X270, T470, T490 ... and an Intel Mac.

Please don't abandon old hardware

Posted Nov 21, 2025 21:02 UTC (Fri) by jond (subscriber, #37669) [Link] (1 responses)

Debian needs more people doing work for this to happen. None of this is a new situation: I own hardware for which Debian discontinued support long ago (powerpc). Debian has been dropping ports for at least 18 years (m68k, Etch EOL, 2007).

Please don't abandon old hardware

Posted Nov 21, 2025 21:07 UTC (Fri) by jond (subscriber, #37669) [Link]

I've just remembered that Etch was also the end of support for pre-UltraSparc Sparcstations.

Baseline updates bite hard

Posted Nov 20, 2025 1:00 UTC (Thu) by pabs (subscriber, #43278) [Link]

I've been bitten by Debian's baseline updates before. The router my ISP ships uses a MIPS r1 Broadcom SoC (one that violates the GPL and BSD licenses, at that). I used to be able to run the Debian mips port in a chroot on it, but then Debian bumped the baseline past MIPS r1. Later they dropped the port entirely. So now I would have to run outdated, insecure versions of Debian.

Requiring crypto instructions

Posted Nov 20, 2025 12:44 UTC (Thu) by DemiMarie (subscriber, #164188) [Link] (3 responses)

I’d set the x86_64 requirement to “You have AES-NI and RDRAND”. That avoids timing leakage in crypto code.
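(A baseline like that is easy to probe at run time. A minimal sketch, assuming a Linux-style /proc/cpuinfo whose "flags" line lists "aes" for AES-NI and "rdrand" for RDRAND; on a real system the text would come from reading /proc/cpuinfo itself:)

```python
# Minimal sketch: parse a /proc/cpuinfo-style "flags" line and check
# for the aes (AES-NI) and rdrand feature bits.
def cpu_flags(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # "flags\t: fpu sse2 aes rdrand ..." -> set of flag names
            return set(line.split(":", 1)[1].split())
    return set()

sample = "flags\t\t: fpu sse2 aes rdrand"
flags = cpu_flags(sample)
print("aes" in flags and "rdrand" in flags)  # -> True for this sample
```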

Personally, I’d drop support when the vendor did, but that would be very unpopular.

Requiring crypto instructions

Posted Nov 20, 2025 13:28 UTC (Thu) by chris_se (subscriber, #99706) [Link] (2 responses)

> I’d set the x86_64 requirement to “You have AES-NI and RDRAND”. That avoids timing leakage in crypto code.

RDRAND/RDSEED is broken in a significant number of CPUs (even current ones), enough that a lot of reasonable software now uses it only as an auxiliary source of entropy, not the primary one. AMD's Zen 5 architecture has a bug in this regard, see [1], and that's just the most recent example; not every CPU ever released with a RDRAND/RDSEED problem could be, or was, fixed.

Given _how_ many times CPU manufacturers have screwed up this instruction so far, I'd rather trust specialized open source crypto code designed for this, than the constantly failing black boxes that are modern CPUs. (And I don't even think that this is in any way malicious, I just think that CPU designers are not experts on RNG and thus make mistakes, especially since this is the _one_ part of the CPU that's not supposed to be deterministic.)

[1] https://www.amd.com/en/resources/product-security/bulleti...
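(The "auxiliary source" approach amounts to hashing the hardware output together with an independent primary source, so a stuck or biased RDRAND cannot reduce the entropy of the result. A toy illustration of the idea, not how any particular kernel or library actually implements its pool:)

```python
import hashlib
import os

def mix_entropy(primary: bytes, auxiliary: bytes) -> bytes:
    # Hash both inputs together: even if the auxiliary source is
    # completely broken (e.g. Zen 2's RDRAND returning all-one bits),
    # the output is still unpredictable as long as the primary is.
    return hashlib.sha256(primary + auxiliary).digest()

stuck_rdrand = b"\xff" * 32               # a failed hardware RNG
seed = mix_entropy(os.urandom(32), stuck_rdrand)
print(len(seed))  # 32: the broken input cannot shrink the output
```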

Requiring crypto instructions

Posted Nov 20, 2025 17:10 UTC (Thu) by DemiMarie (subscriber, #164188) [Link] (1 responses)

What are the other known RDRAND/RDSEED vulns?

Requiring crypto instructions

Posted Nov 21, 2025 9:13 UTC (Fri) by chris_se (subscriber, #99706) [Link]

> What are the other known RDRAND/RDSEED vulns?

AMD in particular has had a horrible track record with this instruction. In addition to the most recent (still awaiting a microcode fix) issue:

Back in ~2014 (fails after suspend/resume):
https://bugzilla.kernel.org/show_bug.cgi?id=85911

Zen2 (2019/2020), i.e. Ryzen 3000 series (RDRAND always returning just 1-bits):
https://arstechnica.com/gadgets/2019/10/how-a-months-old-...

But Intel is not innocent in this either: they had a bug in CPUs ranging from 2012 to 2019 (CVE-2020-0543, random state could be leaked):
https://www.intel.com/content/www/us/en/security-center/a...
The microcode mitigations resulted in drastic performance decreases (see e.g. <https://www.phoronix.com/news/RdRand-3-Percent>)
(To be fair to Intel: their bug was not remotely as bad as the AMD ones, but still...)

Apart from the most recent AMD one I mentioned in the previous post, the other ones have (to my knowledge) since been fixed, but I hope you understand why I'm not very confident in using those instructions on their own, without any other source of entropy.


Copyright © 2025, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds