Why old architectures are sometimes important
Posted Jul 15, 2024 16:44 UTC (Mon) by geofft (subscriber, #59789)
In reply to: Why old architectures are sometimes important by jjs
Parent article: The 6.10 kernel has been released
x32 isn't an old architecture in a meaningful sense, though. It was launched around 2012 - it necessarily postdates x86-64, and only works on x86-64 processors, which is to say, any x32 machine can run x86-64 binaries equally well (but with perhaps a few percent more memory consumption).
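To make the pointer-size point concrete, here's a trivial sketch (the file name is mine, and it assumes a toolchain and kernel built with x32 support, which not every distro ships): the same source compiled for the two ABIs shows exactly where the memory difference comes from.

    /* sizes.c - illustrate the LP64 (x86-64) vs. ILP32 (x32) data model.
     *
     * Build, assuming gcc with x32 support:
     *     gcc -m64  sizes.c -o sizes64    # ordinary x86-64 binary
     *     gcc -mx32 sizes.c -o sizes32    # x32 binary, same CPU
     */
    #include <stdio.h>

    int main(void)
    {
        /* 8 bytes on x86-64, 4 bytes on x32 - same registers, smaller pointers. */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        return 0;
    }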
But I see this argument about old architectures for things that actually are old architectures, and I don't get why this has any bearing on new versions of OSes.
For point 1, the argument for reliability of remote systems argues in favor of not upgrading them to the latest kernel or latest OS - it's not like you need new features, and so the risk calculus weighs against making changes that you don't really need. Linux 2.4 (for example) works every bit as well as Linux 2.4 worked twenty years ago; it's still the same code. If support for something is "removed" from Linux, it's only removed from new releases; it's still just as functional as it was yesterday in the releases that existed yesterday.
If you're worried about vulnerabilities, either tell yourself that your architecture is so rare that people aren't going to explore it, or more usefully, isolate your system from the public internet; even if you upgrade to the latest version of everything, the fact that you're running an obscure architecture means that there aren't "many eyes" looking at your setup, people are unlikely to implement state-of-the-art mitigations as well, etc. (Not to mention that your own code, probably, hasn't been seriously analyzed since before we were as cognizant of security vulnerabilities as we are today.)
For point 2, the same argument applies if you really are leaning into the position of "it works, why change it?". Even if you're wrapping the old binaries with modern APIs, you'll still want the old binaries to run in the environment where you're confident they run well. And if you're wrapping them with modern APIs, that gives you plenty of room to put modern authorization controls on access to the legacy systems and not worry about vulnerabilities.
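As a rough illustration of that wrapping idea, here's a minimal sketch of a front end that gates access to an unmodified legacy binary behind a modern authorization check. The binary path and the AUTH_TOKEN variable are hypothetical placeholders, and a real deployment would use a proper credential check rather than a plain string compare.

    /* wrapper.c - refuse to invoke the legacy binary without a valid token. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define LEGACY_BINARY  "/opt/legacy/report-generator"  /* hypothetical path */
    #define EXPECTED_TOKEN "replace-with-real-secret"       /* e.g. from a vault */

    int main(int argc, char *argv[])
    {
        const char *token = getenv("AUTH_TOKEN");

        /* The authorization lives entirely in the modern wrapper;
         * the legacy binary itself is never touched. */
        if (token == NULL || strcmp(token, EXPECTED_TOKEN) != 0) {
            fprintf(stderr, "unauthorized\n");
            return 1;
        }

        /* Hand the request straight to the legacy binary, forwarding
         * whatever arguments the modern front end received. */
        execv(LEGACY_BINARY, argv);
        perror("execv");  /* only reached if the exec failed */
        return 1;
    }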
If you're doing ongoing development on legacy software - which is fairly common - then chances are you will want to keep up with the broader industry, and that argues in favor of investing in modernization. If the business decides it's excited about AI, you'll quickly find that nobody is figuring out how to compile PyTorch for VAX. Besides, there are significant efficiency improvements that mean you'll save more on power than you'll spend on the new hardware. (And I've also seen tax incentives for upgrading even just older x86-64 machines to newer ones.)
If we were talking about proprietary software that is no longer available, I would completely agree. My Nintendo 3DS, for instance, still works as well as it did when I bought it, and it's frustrating that the eShop is no longer available. But we do have the code of every version of Linux ever released, and of every major FOSS distro, and nothing can take that away. So it seems perfectly fine for people who need to use old hardware to use the software that was best designed for that hardware, and to let new software target new hardware.
Kernel upgrades to support old software, old hardware, or both
Posted Jul 15, 2024 19:53 UTC (Mon) by jjs (guest, #10315)
Regarding point 1 - actually, some of them DO want to upgrade the OS, even if they can't directly access the hardware easily. They may even upgrade the application software itself. The key is that the existing hardware is already where they need it, but it isn't easily swapped out.
Regarding point 2 - yes, we're talking about old proprietary binaries. The machines they run on are breaking down, but nobody wants to change the software because it works. Solution: run the old software on new hardware. Issue: the new hardware may have a different CPU, and the OS may be different. But if it runs the software, they're fine. If it doesn't, they're not.
Regarding the discussions in that BOF (and elsewhere; I've had these discussions many times): I'm expressing their requirements, at least as presented. One group wanted upgrades to the software (including the Linux kernel) on old hardware in the field. Another group wanted to run old binaries on new systems. Was it wise? Was it the best choice? That's separate from "this is what we want, how do you propose to supply it?"
One thing to think about - yes, the 2.4 source code still exists and can be compiled. But who's providing support for it? The expressed interest was in upgrading to supported (now called LTS?) kernels. Which means that, periodically, they need to move to a newer kernel version.
We see similar problems with Android and support for upgrades (slightly different problem, but still - how to upgrade the kernel, especially with proprietary drivers).
IBM, DEC, and HP (at least 20+ years ago) spent considerable effort to ensure those old binaries ran just fine on new hardware and OS releases. And companies paid big money to get that, which is why IBM, DEC, and HP provided the service. I've been out of that realm for quite a while and have no idea whether anyone still offers it.