The 6.10 kernel has been released
So the final week was perhaps not quite as quiet as the preceding ones, which I don't love - but it also wasn't noisy enough to warrant an extra rc.
Changes in 6.10 include the removal of support for some ancient Alpha CPUs, shadow-stack support for the x32 sub-architecture, Rust-language support on RISC-V systems, support for some Windows NT synchronization primitives (though that support is marked "broken" in 6.10), the mseal() system call, fsverity support in the FUSE filesystem subsystem, ioctl() support in the Landlock security module, the memory-allocation profiling subsystem, and more.
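For readers curious what the new system call looks like in practice, here is a minimal sketch of sealing a mapping with mseal(); it assumes an x86-64 machine running 6.10 (where the syscall number is 462 and the flags argument must currently be zero) and is illustrative rather than a definitive usage guide.

    /* Minimal sketch: seal a read-only mapping with mseal(), then show that
     * later attempts to change it fail.  Assumes x86-64 and Linux 6.10. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_mseal
    #define __NR_mseal 462          /* x86-64 syscall number in 6.10 */
    #endif

    int main(void)
    {
        size_t len = 4096;
        void *p = mmap(NULL, len, PROT_READ,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Seal the mapping; flags must be 0 in 6.10. */
        if (syscall(__NR_mseal, p, len, 0) != 0) {
            perror("mseal");
            return 1;
        }

        /* Changing the protection of a sealed mapping is expected to fail
         * with EPERM, as is unmapping or remapping it. */
        if (mprotect(p, len, PROT_READ | PROT_WRITE) != 0)
            perror("mprotect on sealed mapping (expected to fail)");

        return 0;
    }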
See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 6.10 page for more details.
x32 is still being actively worked on?
Posted Jul 15, 2024 7:19 UTC (Mon) by pm215 (subscriber, #98099) [Link] (16 responses)
x32 is still being actively worked on?
Posted Jul 15, 2024 8:26 UTC (Mon) by Wol (subscriber, #4433) [Link] (1 responses)
Cheers,
Wol
x32 is still being actively worked on?
Posted Jul 15, 2024 8:31 UTC (Mon) by mb (subscriber, #50428) [Link]
x32 is a 32-bit ABI with the x86-64 instruction set on top of an x86-64 kernel. It's not the x86-32 compat ABI.
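A quick way to see the distinction mb describes - assuming a toolchain and distribution built with x32 multilib support, which is increasingly rare - is to build the same trivial program for the three ABIs and compare the data model; only the -mx32 build combines x86-64 instructions with 32-bit pointers:

    /* Illustration: the same source built three ways.
     *   gcc -m64  abi.c && ./a.out   ->  pointer: 8, long: 8  (x86-64)
     *   gcc -mx32 abi.c && ./a.out   ->  pointer: 4, long: 4  (x86-64 instructions, 32-bit pointers)
     *   gcc -m32  abi.c && ./a.out   ->  pointer: 4, long: 4  (i386 instructions)
     */
    #include <stdio.h>

    int main(void)
    {
        printf("pointer: %zu, long: %zu\n", sizeof(void *), sizeof(long));
        return 0;
    }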
x32 is still being actively worked on?
Posted Jul 15, 2024 13:48 UTC (Mon) by arnd (subscriber, #8866) [Link]
x32 is still being actively worked on?
Posted Jul 15, 2024 15:26 UTC (Mon) by chadcatlett (subscriber, #126771) [Link]
Why old architectures are sometimes important
Posted Jul 15, 2024 16:19 UTC (Mon) by jjs (guest, #10315) [Link] (11 responses)
1. Those with systems out in really remote locations (think monitoring systems in the wilderness, monitoring of drilling, etc.) or embedded in various devices they need to keep running. They may have a wifi or telephone connection, so they can be updated. But the device itself is very hard to replace. So the idea is to keep it running as long as possible.
2. Businesses running on old binaries. I was amazed in my career to discover how much old binaries are used in the business world. It works, why change it? IBM, DEC, and HP had systems designed not only for modern work, but to run old binaries (IBM mainframes, DEC VAX, HP's 3000 systems) - just for this. Companies pay big money for systems that run their old binaries.
Often, they'd patch around it - putting software around the old binary to let newer things (web, etc.) interface with it - but in the end, that binary was still doing the work.
Of course, a) someone has to be willing to do the work, and b) someone has to be willing to actually fork over money to support that person. b) is where the problem comes in. Companies depending on old binaries should be willing to pay something to keep them running, or prepare to change over.
As I said, that was 20+ years ago. I'm not certain of the exact situation now. DEC is gone. HP got broken up. Processor technology has moved forward. Lots of old processors aren't even made any more. Even though some of the systems from back then have been replaced, I'd bet there are systems that were middle-aged (not old) then and are now the old systems that can't be replaced.
Addendum - at least the HP 3000 is still supported, though not by HP: https://en.wikipedia.org/wiki/HP_3000#Independent_support
Why old architectures are sometimes important
Posted Jul 15, 2024 16:23 UTC (Mon) by paulj (subscriber, #341) [Link] (5 responses)
PDP-11 still supported
Posted Jul 15, 2024 17:45 UTC (Mon) by farnz (subscriber, #17727) [Link] (4 responses)
FWIW, I still get approached from time to time by a nuclear power plant company that's looking for people who want to train to support process automation written in PAL-11R assembly (not even MACRO-11). They don't want to port to MACRO-11, let alone away from the PDP-11, because the code is certified, and even a single byte change is a big deal, but they do need to make changes (and recertify) every so often, because assumptions made in the 1970s are not necessarily true today.
PDP-11 still supported - thanks for the info
Posted Jul 15, 2024 20:02 UTC (Mon) by jjs (guest, #10315) [Link] (3 responses)
Recertification of power plant code
Posted Jul 16, 2024 9:35 UTC (Tue) by farnz (subscriber, #17727) [Link] (2 responses)
They want to stick with PAL-11R for exactly the reason you suggested; when they recertify, they look at the difference between the last certified system and the new system being certified. They only need to demonstrate that the changes are safe - so the smaller the changes, the easier recertification becomes.
So, switching from a PDP-11 to a Cortex-M4F with more compute and I/O than the PDP-11 is a huge deal; everything changes, so everything has to be recertified, from the hardware upwards. Sticking with a PDP-11 and switching from PAL-11R to MACRO-11 is also a big deal, because all the software needs rechecking, and to be able to recheck at the source level (not machine code), they'd have to get MACRO-11 qualified as producing predictable output, whereas PAL-11R is already a qualified tool. The same would apply to using a higher-level language like BLISS or C - you have to qualify the toolchain first, and that's expensive.
And this is all sufficiently expensive that it's cheaper to pay people to pick up a legacy assembly language and just update the legacy codebase for new regulatory and safety requirements (e.g. there was a flurry of activity after Fukushima) than it is to move to a newer language, let alone pay the cost of migrating to new hardware.
Recertification of power plant code
Posted Jul 18, 2024 4:29 UTC (Thu) by raven667 (subscriber, #5198) [Link] (1 responses)
Recertification of power plant code
Posted Jul 18, 2024 7:28 UTC (Thu) by farnz (subscriber, #17727) [Link]
A shift to MACRO-11 instead of PAL-11R isn't a big change - all of the existing source would assemble as-is, and the new features of MACRO-11 (which are all over 40 years old) would help when you're doing maintenance on the changes made to the system.
And the reason for stability is not that the job hasn't significantly changed - it has - it's that if you want the benefits of the modern control system, you have to pay for it. However, nuclear power plants are regulated such that you can't just say "there's a new system - pay to install it, because we're not supporting the old one" - if the company wants to upgrade you to the newer 68k-based control system, they have to pay the cost of replacement.
Why old architectures are sometimes important
Posted Jul 15, 2024 16:44 UTC (Mon) by geofft (subscriber, #59789) [Link] (1 responses)
x32 isn't an old architecture in a meaningful sense, though. It was launched around 2012 - it necessarily postdates x86-64, and only works on x86-64 processors, which is to say, any x32 machine can run x86-64 binaries equally well (but with perhaps a few percent more memory consumption).
But I see this argument about old architectures for things that actually are old architectures, and I don't get why this has any bearing on new versions of OSes.
For point 1, the argument for reliability of remote systems argues in favor of not upgrading them to the latest kernel or latest OS - it's not like you need new features, and so the risk calculus weighs against making changes that you don't really need. Linux 2.4 (for example) works every bit as well as Linux 2.4 worked twenty years ago; it's still the same code. If support for something is "removed" from Linux, it's only removed from new releases; it's still just as functional as it was yesterday in the releases that existed yesterday.
If you're worried about vulnerabilities, either tell yourself that your architecture is so rare that people aren't going to explore it, or more usefully, isolate your system from the public internet; even if you upgrade to the latest version of everything, the fact that you're running an obscure architecture means that there aren't "many eyes" looking at your setup, people are unlikely to implement state-of-the-art mitigations as well, etc. (Not to mention that your own code, probably, hasn't been seriously analyzed since before we were as cognizant of security vulnerabilities as we are today.)
For point 2, the same argument applies if you really are leaning into the position of "it works, why change it?". Even if you're wrapping the old binaries with modern APIs, you'll still want the old binaries to run in the environment where you're confident they run well. And if you're wrapping them with modern APIs, that gives you plenty of room to put modern authorization controls on access to the legacy systems and not worry about vulnerabilities.
If you're doing ongoing development on legacy software - which is fairly common - then chances are you will want to keep up with the broader industry, and that argues in favor of investing in modernization. If the business decides it's excited about AI, you'll quickly find that nobody is figuring out how to compile PyTorch for VAX. Besides, there are significant efficiency improvements that mean you'll save more on power than you'll spend on the new hardware. (And I've also seen tax incentives for upgrading even just older x86-64 machines to newer ones.)
If we were talking about proprietary software that is no longer available, I would completely agree. My Nintendo 3DS, for instance, still works as well as it did when I bought it, and it's frustrating that the eShop is no longer available. But we do have the code of every version of Linux ever released, and of every major FOSS distro, and nothing can take that away. So it seems perfectly fine for people who need to use old hardware to use the software that was best designed for that hardware, and let new software target new hardware.
Kernel upgrades to support old software, old hardware, or both
Posted Jul 15, 2024 19:53 UTC (Mon) by jjs (guest, #10315) [Link]
Regarding point 1 - actually, some of them DO want to upgrade the OS, even if they can't easily access the hardware directly. They may even upgrade the software itself. The key is that the existing hardware is where they need it, but isn't easily swapped out.
Regarding point 2 - yes, we're talking about old proprietary binaries. The machines they run on are breaking down, but the companies don't want to change the software because it works. Solution: run the old software on new hardware. Issue: the new hardware may have a different CPU, and the OS may be different. But if it runs the software, they're fine. If it doesn't, they're not.
Regarding the discussions in that BOF (and elsewhere - I've had these discussions many times): I'm expressing their requirements, at least as presented. One group wanted upgrades to the software (including the Linux kernel) on old hardware in the field. Another group wanted to run old binaries on new systems. Was it wise? Was it the best choice? That's separate from "this is what we want, how do you propose to supply it?"
One thing to think about - yes, 2.4 source code still exists and can be compiled. But who's providing support for it? The expressed interest was in upgrading to supported (now called LTS?) kernels. Which means periodically, they need to move kernel versions.
We see similar problems with Android and support for upgrades (slightly different problem, but still - how to upgrade the kernel, especially with proprietary drivers).
IBM, DEC, and HP (at least 20+ years ago) spent considerable effort to ensure those old binaries ran just fine on new hardware and OS versions. And companies paid big money to get that - which is why IBM, DEC, and HP provided the service. I've been out of that realm for quite a while, and have no idea if anyone does that anymore.
Why old architectures are sometimes important
Posted Jul 23, 2024 3:54 UTC (Tue) by ssmith32 (subscriber, #72404) [Link] (2 responses)
It's obviously not quite running on a PDP-11 - but there is plenty of old COBOL code out there. As you say, it still works, so why not?
I've seen COBOL business logic, wrapped with a C api, and invoked from Java web services via a CORBA orb.
My dad ported code off a mainframe running off spinning tape (not rust), onto a little server box.
Old code surviving into the long future
Posted Jul 23, 2024 8:15 UTC (Tue) by farnz (subscriber, #17727) [Link] (1 responses)
It's not just that the old code works; it's also that the extant requirements for that program are often "do what the old code does". There must have been better requirements when it was written, but those are now lost in the mists of time. As a result, maintenance comes down to "change the bit of the old code that doesn't meet new requirements, leaving the old ones alone".
Old code surviving into the long future
Posted Jul 23, 2024 9:02 UTC (Tue) by Wol (subscriber, #4433) [Link]
Are you sure :-)
The requirements I work to mostly seem to be "bodge it til it looks right". (Given that, for a lot of my work, there is no right answer, you'd think that was as good as any. And then I look at what my colleagues are doing and scream "for God's sake that just CAN'T give you a sensible answer!")
Cheers,
Wol
(much) Faster disk encryption
Posted Jul 16, 2024 22:57 UTC (Tue) by bearstech (subscriber, #160755) [Link]
Most gains are in the +30%..+150% range, across lots of CPU generations. Impressive stuff. Let's encrypt everything at rest, it's cheap!