
LWN.net Weekly Edition for September 25, 2025

Welcome to the LWN.net Weekly Edition for September 25, 2025

This edition contains the following feature content:

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

An unstable Debian stable update

By Joe Brockmeier
September 23, 2025

A bug in a recent release of systemd's network manager caused headaches for people managing systems that have a virtual LAN (VLAN) interface on a bridge, something one might want to do, for example, when configuring network interfaces for virtual machines. The bug affected several Debian users when upgrading the systemd package from v257.7-1 to v257.8-1. The updated package is part of the Debian 13.1 release, and the bug has snared enough users to cause a minor stir—due as much to the maintainer's response as to the bug itself.

The bug in systemd-networkd was first reported to the systemd project on August 7 by Kenneth J. Miller, who encountered it in an Arch Linux package. The problem is a regression in the systemd 257.8 release; systemd-networkd will crash with a segmentation fault if a system has a bridge interface configured with a VLAN. This is not a common configuration, but it is supported by systemd-networkd and explained in its documentation. Certainly, users who have this configuration working would expect it to continue doing so when updating their system.
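
The configuration in question can be sketched with a few small units of the kind described in the systemd.netdev(5) and systemd.network(5) man pages (the interface names and file names here are hypothetical):

```ini
# 10-br0.netdev: define the bridge device
[NetDev]
Name=br0
Kind=bridge

# 20-vlan10.netdev: define a VLAN device with ID 10
[NetDev]
Name=vlan10
Kind=vlan

[VLAN]
Id=10

# 30-eth0.network: attach a physical NIC to the bridge
[Match]
Name=eth0

[Network]
Bridge=br0

# 40-br0.network: put the VLAN interface on top of the bridge
[Match]
Name=br0

[Network]
VLAN=vlan10
```

Each stanza would live in its own file under /etc/systemd/network/; it is this bridge-plus-VLAN combination that triggered the crash in systemd-networkd 257.8.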

The regression found its way into version 257.8-1~deb13u1 of the systemd package, which was uploaded to the proposed-updates repository for testing. Timo Weingärtner filed a bug reporting the problem against the Debian systemd package on August 30, before version 257.8-1 had moved into Debian's stable updates repository.

He explained his setup, offered to provide more detailed configuration information in private, and noted that the upgraded systemd-networkd worked fine on systems without a VLAN configured on a bridge. He added that he was reporting the bug in the hopes of stopping the new package from reaching the stable archive. Luca Boccassi, a systemd developer and one of the maintainers for the Debian package, as well as the most active uploader for it, responded on September 1 and asked Weingärtner to provide more information.

Move to stable

The systemd 257.8-1 package did, unfortunately, move to the stable updates repository, and it was included in the Debian 13.1 release on September 6. That led to several people chiming in on the bug report to confirm that it was an issue and urging Boccassi to increase the severity of the bug. Tim Small asked if there was a way to pull the package from the archive. He also expressed surprise that it had been included in the update since it had been reported "a good week before Debian 13.1 went out".

Adam D. Barratt said that a new updated package was needed; it could be released to the stable-updates repository if the release managers agreed. He noted that the release team had no way of knowing that the 257.8 package should not have been included in 13.1:

It was reported at a non release critical severity, tagged "moreinfo" and neither the submitter nor the maintainer (nor anyone else) raised it to the Release Team's attention

Boccassi set the bug's priority to "minor" and replied that there was no need to disturb the release managers. He also added a somewhat dismissive comment about the impact of the bug itself:

This is just a minor issue with a particular corner case of a custom config of an optional component. Anybody who is unable to deal with that should just stick to the default Debian components. The next stable update in ~2 months will contain a fix.

What Boccassi is referring to is Debian's default method of configuring networking; there are four recommended ways to configure networking for Debian systems. The default for servers is to use /etc/network/interfaces, and NetworkManager is the default for desktop configurations. LWN covered a discussion about those defaults in September 2024.

Response

Kevin Otte called the response insulting and said that "my network failing out from under me during a routine 'apt upgrade'" was hardly a minor issue. He added that the configuration is a normal feature that was introduced with Linux 3.8 and well-documented in the systemd-networkd man page. "This is hardly a corner case nor a custom configuration." Otte also objected to the idea that systemd-networkd is an optional component, since it is part of systemd and included in Debian's base package set.

Any package maintainer that can't adequately maintain their package, especially one as critical as systemd, and essentially tells the user "sucks to be you" should step down.

Others voiced similar sentiments; Jonathan Wiltshire, one of Debian's stable release managers (SRMs), disagreed with Boccassi's position. He pointed out that systemd-networkd is called out in the Debian networking reference, and the configuration that triggers the bug is explicitly included in systemd's documentation. Whether Boccassi intended it or not, maintaining systemd-networkd makes him responsible for a critical component for many people using Debian in a production scenario. "A reasonable user would expect this to work without issue." He also said that this kind of bug was absolutely something the SRMs would want to know about. "We would rather know and disregard than not know at all."

Daniel Baumann sent a message to the debian-devel mailing list about the bug, saying that it had taken several servers offline at his work, "seriously harming the reputation of Debian" and causing his management to question further contributions to Debian during working hours. He said that the fix for the bug was trivial and should be applied as soon as possible; he also asked Boccassi if he had reconsidered the priority of the bug.

On September 8, Boccassi doubled down on his assertion that the configuration in question was a "niche corner case":

Regardless of what one's opinions may be, the fact is that the default in Debian is network-manager for desktops and ifupdown for the rest, so anybody demanding nothing less than perfection would do well to stick to those defaults instead of being adventurous and then throwing abuse when experimental things don't work.

He said that he had already prepared an update with a fix for the bug to go into Debian's proposed updates queue but had not uploaded it due to "incredibly demotivating" comments from other members of the Debian project. He would upload the update when "things calm down to a sensible level". If Wiltshire wanted to push the new package as an update before the 13.2 release, Boccassi said, then he could do so.

Fix uploaded

Philipp Kern—a former Debian SRM—responded on September 9; he reported that a non-maintainer upload with a fix for the bug was due to be installed in Debian's archive. That package is now available in Debian's stable updates repository and has been accepted for 13.2.

It is understandable that users would be upset by such a bug impacting their systems; having to recover a remote system that has lost networking is, after all, not much fun for most people. It might, in fact, ruin one's whole day. However, some of the responses and accusations were out of line—even taking into account the dismissive nature of some of Boccassi's replies.

This is not to say that there is no room for improvement or that it would be a bad idea for other Debian developers to be more active in maintaining such a critical package. There are five other people listed as maintainers of the systemd package aside from Boccassi. He, however, appears to be the only person making regular maintainer uploads for the package in recent times, though the changelog does note a number of contributions from others.

The Debian project strives to provide a stable, high-quality operating system; it succeeds in this the majority of the time. Its distribution is developed by volunteers who typically have day jobs and many competing responsibilities outside of their work on the distribution. Ideally, Debian's maintainers would always respond to bugs effectively, and users could assume that stable updates will, in fact, be stable. It is not reasonable to expect the same level of response from volunteers that one gets from a commercial Linux, though. Negative feedback does not encourage open-source contributors to do better; it puts them on the defensive and creates a hostile atmosphere that too often leads to developer burnout.

Comments (137 posted)

Canceling asynchronous Rust

By Daroc Alden
September 24, 2025

RustConf

Asynchronous Rust code has what Rain Paharia calls a "universal cancellation protocol", meaning that any asynchronous code can be interrupted in the same way. They claim that this is both a useful feature when used deliberately, and a source of errors when done by accident. They spoke about this problem at RustConf 2025, offering a handful of techniques to avoid introducing bugs into asynchronous Rust code.

Paharia started their talk (slides) with a question for the Rust programmers in the audience. Suppose that one wanted to use the tokio library's asynchronous queues (called channels) to read from a channel in a loop until the channel is closed, raising an error if the channel is ever empty for too long. Is this code the correct way to do it?

    loop {
        match timeout(Duration::from_secs(5), rx.recv()).await {
            Ok(Ok(msg)) => process(msg),
            Ok(Err(_)) => return,
            Err(_) => println!("No messages for 5 seconds"),
        }
    }

As is typical of code shown on slides, this example is somewhat terse. In this case, timeout() returns an Err value if it hits the time limit, or an Ok value wrapping the return value from whatever operation is being subjected to a time limit — in this case, rx.recv(). That returns either an error (when the channel is closed unexpectedly) or an Ok value containing the message read from the channel. In a real application, the programmer would probably want to handle the channel closing more explicitly; but as far as example code goes, it demonstrates the basic idea of wrapping a read from a channel in a timeout.

The members of the audience were hesitant to answer, because anytime a speaker asks you if code is correct, it seems like they're about to point out a subtle problem. But, after a moment of tension, Paharia revealed that yes, this is completely correct code. What about the corresponding code for the other end of the channel? They put up another slide, and asked whether anyone thought this was the right way to write to a channel:

    loop {
        let msg = next_message();
        match timeout(Duration::from_secs(5), tx.send(msg)).await {
            Ok(Ok(_)) => println!("Sent successfully"),
            Ok(Err(_)) => return,
            Err(_) => println!("No space in queue for 5 seconds"),
        }
    }

This time, people were a bit more confident: no, that code has a bug. If the send times out (taking the Err(_) branch) or the channel is closed (the Ok(Err(_)) branch), the code drops msg, potentially losing data. This is a problem, Paharia said: because of the way asynchronous Rust is currently designed, it's far too easy to make a mistake like this by accident.

They wanted to emphasize that this wasn't to claim that asynchronous Rust is bad. "I love async!" They gave a talk at RustConf 2023 about how it's a great fit for writing correct POSIX signal-handling code, even. At their job at Oxide, they use asynchronous Rust throughout the company's software stack. Asynchronous Rust is great, so they've written a lot of it — which is exactly why they've run into so many problems around canceling asynchronous tasks, and why they want to share ways to minimize those pitfalls.

[Rain Paharia]

Asynchronous Rust is built around the Future trait, which represents a computation that can be polled and suspended. When the programmer writes an asynchronous function, the compiler turns it into an enumeration representing the possible states of a state machine. That enumeration implements Future, which provides a common interface for driving state machines forward. async functions make it ergonomic to stick those little state machines together into bigger state machines using await.

Paharia likes this design, but says that it can come as a surprise to programmers moving to Rust from JavaScript or Go, where the equivalent language mechanism is implemented with green threads. In those languages, creating the equivalent of a Future (called a promise) starts executing the code right away in a separate thread, and the promise is just a handle for retrieving the eventual result. In Rust, creating a Future doesn't do anything — it just creates the state machine in memory. Nothing actually happens until the programmer calls the .poll() method.
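
That laziness can be seen with nothing but the standard library, by hand-rolling a do-nothing waker and calling poll() directly (a sketch; in real programs a runtime does the polling):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing, just so we can call poll() by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

async fn work() -> u32 {
    // This body is part of the compiler-generated state machine;
    // it does not run when work() is called, only when polled.
    41 + 1
}

fn main() {
    let fut = work(); // nothing has executed yet: fut is just data
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    // The first poll drives the state machine; with no await points,
    // it runs to completion immediately.
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => assert_eq!(v, 42),
        Poll::Pending => unreachable!(),
    }
    println!("future ran only when polled");
}
```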

Rust's design here is driven by the needs of embedded software, they explained. In the embedded world, there are no threads and no runtime, so it pretty much has to work like this. The fact that asynchronous tasks in Rust are just normal data is also what makes it easy to cancel them. If one wants to cancel an asynchronous function, one can just stop calling .poll() on it. There are lots of ways to do that. Paharia gave some examples of real bugs where they had written code that accidentally dropped (freed) a Future:

    // An operation that is initially implemented in synchronous code
    some_operation();

    // It returns a Result, which the linter warns about ignoring
    // But in this case we don't care, so silence the linter warning:
    let _ = some_operation();

    // Later, it gets refactored to be written asynchronously
    let _ = some_async_operation();

    // ... now the Future is being created and dropped without being run
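
The consequence of such an accidental drop can be demonstrated with only the standard library: dropping a never-polled Future drops everything it owns, and its body never runs. (A sketch; Msg and send() are hypothetical stand-ins, with atomic flags used to observe what happened.)

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);
static SENT: AtomicBool = AtomicBool::new(false);

// Stand-in for a message type that owns data.
struct Msg;

impl Drop for Msg {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

// Stand-in for an asynchronous send: the body only runs if polled.
async fn send(_m: Msg) {
    SENT.store(true, Ordering::SeqCst);
}

fn main() {
    let fut = send(Msg); // Msg is moved into the future's state machine
    drop(fut);           // "canceling" by dropping: Msg is freed, unpolled
    assert!(DROPPED.load(Ordering::SeqCst), "the message was dropped");
    assert!(!SENT.load(Ordering::SeqCst), "the send body never ran");
    println!("message lost without ever being sent");
}
```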

At that point, they wanted to define two terms: "cancel safety" and "cancel correctness". Cancel safety refers to whether a Future can be safely dropped or not. The reader example above can be safely dropped — it will just stop reading from the queue. The writer example cannot — dropping it can also drop some data. The former is "cancel-safe" and the latter is not. Tokio's documentation has a section on whether their asynchronous API functions are cancel-safe or not, so programmers can usually figure this out by just looking at the local definition of a function.

Cancel correctness, on the other hand, is a global property of the whole program. A program is cancel-correct if it doesn't have any bugs related to Future cancellation. A cancel-unsafe function will not always lead to correctness problems. For example, if one is using some kind of acknowledgment and redelivery mechanism, it might not actually be a bug to drop a message like the example does.

Three things have to be true for there to be a cancel-correctness bug, Paharia said. A cancel-unsafe Future must exist, get canceled at some point, and that cancellation must violate a system property. Most approaches to eliminating cancel-correctness bugs work by tackling one of those three pillars. Unfortunately, no single technique is a complete solution in current Rust. They went on to share a handful of techniques that can all help to eliminate cancel-correctness bugs.

The first technique is to break up potentially dangerous operations into smaller pieces. The writer example from above could use the .reserve() method on a tokio queue to obtain access to a guaranteed slot for the message, before it actually sends it. This code won't drop msg if a timeout occurs:

    loop {
        let msg = next_message();
        loop {
            match timeout(Duration::from_secs(5), tx.reserve()).await {
                Ok(Ok(permit)) => { permit.send(msg); break; }
                Ok(Err(_)) => return,
                Err(_) => println!("No space for 5 seconds"),
            }
        }
    }

That works, but it still takes care to get right. Another technique would be to use APIs that record partial progress, so that the programmer can know what data has been processed and what has not. For example, tokio's AsyncWriteExt::write_all() method is not cancel-safe, but there is an alternative AsyncWriteExt::write_all_buf() that uses a cursor so the programmer can see where a write was interrupted and recover.

A final technique is to use spawned tasks to emulate the approach from Go and JavaScript. Tokio has a spawn() function that takes a Future, hands it to the runtime, and drives it to completion on the runtime's worker threads, independently of the code that created it.

"This sucks!" Paharia summarized. All of these techniques can help, but none of them is a full, systemic solution. They went so far as to call this the "least Rusty part of Rust" because the language doesn't have any mechanisms to prevent this kind of bug — yet.

There is hope for the future. There are three different proposals for how Rust could rule out cancel-correctness bugs: Async drop, which would let asynchronous code run a cleanup function on cancellation; Unforgettable types, which would require all Future values to be used; and the closely related Undroppable types.

In the future, when one of those proposals has been decided on and adopted, this entire class of bugs could go away. Until then, Paharia recommends finding ways to make asynchronous code cancel-safe, designing APIs that make it easy to do the right thing, and using techniques like spawned tasks to ensure that critical Future values can't be dropped or forgotten by accident. There wasn't time for everything they wanted to cover in the talk, so more detail is available in the document they wrote for Oxide on the topic.

Comments (29 posted)

CHERI with a Linux on top

By Jake Edge
September 24, 2025

LSS EU

The Capability Hardware Enhanced RISC Instructions (CHERI) project is a rethinking of computer architecture in order to improve system security. Carl Shaw gave a presentation at Linux Security Summit Europe (LSS EU) about CHERI and the efforts to get Linux running on it. He introduced capabilities, which are a mechanism for access control, and outlined their history, which goes back many decades at this point, then looked more specifically at the CHERI project and what it will take to apply the security constraints of capabilities to an operating system like Linux.

Capabilities

At its core, CHERI is about extending instruction-set architectures (ISAs) to add support for capabilities. A 1966 paper, "Programming Semantics for Multiprogrammed Computations", introduced the idea of capabilities, along with many of the ideas that would later underlie Unix. The paper had a strong focus on security and ensuring that computations did not interfere with each other; it generalized some ideas from earlier computers like Atlas, Rice Computer, and various Burroughs machines into what the authors called "capabilities". "Processes need to own capabilities to be able to do something on a system."

A capability is a reference and a set of rights; "a capability is an access-control object". It was originally applied to memory, but the paper expanded the idea to cover I/O and other system resources. For memory, which he was focusing on for the talk, the reference is to a region of memory and the rights are permissions to read, write, and execute it. More formally, "a capability is an unforgeable, transferable token that authorizes the use of an object", he said.

[Carl Shaw]

An object capability of that sort incorporates both a reference to the object and access rights for that object. The paper used a list of capabilities that a process had access to, which was called the "C-list". Each entry was a capability, with a reference to a memory segment and the permissions for it. So access to memory required an indirection through the C-list table, which turned out not to perform well.

He mentioned a few of the early hardware implementations of capabilities, starting in 1970, though he said there were some slightly earlier machines in the US. The CAP computer was from Cambridge University; the "first ever commercial capability-based system" was the Plessey System 250, which was not a general-purpose computer and was originally used by the military for message routing. It did have many of the attributes of modern computers, such as virtual memory and symmetric multiprocessing; "it was a pretty far ahead machine for its day".

A less-successful capability-based CPU is the Intel iAPX 432, a project begun in 1975, which ended up only being used in niche applications. Its performance was poor, mainly due to the indirection required to access memory. More recently, the Arm Morello CPU in 2022 was the result of a research project between the company and the UK government; it added CHERI on top of an Arm Neoverse processor. It was developed on a short time scale of about a year, so compromises inevitably had to be made, Shaw said, but "they did a really good job on it"; it is still used for research, but newer CHERI implementations have narrowed their focus to a smaller, more commercially viable subset of capabilities than the Morello has.

There were a number of operating systems developed using capabilities, "some you've probably never heard of", including KeyKOS, EROS, and CapROS, which were mostly "focused around high levels of reliability". In modern times, seL4 uses capabilities and, this year, it is joined by CHERI-seL4.

But, his talk was aimed at Linux, he said. Linux already has some vestiges of capabilities, including things like socket and file descriptors, which can be passed around to other processes to bestow rights. Kernel capabilities are not true capabilities in his mind, but page-table entries are a form of capabilities: they have a reference to a memory region and associated permissions.

CHERI

"CHERI is a new implementation of capabilities"; it is a security technology that is designed to be scalable, so that it can be used in everything from microcontrollers to server-class hardware. It is deterministic; CHERI does not rely on any hashing or secrets. "It's very much a hardware/software co-design technology, as well," Shaw said.

CHERI uses capability-based addressing, but in a variant without C-lists, so it does not suffer the performance penalties of indirection. CHERI extends existing ISAs. It started by extending MIPS, then Morello extended Arm, and now most of the work being done is for RISC-V; there is also an initial sketch of how the x86 ISA could accommodate CHERI, he said. CHERI takes a hybrid capability approach, so that it can work with existing systems as they are; it accommodates memory-management units (MMUs), hypervisors, and existing programming languages.

The CHERI instructions do not use integer address pointers, they use capabilities for addresses instead. Existing code will still run on a CHERI system, using addresses the way it currently does, but it will not get the benefit of the CHERI protections.

The project was started 15 years ago by Cambridge University and SRI International, funded by DARPA. The CHERI Alliance is the focal point of current research, which is being funded by both governments and companies.

The goals of CHERI are to provide memory safety for languages like C and C++, though it will also benefit others, including Rust, while also offering "fine-grained, efficient compartmentalization". There is already coarse-grained compartmentalization in today's systems, including protection (or privilege) rings and MMUs protecting processes from each other, but CHERI is "designed to be very very fine-grained, down to the byte level, if you wanted to go there".

The intent was for existing code to run unchanged, "but it never works like that". For most well-written C and C++ application code, a recompile is largely all that is needed to work on a CHERI system. For example, KDE was ported to CHERI on Morello and only required changes to 0.02% of the code to get it working. For things like language runtimes, JIT compilers, memory allocators, and code that does lots of pointer manipulation, such as kernels, it gets more complicated. Beyond Linux, FreeRTOS, Zephyr, and (as mentioned) seL4 have all been ported to run on CHERI hardware; other operating systems are in progress as well.

The instructions that CHERI adds to the ISA are for creating and modifying capabilities; the modifications are operations that are normally expected for pointers, such as incrementing and other arithmetic operations. The hardware itself needs to be changed to support capabilities; registers need to be extended to hold them, for example.

There is both a pure-capability (purecap) mode, where only capabilities can be used for memory access, and an integer-pointer (integer) mode, which uses regular pointers. There is actually a way to have both in a single program, with pointers annotated based on which type they are, but it is not recommended, Shaw said. The CPU switches between the two modes; that switch is particularly important on RISC-V because it saves space in the ISA encoding: for example, load and store instruction encodings are shared between the modes.

On a CHERI system, all loads and stores are checked against a capability, even when running in integer mode. There is a program-counter capability (PCC) for both modes, and a default data capability (DDC) for integer mode, which allows the accesses from programs in integer mode to be constrained; "we can set where it can execute and we can set what it can see in memory".

In terms of implementation, a capability is an address that has been extended with metadata, but it is important to think of them as a single unit. There is a bounds field, which holds upper and lower bounds for memory addresses, and a permissions field that has the usual read, write, and execute permissions as well as some others, including whether the capability can be stored to memory or not. On CHERI RISC-V, capabilities are 128 bits in size; all of the registers and caches need to accommodate pointers of that size. There is also a single out-of-band tag bit that indicates whether the contents of memory or a register contain a valid capability; software generally does not need to interact with the tag directly, he said.

Originally, capabilities were 256 bits so that they could include full 64-bit upper and lower bounds. "One of the innovations of CHERI is to use a compressed format for bounds"; the CHERI RISC-V uses a mantissa-exponent system, which reduces the resolution, but that is not much of a problem on a virtual-memory system, he said.

There are some rules for using capabilities in CHERI, starting with the provenance rule: a capability can only be created using another valid capability. The monotonicity rule says that a new capability can only have the same or lesser rights than the capability it is created from. The reachable capability monotonicity rule disallows increasing the reachable capabilities for a given chunk of code without yielding execution to another domain. The code only has access to a limited set of capabilities, but if it takes an interrupt, that will run in a different domain, which could perhaps increase the capabilities available to it.
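
The provenance and monotonicity rules can be illustrated with a toy software model (purely illustrative; real CHERI capabilities are tagged hardware values, and a violating derivation clears the tag rather than returning an error):

```rust
// Toy model of CHERI capability derivation; not actual hardware semantics.
#[derive(Clone, Copy, Debug)]
struct Perms { read: bool, write: bool, execute: bool }

#[derive(Clone, Copy, Debug)]
struct Capability { base: u64, top: u64, perms: Perms }

impl Capability {
    // Provenance: a new capability can only come from an existing one
    // (here, the &self receiver). Monotonicity: bounds and permissions
    // may only shrink relative to the parent, never grow.
    fn derive(&self, base: u64, top: u64, perms: Perms) -> Option<Capability> {
        let within_bounds = base >= self.base && top <= self.top && base <= top;
        let no_new_perms = (!perms.read || self.perms.read)
            && (!perms.write || self.perms.write)
            && (!perms.execute || self.perms.execute);
        if within_bounds && no_new_perms {
            Some(Capability { base, top, perms })
        } else {
            None // hardware would invalidate the result instead
        }
    }
}

fn main() {
    // The boot-time "root cap": all permissions over all of memory.
    let root = Capability { base: 0, top: u64::MAX,
        perms: Perms { read: true, write: true, execute: true } };
    // Shrinking bounds and dropping write/execute is allowed...
    let ro_array = root.derive(0x1000, 0x1400,
        Perms { read: true, write: false, execute: false });
    assert!(ro_array.is_some());
    // ...but widening bounds back out from the restricted capability is not.
    let widened = ro_array.unwrap().derive(0x0, 0x2000,
        Perms { read: true, write: false, execute: false });
    assert!(widened.is_none());
    println!("monotonicity holds");
}
```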

When the system boots, it has access to the "infinite cap" (or "root cap"), which is all of the permissions for all of memory; it is generally stored in the PCC. As an example, the system could then create two compartments by creating sub-capabilities that were more restricted; each could have non-overlapping bounds, and one region could perhaps be for code, so it only has read and execute permissions. Then, inside the other region, a read-only array capability could be created; anything having that capability can read the array, but nowhere else in the enclosing region.

Most of the "heavy lifting" for setting up the capabilities is done by the compiler, Shaw said. For example, a static C array will have a capability created for it by the compiler, which is how CHERI can provide memory safety to C code. The program cannot successfully read or write outside of the array because the capability it must use to access the array will not allow it to do so. Stacks can be made non-executable by removing that permission from the capability for the stack frame, for example.

Linux

CHERI provides run-time memory safety that is hardware-enforced, which is critical for C and C++ programs. The Linux kernel is mostly implemented in C so getting memory safety for it requires a tool like CHERI. In addition, CHERI allows implementing least-privilege compartmentalization. There have been supply-chain attacks against libraries, which CHERI could protect against by putting "a library into a sandbox, mostly automatically, which can constrain its access and its entry and exit points". Within the kernel, a similar approach can be taken by placing subsystems and drivers inside sandboxes. An analysis of kernel bugs in 2022 showed that 87% of the high-severity kernel CVEs could be mitigated with either memory safety or compartmentalization; "we see that as a pretty important thing to try and achieve".

About two weeks before his talk in late August, CHERI Linux developers got the 6.16 kernel running in purecap mode; "so this means that every pointer in the kernel is now a capability". Originally, Huawei did a proof of concept of Linux running on CHERI, then the Morello project ported Linux to that hardware; the Morello version used the hybrid mode, where most of the pointers were still integers, though the system-call level used capabilities.

His employer, Codasip, has a team that is working on Linux for CHERI on RISC-V; it started with the hybrid Morello kernel, but then did a clean implementation in purecap mode. "We do not claim it's perfect, what we're aiming for at the moment is functionality; we want to get the basics running, then we're gonna go on to the more advanced security concepts." Some of those advanced techniques have already been proved on FreeBSD in CheriBSD, he said, but not on Linux yet.

Testing of the kernel has been done using the Linux Test Project (LTP); not all of its tests pass yet, but "it's looking pretty reasonable". On the user-space side, there is a "relatively simple" purecap version; it does not yet have the GNU C library (glibc) but is using musl libc. His team is focused on the kernel, core libraries, and utilities at this point, he said.

He went through a list of various kernel features, briefly reporting on their status; many things are working already, including networking, BPF, USB, and PCIe. There is a "rather dated X11 system working". The team has also started some optimization work, especially with regard to copying memory to and from user space. In addition, the CHERI architecture allows doing 128-bit loads and stores, which can accelerate functions like memcpy().

There is other development work going on as well, such as on the LLVM compiler for RISC-V CHERI and on QEMU for running and testing the system. The CHERI Alliance GitHub repository is where all of the work is being done.

The ABI being used is the Pure Capability user-space ABI (PCuABI) defined by the Morello project three years ago. It uses capabilities at the system-call level, which constrains what each side of the ABI can do. Copying to and from user-space memory is constrained by the bounds and permissions of the capabilities, while returned capabilities, such as from mmap(), restrict user space.

There are a number of challenges for purecap CHERI in the kernel, starting with the use of unsigned long for pointers. That type is used for pointers all over the kernel, but the CHERI compiler needs them to be a uintptr_t so that it can use capabilities instead. There are also alignment and size problems that come from the larger size of capabilities; structures in the kernel sometimes assume pointers have a specific size. The goal is to minimize the changes that need to be made and to make them with an eye on what can go upstream eventually.

The next piece that his team plans to look at is loading kernel modules into compartments. It is a tricky problem, since kernel modules "have to have quite a lot of access within the main kernel". Another "big ticket item" that needs to be tackled is support for BPF in user space. The BPF compiler has no conception of capabilities, which needs to be addressed; there is also the question of backward compatibility for existing BPF binaries. The work done in the CheriBSD project is useful as a reference, he said.

An area where CHERI could help is with Linux on MMU-less systems. Those systems lack the process isolation that is provided by the MMU, but CHERI can provide hardware-enforced isolation. An MMU also provides translation of addresses to and from virtual and physical, which is not something CHERI can do, but there is some interesting work in academia that might help. "CHERI is sort of refreshing some ideas and getting people to look back at these sorts of issues", he said.

A related idea is to use CHERI for a single-address-space Linux targeting workloads with many processes sharing the same data. CHERI would be used for isolation, and the MMU for translation, but the shared data would be accessible without changing translation-lookaside buffers (TLBs), so it would reduce TLB thrashing.

Codasip has designed a CHERI CPU, the X730, "from the ground up"; part of what the company does is to create configurable cores, where features like CHERI can be turned on or off when the CPU is built. That makes it easier to compare performance between the two; the performance of CHERI CPUs is a question that the project frequently gets asked. The X730 requires less than 5% more silicon area for CHERI than the A730 non-CHERI version, and both can run at the same maximum frequency. The X730 adds 3.8% overhead for the CHERI instructions and has an overall performance overhead of less than 5% for CHERI code. The team is still working on optimizations and thinks it can reduce that overhead further.

He wrapped up by returning to the paper from 1966, whose authors stressed that "multiprogrammed computer systems" would need to evolve over time to meet changing requirements. That is what the CHERI project is trying to do, Shaw said, by evolving both hardware and software to try to improve the security of computer systems.

Q&A

The first question was in relation to DARPA, which regularly has initiatives toward memory safety; the most recent is the Translating all C to Rust (TRACTOR) program, which is looking to automate that transition. If it is successful, "what role do you see CHERI playing in an environment where a majority, even a vast majority, of all C code has been replaced with Rust?" Shaw said that he wonders how successful TRACTOR will be, given that AI techniques may fall short of being able to reliably translate C for all of the different programs needed. Meanwhile, though, he does not see CHERI and Rust as being in conflict at all; the two can work together and it is something the project is putting effort into. "There will be a CHERI Rust compiler."

While memory safety is definitely important, the compartmentalization afforded by CHERI is more interesting to him. "Being able to get least privilege in software is a real big step forward, I think." None of the current languages attack that problem, he said, so it would take "a further evolution of language in order to support this whole concept nicely".

Another attendee warned Shaw about what Arnd Bergmann said in a talk earlier in the week: the existence of MMU-less Linux is slated to end in 2028 or so. He suggested that Shaw talk to Bergmann about those plans. Shaw said there is a niche for MMU-less CPUs, especially for network gear, such as routers, that is driven by trying to keep the costs as low as possible; ideally, the manufacturers want Linux, but will presumably choose something else if they have to.

The attendee asked about the memory overhead for CHERI, which Shaw said he did not have any real numbers on, since the team has just started gathering that kind of information. The tags add some overhead, but that is typically less than 1% of the size of memory. Pointer-heavy workloads will obviously have a larger increase in memory than computation-oriented workloads, he said.

The compiler being used is LLVM, he said in response to another question; the work there is starting to go upstream and is the first of the CHERI Linux work to do so. The CHERI Linux project has to adapt its strategy for getting code upstream, depending on the target project; LLVM is a large project so the efforts so far have been to get the infrastructure needed for CHERI upstream. Some of that work will show up in LLVM 22, he thinks.

LSS EU organizer Elena Reshetova noted that she agreed that the compartmentalization aspects of CHERI were the more interesting and wondered what progress had been made on that. Shaw said that they are just getting started on compartmentalizing for Linux; the first steps will be for user-space libraries, which are already pretty well understood from the CheriBSD work. Kernel-module compartmentalization is the other thing being pursued, as he mentioned earlier.

He agreed with Reshetova that compartmentalizing the kernel would be "very challenging". She followed up by asking if it even made sense to pursue for Linux; "given that it never has been considered in the design, this is a pretty fundamental change". Shaw thought that it might be possible, at least based on what the hardware-assisted kernel compartments (HAKC) project has been doing. "We think it's at least to some extent achievable."

James Morris asked about the relationship between the Morello and CHERI Linux projects; "are you joining forces to do the upstreaming?" Shaw said that "the CHERI community is pretty tight-knit". His team works closely with teams for Morello, CHERIoT, and others, including lots of collaboration with Cambridge University people. The project mostly has participants in the US and UK, but that is changing and there is more commercial interest in the project everywhere, he said.

Another question was about how people could emulate CHERI hardware to try things out. Shaw said that there was currently a 6.10 kernel in the CHERI Alliance repository, but that the 6.16 kernel should be pushed there soon. It will come with all of the scripts needed for building everything with Yocto, including development tools, like the toolchain, SDK, QEMU for CHERI, and so on. That will be a good starting point for those wanting to check out CHERI Linux.

The slides and a YouTube video of the talk are available for interested readers.

[I would like to thank the Linux Foundation, LWN's travel sponsor, for supporting my trip to Amsterdam for Linux Security Summit Europe.]

Comments (11 posted)

Extending the time-slice-extension discussion

By Jonathan Corbet
September 18, 2025
Time-slice extension is a proposed scheduler feature that would allow a user-space process to request to not be preempted for a short period while it executes a critical section. It is an idea that has been circulating for years, but efforts to implement it became more serious in February of this year. The latest developer to make an attempt at time-slice extension is Thomas Gleixner, who has posted a new patch set with a reworked API. Chances are good that this implementation is close to what will actually be adopted by the kernel.

Imagine a user-space thread that holds a (user-space) spinlock; if that thread is preempted by another thread which subsequently attempts to acquire that same lock, the preempting thread will spin indefinitely, having displaced the thread that could release the lock. That is not a path toward optimal performance. If, though, the lock-holding thread could ask to not be preempted while holding the lock, this scenario could be avoided; time-slice extension is meant to at least give that thread a chance to run long enough to finish its work before being kicked out of the CPU.

The February implementation of this work was by Steve Rostedt; after that work bogged down, an implementation by Prakash Sangappa was considered for a while, but also ran into disagreements over its API and implementation details. Gleixner's work is an attempt to create a version that will pass muster. To get there, he first had to rework the restartable sequences implementation to address a number of problems he found there; a new version of that series was posted on September 8.

In this version of the time-slice-extension API, a process must explicitly enable the feature before attempting to use it. That is done with a new prctl() command named PR_RSEQ_SLICE_EXTENSION_SET. This requirement serves two purposes: it allows the kernel to avoid some overhead for the (presumed) majority of processes that do not use time-slice extension, and a process that can use the feature will get an indication of whether the kernel actually supports it. There is also a new PR_RSEQ_SLICE_EXTENSION_GET command to query whether the feature is currently enabled for the calling process.

Like most of its predecessors, Gleixner's implementation uses the restartable sequences infrastructure, including its block of memory shared between user space and the kernel, in the implementation of the new feature. This time around, though, the rseq structure stored in that block is given a new field:

    __u32 slice_ctrl;

To request a time-slice extension, a thread sets the RSEQ_SLICE_EXT_REQUEST bit in that field; a simple assignment is sufficient, with no need for any special atomic operations. If that thread is then interrupted, the kernel will handle whatever task caused the interruption; before returning to user space, the kernel checks whether anything has happened that would normally cause the current thread to be scheduled out. If the thread is marked for rescheduling, it would normally lose access to the CPU; if, however, the request bit is set in slice_ctrl, the conditions are right, and the kernel is in a good mood, it will consider giving that thread just a bit more time before the eviction happens.

Whether the conditions are right depends on a few things, including the reason why rescheduling the process is called for; the kernel is unlikely to grant an extension if a high-priority realtime process needs the CPU, for example. Whether or not the extension is allowed, the kernel will clear the RSEQ_SLICE_EXT_REQUEST bit in slice_ctrl to inform the thread that it was interrupted. If the kernel does decide to grant the extension, it will, before returning to user space, set the RSEQ_SLICE_EXT_GRANTED bit in slice_ctrl and set a timer for (by default) 30µs.

About that timer: the setting of the 30µs high-resolution timer is one of the most expensive parts of implementing this feature, since it requires reprogramming the hardware. The kernel reduces that overhead by looking at the amount of time left in the thread's time slice; if it exceeds the grant period, the timer will not be set.

The user-space thread should run its critical section and, at completion, check the request bit with an atomic test-and-clear operation. If that bit had already been cleared by the kernel, the thread will know that an interrupt has happened. In that case, it must check for RSEQ_SLICE_EXT_GRANTED to see whether it is running on borrowed time; if so, it must immediately call the new rseq_slice_yield() system call to tell the kernel that the critical section is complete. That call is likely to result in the rescheduling of the thread.

There are a few things that a potential user of this feature should be aware of. One is that the extension grant can be revoked if the kernel changes its mind; that will happen unconditionally if the thread is interrupted again while running under the extension grant. In that case, the RSEQ_SLICE_EXT_GRANTED bit will be cleared, and the thread does not have to call rseq_slice_yield(). If the thread runs its critical section to completion, though, some care must be taken to not run afoul of the kernel's rules. The rseq_slice_yield() call is not optional; if the kernel's 30µs timer expires before that call is made, the grant will be ended and the thread rescheduled. A more severe fate awaits any thread that makes any system call other than rseq_slice_yield() while running with an extension grant; that thread will be terminated immediately.

Rostedt's February version only worked if the scheduler was running in the lazy preemption mode; that limitation was controversial at the time. Gleixner's implementation does not have that limitation; it will, however, explicitly not work when full preemption is in use, as on realtime kernels, where preemption can happen anywhere in the kernel and not just at the point of return to user space. Implementing time-slice extension in that environment would add to the overhead of the feature, whether it is in use or not. In the earlier discussions, there had been talk of making time-slice extension available to realtime processes as well, but that is not a part of this series.

The API for this feature may not yet be entirely set in stone. Mathieu Desnoyers had a couple of requests, the first of which was to allow the user-space critical sections to be nested, so that a second critical section could be entered while already running within a critical section. That would require turning the RSEQ_SLICE_EXT_REQUEST into a counter informing the kernel of just how many nested critical sections were being executed at any given time. Gleixner pondered the request for a bit before concluding that nesting should be implemented in user space, if at all.

The second request was to allow any system call to signal an end to the critical section, rather than killing the thread. Otherwise, he said, the feature would see far fewer users: "Handling syscall within granted extension by killing the process will likely reserve this feature to the niche use-cases". Gleixner rejected that reasoning, saying: "Having this used only by people who actually know what they are doing is actually the preferred outcome". Even so, he had previously said that he would be willing to consider allowing any system call as a way to end the critical section, but that it would require changing the system-call entry code in a way that he feared might not be acceptable to the scheduler developers (who have not yet expressed an opinion on the subject).

While there may be discussion about the details for a while, it seems likely that the eventual API for time-slice extension will be fairly close to what has been described here. The feature could perhaps be ready for merging within a couple of development cycles. This being the kernel community, though, there is always a chance that somebody will make a request to extend the discussion beyond its natural end.

Comments (8 posted)

Multiple kernels on a single system

By Jonathan Corbet
September 19, 2025
The Linux kernel generally wants to be in charge of the system as a whole; it runs on all of the available CPUs and controls access to them globally. Cong Wang has just come forward with a different approach: allowing each CPU to run its own kernel. The patch set is in an early form, but it gives a hint for what might be possible.

The patch set as a whole only touches 1,400 lines of code, adding a few basic features; there would clearly need to be a lot more work done to make this feature useful. The first part is a new KEXEC_MULTIKERNEL flag to the kexec_load() system call, requesting that a new kernel be booted on a specific CPU. That CPU must be in the offline state when the call is made, or the call will fail with an EBUSY error. It would appear that it is only possible to assign a single CPU to any given kernel in this mode; the current interface lacks a way to specify more than one CPU. There is a bunch of x86-64 assembly magic to set up the target CPU for the new kernel and to boot it there.

The other significant piece is a new inter-kernel communication mechanism, based on inter-processor interrupts, that allows the kernels running on different CPUs to talk to each other. Shared memory areas are set aside for the efficient movement of data between the kernels. While the infrastructure is present in the patch set, there are no users of it in this series. A real-world system running in this mode would clearly need to use this communication infrastructure to implement a lot of coordination of resources to keep the kernels from stepping on each other, but that work has not been posted yet.

The final patches in the series add a new /proc/multikernel file that can be used to monitor the state of the various kernels running in the system.

Why would one want to do this? In the cover letter, Wang mentions a few advantages, including improved fault isolation and security, better efficiency than virtualization, and the ease of zero-downtime updates in conjunction with the kexec handover mechanism. He also mentions the ability to run special-purpose kernels (such as a realtime kernel) for specific workloads.

The work that has been posted is clearly just the beginning:

This patch series represents only the foundational framework for multikernel support. It establishes the basic infrastructure and communication mechanisms. We welcome the community to build upon this foundation and develop their own solutions based on this framework.

The new files in the series carry copyright notices for Multikernel Technologies Inc, which, seemingly, is also developing its own solutions based on this code. In other words, this looks like more than a hobby project; it will be interesting to see where it goes from here. Perhaps this relatively old idea (Larry McVoy was proposing "cache-coherent clusters" for Linux at least as far back as 2002) will finally come to fruition.

Comments (50 posted)

Revocable references for transient devices

By Jonathan Corbet
September 22, 2025
Computers were once relatively static devices; if a peripheral was present at boot, it was unlikely to disappear while the system was operating. Those days are far behind us, though; devices can come and go at any time, often with no notice. That impermanence can create challenges for kernel code, which may not be expecting resources it is managing to make an abrupt exit. The revocable resource management patch set from Tzung-Bi Shih is meant to help with the creation of more robust — and more secure — kernel subsystems in a dynamic world.

Low-level drivers that manage transient devices must be prepared for those devices to vanish; there is no way around that. But those drivers often create data structures that are associated with the devices they manage, and export those structures to higher-level software. That creates a different sort of problem: the low-level driver knows when it can no longer communicate with a departed device, but it can be harder to know when it can safely free those data structures. Higher-level code may hold references to those structures for some time after the device disappears; freeing them prematurely would expose the kernel to use-after-free bugs and the exploits that are likely to follow.

Shih's patch set addresses this problem by turning those problematic references into a form of weak reference. The holder of such a reference must validate it prior to each use, and access to the resource can be revoked at (almost) any time. When used correctly, this API is meant to ensure that resources stay around as long as users exist, and to let the provider of those resources know when they can be safely freed.

Consider an entirely made-up example of a serial-port device that can come and go at the user's whim. When one of these devices is discovered, the driver will create a structure describing it that can be used by higher levels of software. An example user of this structure could be the SLIP network-interface driver, which many Linux users happily forgot about decades ago, but which still exists and is useful to, for example, communicate with your vape-powered web server. The serial and SLIP drivers must have some sort of agreement regarding when the low-level structures can be accessed, or the state of the system will take a turn for the worse.

The resource provider (the low-level driver) starts by creating a revocable_provider structure for the structure that it will make available to higher levels:

    struct revocable_provider *revocable_provider_alloc(void *res);

The res parameter should point to the resource that is being managed. The user of this resource (the higher-level driver), instead, should obtain a revocable structure with a call to:

    struct revocable *revocable_alloc(struct revocable_provider *prov);

In the example code provided with the patch set, the resource consumer has direct access to the revocable_provider structure and makes the call itself. One could also imagine an API where the provider hides that structure and allocates the revocable structure for the user.

Sooner or later, the user will need access to the underlying resource. It does not have a pointer to that resource, though; it only has the revocable structure. To get that pointer, it must make a call to:

    void *revocable_try_access(struct revocable *resource);

In normal operation, where the device still exists, this call will do two things: it enters a sleepable RCU (SRCU) read-side critical section, and it returns the pointer that the caller needs. The caller can then use that pointer to access the resource, secure in the knowledge that the pointer is valid and will continue to be for the duration of the access. Once the task at hand is complete, the user calls:

    void revocable_release(struct revocable *resource);

That exits the SRCU critical section, and releases the reference to the resource; the caller should not retain the pointer to that resource past this call. There is also a convenience macro provided for this pair of calls:

    REVOCABLE(struct revocable *resource, void *ptr) {
        /*
         * "ptr" will point to the resource here.  It may be NULL,
         * though, so code must always check.
         */
    }

As the comment above suggests, a call to revocable_try_access() might return NULL, which is an indication that access to the resource has been revoked. Needless to say, the caller must always check for that case and respond accordingly.

Revocation happens on the provider side, for example, if the owner of the vape unplugs it to put it to its more traditional use. Once the low-level driver becomes aware that the device has evaporated, it revokes access to the resource it exported with:

    void revocable_provider_free(struct revocable_provider *prov);

This call may, of course, race with the activity of users that have legitimately obtained a pointer to the resource with revocable_try_access(); as long as those users are out there, the provider cannot free that resource. To handle this situation, revocable_provider_free() makes a call to synchronize_srcu(), which will block for as long as any read-side SRCU critical sections for that resource remain active. Once revocable_provider_free() returns, the provider knows that no users of the resource remain, so it can be safely freed.

This whole interface may seem familiar to those who have been watching the effort to enable kernel development in Rust: it is patterned after the Revocable type used there.

The initial response to this work has been mostly positive; Greg Kroah-Hartman, for example, said:

This is, frankly, wonderful work. Thanks so much for doing this, it's what many of us have been wanting to see for a very long time but none of us got around to actually doing it.

Danilo Krummrich, who wrote the Rust Revocable implementation, had some more specific comments, suggesting changes to the naming and the API. Shih, meanwhile, has requested a discussion of the work at the Kernel Summit, which will be held this December in Tokyo. This is relatively new work, and it seems likely to evolve considerably before it is ready for use in the mainline kernel. Given the potential it has to address the sorts of life-cycle bugs that have plagued kernel developers for years, though, it seems almost certain that some future form of this API will be adopted.

Comments (10 posted)

Blender 4.5 brings big changes

September 19, 2025

This article was contributed by Roland Taylor

Blender 4.5 LTS was released on July 15, 2025, and will be supported through 2027. This is the last feature release of the 3D graphics-creation suite's 4.x series; it includes quality-of-life improvements, including work to bring the Vulkan backend up to par with the default OpenGL backend. With 4.5 released, Blender developers are turning their attention toward Blender 5.0, planned for release later this year. It will introduce substantial changes, particularly in the Geometry Nodes system, a central feature of Blender's procedural workflows.

Brief introduction

Blender is an open-source creative application released under the GPLv3. The Blender Foundation stewards development, with significant funding from the Blender Development Fund as well as backing from individual contributors and industry sponsors. Its code is primarily written in C and C++, with Python used extensively for scripting and add-ons.

While Blender is often known as a 3D modeling and animation tool, it has grown into a comprehensive open-source suite for digital content creation. Alongside powerful 3D tools, it features compositing, nonlinear video editing, and 2D animation in 3D space. This integrated suite of tools enables designers, animators, and other creators to work with a single application across their digital pipeline. Blender also provides access to its core functions through Nodes, a visual programming system that enables procedural workflows for complex operations. The Grease Pencil tool, also accessible through the Geometry Nodes system, is used for 2D animation, cut-out animation, motion graphics, and more. Blender's procedural systems rely heavily on these node-based graphical interfaces, and the 4.5 LTS focuses on their continued evolution. These systems enable fully non-destructive workflows, preserving all original data at every stage of the editing process.

Blender strives to be compatible with visual-effects (VFX) industry standards through alignment with the VFX Reference Platform, which is updated annually. This allows Blender to be run on the same systems as other VFX software, as well as share files with them. 4.5 brings a slew of library updates to maintain alignment with the reference platform.

A solid foundation

Historically, Blender has relied on OpenGL for drawing its user interface and powering its 3D-display capabilities. However, efforts are underway to modernize this aspect of its core functionality by abstracting away the rendering backend, bringing support for running on additional graphics APIs, including Vulkan and Apple's Metal API. The Vulkan API is a low-overhead, cross-platform standard that allows applications like Blender to communicate more directly with GPU hardware than OpenGL. As the final feature release of the 4.x series, this LTS represents a critical step in the maturity of the Vulkan backend. Though still not enabled by default due to multiple outstanding issues, it now rivals the OpenGL backend in both features and performance.

Vulkan is built on a parallel-execution model, allowing applications to send multiple commands to the GPU simultaneously, while OpenGL relies on a sequential model. Vulkan's execution model makes better use of the increased number of cores found in modern GPUs. This is a crucial step toward smoother viewport performance and more responsive interaction with complex scenes.

There are known limitations still blocking the new backend from being adopted as the default. Notably, large meshes with 100-million vertices or more are not yet supported, resulting in poor performance on the Vulkan backend for virtual reality and other high-mesh-count applications. Future driver updates may address some of these issues.

The viewport in Blender is an interactive view space where 3D scenes are displayed and constructed. Rendering converts 3D scenes into 2D images or video, producing the final output with Blender's built-in engines or third-party renderers. Rendering can also be performed without the graphical interface by running Blender in headless mode, both on individual systems and at scale on render farms. The viewport and rendering upgrades in Blender 4.5 extend beyond the improvements to its Vulkan backend. Specifically, work continues on EEVEE, the realtime rendering engine built for rapid, interactive rendering on modern GPUs. EEVEE 2.0, also known as EEVEE Next, receives several critical improvements focused on stability and visual accuracy. Shadows now render more smoothly thanks to the addition of shadow terminator normal bias. This is an area where EEVEE has struggled to match other renderers, including Blender's own Cycles rendering engine.
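
Headless rendering, mentioned above, is driven entirely from the command line; a typical invocation (with placeholder file names) might look like this:

```shell
# Render frame 1 of scene.blend without the GUI ("-b"), using the
# Cycles engine, writing numbered PNG frames into the frames/
# directory relative to the .blend file ("//").
blender -b scene.blend -E CYCLES -o //frames/frame_#### -F PNG -f 1
```

Blender processes its arguments in order, so the frame-rendering option ("-f") must come after the options that configure the output.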

Two settings control shadow termination bias: "Geometry Offset" and "Shading Offset", found in the "Shading" tab of the "Object Properties" panel. This gives artists greater control over the position and angle of shadows. However, due to the difficulties of creating shadows that work equally well for all projects, the default for these settings is "no bias". These visual improvements coincide with fixes for rendering problems such as light leaking from large light sources. Light leaking is a phenomenon where light incorrectly passes through or around solid objects, creating unrealistic bright spots in the rendered scene. Overall, these changes aim to bring EEVEE Next closer to parity with other renderers.

Beyond rendering quality, this LTS release delivers improvements to workflow fluidity. Texture loading, shader compilation, and startup times all contribute to overall performance and user experience, and all three have been improved. Textures are now loaded using a deferred, multithreaded process, resulting in more than double the speed of the previous method. This change introduces a small CPU overhead due to loading textures before redrawing the viewport, but the cost is not significant enough to severely impact performance.

Shaders are also now compiled in parallel. Crucially, this optimization is independent of the viewport backend in use, whether Vulkan or OpenGL, translating to immediate benefits from these core improvements. That said, a new preference allows users to revert to sub-process shader compilation, if desired, which is faster but consumes more RAM. Additionally, by skipping unnecessary shading steps during viewport initialization, startup times have been improved significantly.

With ongoing efforts to improve the workspace, users can now control which tabs are visible in the Properties Editor through the right-click pop-up menu. The Asset Browser, used for importing and organizing assets (including scenes, 3D objects, textures, and more), has been continually refined throughout the 4.x series. In 4.5 LTS, it receives some key usability enhancements, particularly in how assets are displayed, such as wrapping long lines used for asset labels, and making it easier to create thumbnails for assets.

Nodes, Grease Pencil, and modeling polish

Rather than focusing solely on fixes and performance gains, this cycle emphasized tighter integration between the various node systems in Blender. The result is that Shader Nodes, Compositor Nodes, and Geometry Nodes (including Grease Pencil Nodes) now share more capabilities and have a more consistent workflow.

The common nodes (including mathematical operations) and procedural textures available with Shader Nodes and Geometry Nodes are now available for use in the Compositor. This change enables effects such as procedurally generated visual noise or cloudiness applied to an image or video during post-processing. Common nodes can be copied across Shader and Geometry Node setups, further aligning node logic and capability design across the toolset. In Geometry Nodes, the new "Set Mesh Normal" node grants artists direct control over custom normals, which are perpendicular vectors that are used to represent the orientation of a surface. By allowing users to define normals via Fields, Blender provides fine-grained procedural controls for surface shading. For instance, an animator could drive this node with a value to simulate a material seamlessly transitioning from a soft, smooth surface to a rough, hard-edged one, all without the need for manually editing the mesh.

In 4.5, Point Clouds debut as a new object type, accessible from the "Add" menu in the viewport or through various Geometry Nodes. Point Clouds represent objects as a group of points in 3D space and have long been used in scientific and industrial 3D scanning. According to the Blender developer documentation on Point Clouds, this new object type supports motion graphics, physics simulations (including particle systems), granular materials (such as sand and gravel), and 3D scanning. To this end, Blender includes comprehensive tools and editable object attributes, including standard transformations like rotation and scale. It also maintains high rendering performance through EEVEE and Cycles, putting point clouds on par with meshes.
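Conceptually, a point cloud is just a set of 3D coordinates, which is why standard transformations such as rotation and scale apply to it directly. A minimal sketch in plain Python (not Blender's API; the helper functions are hypothetical) of applying those two transforms to a small cloud:

```python
import math

def rotate_z(points, angle):
    # Rotate each point about the Z axis by `angle` radians.
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def scale(points, factor):
    # Uniformly scale each point away from the origin.
    return [(x * factor, y * factor, z * factor) for x, y, z in points]

cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
cloud = scale(rotate_z(cloud, math.pi / 2), 2.0)
```

In Blender itself these attributes live on the point-cloud object and are edited through the usual transform tools or Geometry Nodes rather than by hand.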

[Blender showing the Point Clouds object type]

Looking ahead

With 4.5 LTS out the door, the Blender developers have shifted focus to 5.0, the next major release, which is now under active development. As the beginning of a new, feature-breaking series, 5.0 introduces significant refinements and modernized workflows without abandoning the user interface paradigm established by Blender 2.80 in 2019. The release notes outline several key features planned for the release. Among these improvements is the ability to mark scenes as assets, allowing entire scenes with their contents and setup to be pulled directly into the visual scene editor using the asset browser.

The Grease Pencil tool in 5.0 supports the motion blur effect, controlled by the new "Motion Blur Steps" setting in the "Grease Pencil" render panel. Linux users gain HDR support in the viewport on Wayland when using the Vulkan backend. Additionally, the .blend file format has been changed to handle larger content, allowing Blender to store meshes with more than a few hundred million vertices. Because of this change to the file structure, files created in version 5.0 are incompatible with Blender 4.4 and earlier releases, but can be loaded in 4.5 LTS. While the new format supports meshes with hundreds of millions of vertices, working with such files still demands powerful hardware, specifically large amounts of system RAM and GPU memory.

Blender 5.0 improves symmetry in Edit Mode by ensuring that mirrored operations no longer fail or produce inconsistent results. UV mapping, the process of unfolding the surface of a 3D model onto a 2D image for applying textures, also benefits: improved synchronization keeps selections aligned between the viewport and the UV Editor, resolving a longstanding limitation.

Those interested in downloading Blender 4.5 can get official builds from the project web site, install the Flatpak via Flathub, or install the snap package from the Snap Store.

DOGWALK, a game from the Blender Foundation built using Blender and the Godot engine, was released at the same time as 4.5. The game is freely available for download from Blender Studio, Steam, and Itch.io.

According to the release schedule, Blender 5.0 will enter beta on October 1, 2025. Interested users can access official daily builds from Blender's experimental downloads page. Blender's development is open to contributors of all backgrounds; instructions on contributing code, documentation, and more are available in the developer portal.

Comments (none posted)

Page editor: Jake Edge

Brief items

Security

Security quote of the week

Trust is not earned by verifying a developer's legal identity. There is no way to verify whether an app published to the Play Store is harmful or not, regardless of whether their identity has been verified with Google.

Trust is earned by transparency. F-Droid users are able to verify with certainty the source code which was used to build an app they are about to install.

The way in which F-Droid builds free software from source and then distributes it to end users without needing to involve Google is akin to how most Linux distributions have been distributing software for decades. These distribution mechanisms have stood the test of time, are regarded as extremely secure and trustworthy, and are used by most of the modern computing infrastructure across the globe.

Nobody has suggested that Linux distributions need to be made safer for end users by having a central authority verify each app developer. It should be no different for mobile operating systems.

Peter Serwylo

Comments (none posted)

Kernel development

Kernel release status

The current development kernel is 6.17-rc7, released on September 21. Linus said: "Let's keep the testing going, and we'll have the final 6.17 in a week".

Stable updates: 6.16.8, 6.12.48, 6.6.107, and 6.1.153 were released on September 19.

Comments (none posted)

Distributions

Bluefin LTS released

The Universal Blue project has announced the release of Bluefin LTS, an image-based distribution similar to Bluefin that uses CentOS Stream 10 and EPEL instead of Fedora as its base:

Bluefin LTS ships with Linux 6.12.0, which is the kernel for the lifetime of the release. An optional hwe branch with new kernels is available, offering the same modern kernel you'll find in Bluefin and Bluefin GTS. Both vanilla and HWE ISOs are available, and you can always choose to switch back and forth after installation. [...]

Bluefin LTS provides a backported GNOME desktop so that you are not left behind. This is an important thing for us. James has been diligently working on GNOME backports with the upstream CentOS community, and we feel bringing modern GNOME desktops to an LTS makes sense.

Comments (none posted)

RPM 6.0.0 released

Version 6.0.0 of the RPM Package Manager has been released. Notable changes in this release include support for multiple OpenPGP signatures per package, the ability to update previously installed PGP keys, as well as support for RPM v4 and v6 packages. See the release notes for full details.

Full Story (comments: 5)

Tails 7.0 released

Version 7.0 of the Tails portable operating system has been released. This is the first version of Tails based on Linux 6.12.43, Debian 13 ("trixie") and GNOME 48. It uses zstd instead of xz to compress the USB and ISO images to deliver a faster start time on most computers. The release is dedicated to the memory of Lunar, "a traveling companion for Tails, a Tor volunteer, Free Software hacker, and community organizer":

Lunar has always been by our side throughout Tails' history. From the first baby steps of the project that eventually became Tails, to the merge with Tor, he's provided sensible technical suggestions, out-of-the-box product design ideas, outreach support, and caring organizational advice.

Outside of Tor, Lunar worked on highly successful Free Software projects such as the Debian project, the Linux distribution on which Tails is based, and the Reproducible Builds project, which helps us verify the integrity of Tails releases.

See the changelog for a full list of fixes, upgraded applications, and removals. LWN covered Tails Project team leader intrigeri's DebConf25 talk in July.

Comments (none posted)

Distributions quote of the week

ZFS support in Linux distributions varies. The broader Linux community questions the compatibility between the GPL and the CDDL and in response, distributions have added and removed installer ZFS support. Intellectual property attorneys who have considered CDDL/GPL compatibility often say "it's an interesting question," which is lawyerese for "$500 an hour and the hearing's gonna be lit."

Michael Lucas

Comments (12 posted)

Development

Rust 1.90.0 released

Version 1.90.0 of the Rust language has been released. Changes include switching to the LLD linker by default, the addition of support for workspace publishing to cargo, and the usual set of stabilized APIs.

Comments (17 posted)

Open Infrastructure is Not Free: A Joint Statement on Sustainable Stewardship

The Open Source Security Foundation (OpenSSF) has put together a joint statement from many of the public package repositories for various languages about the need for assistance in maintaining these commons. Services such as PyPI for Python, crates.io for Rust, and many others are working together to try to find ways to sustain these services in the face of challenges from "automated CI systems, large-scale dependency scanners, and ephemeral container builds" all downloading enormous amounts of package data, coupled with the rise of generative and agentic AI "driving a further explosion of machine-driven, often wasteful automated usage, compounding the existing challenges". It is not a crisis, yet, they say, but it is headed in that direction.

Despite serving billions (perhaps even trillions) of downloads each month (largely driven by commercial-scale consumption), many of these services are funded by a small group of benefactors. Sometimes they are supported by commercial vendors, such as Sonatype (Maven Central), GitHub (npm) or Microsoft (NuGet). At other times, they are supported by nonprofit foundations that rely on grants, donations, and sponsorships to cover their maintenance, operation, and staffing.

Regardless of the operating model, the pattern remains the same: a small number of organizations absorb the majority of infrastructure costs, while the overwhelming majority of large-scale users, including commercial entities that generate demand and extract economic value, consume these services without contributing to their sustainability.

Comments (52 posted)

Development quotes of the week

One could argue that Slack is free to stop providing us the nonprofit offer at any time, but in my opinion, a six month grace period is the bare minimum for a massive hike like this, if not more. Essentially, Salesforce (a $230 billion company) is strong-arming a small nonprofit for teens, by providing less than a week to pony up a pretty massive sum of money, or risk cutting off all our communications. That's absurd.

The small amount of notice has also been catastrophic for the programs that we run. Dozens of our staff and volunteers are now scrambling to update systems, rebuild integrations and migrate years of institutional knowledge. The opportunity cost of this forced migration is simply staggering.

Anyway, we're moving to Mattermost. This experience has taught us that owning your data is incredibly important, and if you're a small business especially, then I'd advise you move away too.

Mahad Kalam

This is not useful. This is not contributing. It's just burning maintainer time sorting through AI hallucinations. We have enough mediocre code to review that comes from actual humans who are actually trying to learn about Mesa and help out. We don't need to add AI shit to the merge request pile. If you don't understand the patch well enough to be able to describe what it does and why it makes things faster, don't submit it.

So now we're making it really clear: If you submit the merge request, you're responsible for the code change as if you typed it yourself. You don't get to claim ignorance and "because the AI said so". It's your responsibility to do due diligence to make sure it's correct and to accurately describe the change in the commit message.

Faith Ekstrand

Comments (2 posted)

Page editor: Daroc Alden

Announcements

Newsletters

Distributions and system administration

Development

Emacs News September 22
These Weeks in Firefox September 12
These Weeks in Firefox September 19
These Weeks in Firefox September 22
This Week in GNOME September 21
GNU Tools Weekly News September 21
LLVM Weekly September 22
This Week in Matrix September 19
OCaml Weekly News September 23
Perl Weekly September 22
This Week in Plasma September 20
PyCoder's Weekly September 23
Weekly Rakudo News September 22
Ruby Weekly News September 18
This Week in Rust September 17
Wikimedia Tech News September 22

Meeting minutes

Calls for Presentations

CFP Deadlines: September 25, 2025 to November 24, 2025

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline       Event Dates              Event                                              Location
September 29   September 29–October 2   Alpine Linux Persistence and Storage Summit 2025   Wattenberg, Austria
October 12     March 23–March 26        KubeCon + CloudNativeCon Europe                    Amsterdam, Netherlands
October 27     March 16–March 17        FOSS Backstage                                     Berlin, Germany
November 16    February 17              AlpOSS 2026                                        Échirolles, France

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: September 25, 2025 to November 24, 2025

The following event listing is taken from the LWN.net Calendar.

Date(s)                     Event                                              Location
September 26–September 28   Qubes OS Summit                                    Berlin, Germany
September 26–September 28   GNU Tools Cauldron                                 Porto, Portugal
September 27–September 28   Nextcloud Community Conference 2025                Berlin, Germany
September 29–October 2      Alpine Linux Persistence and Storage Summit 2025   Wattenberg, Austria
September 29–September 30   Git Merge 2025                                     San Francisco, US
September 29–October 1      X.Org Developers Conference                        Vienna, Austria
September 30–October 1      All Systems Go! 2025                               Berlin, Germany
October 3–October 4         Texas Linux Festival                               Austin, US
October 12–October 14       All Things Open                                    Raleigh, NC, US
October 17–October 19       OpenInfra Summit Europe 2025                       Paris-Saclay, France
October 18–October 19       OpenFest 2025                                      Sofia, Bulgaria
October 21–October 24       PostgreSQL Conference Europe                       Riga, Latvia
October 23–October 24       GStreamer Conference 2025                          London, UK
October 28–October 29       Cephalocon                                         Vancouver, Canada
November 4–November 5       Open Source Summit Korea                           Seoul, South Korea
November 7–November 8       South Tyrol Free Software Conference               Bolzano, Italy
November 7–November 8       Seattle GNU/Linux Conference                       Seattle, US
November 15–November 16     Capitole du Libre 2025                             Toulouse, France
November 18–November 20     Open Source Monitoring Conference                  Nuremberg, Germany
November 20                 NLUUG Autumn Conference 2025                       Utrecht, The Netherlands

If your event does not appear here, please tell us about it.

Security updates

Alert summary September 18, 2025 to September 24, 2025

Dist. ID Release Package Date
AlmaLinux ALSA-2025:16115 10 gnutls 2025-09-18
AlmaLinux ALSA-2025:15785 8 kernel 2025-09-24
AlmaLinux ALSA-2025:16372 8 kernel 2025-09-24
AlmaLinux ALSA-2025:16373 8 kernel-rt 2025-09-24
AlmaLinux ALSA-2025:16046 9 mysql:8.4 2025-09-18
AlmaLinux ALSA-2025:15887 9 opentelemetry-collector 2025-09-18
AlmaLinux ALSA-2025:15874 9 python-cryptography 2025-09-18
Debian DSA-6004-1 stable chromium 2025-09-19
Debian DLA-4304-1 LTS cjson 2025-09-18
Debian DLA-4308-1 LTS corosync 2025-09-22
Debian DSA-6007-1 stable ffmpeg 2025-09-21
Debian DLA-4305-1 LTS firefox-esr 2025-09-19
Debian DSA-6003-1 stable firefox-esr 2025-09-18
Debian DSA-6006-1 stable jetty12 2025-09-19
Debian DSA-6005-1 stable jetty9 2025-09-19
Debian DLA-4307-1 LTS jq 2025-09-21
Debian DSA-6008-1 stable kernel 2025-09-22
Debian DSA-6009-1 stable kernel 2025-09-22
Debian DLA-4303-1 LTS nextcloud-desktop 2025-09-18
Debian DLA-4306-1 LTS pam 2025-09-21
Fedora FEDORA-2025-15f6a132bf F41 checkpointctl 2025-09-23
Fedora FEDORA-2025-11b6deb0b8 F42 checkpointctl 2025-09-23
Fedora FEDORA-2025-eda09a0a51 F43 checkpointctl 2025-09-23
Fedora FEDORA-2025-2cc476bf84 F41 chromium 2025-09-18
Fedora FEDORA-2025-bb1ae3ee9c F42 chromium 2025-09-23
Fedora FEDORA-2025-4daec13254 F41 curl 2025-09-23
Fedora FEDORA-2025-97ae15dc56 F42 curl 2025-09-20
Fedora FEDORA-2025-639f53ea67 F42 expat 2025-09-19
Fedora FEDORA-2025-d582069c41 F43 expat 2025-09-24
Fedora FEDORA-2025-100ae879e3 F41 firefox 2025-09-18
Fedora FEDORA-2025-bac4da5419 F42 forgejo 2025-09-18
Fedora FEDORA-2025-5fc3f360cf F43 forgejo 2025-09-18
Fedora FEDORA-2025-24e111e6f1 F41 gh 2025-09-19
Fedora FEDORA-2025-d4c9910925 F42 gh 2025-09-19
Fedora FEDORA-2025-a1a4cba6f5 F41 gitleaks 2025-09-18
Fedora FEDORA-2025-94112c7319 F42 gitleaks 2025-09-18
Fedora FEDORA-2025-d3cfe902f5 F43 gitleaks 2025-09-18
Fedora FEDORA-2025-22c5cc654d F43 kernel 2025-09-18
Fedora FEDORA-2025-22c5cc654d F43 kernel-headers 2025-09-18
Fedora FEDORA-2025-67d99d2c39 F41 lemonldap-ng 2025-09-18
Fedora FEDORA-2025-72e47ed215 F42 lemonldap-ng 2025-09-18
Fedora FEDORA-2025-27d58d0125 F43 lemonldap-ng 2025-09-18
Fedora FEDORA-2025-50a98965b5 F43 libssh 2025-09-20
Fedora FEDORA-2025-6df5ab0b98 F43 perl-Catalyst-Authentication-Credential-HTTP 2025-09-23
Fedora FEDORA-2025-89495f6403 F41 perl-Cpanel-JSON-XS 2025-09-18
Fedora FEDORA-2025-f4f4dae8f2 F42 perl-Cpanel-JSON-XS 2025-09-18
Fedora FEDORA-2025-b529f6bfed F41 podman-tui 2025-09-22
Fedora FEDORA-2025-7a565ff5c2 F42 podman-tui 2025-09-22
Fedora FEDORA-2025-29c34ad84a F43 podman-tui 2025-09-22
Fedora FEDORA-2025-5d38037ea1 F41 prometheus-podman-exporter 2025-09-22
Fedora FEDORA-2025-89d6e0363e F42 prometheus-podman-exporter 2025-09-22
Fedora FEDORA-2025-7ed37510cc F43 prometheus-podman-exporter 2025-09-22
Fedora FEDORA-2025-6d50efcd0c F42 python-pip 2025-09-18
Fedora FEDORA-2025-6d7e566803 F41 scap-security-guide 2025-09-19
Fedora FEDORA-2025-8fe89048bd F42 scap-security-guide 2025-09-19
Fedora FEDORA-2025-cc7979cb89 F43 scap-security-guide 2025-09-19
Fedora FEDORA-2025-7a1f93f58a F42 xen 2025-09-19
Oracle ELSA-2025-15904 OL8 container-tools:rhel8 2025-09-18
Oracle ELSA-2025-16109 OL10 firefox 2025-09-22
Oracle ELSA-2025-16108 OL9 firefox 2025-09-18
Oracle ELSA-2025-16115 OL10 gnutls 2025-09-22
Oracle ELSA-2025-16116 OL9 gnutls 2025-09-22
Oracle ELSA-2025-16154 OL10 grub2 2025-09-18
Oracle ELSA-2025-15782 OL10 kernel 2025-09-22
Oracle ELSA-2025-14748 OL7 kernel 2025-09-22
Oracle ELSA-2025-14987 OL7 kernel 2025-09-22
Oracle ELSA-2025-16046 OL9 mysql:8.4 2025-09-18
Oracle ELSA-2025-16157 OL10 thunderbird 2025-09-22
Oracle ELSA-2025-16156 OL9 thunderbird 2025-09-22
Red Hat RHSA-2025:16109-01 EL10 firefox 2025-09-18
Red Hat RHSA-2025:16108-01 EL9 firefox 2025-09-18
Red Hat RHSA-2025:1659-01 EL9 kernel 2025-09-24
Red Hat RHSA-2025:13789-01 EL7 libxml2 2025-09-18
Red Hat RHSA-2025:13689-01 EL8.2 libxml2 2025-09-18
Red Hat RHSA-2025:13788-01 EL8.4 libxml2 2025-09-18
Red Hat RHSA-2025:13688-01 EL8.6 libxml2 2025-09-18
Red Hat RHSA-2025:13806-01 EL8.8 libxml2 2025-09-18
Red Hat RHSA-2025:13684-01 EL9.0 libxml2 2025-09-18
Red Hat RHSA-2025:13683-01 EL9.2 libxml2 2025-09-18
Red Hat RHSA-2025:13677-01 EL9.4 libxml2 2025-09-18
Red Hat RHSA-2025:16541-01 EL9.0 multiple packages 2025-09-24
Red Hat RHSA-2025:16539-01 EL9.2 multiple packages 2025-09-24
Red Hat RHSA-2025:16540-01 EL9.4 multiple packages 2025-09-24
Slackware SSA:2025-260-01 expat 2025-09-17
Slackware SSA:2025-260-02 mozilla 2025-09-17
Slackware SSA:2025-260-03 mozilla 2025-09-17
SUSE SUSE-SU-2025:03266-1 SLE-m5.2 avahi 2025-09-18
SUSE SUSE-SU-2025:03331-1 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 avahi 2025-09-24
SUSE SUSE-SU-2025:03332-1 SLE12 avahi 2025-09-24
SUSE SUSE-SU-2025:03333-1 SLE15 oS15.6 avahi 2025-09-24
SUSE SUSE-SU-2025:03277-1 SLE-m5.3 SLE-m5.4 oS15.4 bluez 2025-09-19
SUSE SUSE-SU-2025:03269-1 SLE-m5.5 oS15.5 bluez 2025-09-18
SUSE SUSE-SU-2025:03271-1 SLE15 oS15.5 oS15.6 busybox, busybox-links 2025-09-18
SUSE SUSE-SU-2025:03271-2 oS15.6 busybox, busybox-links 2025-09-23
SUSE SUSE-SU-2025:03280-1 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 cairo 2025-09-19
SUSE openSUSE-SU-2025:0367-1 osB15 chromium 2025-09-19
SUSE openSUSE-SU-2025:0368-1 osB15 chromium 2025-09-19
SUSE SUSE-SU-2025:03281-1 oS15.4 cmake 2025-09-19
SUSE SUSE-SU-2025:03261-1 MP4.3 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 oS15.6 cups 2025-09-18
SUSE openSUSE-SU-2025:15562-1 TW cups 2025-09-19
SUSE SUSE-SU-2025:03268-1 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 oS15.6 curl 2025-09-18
SUSE SUSE-SU-2025:03267-1 MP4.3 SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 curl 2025-09-18
SUSE openSUSE-SU-2025:15559-1 TW element-web 2025-09-18
SUSE SUSE-SU-2025:03287-1 SLE12 firefox 2025-09-22
SUSE SUSE-SU-2025:03291-1 SLE15 SES7.1 oS15.6 firefox 2025-09-22
SUSE openSUSE-SU-2025:15565-1 TW firefox 2025-09-20
SUSE openSUSE-SU-2025:15555-1 TW firefox-esr 2025-09-17
SUSE SUSE-SU-2025:03297-1 SLE12 frr 2025-09-23
SUSE SUSE-SU-2025:03274-1 oS15.3 frr 2025-09-19
SUSE SUSE-SU-2025:20694-1 SLE-m6.0 gdk-pixbuf 2025-09-17
SUSE SUSE-SU-2025:03289-1 SLE15 oS15.6 govulncheck-vulndb 2025-09-22
SUSE openSUSE-SU-2025:15566-1 TW govulncheck-vulndb 2025-09-20
SUSE SUSE-SU-2025:20693-1 SLE-m6.0 gstreamer 2025-09-17
SUSE SUSE-SU-2025:03262-1 SLE15 SES7.1 oS15.6 java-1_8_0-ibm 2025-09-18
SUSE SUSE-SU-2025:03310-1 MP4.2 SLE15 SLE-m5.1 SLE-m5.2 SES7.1 oS15.3 kernel 2025-09-23
SUSE SUSE-SU-2025:03314-1 MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 kernel 2025-09-23
SUSE SUSE-SU-2025:02844-2 SLE11 kernel 2025-09-18
SUSE SUSE-SU-2025:03290-1 SLE15 kernel 2025-09-22
SUSE SUSE-SU-2025:03283-1 SLE15 SLE-m5.5 oS15.5 kernel 2025-09-19
SUSE SUSE-SU-2025:03272-1 SLE15 oS15.6 kernel 2025-09-18
SUSE SUSE-SU-2025:03301-1 SLE15 oS15.6 kernel 2025-09-23
SUSE SUSE-SU-2025:03270-1 SLE-m5.5 oS15.5 krb5 2025-09-18
SUSE SUSE-SU-2025:03278-1 SLE15 oS15.6 kubevirt, virt-api-container, virt-controller-container, virt-exportproxy-container, virt-exportserver-container, virt-handler-container, virt-launcher-container, virt-libguestfs-t 2025-09-19
SUSE SUSE-SU-2025:03275-1 SLE15 oS15.6 mariadb 2025-09-19
SUSE SUSE-SU-2025:03276-1 oS15.4 mariadb 2025-09-19
SUSE SUSE-SU-2025:03285-1 oS15.6 mybatis, ognl 2025-09-22
SUSE SUSE-SU-2025:03260-1 SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.6 net-tools 2025-09-18
SUSE SUSE-SU-2025:20692-1 SLE-m6.0 podman 2025-09-17
SUSE SUSE-SU-2025:03273-1 SLE15 python-h2 2025-09-19
SUSE SUSE-SU-2025:03257-1 SLE12 raptor 2025-09-17
SUSE openSUSE-SU-2025:15569-1 TW rke2 2025-09-21
SUSE SUSE-SU-2025:03298-1 SLE15 oS15.6 rustup 2025-09-23
SUSE SUSE-SU-2025:20716-1 SLE-m6.0 sevctl 2025-09-17
SUSE SUSE-SU-2025:03306-1 SLE15 sevctl 2025-09-23
SUSE SUSE-SU-2025:03307-1 SLE15 oS15.6 sevctl 2025-09-23
SUSE openSUSE-SU-2025:0366-1 osB15 shadowsocks-v2ray-plugin 2025-09-18
SUSE openSUSE-SU-2025:0365-1 osB15 shadowsocks-v2ray-plugin 2025-09-18
SUSE openSUSE-SU-2025:15570-1 TW tcpreplay 2025-09-23
SUSE SUSE-SU-2025:03309-1 SLE15 oS15.6 thunderbird 2025-09-23
SUSE openSUSE-SU-2025:15556-1 TW tkimg 2025-09-17
SUSE openSUSE-SU-2025:15571-1 TW tor 2025-09-23
SUSE SUSE-SU-2025:20715-1 SLE-m6.0 ucode-intel 2025-09-17
SUSE SUSE-SU-2025:20696-1 SLE-m6.0 vim 2025-09-17
SUSE SUSE-SU-2025:03299-1 SLE12 vim 2025-09-23
SUSE SUSE-SU-2025:03300-1 SLE15 SLE-m5.5 oS15.5 oS15.6 vim 2025-09-23
SUSE SUSE-SU-2025:03294-1 SLE15 oS15.6 wireshark 2025-09-22
SUSE openSUSE-SU-2025:0364-1 oS15.6 yt-dlp 2025-09-18
Ubuntu USN-7760-1 22.04 24.04 25.04 glibc 2025-09-22
Ubuntu USN-7756-1 14.04 16.04 18.04 20.04 22.04 24.04 imagemagick 2025-09-18
Ubuntu USN-7759-1 16.04 18.04 isc-kea 2025-09-23
Ubuntu USN-7755-1 14.04 16.04 18.04 linux, linux-aws, linux-aws-hwe, linux-azure, linux-azure-4.15, linux-gcp, linux-gcp-4.15, linux-hwe, linux-kvm, linux-oracle 2025-09-17
Ubuntu USN-7758-1 22.04 24.04 linux, linux-aws, linux-gcp, linux-gke, linux-gkeop, linux-hwe-6.8, linux-lowlatency, linux-lowlatency-hwe-6.8, linux-oracle 2025-09-19
Ubuntu USN-7764-1 22.04 24.04 linux, linux-aws, linux-gcp, linux-gke, linux-gkeop, linux-lowlatency, linux-lowlatency-hwe-6.8 2025-09-24
Ubuntu USN-7766-1 22.04 linux-aws-6.8, linux-gcp-6.8 2025-09-24
Ubuntu USN-7755-3 18.04 linux-aws-fips 2025-09-24
Ubuntu USN-7726-5 20.04 22.04 linux-azure, linux-azure-5.15, linux-azure-fips 2025-09-18
Ubuntu USN-7755-2 18.04 linux-fips, linux-azure-fips, linux-gcp-fips 2025-09-17
Ubuntu USN-7722-2 24.04 25.04 linux-gcp-6.14, linux-oracle, linux-oracle-6.14 2025-09-17
Ubuntu USN-7758-2 22.04 24.04 linux-ibm, linux-ibm-6.8, linux-nvidia, linux-nvidia-6.8, linux-nvidia-lowlatency, linux-raspi 2025-09-19
Ubuntu USN-7765-1 22.04 24.04 linux-nvidia, linux-nvidia-6.8, linux-nvidia-lowlatency 2025-09-24
Ubuntu USN-7758-4 22.04 linux-oracle-6.8 2025-09-19
Ubuntu USN-7758-3 24.04 linux-realtime 2025-09-19
Ubuntu USN-7767-1 24.04 linux-realtime 2025-09-24
Ubuntu USN-7757-1 18.04 20.04 22.04 24.04 25.04 openjpeg2 2025-09-18
Ubuntu USN-7761-1 24.04 25.04 pam 2025-09-22
Ubuntu USN-7762-1 22.04 24.04 25.04 python-pip 2025-09-23
Ubuntu USN-7763-1 25.04 rabbitmq-server 2025-09-23
Full Story (comments: none)

Kernel patches of interest

Kernel releases

Linus Torvalds Linux 6.17-rc7 Sep 21
Greg Kroah-Hartman Linux 6.16.8 Sep 19
Greg Kroah-Hartman Linux 6.12.48 Sep 19
Greg Kroah-Hartman Linux 6.6.107 Sep 19
Greg Kroah-Hartman Linux 6.1.153 Sep 19

Architecture-specific

Build system

Core kernel

Development tools

Device drivers

Adrián Larumbe Introduce Panfrost JM contexts Sep 17
Mikhail Kshevetskiy spi: airoha: driver fixes & improvements Sep 18
Viken Dadhaniya can: mcp251xfd: add gpio functionality Sep 18
Andreas Kemnade power: supply: add charger for BD71828 Sep 18
victor.duicu@microchip.com add support for MCP998X Sep 18
alejandro.lucero-palau@amd.com Type2 device basic support Sep 18
peter.wang@mediatek.com Enhance UFS Mediatek Driver Sep 18
Marcelo Schmitt Add SPI offload support to AD4030 Sep 18
Dang Huynh via B4 Relay RDA8810PL SD/MMC support Sep 19
Russell King (Oracle) Marvell PTP stuffs Sep 18
Ryan.Wanner@microchip.com clk: at91: add support for parent_data and Sep 18
Chuan Liu via B4 Relay clk: amlogic: add video-related clocks for S4 SoC Sep 19
Bhargava Marreddy Add more functionality to BNGE Sep 19
Eric Gonçalves Input: add fts2ba61y touchscreen driver Sep 20
Wesley Cheng Introduce Glymur USB support Sep 19
Andrew Jones iommu/riscv: Add irqbypass support Sep 20
Jarkko Sakkinen tpm: robust stack allocations Sep 21
Marcus Folkesson I2C Mux per channel bus speed Sep 22
Elaine Zhang rockchip: add can for RK3576 Soc Sep 22
Siva Reddy Kallam Introducing Broadcom BNG_RE RoCE Driver Sep 22
Nicolas Frattaroli MT8196 GPU Frequency/Power Control Support Sep 23
Biju Add RZ/G3E GPT support Sep 23
Cosmin Tanislav Add ADCs support for RZ/T2H and RZ/N2H Sep 23
Aaron Kling via B4 Relay Support Tegra210 actmon for dynamic EMC scaling Sep 23
Vladimir Oltean Lynx 28G improvements part 1 Sep 23
Remi Buisson via B4 Relay iio: imu: new inv_icm45600 driver Sep 24

Device-driver infrastructure

Documentation

Paul E. McKenney rcu: Documentation updates for v6.18 Sep 18
Aleksa Sarai man2: document "new" mount API Sep 19

Filesystems and block layer

Memory management

Networking

Security-related

Virtualization and containers

Miscellaneous

Page editor: Joe Brockmeier


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds