
LWN.net Weekly Edition for November 6, 2025

Welcome to the LWN.net Weekly Edition for November 6, 2025

This edition contains the following feature content:

  • An explicit thread-safety proposal for Python: PEP 805 would have the interpreter catch data races in free-threaded code.
  • Namespace reference counting and listns(): reworking namespace lifetimes and adding a system call to enumerate namespaces.
  • Mergiraf: syntax-aware merging for Git: resolving merge conflicts with the help of each language's syntax.
  • The long path toward optimizing short reads: a seemingly simple page-cache optimization runs into subtle hazards.
  • Julia 1.12 brings progress on standalone binaries and more: a look at the latest release of the language.
  • A security model for systemd: Lennart Poettering on how the security-related pieces of systemd fit together.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

An explicit thread-safety proposal for Python

By Daroc Alden
November 3, 2025

Python already has several ways to run programs concurrently — including asynchronous functions, threads, subinterpreters, and multiprocessing — but all of those options have drawbacks of one kind or another. PEP 703 ("Making the Global Interpreter Lock Optional in CPython") removed a major barrier to running Python threads in parallel, but also exposed Python programmers to the same tricky synchronization problems found in other languages supporting multithreaded programs. A new draft proposal by Mark Shannon, PEP 805 ("Safe Parallel Python"), suggests a way for the CPython runtime to cut down on concurrency bugs, making it more practical for Python programmers to use versions of the language without the global interpreter lock (GIL).

The most common concurrency bugs happen when two threads attempt to read and write to the same shared value: a data race. There are many ways to prevent this, such as using a lock, putting constraints into the type system (as Rust does), or even making control over data races a core part of the language. Central to all of these approaches is the observation that data races only occur when there is a mutable value shared between threads without synchronization. Taking away any one of those things (making the value immutable, not sharing it, or synchronizing accesses) makes data races impossible.
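
As a minimal illustration (not taken from the PEP), the classic lost-update race can be reproduced with nothing more than two or more threads incrementing a shared counter:

    # A minimal sketch of the kind of data race PEP 805 targets: several
    # threads update a shared counter without synchronization, so increments
    # can be lost when the read-modify-write sequences interleave.
    import threading

    counter = 0

    def work():
        global counter
        for _ in range(100_000):
            counter += 1   # not atomic: load, add, and store can interleave

    threads = [threading.Thread(target=work) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # With four threads the expected total is 400,000, but unsynchronized
    # updates may produce a smaller number, especially on free-threaded builds.
    print(counter)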

PEP 805 would add a new __shared__ field to Python objects to track whether the object is local to a particular thread group, is protected by a lock, is immutable (shallowly, unlike the other recent Python immutability proposal), or is a special system object that does synchronization internally. Whenever a Python program obtains a reference to an object, the Python interpreter would check that the object is in a valid state to be referenced by the current thread, in the same code that increments the object's reference count. If not, it would raise an exception, converting the potential data race into something the programmer can notice and debug.

The __shared__ field, while available to be read from Python, would ordinarily be maintained and updated by the interpreter. All objects would start off local to the thread that creates them. From there, the programmer could use a new __freeze__() method to make them immutable or a new __protected_by__() method to associate them with a lock. Values passed to another thread via a special Channel class would be frozen automatically.
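
Pieced together from the names above, code using the proposal might look roughly like the following. This is a hypothetical sketch only: the PEP is a draft, there is no implementation to run it against, and the details could change.

    # Hypothetical sketch of the draft PEP 805 API as described above; none of
    # these methods or fields exist in any released Python.
    import threading

    settings = {"retries": 3}
    settings.__freeze__()          # proposed: make the object shallowly immutable

    stats = {}
    stats_lock = threading.Lock()
    stats.__protected_by__(stats_lock)   # proposed: tie the object to a lock

    def worker():
        with stats_lock:           # without the lock, access would raise rather than race
            stats["jobs"] = stats.get("jobs", 0) + 1

    print(settings.__shared__)     # proposed field recording the sharing state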

The performance impact from adding a new check on a fundamental operation like that could be significant, although Shannon has not yet implemented a working prototype to measure the impact experimentally. The PEP goes into some detail about how those costs could be defrayed. The main thing to note is that most objects are not going to be shared between threads; the specializing adaptive interpreter can cache that information in a way that reduces the overhead to one, mostly correctly predicted, branch. Then, the just-in-time compiler can potentially coalesce those checks further.

Shannon first proposed the approach on September 8. Daniele Parmeggiani expressed concerns that the approach in the draft PEP would make it hard to implement lockless data structures in Python — specifically, that the mechanism would rule out benign data races, which are critical to the efficient implementation of lockless algorithms. Shannon clarified that the "synchronized" state, for objects with internal synchronization, was intended to apply to those kinds of structures even if they came from a user extension rather than the runtime itself.

As in any language discussion, several people had opinions about what the new things introduced by the PEP should be called. The main point of contention was the new functions for making an object shallowly immutable, but there was relatively little disagreement that the core idea of the PEP would be useful. One naming quibble from Parmeggiani was about an implementation detail that the PEP calls "SuperThreads". The concept would be a kind of intermediate step between having a GIL and having no implicit locking between threads at all, to allow the incremental migration of Python programs. Each SuperThread would have its own lock that the threads must take to run. Threads within a single SuperThread would be allowed to access each others' local objects without taking any other lock. Having every thread belong to the same SuperThread is essentially like running Python with the GIL enabled; having every thread belong to its own SuperThread is equivalent to running with the GIL fully disabled.

Parmeggiani thought that the name "SuperThreads" was not particularly intuitive, and suggested something like "SerializedThreadGroup" instead. He did like the concept itself, though, and agreed that it would be "less scary than switching from all-GIL to no-GIL."

Another question that came up in the discussion was not just how the PEP would impact Python's memory model, but also what memory model Python actually has in the first place. Jeffrey Bosboom dug up several references, including the scant existing documentation and the withdrawn PEP 583 ("A Concurrency Memory Model for Python"), which lays the groundwork for a memory model for Python, but does not actually specify one. Bosboom considers having a documented memory model essential, because otherwise discussing the correctness of multithreaded code is impossible.

Python community member "Matthew" opined that, with the optional removal of the GIL, Python's memory model was "whatever CPython happened to have implemented at the time PEP 703 was written." Nobody seemed prepared to dispute that.

The question of how exactly PEP 703 changes the interaction between Python threads has come up in more general contexts on the Python discussion forums as well. When it accepted PEP 779 ("Criteria for supported status for free-threaded Python"), Python's steering council called for the creation of documentation for Python users and maintainers alike on the impact of free-threading.

Even though Python 3.13 was released in October 2024 with support for GIL-disabled (free-threading) builds, that documentation has not yet been completed. The work is ongoing, but the most up-to-date documentation is the unofficial Python Free-Threading Guide, which doesn't address Python's memory model.

PEP 805 is still a draft, so it could go through substantial changes before being presented to the steering council — but if submitted in its current form, it may be useful more for defining some of the limits on interaction between threads in Python than for directly preventing data races. In either case, the Python community is dedicated to enabling better multithreaded applications, so a documented memory model is almost certainly coming in the future.

Comments (7 posted)

Namespace reference counting and listns()

By Jonathan Corbet
November 3, 2025
The kernel's namespaces feature is, among other things, a key part of the implementation of containers. Like much in the kernel, though, the namespace API evolved over time; there was no design at the outset. As a result, this API has some rough edges and missing features. Christian Brauner is working to straighten out the namespace situation somewhat with this daunting 72-part patch series that, among other things, adds a new system call to allow user space to query the namespaces present on the system.

The original namespace type, now called mount namespaces, was introduced by Al Viro as just "namespaces" in 2001 (they were briefly covered in LWN at the time). UTS namespaces (which provide a different view of the system's host name), process-ID namespaces (managing the visibility of processes), and IPC namespaces (controlling the view of the System V inter-process communication features) followed as part of the 2.6.19 release in 2006. Each namespace type was added when the need arose and somebody was moved to implement it. As the use of namespaces has grown, though, some of the problems in their implementation have become more apparent.

Reference counts

For example, namespaces have complicated lifecycle requirements. A namespace must obviously continue to exist as long as there are any processes that are running within it. For many years, a namespace would automatically be deleted once the last process running within it exited. Over time, the ability to keep an empty namespace around (by opening a file descriptor referencing it or bind-mounting it into the filesystem) was added. Some namespaces (user namespaces, for example) are hierarchical; if a hierarchical namespace contains children, that namespace will, once again, remain within the system.

The kernel uses a reference count on each namespace to know when it is no longer in use. C is not an object-oriented language, so there is no class hierarchy for namespaces; each namespace type is a different structure. There is a "superclass", of sorts, in the ns_common structure, which all of the other namespace structure types include; it was first added by Viro to the 3.19 release in 2014. The reference count was moved into struct ns_common by Brauner for 5.9 in 2020. That structure went through some significant changes in the 6.18 merge window to reach its current form, which includes a reference count now named __ns_ref.

This count tracks all references to a given namespace; that includes all of the types listed above, but there are others as well. The kernel will often create internal references to namespaces that can cause them to persist for a period after they are no longer used and, in theory, no longer visible to user space. There is an interesting interaction with a different feature added for 6.18, though: the ability to refer to namespaces with file handles. A file handle is an opaque binary cookie that can be used to open an object without locating it in the filesystem; see the open_by_handle_at() man page for details.

Since a file handle is not contained within a filesystem, it can outlive the object to which it refers. Or, in the case of a namespace, it can outlive the visibility of the object to which it refers. A namespace may have gone completely out of use and exist only because of internal kernel references that will, presumably, go away soon; normally, user space would not be able to open this namespace. But that changes if user space has a file handle referring to the namespace; opening that file handle will result in "resurrecting" the namespace when it was otherwise on its way out.

That is the sort of API quirk that nobody asked for and nobody intended to implement. If, however, it is allowed to continue to exist, somebody — attackers if nobody else — will surely find a way to depend on it. Eliminating this quirk is the first objective of Brauner's series.

Normally, a reference-counted structure contains a single reference count; these patches go against that practice by adding a second count, called __ns_ref_active, to struct ns_common. This count tracks the number of "active" references, which are essentially the references visible to user space. It can be thought of as a subset of __ns_ref, in that any change to __ns_ref_active should be accompanied by an equal change to __ns_ref. Internal references created by the kernel, though, will only increment __ns_ref. In the end, __ns_ref still manages the lifetime of the namespace, while __ns_ref_active manages its visibility to user space.

So, for example, an attempt to open a namespace with open_by_handle_at() will fail if __ns_ref_active is zero, even if the namespace itself still exists within the kernel. It will no longer be possible to use file handles to bring namespaces back from the dead.

Listing namespaces

As kernel developers added namespaces over the last 24 years, none of them ever quite got around to implementing a way to see which namespaces are active in the system. In current kernels, the only way to get a complete list of namespaces is to go rummaging through the /proc/PID/ns directories for every process in the system. Needless to say, that is less than optimally efficient. It also still doesn't get a full list, since any namespaces that are empty but which are kept around with, for example, a bind mount, are not present in any process's /proc directory and will consequently be missed.

The obvious answer is to add a new system call that allows iterating through the active namespaces; in this series, that is listns():

    struct ns_id_req {
        __u32 size;         /* sizeof(struct ns_id_req) */
        __u32 spare;        /* Reserved, must be 0 */
        __u64 ns_id;        /* Last seen namespace ID (for pagination) */
        __u32 ns_type;      /* Filter by namespace type(s) */
        __u32 spare2;       /* Reserved, must be 0 */
        __u64 user_ns_id;   /* Filter by owning user namespace */
    };

    ssize_t listns(const struct ns_id_req *req, u64 *ns_ids,
                   size_t nr_ns_ids, unsigned int flags);

A caller starts by filling in an ns_id_req structure describing the information request. The size field is the size of the structure itself, allowing for expansion in the future if need be. The ns_type field is a bitmask of the namespace types of interest; MNT_NS for mount namespaces, for example, or NET_NS for network namespaces. If only the namespaces owned by a given user namespace are of interest, that user namespace's ID can be put into user_ns_id. The ns_id field should be zero for the first call.

The actual listns() call takes a pointer to that structure, an array (ns_ids) to store the returned namespace IDs, and the length of that array (nr_ns_ids). The flags argument must be zero in the current implementation. This call will fill in the ns_ids array with matching namespace IDs, returning the number of IDs that were put there. If the number of matching namespaces is too large to fit in the provided array, a subsequent listns() call can pick up where this one left off by placing the final ID returned by the previous call in the ns_id field of the ns_id_req structure.
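
For illustration, iterating over all of the network namespaces visible to the caller might look something like this. Since listns() is not yet in any released kernel, the wrapper function and the NET_NS constant are assumed here to match the proposed interface shown above:

    /* Illustrative sketch only: listns() is still an out-of-tree proposal, so
     * this assumes a libc-style wrapper matching the prototype above and the
     * NET_NS constant from the proposed UAPI header. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <linux/types.h>

    int main(void)
    {
        struct ns_id_req req = {
            .size = sizeof(req),
            .ns_id = 0,              /* start at the beginning */
            .ns_type = NET_NS,       /* only network namespaces */
            .user_ns_id = 0,         /* no owning-user-namespace filter */
        };
        __u64 ids[64];
        ssize_t n;

        while ((n = listns(&req, ids, 64, 0)) > 0) {
            for (ssize_t i = 0; i < n; i++)
                printf("namespace ID %llu\n", (unsigned long long)ids[i]);
            req.ns_id = ids[n - 1];  /* resume after the last ID returned */
            if (n < 64)
                break;               /* short batch: nothing left to list */
        }
        return n < 0 ? 1 : 0;
    }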

Needless to say, listns() will only return namespaces that have an __ns_ref_active count greater than zero. For the curious, there are several examples of how listns() can be used in the above-linked cover letter. It is also worth noting that, while this patch series is long, patches 22 through 72 are all tests for the new functionality; they also provide examples of how it is expected to be used.

This series is in its fourth revision; the rate of change so far suggests that there might be another round or two in store before it is ready to go, but there do not appear to be any fundamental objections to this work. While Brauner has not indicated when he plans to send these changes upstream, it seems reasonable to expect to see them during the 6.19 merge window.

Comments (11 posted)

Mergiraf: syntax-aware merging for Git

By Daroc Alden
October 31, 2025

The idea of automatic syntax-aware merging in version-control systems goes back to 2005 or earlier, but initial implementations were often language-specific and slow. Mergiraf is a merge-conflict resolver that uses a generic algorithm plus a small amount of language-specific knowledge to solve conflicts that Git's default strategy cannot. The project's contributors have been working on the tool for just under a year, but it already supports 33 languages, including C, Python, Rust, and even SystemVerilog.

Mergiraf was started by Antonin Delpeuch, but several other contributors have stepped up to help, of whom Ada Alakbarova is the most prolific. The project is written in Rust and licensed under version 3 of the GPL.

The default Git merge algorithm ("ort") is primarily line-based. It does include some tree-based logic for merging directories, but changes within a single file are merged on a line-by-line basis. That can lead to situations where two logically separate changes that affect the same line cause a merge conflict.

Consider the following base version:

    void callback(int status);

And then suppose that one person makes the function fallible:

    int callback(int status);

While someone else changes the argument type:

    void callback(long status);

The default merge algorithm can't handle that, because there are conflicting changes to the same line. Syntax-aware merging, however, is based on the syntactical elements of the language, not individual lines. So, for example, Mergiraf can resolve the above conflict like this:

    int callback(long status);

From its point of view, the changes don't actually overlap, because the return type and the argument type are treated as separate, non-overlapping regions. This kind of syntax-aware merging has been bandied about for many years, but the complexity of writing a merge algorithm for syntax trees kept it from really being practical for widespread use. Spork, an implementation of the idea for Java, was released in 2023, showing that it was actually feasible. Mergiraf attempts to extend that Java-specific algorithm to programming (and configuration or markup) languages in general.

The design

Mergiraf relies on the tree-sitter incremental parsing library to parse files in each supported language into generic syntax trees where each leaf corresponds to a specific token in the file and each internal node represents a language construct. However, Mergiraf itself needs relatively little information about each language to work. Instead, it uses a non-language-specific tree-matching algorithm to guide conflict resolution, plus a small amount of language knowledge layered on top. This design is part of the reason that the tool has been adapted to so many different languages.

The Mergiraf algorithm starts by doing a regular line-based merge; if that succeeds, as it often does, then the program doesn't need to resort to the more expensive tree-based merging algorithm. Even if a line-based merge fails, however, it often fails only in a few locations. When parsing the different versions of the file being merged, Mergiraf can mark any parts of the syntax tree that were resolved without conflicts by the line-based merge as not needing changes, allowing it to focus only on the conflicting parts. This provides a substantial speedup, especially for large files.

For the remaining parts, the tool uses the GumTree algorithm to find fuzzy matches between the remaining subtrees. Identifying the matches is enough to produce a diff, but it doesn't provide enough information on its own to resolve any conflicts. Next, Mergiraf flattens the syntax tree into a list of facts about how the nodes in the tree are related to each other. These facts are tagged with whether they came from the base, left, or right revision of the merge (i.e., the most recent common ancestor, the commit being merged into, and the commit being merged). Then a new syntax tree is reconstructed from the merged list of facts. If a fact from the base revision conflicts with another fact, it is discarded. If two facts from the left and right revisions disagree, that indicates an actual conflict that Mergiraf cannot resolve.

The advantage of this approach is that it eliminates the kind of move/edit conflicts that plague the ort algorithm: if one revision edits the internals of some part of the program, and the other revision relocates that part elsewhere, those facts don't contradict one another. On the other hand, if both revisions edit the exact same part of the program, that does represent a real conflict that a human should look at.

For some languages, though, Mergiraf can use language-specific knowledge to resolve even conflicts like this. For example, consider the following change to a Rust structure:

    // Base version
    struct Foo {
        field1: Bar,
    }

    // Left revision
    struct Foo {
        field1: Bar,
        new_field_left: Baz,
    }

    // Right revision
    struct Foo {
        field1: Bar,
        new_field_right: Quux,
    }

This is a merge conflict because a line-based algorithm couldn't tell in which order to add the new lines, and the order in which lines appear in a program is usually important. In Rust, however, the compiler is allowed to rearrange structure fields as it sees fit (unless the structure is marked #[repr(C)] or uses one of the other repr settings; the current version of Mergiraf apparently overlooks that case, which seems to be a known bug). Therefore, this merge conflict can be resolved automatically by putting the lines in any order; the resulting merged program has the same behavior either way. On the other hand, that wouldn't be a correct way to resolve the equivalent merge conflict in C, because, in C, the order of members in a structure can affect the correctness of the program.

When a syntactic element's children can be freely reordered without changing the meaning of the program, Mergiraf calls it a "commutative parent". Part of the language-specific information that Mergiraf needs is a list of which parts of the language are commutative parents, if any. A commutative parent isn't a get-out-of-jail-free card for merge conflicts, though: if two revisions add fields with the same name and different types, for example, that would still be a conflict. In such cases, Mergiraf uses an additional piece of language-specific information to put the conflicting lines close together, so that the resulting conflict markers pinpoint the problem as precisely as possible.

Using it

When I encountered it, Mergiraf's approach sounded promising, but I was curious about how much of a difference it would actually make in real-world use of Git. The Linux kernel repository contains, at the time of writing, 7,415 merge commits that, when replayed using the default merge algorithm, result in conflicts. These are the merge commits that would have had to be fixed by hand, although it's probably an underestimate of the number of merge conflicts that kernel developers have had to deal with. It doesn't include merge conflicts that would have appeared during rebasing, for example, because information about rebases isn't included in the Git history for analysis.

After extracting a list of every merge conflict in the kernel's Git history, I tried using Mergiraf to resolve them. 6,987 still resulted in conflicts, but 428 were resolved successfully, and a much larger fraction were at least partially resolved. Should those results generalize, which I think is likely, adopting Mergiraf could reduce the number of merge conflicts requiring manual resolution by a small but worthwhile amount, saving valuable maintainer time.

The tool itself has two interfaces: one that can be run by hand on a file with conflict markers (such as those produced by ort) in order to attempt to resolve conflicts, and one that can be used by Git automatically. Running "mergiraf solve <path>" will read the conflict markers in the given file and attempt to resolve them. Adding this snippet to one's Git configuration and setting the driver as the default in .gitattributes will use Mergiraf as the Git merge driver from the beginning:

    [merge "mergiraf"]
        name = mergiraf
        driver = mergiraf merge --git %O %A %B -s %S -x %X -y %Y -p %P -l %L
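
The corresponding .gitattributes entry, following the project's documentation, would look something like this (shown here enabling Mergiraf for every file; it can also be restricted to particular path patterns):

    * merge=mergiraf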

When invoked by Git, the user can review the conflicts that Mergiraf encountered and how it resolved them by running "mergiraf review". For people who don't have a merge conflict handy, Mergiraf has an example repository containing various kinds of conflicts, in order to show how Mergiraf resolves them. The tool also works with Jujutsu, and likely with other version-control systems, as long as they use the same merge-conflict syntax as Git.

Programmers have gotten along just fine without Mergiraf, so it isn't necessarily something that everyone will want to add to their set of programming tools. But few people enjoy running into merge conflicts, and tools that can help intelligently resolve them — especially the ones that are obvious to a human, and therefore a waste of time to deal with — are an attractive prospect.

Comments (44 posted)

The long path toward optimizing short reads

By Jonathan Corbet
October 30, 2025
The kernel's file-I/O subsystems have been highly optimized over the years in the hope of providing the best performance for a wide variety of workloads. There is, however, one workload type that suffers with current kernels: applications that perform many short reads, in multiple processes, from the same file. Kiryl Shutsemau has been working on a patch to try to optimize this case, but the task is turning out to be harder than one might expect.

As Shutsemau (who has also been known as Kirill Shutemov) explains in the changelog to this relatively short patch, one of the steps in performing a read from a file is locating the relevant folio in the page cache and obtaining a reference to that folio to ensure that it remains stable while the data it contains is copied back to user space. In cases where the reads are short and frequent, though, that reference-count manipulation becomes a significant part of the entire cost; atomic operations are expensive, and bouncing the cache line for the folio around the system makes things even worse. Workloads of this type are measurably slowed by this overhead.

He set out to reduce this performance penalty by adding a new fast path for short reads. The first step is to add a new sequence counter (to the address_space structure) that tracks the number of folios for a given file that have been deleted from the page cache. A short read (defined as less than 1KB in this patch) is then directed to a fast path that works by noting that sequence count before copying the data to be read from the folio into a buffer on the kernel stack. If, at the conclusion of the copy, the sequence count has not changed, the data can be safely copied from that buffer back to user space; otherwise the kernel will fall back into the slower path.
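
Reduced to its essentials, the fast path has roughly this shape. This is an illustrative sketch only, not code from the patch; it assumes the new counter behaves like a seqcount_t, and the field and buffer names are placeholders invented here:

    /*
     * Illustrative sketch only, not code from the patch: it assumes the new
     * counter is a seqcount_t in struct address_space (the field name here is
     * invented) and omits the locking, RCU, and error handling of the real
     * implementation.
     */
    char buf[SHORT_READ_MAX];    /* small on-stack buffer; size is a placeholder */
    unsigned int seq;

    seq = read_seqcount_begin(&mapping->remove_seqcount);
    memcpy_from_folio(buf, folio, offset, len);
    if (read_seqcount_retry(&mapping->remove_seqcount, seq))
        goto slow_path;          /* a folio was removed; fall back to the normal path */
    if (copy_to_user(user_buf, buf, len))
        return -EFAULT;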

The optimization seems to work. One benchmark result, for short reads from a single small file, improves by nearly a factor of three; other tests show more modest improvements. The obligatory kernel-compilation benchmark, which is definitely not the sort of workload this patch is trying to optimize, improves by roughly 1%. On its face, this patch looks like the type that offers obvious benefits and can sail through the process fairly easily, but memory management and file I/O are subtle spaces where hazards abound.

The use of a 1KB stack-based buffer would once have raised no end of eyebrows. The kernel stack is not as small as it once was, but 1KB is still a significant part of it. Linus Torvalds suggested dropping the maximum "short-read" size to 768 bytes, but Shutsemau responded that this type of read only happens at the end of a short call chain, so there is little risk of a stack overflow happening there. Andrew Morton also questioned the "1k guess-or-giggle" cutoff, saying that a much smaller buffer would probably handle most of the relevant cases. He also wondered if it would be possible to copy the data directly to user space rather than buffering by way of the kernel stack; Torvalds answered that doing so would risk exposing sensitive data if the folio in question is reused during the operation.

A different sort of criticism came from Dave Chinner, who worried about what would happen if one of these short reads raced with an fallocate() operation that punches a hole into the same file. fallocate() is specified to be atomic, meaning that user space will see the contents of the file either before or after an operation, but will not see intermediate results. The check on the new sequence count, he said, would not notice changes by fallocate(), potentially resulting in "a transient data corruption event" if data is copied to satisfy a read while being simultaneously modified by fallocate().

Torvalds dismissed those concerns, describing the atomicity guarantees for fallocate() as "a bedtime story", and asserting that implementing that behavior would be too expensive. Chinner replied that this case is the same as truncate(), where the kernel community treats the exposure of partial results as a serious bug. The problem is worse than just exposing partial results, he said; in the case of an FALLOC_FL_COLLAPSE_RANGE operation, a badly timed read could return results that never existed, even partially, in the file. With regard to performance, he said that XFS implements the necessary locking to prevent this kind of race, even with Shutsemau's patch, so providing the specified guarantees for fallocate() does not have to be slow.

Chinner also advised against increasing the "cognitive load" for anybody who is working on any part of the implementation of truncate(), which has long been seen as one of the most difficult operations for a filesystem to get right. He asked: "is the benefit for this niche workload really worth the additional complexity it adds to what is already a very complex set of behaviours and interactions?". Hugh Dickins echoed that sentiment, but Torvalds said that there are other parts of the page cache that do "*much* more scary things", and that tripling performance for a known workload isn't necessarily "niche". David Hildenbrand answered with a request that this change be deferred until planned work to change how folios are freed (using read-copy-update) is completed.

The conversation has wound down, for now at least; it seems likely to resume if Shutsemau reposts the patch after addressing some other issues that were pointed out in review. This work, and the conversation around it, shows just how subtle the intersection of memory management and the virtual filesystem layer can be; each of those subsystems is tricky enough on its own, and bringing them together does not make anything simpler. That is why it can be difficult for this kind of change to clear the bar for acceptance; it is just too easy to break things. So this particular optimization's acceptance into the mainline is far from guaranteed, but it is an interesting exercise regardless.

Comments (6 posted)

Julia 1.12 brings progress on standalone binaries and more

November 4, 2025

This article was contributed by Lee Phillips

Julia is a modern programming language that is of particular interest to scientists due to its high performance combined with language features such as Lisp-style macros, an advanced type system, and multiple dispatch. We last looked at Julia in January on the occasion of its 1.11 release. Early in October Julia 1.12 appeared, bringing a handful of quality-of-life improvements for Julia programmers, most notably support, though still experimental and limited, for the creation of binaries.

Standalone binaries

The big news in this latest release is that Julia programs can be compiled into small, standalone binaries. However, the reality is that the generated binaries are not exactly small, not exactly standalone, and severely limited in scope. Development is proceeding on all of these fronts. The current facility, while it could conceivably be useful in lucky circumstances, is really a kind of proof of concept: a demonstration of how things will eventually work.

My experiments with a "hello world" program resulted in a 1.7MB binary and a directory of library files that occupied a further 91MB. The binary needs to be placed alongside this library directory to run; if you want to give your binary to a friend to use, you need to bundle up the entire 93MB directory. However, your friend does not need to have a Julia installation, which occupies about a gigabyte. The binaries are small in comparison with earlier iterations of the standalone compiler technology, which stuffed most of the Julia runtime and the standard library into the executable. This progress reflects work on the "trimming" ability of the compiler, which attempts to slice out unused routines from the standard library, unneeded parts of the Julia runtime, metadata, and code from the user's program that it can determine is unreachable.

Generation of the binary takes on the order of a minute, but once that's done it starts up instantly whenever invoked. Compilation of these "standalone" binaries requires GCC, as well as the JuliaC package. The latter is most conveniently used when installed as an "app" (see the "Apps" section below) using the package-mode command "app add JuliaC", after which you can compile Julia apps from the command line. See the appendix to this article for working examples of Julia apps and binaries.

Veterans of compiled languages with static types, such as C and Fortran, are accustomed to a different experience. On my machine "hello world" in Fortran compiles in less than a second and produces a 16KB binary. The binary is portable to any machine with the same CPU architecture and commonly installed system libraries. One should not expect an identical outcome when compiling Julia binaries. Julia's "secret sauce", the dynamic type system and method dispatch that endow it with its powers of composability, will never be a feature of languages such as Fortran. The tradeoff is a more complex compilation process and the necessity to have part of the Julia runtime available during execution.

Currently, there are severe limitations imposed on the program to be compiled with juliac. The main limitation is the prohibition of dynamic dispatch. This is a key feature of Julia, where methods can be selected at run time based on the types of function arguments encountered. The consequence is that most public packages don't work, as they may contain at least some instances of dynamic dispatch in contexts that are not performance-critical. Some of these packages can and will be rewritten so that they can be used in standalone binaries, but, in others, the dynamic dispatch is a necessary or desirable feature, so they will never be suitable for static compilation.

The current state of the juliac compiler tool has other severe limitations. For example, programs cannot read from files or from the terminal; the only way to provide input is through command-line arguments.

Workspaces

Julia's package system is responsible for installing a consistent set of dependencies for a program; in support of this, it performs a growing set of tasks. The latest release adds three features to the set: workspaces, apps, and an enhancement to the status command.

The new concept of a workspace is a set of projects that share the same Manifest.toml file, which is an automatically generated and complete dependency graph of a project. The idea is to have a main project and (possibly) several subprojects, which inherit the dependencies of the main project, and might have their own additional dependencies that get added to the manifest. Some natural applications of the concept would be to create subprojects for testing, examples, or documentation.

The implementation of workspaces in the current release gives the impression of being a work in progress, although it is definitely useful once one figures out how to use it. The obstacle in the way of this is an almost complete lack of documentation. To create a workspace, you need to edit a project's Project.toml file manually. This file contains the direct dependencies of a project (not the entire dependency graph) and is distributed with it, allowing other users to recreate the environments needed by the project.

To define a workspace consisting, for example, of a main project called "BigProject" with two subprojects called "sub1" and "sub2", the following two lines are added to the BigProject's Project.toml file:

    [workspace]
    projects = ["sub1", "sub2"]   

There is no interactive package-mode command for this, as there is for other functions that alter the Project.toml file, such as adding or removing dependencies or pinning their versions.

The directories for the subprojects must be placed within the main project's directory, at the same level as the main Project.toml file. Therefore the file layout for this workspace will look like this:

    BigProject/
    ├── Manifest.toml
    ├── Project.toml
    ├── src/
    │   └── BigProject.jl
    ├── sub1/
    │   ├── Project.toml
    │   └── src/
    └── sub2/
        ├── Project.toml
        └── src/ 

The "generate" command in package mode generates the file layout, with a skeleton module in the src directory, of a single project. But it's up to the user to place the subprojects under the main project directory.

Now the programmer can add dependencies to BigProject, as well as to the subprojects, using the activate and add commands in the package mode from the Julia read-eval-print loop (REPL). This will modify each Project.toml file as appropriate, but a single dependency graph, resolved for all of the projects in the workspace, will be created in BigProject's Manifest.toml file; the subprojects will not get separate manifests. If the project creator wants to distribute BigProject without the subprojects, the lines in Project.toml defining the workspace should probably be removed; again, this must be done manually.

The package-mode status command in the REPL has a new option, --workspace, that gives some information about the workspace layout. However, it prints a flat list of all direct dependencies where one might expect some information about which dependency belongs to which member of the workspace.

The workspace feature has enough utility that I envision using it myself on occasion, but it would benefit from better integration with the REPL's package mode and is in desperate need of documentation.

Apps

"Apps" is a new package option that provides a way to make a Julia project into a command that can be invoked from the terminal like any other command. This should not be confused with the creation of standalone binaries (see above). The use of an app requires the presence of a Julia installation. Support for apps is still experimental. As with workspaces, the implementation is bare-bones, requiring manual editing of the Project.toml file. Adding the lines:

    [apps]
    fact = {}  

indicates the desire to install an app named "fact" (with optional metadata included within the curly brackets). This app will be installed in the ~/.julia/bin directory with executable permissions. What the app actually does when invoked is to start, behind the scenes, the Julia runtime and execute the entry point of the module within the project whose Project.toml file contained the directive above. This entry point is indicated in the module source file using the @main macro:

    function (@main)(args)
        ...
    end   

The installation of the app into ~/.julia/bin uses a new variation of the package-mode add command:

    (@v1.12) pkg> app add <project name or path>  

The name used for the app, "fact" in this example, has no necessary relation to the name of the module or project.

Of course, one could accomplish the same thing by writing a shell script that invokes the Julia interpreter. I have several shell scripts that I've written for my own convenience that begin with lines like:

    #!/usr/local/bin/julia --project=<project path>  

followed by a normal Julia program. Scripts such as these became practical when Julia's startup time became reasonable, several versions ago. In fact, the files that the new "app" mechanism installs in ~/.julia/bin are just shell scripts. The advantage is that they can be installed with one package-mode command.

Redefinition of structs

Julia encourages development in the REPL (or using it through connected editors, IDEs, or notebooks). Programmers have always been able to freely redefine functions, as their programs grow, without having to restart the interpreter. This freedom had not been extended to the definitions of structs, however; an attempt to redefine them led to an error message. The restriction was a nagging inconvenience, as structs form the basis of user-defined types, and are second in importance only to functions themselves. This limitation is finally removed in Julia 1.12, which allows struct redefinition in the REPL.

This improvement will also make the third-party Revise package, a standard member of every Julia programmer's toolkit, even more useful. Revise greatly aids development in the REPL by automatically reloading function definitions as needed, precompiling behind the scenes when source files are changed. The new struct redefinition freedom in Julia 1.12 allows Revise to also load the redefined structs. Its new powers are being developed in an active branch of the Revise repository.

Multithreading enhancements

In our article on the new features in Julia 1.9, we described the arrival of interactive threads: starting Julia with the flag -t m,n creates two "thread pools", a normal pool of m worker threads and a pool of n interactive threads that have higher priority. The interactive threads would be used to improve responsiveness in the REPL, for example. Starting Julia with just -t m created m worker threads and no interactive threads, and omitting the flag was the same as using -t 1.

The latest release changes the default. Now, omitting the thread flag is the same as using -t 1,1: one worker thread and one interactive thread. Apparently the utility of the interactive thread in improving the REPL experience was deemed so high that everyone should want to use this for interactive development or running programs. To get zero interactive threads, the previous default, we must explicitly use -t 1.

Unfortunately, here again, the documentation is incomplete and confusing. Experimentation reveals that using "auto" in place of a number for m reserves a number of worker threads equal to the number of logical cores available, and using "auto" for n always results in one interactive thread.

Initialization tasks in concurrent code can be awkward to program. If a multithreaded simulation, for example, needs to read some parameters from a file to set some physical constants, it's not ideal to have each thread (there may be thousands) open and read the file separately. Another common case is the initialization of a simulation with a random number, which may need to be the same for each thread.

A welcome addition to Julia's multithreading toolkit is the appearance of three new types that assist with initialization tasks. These are OncePerProcess, OncePerThread, and OncePerTask. They permit the definition of initialization functions that run once, with the granularity suggested by their names, returning the same value on subsequent calls. The new functions are more convenient than manually confining initialization to a single thread and broadcasting the results to the others.
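
For example, a per-process initializer can be written as follows; this is a minimal sketch following the pattern shown in the documentation for Base.OncePerProcess in Julia 1.12:

    # Minimal sketch following the documented Base.OncePerProcess pattern:
    # the do-block runs exactly once per process, and every later call
    # returns the same cached value.
    const params = Base.OncePerProcess{Dict{String,Float64}}() do
        println("reading simulation parameters once")
        Dict("gravity" => 9.81, "dt" => 0.01)
    end

    g = params()["gravity"]   # first call runs the initializer
    g = params()["gravity"]   # later calls reuse the same Dict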

In addition to the new features described in detail here, 1.12 comes with new compiler diagnostics, facilities for handling atomic variables, a new wall-time profiler, and several other enhancements.

Conclusion

Some of the new features described above are essentially undocumented. As has been the case in the past, and as is the case with far too many Julia packages in the public General Registry, I had to find out how they work by perusing GitHub issues, forum discussions, and source code, but mainly by extensive and time-consuming experimentation. This is a blind spot widely afflicting developers in general; in the case of Julia, it is an obstacle to wider adoption of the language.

Nevertheless, most of the new features in the latest release are immediately useful and are responsive to the needs of Julia programmers. The new code-trimming talent of the juliac compiler represents substantial progress, even if its limitations mean that it does not yet have wide practical application. The project is receiving a good deal of attention, however, and the Julia community can look forward to standalone binaries becoming continually more useful.

Comments (10 posted)

A security model for systemd

By Joe Brockmeier
November 5, 2025

All Systems Go!

Linux has many security features and tools that have evolved over the years to address threats as they emerge and security gaps as they are discovered. Linux security is, as Lennart Poettering observed at the All Systems Go! conference held in Berlin, somewhat random rather than a "clean" design. To many observers, that may also appear to be the case for systemd; however, Poettering said that he does have a vision for how all of the security-related pieces of systemd are meant to fit together. He wanted to use his talk to explain "how the individual security-related parts of systemd actually fit together and why they exist in the first place".

I did not have a chance to attend the All Systems Go! conference this year, but watched the recording of the talk after it was published. The slides are also available.

What is a security model?

Poettering said that when he started drafting his slides it dawned on him that he had used the phrase "security model" frequently, but without knowing its formal definition. So he turned to Wikipedia's definition, which states:

A computer security model is a scheme for specifying and enforcing security policies. A security model may be founded upon a formal model of access rights, a model of computation, a model of distributed computing, or no particular theoretical grounding at all.

That definition was pleasing, he said, because he could just "pull something out of my hair and it's a security model." Of course, he wanted to be a bit more formal than that. Considering the threats in the world we actually live in was the place to begin.

Thinking about threats

Today's systems are always exposed, he said. They are always connected; even systems that people do not think about, such as those in cars, are effectively always online waiting for updates. And systems are often in physically untrusted environments. Many systems are hosted by cloud providers and outside the physical control of their users. Users also carry around digital devices, such as phones, tablets, and laptops: "So it is absolutely essential that we talk about security to protect them both from attacks on the network and also locally and physically."

The next thing is to think about what is actually being attacked. Poettering described some of the possible scenarios; one type of attack might take advantage of a vulnerability in unprivileged code, while another might try to exploit privileged code to make it execute something it was not supposed to. It could be an attack on the kernel from user space. "We need to know what's being attacked in order to defend those parts from whomever is attacking them."

Attacks also have different goals, he said. Some attacks may target user data, others may attempt to backdoor a system, and still others may be focused on using a system's resources, or conducting a denial-of-service (DoS) attack. The type of attack determines the type of protection to be used. Encryption, he said, is useful if one is worried about data exfiltration, but not so much for a DoS.

Poettering said that he also thought about where attacks are coming from. For example, does an attacker have physical access to a system, is the attack coming over a network, or is the attack coming from inside the system? Maybe a user has a compromised Emacs package, or something escapes a web browser's sandbox. Not all of these attack sources are relevant to systemd, of course, but thinking about security means understanding that attacks can come from everywhere.

FLOUTing security

The bottom line is that the approach to defending against attacks depends on where they come from and what the intention of the attack is. Poettering put up a new slide, which he said was the most important of all the slides in his presentation. It included his acronym for systemd's security model, "FLOUT":

  • Frustrate attacks
  • Limit exposure after successful attacks
  • Observe attacks
  • Undo attacks
  • Track vulnerabilities

"I call this 'FLOUT': frustrate, limit, observe, undo, and track. And I think in systemd we need to do something about all five of them".

The first step is to "frustrate" attackers; to make attacks impossible. "Don't even allow the attack to happen and all will be good." But, it does not always work that way; software is vulnerable, and exploits are inevitable. That is why limiting exposure with sandboxing is important, he said. If a system is exploited, "they might do bad stuff inside of that sandbox, but hopefully not outside of it."

Since exploits are inevitable, it is also necessary to be able to observe the system and know not only that an attack happened, but how it happened as well. And, once an attack has happened and been detected, it must be undone. With containers and virtual machines, it is less important to have a reset function, Poettering said: "Just delete the VM or container, if you have the suspicion that it was exploited, and create a new one". But that approach does not work so well with physical devices. "We need to always have something like a factory reset that we can return to a well-defined state" and know that it is no longer exploited. Finally, there is tracking vulnerabilities. Ideally, he said, you want to know in advance if something is vulnerable.

Poettering returned to a theme from the beginning of the talk: the fact that Linux, and its security features, were not designed "in a smooth, elegant way". There are so many different security components, he complained, ranging from the original Unix model with UIDs and GIDs, to user namespaces. "And if you want to use them together, it's your problem". Too much complexity means less security.

He said that he preferred universal security mechanisms to fine-grained ones. This means finding general rules that always apply and implementing security policies that match those rules, rather than trying to apply policies for specific projects or use cases. He gave the example that device nodes should only be allowed in /dev. That is a very simple security policy that is not tied to any specific hardware.

But that is not how many of Linux's security mechanisms are built. SELinux, for instance, requires a specific policy for each daemon. Then, one might write the policy that forbids that daemon from creating device nodes. But that is much more fragile and difficult to maintain, he said. "It's much easier figuring out universal truths and enforcing them system-wide". To do that, components should be isolated into separate worlds.

Worlds apart

Poettering said that he liked to use the word "worlds" because it's not used much in the Linux community, so far. The term "worlds" could be replaced with "containers", "sandboxes", "namespaces", and so on. The important concept is that something in a separate world is not only restricted from accessing resources that are outside of that world, it should not see those resources at all.

So to keep the complexity of these sandboxes small, it's good if all these objects are not even visible, not even something you have to think about controlling access to, because they are not there, right?

Security rules should be that way, he said, and deal with isolation and visibility. That is different than the way SELinux works; everything still runs in the same world. An application may be locked down, but it still sees everything else.

The next fundamental thing to think about, he said, is to figure out what an application is in the first place and how to model it for security. It is not just an executable binary, but a combination of libraries, runtimes, data resources, configuration files, and more, all put together. To have a security model, "we need to model apps so that we know how to apply the security" to them.

Ideally, an app would be something like an Open Container Initiative (OCI) image or Flatpak container that has all of its resources shipped in an "atomic" combination; that is, all of the components are shipped together and updated together. In this way, he said, each application is its own world. Here, Poettering seemed to be comparing the update model for Docker-type containers and Flatpak containers to package-based application updates, where an application's dependencies might be updated independently; he said that "non-atomic behavior" is a security vulnerability because different components may not be tested together.

Another piece of a security model is delegation; components need to be able to talk to one another and delegate tasks. On the server side, the database and web server must be able to talk to one another. On the desktop, the application that needs a Bluetooth device needs to be able to talk to the application that manages Bluetooth devices.

Security boundaries

Poettering also talked about different types of security boundaries. Security sandboxes are one type of boundary that most people already think about; boundaries between user identities (UIDs) are another. A system's different boot phases are yet another type of boundary; for example, during certain parts of the boot process there are values that are measured into the TPM. After that phase of the boot process is finished it "kind of blows a fuse" and the system can no longer modify those values, which provides a security boundary.

He said that there are also distinctions that are important between code, configuration, and state. Code is executable, but the configuration is not. The resources should be kept separate; state and configuration should be mutable, but code should not be mutable "because that's an immediate exploit, basically, if some app or user manages to change the code".

Along with the security boundaries are the technologies that enforce those boundaries; for example, Linux namespaces, SELinux security labels, CPU rings, and others.

Distributions

The Linux distribution code-review model is supposed to be a security feature, he said. It means that users do not have to download software from 500 different sources they "cannot possibly understand if they are trustworthy or not". Instead, users rely on distributions to do some vetting of the code.

However, Poettering said that there are problems with this model: namely that it does not scale and it is too slow. Distributions cannot package everything, and they cannot keep up with how quickly developers release software. Plus, code reviews are hard, even harder than programming. "So do we really trust all the packagers and the distributions to do this comprehensively? I can tell you I'm not." This is not to disrespect distribution packagers, he said: "I'm just saying that because I know I'm struggling with code reviews, and so I assume that other people are not necessarily much better than me".

One never knows, he said, if distribution packagers are actually reviewing the code they package, and "sometimes it becomes visible that they don't; let's hope that those are the exceptions". Sandboxing and compartmentalizing, Poettering said, is essential to ensure that users do not have to rely solely on code review for protection.

Rules

Having examined all the things that one has to think about when creating a security model, Poettering wanted to share the rules that he has come up with. The first is that kernel objects should be authenticated before they are instantiated. "We should minimize any interaction with data, with objects, with stuff that hasn't been authenticated yet because that is always where the risk is."

Poettering also said that security should focus on images, not files; look at the security of an entire app image, rather than trying to examine individual files (or "inodes" as he put it). "We should measure everything in combination before we use it". He brought up sandboxing again, and said that it was necessary to "isolate everywhere".

Another rule is that a comprehensive factory reset is a must, he said. This cannot be an afterthought, but something that needs to be in the system right away. And, finally, "we need to really protect our security boundaries".

But, he said, a security model still has to be useful. And, "as most of us here are hackers" there needs to be a break-glass mode that allows for making temporary changes and debugging. A break glass mode should be a measured and logged event, though: "Even if you are allowed to do this, there needs to be a trace of it afterward". Such a mode should not allow a developer to exfiltrate data from a system, and possibly even invalidate data in some way.

Linux misdesigns

Next, Poettering identified some of the things he felt were misdesigns in the Linux and Unix security models that he does not want to rely on. His first gripe was with the SUID (or "setuid") bit on files. This is not a new topic for him; in 2023, in response to a GNU C library (glibc) vulnerability, Poettering said that general-purpose Linux distributions should get rid of SUID binaries. Instead, he suggested using interprocess communication (IPC) to perform privileged operations on behalf of unprivileged users.

He also felt that the Linux capabilities implementation is a terrible thing. The feature is "kind of necessary", but a design mistake. For example, CAP_SYS_ADMIN is "this grab bag of privileges of the super user". He complained that it is a privilege "so much bigger than all the other ones that it's a useless separation" of privileges. However, complaints about CAP_SYS_ADMIN are neither new nor rare; Michael Kerrisk, for example, enumerated several in his LWN article about it in 2012.

In any case, Poettering did acknowledge that capabilities are "not entirely useless", and that systemd makes heavy use of capabilities. However, "we only make use of it because it's there, and it's really basic, and you cannot even turn it off in the kernel".

One of the core Unix designs that Linux has inherited is "everything is a file". That is, he said, not actually true. There are certain kinds of objects that are not inodes, such as System V semaphores and System V shared memory. That is a problem, because those objects have a different type of access control than inodes, where "at least we know how security works".

Implementation in systemd

"Now, let's be concrete", Poettering said. It was time to explain how systemd implements the security model that he had discussed, and where its components fit into the FLOUT framework. The first was to sandbox services, to limit exposure; systemd has a number of features for putting services into their own sandbox.

Another was using dm-verity and signatures for discoverable disk images (DDIs), which are inspected to ensure they meet image policies. Verifying disk images would frustrate attackers, as well as provide observability; if a disk image does not match its signature, that is a sign of tampering. Systemd's factory-reset features provide the "undo" part of the FLOUT framework; in systemd v258, the project added the ability to reset the TPM as well as disk partitions. LWN covered that in August 2025.

Poettering said that we should also "try really hard to do writable XOR executable mounts". A filesystem should be mounted writable so that its contents can be modified, or it should be mounted executable so that binaries can be run from it, but a filesystem should never be both. If that were implemented through the whole system, he said, it would be much more secure. Systemd provides tools to do this, in part, with its system-extension features. Systemd can mount system extension images (sysext) for /usr and /opt, and configuration extension images (confext) for /etc. The default is to mount these extensions read-only, though it is possible to make them writable.
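
As a purely illustrative sketch (not a systemd tool), the writable-XOR-executable rule can be audited by walking /proc/self/mounts and flagging any filesystem that is currently mounted both writable and executable:

    # Report mounts that violate "writable XOR executable".
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype, options = line.split()[:4]
            opts = options.split(",")
            writable = "ro" not in opts
            executable = "noexec" not in opts
            if writable and executable:
                print(f"{mountpoint} ({fstype}) is both writable and executable")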

Systemd also uses the TPM a lot, "for fundamental key material" to decrypt disks (systemd-cryptsetup) and service credentials (systemd-creds). That, he said, helped to frustrate attackers and limit access. Finally, he quickly mentioned using the varlink IPC model for delegating and making requests to services, which also helped as a way to limit access.

Questions

One member of the audience wanted to know how Poettering would replace capabilities if he had a magic wand capable of doing so. "If you don't like it, what would you like to see instead?" Poettering responded that his issue was not with the capability model per se, but with the actual implementation in Linux. He said that he liked FreeBSD's Capsicum: "if they would implement that, that would be lovely".

Another attendee asked when systemd would enable the no-new-privileges flag. Poettering said that it was already possible to use that flag with systemd because it does not have SUID binaries. "We do not allow that". But, he said, that does not mean that the rest of the system is free of SUID binaries. It should be the goal, "at least in well-defined systems", to just get rid of SUID binaries.
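
The flag in question is the kernel's no_new_privs bit, set with prctl(); once set it cannot be cleared, and it is inherited across fork() and execve(), so running an SUID binary no longer raises privileges. Systemd exposes the same thing as the NoNewPrivileges= unit setting. A minimal sketch of setting it directly (the constants come from <linux/prctl.h>):

    import ctypes

    PR_SET_NO_NEW_PRIVS = 38   # from <linux/prctl.h>
    PR_GET_NO_NEW_PRIVS = 39

    libc = ctypes.CDLL(None, use_errno=True)
    # After this call, execve() can no longer grant extra privileges
    # (SUID, file capabilities, ...) to this process or its descendants.
    if libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_NO_NEW_PRIVS) failed")
    print("no_new_privs:", libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0))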

Comments (56 posted)

Page editor: Joe Brockmeier

Brief items

Security

CHERIoT 1.0 released

Version 1.0 of the Capability Hardware Extension to RISC-V for IoT (CHERIoT) specification has been released. CHERIoT is a hardware-software system for secure embedded devices, and the specification provides a full description of the ISA and its intended use by CHERIoT RTOS. David Chisnall has written a blog post about the release that explains its significance as well as plans for CHERIoT 2.0 and beyond:

The last change that we made to the ISA was in December 2024, so we are confident that this is a stable release that we can support in hardware for a long time. This specification was implemented by the 1.0 release of CHERIoT Ibex and by CHERIoT Kudu (which has not yet had an official release). These two implementations demonstrate that the ISA scales from three-stage single-issue pipelines to six-stage dual-issue pipelines, roughly the same range of microarchitectures supported by Arm's M profile.

We at SCI have the first of our ICENI chips, which use the CHERIoT Ibex core, on the way back from the fab now and will be scaling up to mass production in the new year. I am not allowed to speak for other folks building CHERIoT silicon, but I expect 2026 to be an exciting year for the CHERIoT project!

Comments (none posted)

Removing XSLT from Chromium

Mason Freed and Dominik Röttsches have published a document with a timeline and plans for removing Extensible Stylesheet Language Transformations (XSLT) from the Chromium project and Chrome browser:

Chromium has officially deprecated XSLT, including the XSLTProcessor JavaScript API and the XML stylesheet processing instruction. We intend to remove support from version 155 (November 17, 2026). The Firefox and WebKit projects have also indicated plans to remove XSLT from their browser engines. This document provides some history and context, explains how we are removing XSLT to make Chrome safer, and provides a path for migrating before these features are removed from the browser.

LWN covered the Web Hypertext Application Technology Working Group (WHATWG) discussion about XSLT in August.

Comments (30 posted)

Security quote of the week

The big difference between jails and the combination of features that are assembled to provide container abstractions on Linux is that everything in the FreeBSD kernel knows if a process is jailed. The process structure contains a pointer to the current jail. This makes it very easy for device drivers to say 'this ioctl is not permitted for jailed processes because it affects global state, this other ioctl is allowed because it's specific to the context of the associated file descriptor'. Administrative APIs provided by the kernel either operate on the current jail, or are blocked in jails, depending on which would be safe.
David Chisnall

Comments (3 posted)

Kernel development

Kernel release status

The current development kernel is 6.18-rc4, released on November 2. Quoth Linus: "Last week in fact felt *so* calm that I was surprised to notice that rc4 isn't really smaller than usual: all the stats look very normal, both in number of changes and where the changes are."

Stable updates: 6.17.7, 6.12.57, and 6.6.116 were released on November 2.

Comments (none posted)

A new kernel port — to WebAssembly

Joel Severin has announced the availability of his port of the Linux kernel to WebAssembly; one can go to this page and watch it boot in a browser.

Wasm is similar to every other arch in Linux, but also different. One important difference is that there is no way to suspend execution of a task. There is a way around this though: Linux supports up to 8k CPUs (or possibly more...). We can just spin up a new CPU dedicated to each user task (process/thread) and never preempt it

Comments (6 posted)

Defeating KASLR by Doing Nothing at All (Project Zero)

The Project Zero blog explains that, on 64-bit Arm systems, the kernel's direct map is always placed at the same virtual location, regardless of whether kernel address-space layout randomization (KASLR) is enabled.

While it remains true that KASLR should not be trusted to prevent exploitation, particularly in local contexts, it is regrettable that the attitude around Linux KASLR is so fatalistic that putting in the engineering effort to preserve its remaining integrity is not considered to be worthwhile. The joint effect of these two issues dramatically simplified what might otherwise have been a more complicated and likely less reliable exploit.

Comments (23 posted)

Distributions

Bazzite Fall update released

The Universal Blue project has announced the Fall update for the Fedora-based Bazzite gaming distribution. This release brings Bazzite up to Fedora 43 and includes support for additional handheld gaming systems, drivers for a number of steering-wheel devices, and more.

Comments (none posted)

Debian to require Rust as of May 2026

Julian Andres Klode has announced that the Debian APT package-management tool will acquire "hard Rust dependencies sometime after May 2026". "If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port."

Comments (193 posted)

Devuan 6.0 released

Version 6.0 ("Excalibur") of the systemd-averse Devuan distribution has been released. It is based on Debian 13 ("trixie"), and includes some of the significant changes from that release, including the merged /usr hierarchy. See the release notes for details.

Comments (5 posted)

Ubuntu introduces architecture variants

Michael Hudson-Doyle, a member of Ubuntu's Foundations team, has announced the introduction of an "architecture variant" for Ubuntu 25.10:

By making changes to dpkg, apt and Launchpad, we are able to build multiple versions of a package, each for a different level of the x86-64 architecture, meaning we can have packages that specifically target x86-64-v3, for example.

As a result, we're very excited to share that in Ubuntu 25.10, some packages are available, on an opt-in basis, in their optimized form for the more modern x86-64-v3 architecture level.

See the announcement for details on opting in to x86-64-v3 packages.
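
As a rough, purely illustrative check (the authoritative decision is made by dpkg and APT, not by a script like this), one can look for a representative subset of the CPU features that the x86-64-v3 level requires in /proc/cpuinfo:

    # Does this CPU advertise some of the features x86-64-v3 requires?
    required = {"avx2", "bmi2", "fma", "movbe"}   # representative subset only

    with open("/proc/cpuinfo") as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                missing = required - flags
                print("looks x86-64-v3 capable" if not missing
                      else "missing: " + ", ".join(sorted(missing)))
                break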

Comments (39 posted)

Development

Incus 6.18 released

Version 6.18 of the Incus container and virtual-machine management system has been released. Notable changes in this release include new configuration keys for providing credentials to systemd, BPF token delegation, VirtIO support for sound cards, the ability to export ISO volumes, improvements to the IncusOS command-line utility, and more.

Comments (none posted)

LXQt 2.3.0 released

Version 2.3.0 of the Lightweight Qt Desktop Environment (LXQt) has been released. The highlight of this release is continued improvement in Wayland support across LXQt components. Rather than offering its own compositor, the LXQt project takes a modular approach and works with several Wayland compositors, such as KWin, labwc, and niri.

Comments (none posted)

OCI Runtime Specification 1.3 adds FreeBSD

Version 1.3 of the Open Container Initiative (OCI) Runtime Specification has been released. The specification covers the configuration, execution environment, and lifecycle of containers. The most notable change in 1.3 is the addition of FreeBSD to the specification, which the FreeBSD Foundation calls "a watershed moment for FreeBSD":

The addition of cloud-native container support complements FreeBSD's already robust virtualization capabilities, particularly the powerful FreeBSD jails technology that has been a cornerstone of the operating system for over two decades. In fact, OCI containers on FreeBSD are implemented using jails as the underlying isolation mechanism, bringing together the security and resource management benefits of jails with the portability and ecosystem advantages of OCI-compliant containers.

Comments (none posted)

Python steering council accepts lazy imports

Barry Warsaw, writing for the Python steering council, has announced that PEP 810 ("Explicit lazy imports") has been approved, unanimously, by the four who could vote. Since Pablo Galindo Salgado was one of the PEP authors, he did not vote. The PEP provides a way to defer importing modules until the names defined in a module are needed by other parts of the program. We covered the PEP and the discussion around it a few weeks back. The council also had "recommendations about some of the PEP's details, a few suggestions for filling a couple of small gaps", including:
Use lazy as the keyword. We debated many of the given alternatives (and some we came up with ourselves), and ultimately agreed with the PEP's choice of the lazy keyword. The closest challenger was defer, but once we tried to use that in all the places where the term is visible, we ultimately didn't think it was as good an overall fit. The same was true with all the other alternative keywords we could come up with, so... lazy it is!

What about from foo lazy import bar? Nope! We like that in both module imports and from-imports that the lazy keyword is the first thing on the line. It helps to visually recognize lazy imports of both varieties.
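
The PEP's syntax is not available in any released interpreter yet, but a similar effect can be had today with the standard library's importlib.util.LazyLoader, which defers running a module's body until the first attribute access. A minimal sketch of that existing mechanism (under PEP 810, the call below would instead be spelled "lazy import json"):

    import importlib.util
    import sys

    def lazy_import(name):
        # Return a module object whose body only executes on first attribute access.
        spec = importlib.util.find_spec(name)
        loader = importlib.util.LazyLoader(spec.loader)
        spec.loader = loader
        module = importlib.util.module_from_spec(spec)
        sys.modules[name] = module
        loader.exec_module(module)
        return module

    json = lazy_import("json")           # nothing has really been imported yet
    print(json.dumps({"lazy": True}))    # the actual import happens here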

Comments (3 posted)

Rust 1.91.0 released

Version 1.91.0 of the Rust language has been released. Changes include promoting aarch64-pc-windows-msvc to a tier-1 platform, a new lint rule to catch dangling raw pointers from local variables, and a fair number of newly stabilized APIs.

Comments (none posted)

Page editor: Daroc Alden

Announcements

Newsletters

Distributions and system administration

Development

Meeting minutes

Calls for Presentations

CFP Deadlines: November 6, 2025 to January 5, 2026

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline     Event Dates            Event                                                        Location
November 16  January 31-February 1  Free and Open source Software Developers' European Meeting   Brussels, Belgium
November 16  February 17            AlpOSS 2026                                                  Échirolles, France
November 30  March 19               Open Tech Day 26: OpenTofu Edition                           Nuremberg, Germany
December 19  May 15-May 17          PyCon US                                                     Long Beach, California, US
December 21  February 2             OpenEmbedded Workshop 2026                                   Brussels, Belgium
December 31  April 28-April 29      stackconf 2026                                               Munich, Germany

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: November 6, 2025 to January 5, 2026

The following event listing is taken from the LWN.net Calendar.

Date(s)                  Event                                  Location
November 7-November 8    South Tyrol Free Software Conference   Bolzano, Italy
November 7-November 8    Seattle GNU/Linux Conference           Seattle, US
November 8               FOSS for All Conference 2025           Seoul, South Korea
November 13-November 14  ecoCompute 2025                        Berlin, Germany
November 15-November 16  Capitole du Libre 2025                 Toulouse, France
November 18-November 20  Open Source Monitoring Conference      Nuremberg, Germany
November 19-November 20  Open vSwitch OVN Conf'25               Prague, Czech Republic
November 20              NLUUG Autumn Conference 2025           Utrecht, The Netherlands
December 2-December 4    Yocto Project Virtual Summit 2025.12   Online
December 6               OLF Conference                         Columbus, OH, US
December 6-December 7    EmacsConf                              online
December 8-December 10   Open Source Summit Japan               Tokyo, Japan
December 8-December 10   Automotive Linux Summit                Tokyo, Japan
December 10-December 11  Open Source Experience 2025            Paris, France
December 11-December 12  Open Compliance Summit                 Tokyo, Japan
December 11-December 13  Linux Plumbers Conference              Tokyo, Japan
December 13-December 14  LibreOffice Asia Conference 2025       Tokyo, Japan
December 13-December 15  GNOME Asia Summit 2025                 Tokyo, Japan

If your event does not appear here, please tell us about it.

Security updates

Alert summary October 30, 2025 to November 5, 2025

Dist. ID Release Package Date
AlmaLinux ALSA-2025:18152 10 .NET 8.0 2025-11-03
AlmaLinux ALSA-2025:18153 10 .NET 9.0 2025-11-03
AlmaLinux ALSA-2025:18150 8 .NET 9.0 2025-11-03
AlmaLinux ALSA-2025:18151 9 .NET 9.0 2025-11-03
AlmaLinux ALSA-2025:18815 8 java-1.8.0-openjdk 2025-10-30
AlmaLinux ALSA-2025:18815 9 java-1.8.0-openjdk 2025-10-30
AlmaLinux ALSA-2025:18821 8 java-17-openjdk 2025-10-30
AlmaLinux ALSA-2025:18821 9 java-17-openjdk 2025-10-30
AlmaLinux ALSA-2025:18824 10 java-21-openjdk 2025-10-30
AlmaLinux ALSA-2025:19156 10 libtiff 2025-10-30
AlmaLinux ALSA-2025:19276 8 libtiff 2025-10-31
AlmaLinux ALSA-2025:19237 9 redis 2025-10-30
AlmaLinux ALSA-2025:19238 8 redis:6 2025-10-30
AlmaLinux ALSA-2025:18070 8 webkit2gtk3 2025-11-03
Debian DLA-4364-1 LTS bind9 2025-11-04
Debian DSA-6046-1 stable chromium 2025-10-30
Debian DLA-4363-1 LTS dcmtk 2025-11-03
Debian DLA-4361-1 LTS geographiclib 2025-11-03
Debian DLA-4362-1 LTS gimp 2025-11-03
Debian DSA-6049-1 stable gimp 2025-11-04
Debian DLA-4355-1 LTS mediawiki 2025-10-31
Debian DSA-6045-1 stable pdns-recursor 2025-10-29
Debian DLA-4360-1 LTS pure-ftpd 2025-11-03
Debian DLA-4354-1 LTS pypy3 2025-10-31
Debian DLA-4357-1 LTS ruby-rack 2025-11-02
Debian DSA-6048-1 stable ruby-rack 2025-11-03
Debian DSA-6047-1 stable squid 2025-10-30
Debian DLA-4359-1 LTS strongswan 2025-11-03
Debian DLA-4356-1 LTS ublock-origin 2025-10-31
Debian DLA-4358-1 LTS wordpress 2025-11-03
Debian DLA-4353-1 LTS xorg-server 2025-10-29
Fedora FEDORA-2025-945dff8564 F42 LabPlot 2025-10-30
Fedora FEDORA-2025-7a1a0e5bd8 F43 Thunar 2025-11-03
Fedora FEDORA-2025-10c407da27 F41 bind 2025-10-30
Fedora FEDORA-2025-92566203fd F42 bind 2025-10-30
Fedora FEDORA-2025-10c407da27 F41 bind-dyndb-ldap 2025-10-30
Fedora FEDORA-2025-92566203fd F42 bind-dyndb-ldap 2025-10-30
Fedora FEDORA-2025-31f0d8bfa9 F43 chromium 2025-11-05
Fedora FEDORA-2025-08b0c5ec40 F43 dotnet9.0 2025-11-04
Fedora FEDORA-2025-945dff8564 F42 dtk6core 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 dtk6gui 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 dtk6log 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 dtk6widget 2025-10-30
Fedora FEDORA-2025-4154ea83d0 F43 fastapi-cli 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 fastapi-cloud-cli 2025-11-05
Fedora FEDORA-2025-945dff8564 F42 fcitx5-qt 2025-10-30
Fedora FEDORA-2025-2d70cfaa80 F43 firefox 2025-11-01
Fedora FEDORA-2025-6db4dcdf66 F41 fluidsynth 2025-10-30
Fedora FEDORA-2025-1131df0f70 F42 fluidsynth 2025-10-30
Fedora FEDORA-2025-0ea3179bb0 F43 fluidsynth 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 gammaray 2025-10-30
Fedora FEDORA-2025-4154ea83d0 F43 gherkin 2025-11-05
Fedora FEDORA-2025-945dff8564 F42 kddockwidgets 2025-10-30
Fedora FEDORA-2025-a7cea1535d F43 kea 2025-11-01
Fedora FEDORA-2025-d44581756d F43 libnbd 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 maturin 2025-11-05
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qt3d 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qt5compat 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtactiveqt 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtbase 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtcharts 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtdeclarative 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtimageformats 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtlocation 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtmultimedia 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtpositioning 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtscxml 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtsensors 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtserialport 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtshadertools 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtsvg 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qttools 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qttranslations 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtwebchannel 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 mingw-qt6-qtwebsockets 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 nheko 2025-10-30
Fedora FEDORA-2025-43a0bff5ea F41 openapi-python-client 2025-11-03
Fedora FEDORA-2025-16b2da653e F42 openapi-python-client 2025-11-05
Fedora FEDORA-2025-a77c1f005b F42 openapi-python-client 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 openapi-python-client 2025-11-05
Fedora FEDORA-2025-ce3d358bcc F43 openapi-python-client 2025-11-05
Fedora FEDORA-2025-ab1fce816d F41 openbao 2025-11-01
Fedora FEDORA-2025-4bf7795b4e F42 openbao 2025-11-01
Fedora FEDORA-2025-0687b2debc F43 openbao 2025-10-31
Fedora FEDORA-2025-4154ea83d0 F43 python-annotated-doc 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-cron-converter 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-fastapi 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-inline-snapshot 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-jiter 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-openapi-core 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-platformio 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-pydantic 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-pydantic-core 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-pydantic-extra-types 2025-11-05
Fedora FEDORA-2025-945dff8564 F42 python-pyqt6 2025-10-30
Fedora FEDORA-2025-4154ea83d0 F43 python-rignore 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-starlette 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-typer 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 python-typing-inspection 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 python-uv-build 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 python-uv-build 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 python-uv-build 2025-11-05
Fedora FEDORA-2025-945dff8564 F42 qt-creator 2025-10-30
Fedora FEDORA-2025-c50e4dfd3b F42 qt5-qtbase 2025-11-01
Fedora FEDORA-2025-9a46af550f F43 qt5-qtbase 2025-11-01
Fedora FEDORA-2025-945dff8564 F42 qt6 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qt3d 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qt5compat 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtbase 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtcharts 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtcoap 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtconnectivity 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtdatavis3d 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtdeclarative 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtgrpc 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qthttpserver 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtimageformats 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtlanguageserver 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtlocation 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtlottie 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtmqtt 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtmultimedia 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtnetworkauth 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtopcua 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtpositioning 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtquick3d 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtquick3dphysics 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtquicktimeline 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtremoteobjects 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtscxml 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtsensors 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtserialbus 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtserialport 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtshadertools 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtspeech 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtsvg 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qttools 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qttranslations 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtvirtualkeyboard 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtwayland 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtwebchannel 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtwebengine 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtwebsockets 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 qt6-qtwebview 2025-10-30
Fedora FEDORA-2025-b10099f608 F41 ruby 2025-11-02
Fedora FEDORA-2025-43a0bff5ea F41 ruff 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 ruff 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 ruff 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-astral-tokio-tar 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-astral-tokio-tar 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-astral-tokio-tar 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-attribute-derive 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-attribute-derive 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-attribute-derive 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-attribute-derive-macro 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-attribute-derive-macro 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-attribute-derive-macro 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-backon 2025-11-03
Fedora FEDORA-2025-43a0bff5ea F41 rust-collection_literals 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-collection_literals 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-collection_literals 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-get-size-derive2 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-get-size-derive2 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-get-size-derive2 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-get-size2 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-get-size2 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-get-size2 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-interpolator 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-interpolator 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-interpolator 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 rust-jiter 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-manyhow 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-manyhow 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-manyhow 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-manyhow-macros 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-manyhow-macros 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-manyhow-macros 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-proc-macro-utils 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-proc-macro-utils 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-proc-macro-utils 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-quote-use 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-quote-use 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-quote-use 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-quote-use-macros 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-quote-use-macros 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-quote-use-macros 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 rust-regex 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 rust-regex-automata 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-reqsign 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-reqsign 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-reqsign 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-reqsign-aws-v4 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-reqsign-aws-v4 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-reqsign-aws-v4 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-reqsign-command-execute-tokio 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-reqsign-command-execute-tokio 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-reqsign-command-execute-tokio 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-reqsign-core 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-reqsign-core 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-reqsign-core 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-reqsign-file-read-tokio 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-reqsign-file-read-tokio 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-reqsign-file-read-tokio 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-reqsign-http-send-reqwest 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-reqsign-http-send-reqwest 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-reqsign-http-send-reqwest 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 rust-serde_json 2025-11-05
Fedora FEDORA-2025-4154ea83d0 F43 rust-speedate 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-tikv-jemalloc-sys 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-tikv-jemalloc-sys 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-tikv-jemalloc-sys 2025-11-05
Fedora FEDORA-2025-43a0bff5ea F41 rust-tikv-jemallocator 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 rust-tikv-jemallocator 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 rust-tikv-jemallocator 2025-11-05
Fedora FEDORA-2025-7d890563f6 F42 samba 2025-11-03
Fedora FEDORA-2025-af04521261 F43 skopeo 2025-11-03
Fedora FEDORA-2025-5f49ddd4af F42 sssd 2025-11-01
Fedora FEDORA-2025-224e937c18 F41 unbound 2025-10-30
Fedora FEDORA-2025-16df491a66 F43 unbound 2025-11-01
Fedora FEDORA-2025-43a0bff5ea F41 uv 2025-11-03
Fedora FEDORA-2025-a77c1f005b F42 uv 2025-11-03
Fedora FEDORA-2025-4154ea83d0 F43 uv 2025-11-05
Fedora FEDORA-2025-87154673fe F41 vgrep 2025-11-01
Fedora FEDORA-2025-6738ea943a F42 vgrep 2025-11-01
Fedora FEDORA-2025-6f416148b4 F42 xorg-x11-server-Xwayland 2025-11-01
Fedora FEDORA-2025-fe61a6ad60 F43 xorg-x11-server-Xwayland 2025-10-30
Fedora FEDORA-2025-945dff8564 F42 zeal 2025-10-30
Mageia MGASA-2025-0254 9 bind 2025-11-01
Mageia MGASA-2025-0256 9 golang 2025-11-04
Mageia MGASA-2025-0257 9 libavif 2025-11-04
Mageia MGASA-2025-0252 9 libtiff 2025-10-31
Mageia MGASA-2025-0255 9 sope 2025-11-01
Mageia MGASA-2025-0253 9 transfig 2025-11-01
Oracle ELSA-2025-17710 OL7 compat-libtiff3 2025-10-31
Oracle ELSA-2025-19403 OL10 expat 2025-11-04
Oracle ELSA-2025-19106 OL10 kernel 2025-10-31
Oracle ELSA-2025-25731 OL7 kernel 2025-11-04
Oracle ELSA-2025-25731 OL8 kernel 2025-11-04
Oracle ELSA-2025-25731 OL8 kernel 2025-11-04
Oracle ELSA-2025-19102 OL8 kernel 2025-10-29
Oracle ELSA-2025-19409 OL9 kernel 2025-11-04
Oracle ELSA-2025-19105 OL9 kernel 2025-10-29
Oracle ELSA-2025-19156 OL10 libtiff 2025-10-29
Oracle ELSA-2025-19276 OL8 libtiff 2025-10-31
Oracle ELSA-2025-19113 OL9 libtiff 2025-10-29
Oracle ELSA-2025-19237 OL9 redis 2025-10-31
Oracle ELSA-2025-19238 OL8 redis:6 2025-10-31
Oracle ELSA-2025-19345 OL9 redis:7 2025-10-31
Oracle ELSA-2025-19489 OL9 tigervnc 2025-11-04
Oracle ELSA-2025-19434 OL8 xorg-x11-server 2025-11-04
Oracle ELSA-2025-19433 OL9 xorg-x11-server 2025-11-04
Oracle ELSA-2025-19435 OL10 xorg-x11-server-Xwayland 2025-11-04
Oracle ELSA-2025-19432 OL8 xorg-x11-server-Xwayland 2025-11-04
Red Hat RHSA-2025:19793-01 EL8 bind9.16 2025-11-05
Red Hat RHSA-2025:19601-01 EL9.4 git 2025-11-04
Red Hat RHSA-2025:19469-01 EL10 kernel 2025-11-03
Red Hat RHSA-2025:19447-01 EL8 kernel 2025-11-03
Red Hat RHSA-2025:19409-01 EL9 kernel 2025-11-03
Red Hat RHSA-2025:19440-01 EL8 kernel-rt 2025-11-03
Red Hat RHSA-2025:19400-01 EL8.2 libssh 2025-11-03
Red Hat RHSA-2025:19401-01 EL8.4 libssh 2025-11-03
Red Hat RHSA-2025:19472-01 EL9.0 libssh 2025-11-03
Red Hat RHSA-2025:19470-01 EL9.2 libssh 2025-11-03
Red Hat RHSA-2025:19572-01 EL8 mariadb:10.5 2025-11-04
Red Hat RHSA-2025:19584-01 EL9 multiple packages 2025-11-04
Red Hat RHSA-2025:19566-01 EL10 osbuild-composer 2025-11-04
Red Hat RHSA-2025:19594-01 EL9 osbuild-composer 2025-11-04
Red Hat RHSA-2025:19513-01 EL10 pcs 2025-11-04
Red Hat RHSA-2025:19719-01 EL8 pcs 2025-11-05
Red Hat RHSA-2025:19734-01 EL8.6 pcs 2025-11-05
Red Hat RHSA-2025:19647-01 EL8.8 pcs 2025-11-04
Red Hat RHSA-2025:19512-01 EL9 pcs 2025-11-04
Red Hat RHSA-2025:19800-01 EL9.0 pcs 2025-11-05
Red Hat RHSA-2025:19733-01 EL9.2 pcs 2025-11-05
Red Hat RHSA-2025:19736-01 EL9.4 pcs 2025-11-05
Red Hat RHSA-2025:19772-01 EL10 qt6-qtsvg 2025-11-05
Red Hat RHSA-2025:19318-01 EL8.4 redis:6 2025-10-30
Red Hat RHSA-2025:19610-01 EL8 sssd 2025-11-04
Red Hat RHSA-2025:19489-01 EL9 tigervnc 2025-11-04
Red Hat RHSA-2025:19434-01 EL8 xorg-x11-server 2025-11-03
Red Hat RHSA-2025:19435-01 EL10.0 xorg-x11-server-Xwayland 2025-11-03
Red Hat RHSA-2025:19432-01 EL8 xorg-x11-server-Xwayland 2025-11-03
Slackware SSA:2025-305-01 seamonkey 2025-11-01
Slackware SSA:2025-302-02 tigervnc 2025-10-29
Slackware SSA:2025-302-01 xorg 2025-10-29
SUSE SUSE-SU-2025:3918-1 SLE12 ImageMagick 2025-11-03
SUSE SUSE-SU-2025:3867-1 SLE15 ImageMagick 2025-10-30
SUSE openSUSE-SU-2025:15685-1 TW ImageMagick 2025-11-01
SUSE SUSE-SU-2025:3903-1 SLE15 bind 2025-10-31
SUSE SUSE-SU-2025:2554-1 SLE15 cdi-apiserver-container, cdi-cloner-container, cdi-controller-container, cdi-importer-container, cdi-operator-container, cdi-uploadproxy-container, cdi-uploadserver-container, cont 2025-10-30
SUSE openSUSE-SU-2025:15687-1 TW chromedriver 2025-11-01
SUSE openSUSE-SU-2025:0413-1 osB15 chromium 2025-10-31
SUSE openSUSE-SU-2025:0412-1 osB15 chromium 2025-10-30
SUSE openSUSE-SU-2025:0411-1 osB15 chromium 2025-10-30
SUSE SUSE-SU-2025:3868-1 SLE12 chrony 2025-10-30
SUSE SUSE-SU-2025:3899-1 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 colord 2025-10-31
SUSE SUSE-SU-2025:3949-1 SLE15 oS15.6 colord 2025-11-05
SUSE openSUSE-SU-2025:15675-1 TW coreboot-utils 2025-11-01
SUSE SUSE-SU-2025:20895-1 SLE-m6.1 expat 2025-10-30
SUSE SUSE-SU-2025:2990-1 SLE15 SES7.1 ffmpeg 2025-11-05
SUSE openSUSE-SU-2025:0417-1 osB15 git-bug 2025-11-02
SUSE openSUSE-SU-2025:0418-1 osB15 git-bug 2025-11-02
SUSE SUSE-SU-2025:3937-1 SLE15 oS15.6 govulncheck-vulndb 2025-11-04
SUSE SUSE-SU-2025:20900-1 SLE-m6.1 haproxy 2025-10-30
SUSE SUSE-SU-2025:3869-1 SLE15 himmelblau 2025-10-30
SUSE SUSE-SU-2025:1771-1 SLE15 SES7.1 iputils 2025-10-31
SUSE SUSE-SU-2025:3947-1 SLE15 oS15.6 jasper 2025-11-05
SUSE openSUSE-SU-2025:15690-1 TW java-11-openj9 2025-11-01
SUSE openSUSE-SU-2025:15691-1 TW java-17-openj9 2025-11-01
SUSE openSUSE-SU-2025:15693-1 TW java-21-openj9 2025-11-01
SUSE SUSE-SU-2025:3859-1 SLE15 oS15.6 java-21-openjdk 2025-10-29
SUSE openSUSE-SU-2025:15694-1 TW java-25-openj9 2025-11-01
SUSE openSUSE-SU-2025:15674-1 TW java-25-openjdk 2025-10-29
SUSE openSUSE-SU-2025:15677-1 TW kea 2025-11-01
SUSE SUSE-SU-2025:20898-1 SLE-m6.0 SLE-m6.1 kernel 2025-10-30
SUSE SUSE-SU-2025:2588-1 SLE15 SLE-m5.5 kernel 2025-11-04
SUSE openSUSE-SU-2025:15678-1 TW libmozjs-115-0 2025-11-01
SUSE openSUSE-SU-2025:15688-1 TW libmozjs-140-0 2025-11-01
SUSE SUSE-SU-2025:20894-1 SLE-m6.1 libssh 2025-10-30
SUSE SUSE-SU-2025:3897-1 SLE12 libssh 2025-10-31
SUSE openSUSE-SU-2025:15682-1 TW libtiff-devel-32bit 2025-11-01
SUSE SUSE-SU-2025:20897-1 SLE-m6.1 libxslt 2025-10-30
SUSE SUSE-SU-2025:3875-1 SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 oS15.6 libxslt 2025-10-30
SUSE SUSE-SU-2025:3919-1 SLE12 nodejs18 2025-11-03
SUSE openSUSE-SU-2025:15680-1 TW ongres-scram 2025-11-01
SUSE SUSE-SU-2025:3946-1 SLE15 oS15.6 openjpeg 2025-11-05
SUSE SUSE-SU-2025:20896-1 SLE-m6.1 openssl-3 2025-10-30
SUSE SUSE-SU-2025:20899-1 SLE-m6.1 podman 2025-10-30
SUSE SUSE-SU-2025:3910-1 MP4.3 SLE15 SES7.1 poppler 2025-11-03
SUSE SUSE-SU-2025:3945-1 SLE12 poppler 2025-11-05
SUSE SUSE-SU-2025:3900-1 SLE15 oS15.5 poppler 2025-10-31
SUSE SUSE-SU-2025:3898-1 oS15.4 poppler 2025-10-31
SUSE openSUSE-SU-2025:15696-1 TW python311-starlette 2025-11-02
SUSE SUSE-SU-2025:3942-1 MP4.3 SLE15 oS15.4 qatengine, qatlib 2025-11-05
SUSE SUSE-SU-2025:3943-1 SLE15 oS15.5 qatengine, qatlib 2025-11-05
SUSE SUSE-SU-2025:3911-1 oS15.4 rav1e 2025-11-03
SUSE openSUSE-SU-2025:15698-1 TW redis 2025-11-03
SUSE SUSE-SU-2025:3951-1 SLE12 runc 2025-11-05
SUSE SUSE-SU-2025:3950-1 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 oS15.6 runc 2025-11-05
SUSE SUSE-SU-2025:3944-1 SLE15 oS15.6 sccache 2025-11-05
SUSE SUSE-SU-2025:3902-1 MP4.3 SLE15 oS15.4 squid 2025-10-31
SUSE SUSE-SU-2025:3856-1 MP4.3 SLE15 oS15.4 strongswan 2025-10-29
SUSE SUSE-SU-2025:3904-1 SLE12 strongswan 2025-11-03
SUSE SUSE-SU-2025:3857-1 SLE15 SES7.1 strongswan 2025-10-29
SUSE SUSE-SU-2025:3873-1 SLE15 oS15.5 strongswan 2025-10-30
SUSE SUSE-SU-2025:3855-1 SLE15 oS15.6 strongswan 2025-10-29
SUSE openSUSE-SU-2025:15681-1 TW strongswan 2025-11-01
SUSE SUSE-SU-2025:3941-1 MP4.3 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 tiff 2025-11-05
SUSE SUSE-SU-2025:3905-1 SLE12 webkit2gtk3 2025-11-03
SUSE SUSE-SU-2025:3909-1 MP4.3 SLE15 oS15.4 xorg-x11-server 2025-11-03
SUSE SUSE-SU-2025:3858-1 SLE12 xorg-x11-server 2025-10-29
SUSE SUSE-SU-2025:3865-1 SLE15 xorg-x11-server 2025-10-30
SUSE SUSE-SU-2025:3864-1 SLE15 SES7.1 xorg-x11-server 2025-10-30
SUSE SUSE-SU-2025:3866-1 SLE15 oS15.5 xorg-x11-server 2025-10-30
SUSE SUSE-SU-2025:3872-1 SLE15 oS15.6 xorg-x11-server 2025-10-30
SUSE openSUSE-SU-2025:15683-1 TW xorg-x11-server 2025-11-01
SUSE SUSE-SU-2025:3874-1 SLE15 xwayland 2025-10-30
SUSE SUSE-SU-2025:3863-1 SLE15 oS15.6 xwayland 2025-10-30
SUSE openSUSE-SU-2025:15684-1 TW xwayland 2025-11-01
Ubuntu USN-7848-1 25.04 amd64-microcode 2025-10-30
Ubuntu USN-7847-1 22.04 24.04 25.04 binutils 2025-10-30
Ubuntu USN-7839-2 16.04 18.04 20.04 22.04 24.04 25.04 google-guest-agent 2025-11-03
Ubuntu USN-7850-1 14.04 kernel 2025-10-30
Ubuntu USN-7857-1 24.04 25.04 25.10 keystone 2025-11-04
Ubuntu USN-7849-1 16.04 18.04 20.04 22.04 24.04 25.04 25.10 libssh 2025-11-04
Ubuntu USN-7852-1 22.04 24.04 25.04 libxml2 2025-10-30
Ubuntu USN-7844-1 16.04 18.04 20.04 22.04 24.04 25.04 25.10 libyaml-syck-perl 2025-10-30
Ubuntu USN-7853-1 16.04 18.04 linux, linux-aws, linux-aws-hwe, linux-gcp, linux-gcp-4.15, linux-hwe, linux-oracle 2025-10-30
Ubuntu USN-7853-2 18.04 linux-fips, linux-aws-fips, linux-gcp-fips 2025-10-30
Ubuntu USN-7833-4 24.04 linux-gcp-6.14 2025-10-31
Ubuntu USN-7856-1 24.04 linux-hwe-6.14 2025-11-04
Ubuntu USN-7835-4 22.04 linux-hwe-6.8 2025-10-31
Ubuntu USN-7854-1 18.04 linux-kvm 2025-10-30
Ubuntu USN-7829-6 20.04 22.04 linux-nvidia-tegra, linux-nvidia-tegra-5.15, linux-nvidia-tegra-igx, linux-raspi 2025-11-04
Ubuntu USN-7843-1 18.04 20.04 22.04 24.04 25.04 25.10 netty 2025-10-30
Ubuntu USN-7851-1 22.04 24.04 25.04 25.10 runc-app, runc-stable 2025-11-05
Ubuntu USN-7804-2 16.04 18.04 20.04 squid, squid3 2025-11-04
Ubuntu USN-7855-1 22.04 24.04 25.04 25.10 unbound 2025-11-04
Ubuntu USN-7846-1 22.04 24.04 25.04 25.10 xorg-server, xwayland 2025-10-29
Full Story (comments: none)

Kernel patches of interest

Kernel releases

Linus Torvalds Linux 6.18-rc4 Nov 02
Sebastian Andrzej Siewior v6.18-rc4-rt3 Nov 03
Greg Kroah-Hartman Linux 6.17.7 Nov 02
Greg Kroah-Hartman Linux 6.12.57 Nov 02
Greg Kroah-Hartman Linux 6.6.116 Nov 02
Clark Williams 6.6.116-rt66 Nov 02
Tom Zanussi 5.4.300-rt102 Oct 31

Architecture-specific

Core kernel

Development tools

Bastien Curutchet (eBPF Foundation) selftests/bpf: Integrate test_xsk.c to test_progs framework Oct 31
Raghavendra Rao Ananta vfio: selftest: Add SR-IOV UAPI test Nov 04

Device drivers

Edward Srouji Add other eswitch support Oct 29
Marius Cristea Add support for Microchip EMC1812 Oct 29
Deepa Guthyappa Madivalara Enable support for AV1 stateful decoder Oct 30
Francesco Lavra st_lsm6dsx: add tap event detection Oct 30
André Apitzsch via B4 Relay Add CAMSS support for MSM8939 Oct 30
Markus Schneider-Pargmann (TI.com) firmware: ti_sci: Partial-IO support Oct 30
Alex Hung Color Pipeline API w/ VKMS Oct 29
Raviteja Laggyshetty Add interconnect support for Kaanapali SoC Oct 30
Bart Van Assche Optimize the hot path in the UFS driver Oct 30
Pavitrakumar Managutte crypto: spacc - Add SPAcc Crypto Driver Oct 31
Hrishabh Rajput via B4 Relay Add support for Gunyah Watchdog Oct 31
Krishna Chaitanya Chundru PCI: Enable Power and configure the TC9563 PCIe switch Oct 31
Miquel Raynal mtd: spinand: Octal DTR support Oct 31
Yo-Jung Leo Lin (AMD) drm/amdgpu: add UMA carveout tuning interfaces Nov 03
Herve Codina (Schneider Electric) Add support for the Renesas RZ/N1 ADC Nov 03
Russell King (Oracle) net: stmmac: multi-interface stmmac Nov 03
Bitterblue Smith wifi: rtw89: Add support for RTL8852CU Nov 01
Sriharsha Basavapatna RDMA/bnxt_re: Support direct verbs Nov 03
Laurentiu Palcu Add support for i.MX94 DCIF Nov 03
Haibo Chen Add support for NXP XSPI Nov 04
Sumit Kumar bus: mhi: Add loopback driver Nov 04
Matthias Fend media: add Himax HM1246 image sensor Nov 04
Laurentiu Mihalcea Add support for i.MX8ULP's SIM LPAV Nov 04
Animesh Manna Enable DP2.1 alpm Nov 04
Jonas Jelonek add gpio-line-mux Nov 04
Matti Vaittinen Support ROHM BD72720 PMIC Nov 05
Cosmin Tanislav Add RSPI support for RZ/T2H and RZ/N2H Nov 05
Prajna Rajendra Kumar Add support for Microchip CoreSPI Controller Nov 05
Tommaso Merciai Add USB2.0 support for RZ/G3E Nov 05

Device-driver infrastructure

Filesystems and block layer

Memory management

Networking

Virtualization and containers

Miscellaneous

Marek Olšák libdrm 2.4.128 Nov 02

Page editor: Joe Brockmeier


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds