
LWN.net Weekly Edition for February 26, 2026

Welcome to the LWN.net Weekly Edition for February 26, 2026

This edition contains the following feature content:

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

As ye clone(), so shall ye AUTOREAP

By Jonathan Corbet
February 24, 2026
The facilities provided by the kernel for the management of processes have evolved considerably in the last few years, driven mostly by the advent of the pidfd API. A pidfd is a file descriptor that refers to a process; unlike a process ID, a pidfd is an unambiguous handle for a process; that makes it a safer, more deterministic way of operating on processes. Christian Brauner, who has driven much of the pidfd-related work, is proposing two new flags for the clone3() system call, one of which changes the kernel's security model in a somewhat controversial way.

The existing CLONE_PIDFD flag was added (by Brauner) for the 5.2 kernel release; it causes clone3() to create and return a pidfd for the newly created process (or thread). That gives the parent process a handle on its child from the outset. This pidfd can be used to, among other things, detect when the child has exited and obtain its exit status.

The classic Unix way of obtaining exit-status information, though, is with one of the wait() family of system calls. When a process exits, it will go into the "zombie" state until its parent calls a form of wait() to reap the zombie's exit information and clean up the process entry itself. If a parent creates a lot of children and fails to reap them, it will fill the system with zombie processes — a classic beginner's process-management mistake. Even if the parent does not care about what happens to its children after they are created, it normally needs to carefully clean up after those children when they exit.

There are ways of getting out of that duty. The kernel will send a SIGCHLD signal to the parent of an exited process, but that signal is normally ignored. If the parent explicitly sets that signal to SIG_IGN, though, its children will be automatically cleaned up without bothering the parent. This setting applies to all children, though; if a parent does care about some of them, it cannot use this approach. Among other things, that means that library code that creates child processes cannot make the cleanup problem go away by setting SIGCHLD to SIG_IGN without creating problems for the rest of the program.

One of the new flags that Brauner is proposing is CLONE_AUTOREAP, which would cause the newly created process to be automatically cleaned up once it exits. There would be no need (or ability) to do so with wait(), and no SIGCHLD signal sent. The flag applies only to the just-created process, so it does not affect process management elsewhere in the program. It also is not inherited by grandchild processes. If the child is created with CLONE_PIDFD as well, the resulting pidfd can be used to obtain the exit status without the need to explicitly reap the child. Otherwise, that status will be unavailable; as Brauner noted, this mode "provides a fire-and-forget pattern".

The CLONE_AUTOREAP flag seems useful and uncontroversial. The other proposed flag, CLONE_PIDFD_AUTOKILL, may also be useful, but it has raised some eyebrows. This flag ties the life of the new process to the pidfd that represents it; if that pidfd is closed, the child will be immediately and unceremoniously killed with a SIGKILL signal. The use of CLONE_PIDFD is required with CLONE_PIDFD_AUTOKILL; otherwise, the pidfd that keeps the child alive would never exist in the first place. CLONE_PIDFD_AUTOKILL also requires CLONE_AUTOREAP, since the kill is likely to happen when the parent process has exited and can no longer clean up after its children.

The intended use case for this feature is "container runtimes, service managers, and sandboxed subprocess execution - any scenario where the child must die if the parent crashes or abandons the pidfd". In other words, this flag is meant to be used in cases where processes are created under the supervision of other processes, and they should not be allowed to keep on running if the supervisor goes away.

Linus Torvalds quickly pointed out a potential problem after seeing version 3 of the patch series: this feature would allow the killing of setuid processes that the creator would otherwise not have the ability to affect. A suitably timed kill might interrupt a privileged program in the middle of a sensitive operation, leaving the system in an inconsistent (and perhaps exploitable) state. "If I'm right", Torvalds said, "this is completely broken. Please explain." Brauner responded that he was aware that he was proposing a significant change to how the kernel's security model works, but was insistent that this capability is needed; he wanted a discussion on how it could be safely implemented:

My ideal model for kill-on-close is to just ruthlessly enforce that the kernel murders anything once the file is released. I would value input under what circumstances we could make this work without having the kernel magically unset it under magical circumstances that are completely opaque to userspace.

Jann Horn questioned whether this change was actually problematic. There are various ways to kill setuid processes now, he said, including the use of SIGHUP signals or certain systemd configurations. "I agree that this would be a change to the security model, but I'm not sure if it would be that big a change." He suggested that, if this change is worrisome, it could be restricted to situations where the "no new privileges" flag has been set; that is how access to seccomp() is regulated now. Brauner indicated that this option might be workable, and the current version of the series (version 4 as of this writing) automatically sets "no new privileges" in the newly created child when CLONE_PIDFD_AUTOKILL is used.

Meanwhile, Ted Ts'o suggested that, rather than killing the child with SIGKILL, the kernel could instead send a sequence of signals, starting with SIGHUP, then going to SIGTERM, with a delay between them. That would give a setuid program the ability to catch the signal and clean up properly before exiting. Only if the target process doesn't take the hint would SIGKILL be sent. Even then, he was not convinced that this feature could be made secure: "I bet we'll still see some zero days coming out of this, but we can at least mitigate likelihood of security breach."

Brauner did not respond to that last suggestion, and the conversation wound down, relatively quickly, without any definitive conclusions having been reached. There is clearly a bit of tension between the need for supervisors to have complete control over the applications they manage and the need to keep existing security boundaries intact. Whether that tension is enough to keep relatively new process-management approaches from being implemented remains to be seen.

Comments (8 posted)

Open-source Discord alternatives

By Daroc Alden
February 20, 2026

The closed-source chat platform Discord announced on February 9 that it would soon require some users to verify their ages in order to access some content — although the company quickly added that the "vast majority" of users would not have to. That reassurance has to contend with the fact that the UK and other countries are implementing increasingly strict age requirements for social media. Discord's age verification would be done with an AI age-judging model or with a government photo ID. A surprising number of open-source projects use Discord for support or project communications, and some of those projects are now looking for open-source alternatives. Mastodon, for example, has moved discussion to Zulip. There are some alternatives out there, all with their own pros and cons, that communities may want to consider if they want to switch away from Discord.

Matrix

Centralized, federated, decentralized

Chat applications generally fall into three distinct groups: centralized services (where one server manages all communication), federated services (where there are multiple servers hosting different users that can all talk to one another), and decentralized services (where clients talk to each other directly without servers). In a centralized system, all usernames live in the same namespace. In a federated system, each server forms a separate namespace, so users have to pick a specific server to join. Decentralized naming schemes are more complicated — and also out of this article's scope, since none of the options listed here are decentralized.

The Matrix project started in 2014; it is an attempt to build a federated messaging and synchronization protocol that gives people full control over their own communication. The project's manifesto says: "The ability to converse securely and privately is a basic human right." The project has been welcoming former Discord users, after seeing a huge spike of sign-ups to the matrix.org "homeserver" — the server that the Matrix.org Foundation runs as a way for people to easily start chatting on the network.

Many users do eventually move to another homeserver, self-hosted or otherwise; there are many independent server implementations, fitting a variety of different needs. Seven are listed as actively maintained, with eleven more that have been abandoned for one reason or another. There are even more client applications, including a handful that act as web front-ends for people who don't wish to download and install another application.

This variety of implementations is both a blessing and a curse for Matrix. On the one hand, they're a clear sign of a vibrant community and a diverse software ecosystem. On the other hand, many of the clients are missing specific features, meaning that users will need to shop around a little to find a client that fits their needs. Generally, all clients support joining rooms (analogous to Discord channels) and doing basic chatting. The way the Matrix protocol is structured also means that they universally support synchronized use of the same account on multiple devices, even across clients. When new clients sign into an account, there's a short authentication dance involving the previous clients to ensure they're authorized. (Or a recovery procedure, for those who have lost their previous device.) Some security researchers have raised concerns about the cryptographic implementation in existing libraries (although the Matrix Foundation has published a rebuttal), so that is another factor that varies across implementations.

Most clients support organizing rooms into "spaces" (groups of related rooms that you can invite other people to, analogous to Discord "servers" or "guilds"), adding and viewing emoji reactions, and splitting side conversations into separate "threads". Custom emoji reactions are also well supported. Having multiple accounts signed in on the same client, on the other hand, is not well-supported; the advice seems to be to run multiple separate clients if one wants to keep work and personal accounts separate, for example.

On the protocol level, Matrix does a good job of abstracting over the federated nature of the network. Rooms and spaces can be transparently shared with users on other homeservers, spaces can include rooms from multiple servers, and so on. One can even create a room on one server, have a user on another server join, and then have the first server leave the room, passing off the responsibility for operating the room to the new server. Individual servers going down, therefore, has little impact on the network as a whole.

At a lower level, Matrix communication is sent over HTTP or HTTPS. This adds some overhead to the protocol compared to operating directly over a TCP socket, but it makes it easy to run a Matrix server from a residential connection and put an HTTPS proxy in front of it. The protocol also supports using DNS SRV records to separate the domain name of a homeserver from its actual hosting location.

Overall, Matrix is a reasonable alternative, especially for users who care about seamless federation. Server software is relatively easy to set up for projects wishing to self-host. The main downside is that some of the software involved is a little unpolished; of the clients I tried, several of them had small bugs or user-interface glitches. Element is probably the most polished, but its support for threads wasn't as nice as some of the other clients.

Jabber

Jabber, also known as XMPP, is a federated open standard for online communication and messaging. It's considerably older than Matrix, dating back to 1999. Despite that, it supports a number of modern chat features through protocol extensions, such as emoji reactions, typing notifications, and so on. Its long history also means that it has a large number of client and server implementations. Those clients run the full range from simple proofs of concept to well-designed applications, and from normal desktop and mobile applications to special-purpose tools. The libpurple C library has a generic XMPP implementation that makes it possible to write personal tools that use the protocol. Each server has its own ways to register, but the project maintains a list of public instances for people to try.

Jabber was historically designed for one-on-one chats, with multi-user chats (MUCs) being a later addition to the protocol. Consequently, it lacks the server/channel/thread hierarchical organization that Discord has. On the other hand, the protocol has stood the test of time, and seems unlikely to be going anywhere. For its core use case of chatting with friends, it is well-polished and usable. Jabber clients also tend to have good support for multi-account usage, and there are a number of bridges that can facilitate communication between Jabber accounts and other chat services. Some Jabber clients, such as Pidgin, even have native support for other protocols as well.

Jabber's federation system lets users from different servers chat seamlessly with one another, but it doesn't make migrating MUCs around quite as transparent as migrating Matrix rooms: MUCs are created on a specific server, and rely on that server staying up to remain usable. That isn't normally too much of a problem, however; if one's Jabber server goes down, one can't participate in chats anyway.

On a protocol level, Jabber does its own communication over TCP, so one needs a handful of externally accessible ports to self-host a server. That is just about the only requirement, however. Projects like Snikket or Openfire provide self-contained Jabber servers requiring minimal setup. For people who just want to chat with their friends with minimal hassle, using mature tools that work reliably, Jabber is a good choice. Projects that want to make it easy for new contributors to join their discussions might find Jabber's group-chat-oriented organization less helpful, however.

Zulip

Zulip is a centralized chat platform that focuses on making it easy to follow multiple threads of conversation. Zulip is primarily developed and maintained by a company called Kandra Labs, which offers paid hosting for people who want a solution where someone else does the relevant server administration. The software itself is completely open source under an Apache 2.0 license.

Since Zulip is centralized, users on one server cannot easily and directly talk to users on another server. The project does expose an API for bridging software to use, and specifically recommends matterbridge as a way to connect Zulip channels to chats on other protocols.

Zulip organizes channels on a given server into folders, and then supports "conversations" as a subdivision of an individual channel. The user interface, which comes in both web and desktop versions, sports a combined activity feed that makes it easy to see what has changed in the conversations that one has actively participated in. That makes it handy for large projects that have a number of discussions going on at the same time. Zulip boasts a large number of moderation and organization features as well.

Self-hosting is fairly straightforward, although the installer assumes that Zulip is the only software running on the server and will freely install and configure additional dependencies, so it's best to install in a container or virtual machine. The requirements are also a bit heftier than those of Matrix or Jabber. Zulip can be hosted behind an HTTPS proxy, although using the optional email integration requires access to incoming port 25.

The one difference between the self-hosted version and the paid version, which are otherwise completely feature equivalent, is that the self-hosted version is limited in the number of mobile devices it can push notifications to. Open-source projects, academic researchers, and non-profits can have that limit raised for free. The limitation comes from the fact that Google and Apple require a single server to send all push notifications to the Zulip mobile apps, so self-hosted installations have to relay them via the central Zulip server.

The desktop and mobile clients support signing into multiple Zulip servers, but the web interface requires a separate tab for each server. The interface is generally clean and usable, which is just as well, given the lack of alternative clients to fall back on. Overall, Zulip would be a good choice for larger projects that want a modern, polished experience and care less about federation. Even though Zulip uses a centralized architecture, its open-source nature and ability to export history significantly reduce the risk of lock-in.

Stoat

Stoat chat is another centralized, self-hostable chat platform. It has a smaller development community than Zulip, but it has an interface and structure that more resembles Discord. It also has a single main server, stoat.chat, with little interest so far in any kind of federation or interoperability. The project's documentation doesn't completely rule out the possibility, however — it just indicates that it's not on the project roadmap for now.

Users who want a Discord-like experience, including having a single central server, but who want the possibility of self hosting in the future, might like Stoat. It is licensed under the AGPL v3.0, and hosting costs are currently covered by donations. In the future, the project may look into charging on a usage basis, to defray costs, but promises not to charge for premium features the way Discord does. Stoat's lack of import/export support and interoperability makes it a poorer choice for projects that don't want to put all of their eggs in one basket, however.

Spacebar

Ultimately, people often don't want to stop using the software that works for them. For people who really miss Discord and want the exact same experience — down to the level of API compatibility — there is Spacebar. This project is an attempt to reverse-engineer Discord and create a self-hostable AGPL-licensed version. The project was started in 2020, and the server software is mostly working. The client implementation, however, is still new and experimental. The project doesn't recommend using it yet, and bugs are inevitable.

The server software is designed to implement the exact same API as Discord itself does — although, since the Discord client is closed-source, it can't be pointed at a Spacebar server. That identical API means that bots and integrations written for Discord can be pointed at a Spacebar server, however, and theoretically run without ever knowing that they've been redirected.

Spacebar is under active development, with a healthy development community. People who just want "Discord, but open-source" and are willing to pitch in, or to wait for Spacebar's client implementation to catch up, might find Spacebar the simplest replacement.

Conclusion

This list is by no means exhaustive. There are so many other open-source technologies and protocols with overlapping use-cases that a complete list would be impossible to construct. But these alternatives seem like the completely open-source applications most capable of replacing Discord and the things that people use it for at the present moment. For everyone who doesn't want or need a Discord-like "modern" chat experience — IRC will always be an option.

Comments (68 posted)

Modernizing swapping: virtual swap spaces

By Jonathan Corbet
February 19, 2026
The kernel's unloved but performance-critical swapping subsystem has been undergoing multiple rounds of improvement in recent times. Recent articles have described the addition of the swap table as a new way of representing the state of the swap cache, and the removal of the swap map as the way of tracking swap space. Work in this area is not done, though; this series from Nhat Pham addresses a number of swap-related problems by replacing the new swap table structures with a single, virtual swap space.

The problem with swap entries

As a reminder, a "swap entry" identifies a slot on a swap device that can be used to hold a page of data. It is a 64-bit value split into two fields: the device index (called the "type" within the code), and an offset within the device. When an anonymous page is pushed out to a swap device, the associated swap entry is stored into all page-table entries referring to that page. Using that entry, the kernel can quickly locate a swapped-out page when that page needs to be faulted back into RAM.

The "swap table" is, in truth, a set of tables, one for each swap device in the system. The transition to swap tables has simplified the kernel considerably, but the current design of swap entries and swap tables ties swapped-out pages firmly to a specific device. That creates some pain for system administrators and designers.

As a simple example, consider the removal of a swap device. Clearly, before the device can be removed, all pages of data stored on that device must be faulted back into RAM; there is no getting around that. But there is the additional problem of the page-table entries pointing to a swap slot that no longer exists once the device is gone. To resolve that problem, the kernel must, at removal time, scan through all of the anonymous page-table entries in the system and update them to point to each page's new location. That is not a fast process.

This design also, as Pham describes, creates trouble for users of the zswap subsystem. Zswap works by intercepting pages during the swap-out process and, rather than writing them to disk, compresses them and stores the result back into memory. It is well integrated with the rest of the swapping subsystem, and can be an effective way of extending memory capacity on a system. When the in-memory space fills, zswap is able to push pages out to the backing device.

The problem is that the kernel must be able to swap those pages back in quickly, regardless of whether they are still in zswap or have been pushed to slower storage. For this reason, zswap hides behind the index of the backing device; the same swap entry is used whether the page is in RAM or on the backing device. For this trick to work, though, the slot in the backing device must be allocated at the beginning, when a page is first put into zswap. So every zswap usage must include space on a backing device, even if the intent is to never actually store pages on disk. That leads to a lot of wasted storage space and makes zswap difficult or impossible to use on systems where that space is not available to waste.

Virtual swap spaces

The solution that Pham proposes, as is so often the case in this field, is to add another layer of indirection. That means the replacement of the per-device swap tables with a single swap table that is independent of the underlying device. When a page is added to the swap cache, an entry from this table is allocated for it; the swap-entry type is now just a single integer offset. The table itself is an array of swp_desc structures:

    struct swp_desc {
        union {
            swp_slot_t          slot;
            struct zswap_entry  *zswap_entry;
        };
        union {
            struct folio        *swap_cache;
            void                *shadow;
        };
        unsigned int            swap_count;
        unsigned short          memcgid:16;
        bool                    in_swapcache:1;
        enum swap_type          type:2;
    };

The first union tells the system where to find a swapped-out page; it either points to a device-specific swap slot or an entry in the zswap cache. It is the mapping between the virtual swap slot and a real location. The second union contains either the location of the page in RAM (or, more precisely, its folio) or the shadow information used by the memory-management subsystem to track how quickly pages are faulted back in. The swap_count field tracks how many page-table entries refer to this swap slot, while in_swapcache is set when a page is assigned to the slot. The control group (if any) managing this allocation is noted in memcgid.

The type field tells the kernel what type of mapping is currently represented by this swap slot. If it is VSWAP_SWAPFILE, the virtual slot maps to a physical slot (identified by the slot field) on a swap device. If, instead, it is VSWAP_ZERO, it represents a swapped-out page that was filled with zeroes that need not be stored anywhere. VSWAP_ZSWAP identifies a slot in the zswap subsystem (pointed to by zswap_entry), and VSWAP_FOLIO is for a page (indicated by swap_cache) that is currently resident in RAM.

The big advantage of this arrangement is that a page can move easily from one swap device to another. A zswap page can be pushed out to a storage device, for example, and all that needs to change is a pair of fields in the swp_desc structure. The slot in that storage device need not be assigned until a decision to push the page out is made; if a given page is never pushed out, it will not need a slot in the storage device at all. If a swap device is removed, a bunch of swp_desc entries will need to be changed, but there will be no need to go scanning through page tables, since the virtual swap slots will be the same.

The cost comes in the form of increased memory usage and complexity. The swap table is one 64-bit word per swap entry; the swp_desc structure triples that size. Pham points out that the added memory overhead is less than it seems, since this structure holds other information that is stored elsewhere in current kernels. Still, it is a significant increase in memory usage in a subsystem whose purpose is to make memory available for other uses. This code also shows performance regressions on various benchmarks, though those have improved considerably from previous versions of the patch set.

Still, while the value of this work is evident, it is not yet obvious that it can clear the bar for merging. Kairui Song, who has done the bulk of the swap-related work described in the previous articles, has expressed concerns about the memory overhead and how the system performs under pressure. Chris Li also worries about the overhead and said that the series is too focused on improving zswap at the expense of other swap methods. So it seems likely that this work will need to see a number of rounds of further development to reach a point where it is more widely considered acceptable.

Postscript: swap tiers

There is a separate project that appears to be entirely independent from the implementation of the virtual swap space, but which might combine well with it: the swap tiers patch set from Youngjun Park. In short, this series allows administrators to configure multiple swap devices into tiers; high-performance devices would go into one tier, while slower devices would go into another. The kernel will prefer to swap to the faster tiers when space is available. There is a set of control-group hooks to allow the administrator to control which tiers any given group of processes is allowed to use, so latency-sensitive (or higher-paying) workloads could be given exclusive access to the faster swap devices.

A virtual swap table would clearly complement this arrangement. Zswap is already a special case of tiered swapping; Park's infrastructure would make it more general. Movement of pages between tiers would become relatively easy, allowing cold data to be pushed in the direction of slower storage. So it would not be surprising to see this patch series and the virtual swap space eventually become tied together in some way, assuming that both sets of patches continue to advance.

In general, the kernel's swapping subsystem has recently seen more attention than it has received in years. There is clearly interest in improving the performance and flexibility of swapping while making the code more maintainable in the long run. The days when developers feared to tread in this part of the memory-management subsystem appear to have passed.

Comments (24 posted)

No hardware memory isolation for BPF programs

By Daroc Alden
February 25, 2026

On February 12, Yeoreum Yun posted a suggestion for an improvement to the security of the kernel's BPF implementation: use memory protection keys to prevent unauthorized access to memory by BPF programs. Yun wanted to put the topic on the list for discussion at the Linux Storage, Filesystem, Memory Management, and BPF Summit in May, but the lack of engagement makes that unlikely. They also have a patch set implementing some of the proposed changes, but have not yet shared it with the mailing list. Yun's proposal does not seem likely to be accepted in its current form, but the kernel has added hardware-based hardening options in the past, sometimes after substantial discussion.

When a modern CPU needs to turn a virtual address into a physical address, it does so by consulting a page table. This table also dictates whether the memory in question is readable, writable, executable, accessible by user space, etc. Page tables have a multi-level structure, requiring several pointer indirections to find the actual entry for a page of memory. To avoid the overhead of following these indirections on every memory access, the CPU keeps a cache of recently accessed entries called the translation lookaside buffer (TLB).

When the kernel wants to change the access permissions of a given area of memory, it needs to update the page table and then flush the TLB (causing an inevitable performance hit as it refills). Worse, if the area of memory is large, it may need to update many page-table entries. Even just keeping track of which page-table entries need to be updated and iterating through them all can be a time-consuming operation. Memory protection keys are a hardware feature that helps avoid the overhead of changing large sections of the page table, making it practical to change the permissions of memory as part of a routine operation.

Memory protection keys use four bits in each page-table entry to associate the page with one of sixteen keys; there is a special CPU register that associates each key with read and write permissions. This avoids the need to actually change individual page-table entries: just change the permission bits for the corresponding key and the memory becomes inaccessible, without the need to walk the page table or flush the TLB.

The kernel has had support for memory protection keys since 2016, but that support has only extended to user space. The related system calls allow user-space applications to use memory protection keys to implement a faster version of mprotect(). There is no reason, in theory, that memory protection keys couldn't be used within the kernel as well. In practice, there have been a number of attempts to integrate them into the kernel that have not come to fruition.

Yun suggested adding a new set of kmalloc_pkey() and vmalloc_pkey() functions to allocate kernel objects in parts of memory protected by a key. That would make it practical to give BPF programs access to only a subset of kernel memory. Specifically, memory that is owned by BPF programs, or that a subsystem specifically intends to share with a BPF program, could be allocated with a separate memory protection key from other kernel allocations. Then, when entering BPF code, access to general kernel memory could be swiftly disabled. Yun's message described how this would work in some depth, but did not include the actual code to implement it (though they said that they intend to share their work-in-progress code soon), so it's hard to judge how invasive a change it would be.

Dave Hansen thought that plan might be feasible for subsystems such as the scheduler that have a relatively limited amount of writable data, but that other areas of the kernel would have more problems:

Networking isn't my strong suit, but packet memory seems rather dynamically allocated and also needs to be written to by eBPF programs. I suspect anything that slows packet allocation down by even a few cycles is a non-starter.

IMNHO, any approach to solving this problem that starts with: we just need a new allocator or modification to existing kernel allocators to track a new memory type makes it a dead end. Or, best case, a very surgical, targeted solution.

Alexei Starovoitov, on the other hand, thought the suggestion was not just difficult to pull off, but also pointless. Yun had listed several CVEs from between 2020 and 2023 as a way of showing that the BPF verifier alone was not enough to ensure security. Starovoitov disagreed: "None of them are security issues. They're just bugs."

Yun agreed that they were bugs, but disagreed that they have no security implications. Yun is of the opinion that the existence of bugs in the BPF verifier that have led to memory corruption in the past is enough justification to put another barrier between running a BPF program and having an exploitable vulnerability.

Considering the previous unsuccessful attempts to use memory protection keys in the kernel and the difficulty of implementation, Yun's proposal to introduce memory protection keys to the BPF subsystem seems unlikely to go anywhere. On the other hand, the kernel has slowly been adding hardening measures such as per-call-site slab caches — perhaps associating these caches with memory protection keys is a logical next step. Only time will tell whether memory protection keys are a useful addition to the kernel's various hardening tools, or whether they're a cumbersome distraction. Either way, they will likely have to make their way into the kernel via some other subsystem.

Comments (7 posted)

Lessons on attracting new contributors from 30 years of PostgreSQL

By Joe Brockmeier
February 23, 2026

FOSDEM

The PostgreSQL project has been chugging along for decades; in that time, it has become a thriving open-source project, and its participants have learned a thing or two about what works in attracting new contributors. At FOSDEM 2026, PostgreSQL contributor Claire Giordano shared some of the lessons learned and where the project is still struggling. The lessons might be of interest to others who are thinking about how their own projects can evolve.

Giordano said that her first open-source project was while working at Sun Microsystems; she worked on the effort to release Solaris as open-source software, "which was quite a wild ride". From there, she joined Citus Data, which provided an extension for PostgreSQL to add distributed-database features; that company was acquired by Microsoft in 2019. She said she now heads Microsoft's open-source community initiatives around PostgreSQL and that the talk is based on her upstream work with that project; as one might expect, though, she noted that the talk would represent her views and not those of her employer.

PostgreSQL: the next generation

[Claire Giordano]

The challenge that she wanted to talk about, "which might keep some of you awake at night", is how to attract the next generation of open-source developers. If a project is successful enough to stick around, how does it ensure there are people to keep it afloat when the older crop of contributors starts to retire? When Giordano refers to contributors, she means all of them—not only those who contribute code. Even though she started off her career as a developer many years ago, "I have never committed or contributed a single lick of code to this project", but she works on PostgreSQL in other ways.

PostgreSQL is one of the projects that has, indeed, been successful and long-lived enough that some of its contributors may be pondering retirement soon. Giordano noted that PostgreSQL, as a technology, had been around for about 40 years. It has been an open-source project for nearly 30 years: its first open-source release was on July 8, 1996. Since that release, the number of PostgreSQL committers has grown from five to 31 people. The project has many more contributors, but the committers are those who can actually push code directly to the project's Git repository.

There is a new major version of PostgreSQL released each year. Version 18 was released in September 2025; it contained more than 3,000 commits, nearly 4,000 files changed, and more than 410,000 lines of code changed by 279 people. During that cycle, 83 of the people were first-time contributors. Those statistics only consider code and documentation, however: Giordano estimated that there were more than 360 people who contributed "in some way, code or beyond code" in 2025.

Contributor funnel

PostgreSQL's contributor community is global. Giordano observed that two of the project's 31 committers were based in New Zealand, which meant that the country had more PostgreSQL committers per capita than any other country. But what types of contributors does the project have, and how do they become involved? She noted that the project does not formally track contributors, but she had created a "funnel", shown below, that visualized how a person might progress from casual contributor to major contributor or even committer.

[How
do people get involved in Postgres (slide)]

Every contributor has their own motivations, and they may not be altruistic. She hearkened back to a FOSDEM 2020 talk by James Bottomley, "The Selfish Contributor Explained", which posited that many people contribute for selfish reasons. But, she said with a reference to the classic novel Anna Karenina, every contributor is selfish in their own way.

In addition to her day job, Giordano is also the host of a monthly podcast called "Talking Postgres", where she talks to developers who work on or with PostgreSQL. Through that she has learned that each contributor has a different "origin story"; some start out working on PostgreSQL simply because it interests them, others begin contributing because they need the database for other projects, and of course some contributors are hired to do the work. She said that it is especially interesting to talk to people who are new to PostgreSQL and to think about whether the project is growing in the right way.

The soul

To frame the next section of the talk, she had taken inspiration from a book by Tracy Kidder, The Soul of a New Machine. The book, published in 1981, is a nonfiction account of the development of the Data General Eclipse MV/8000 minicomputer. In reference to the book, Giordano said she had structured the next section of the talk into three buckets: the soul, the machine, and the fuel:

For the soul, we'll be looking at the human ecosystem, the people, the community, and the culture. For the machine it's more about developer workflows, the processes, the tooling that we use, and the governance. Then for the fuel, it's about the economics and how people get paid. [...]

Let's start with the soul, with the human bits, because at the end of the day, without people on these projects, none of it would exist.

She said she heard the phrase once, "come for the code, stay for the community". (The actual phrase, "came for the language, stayed for the community", was originally coined by Brett Cannon about Python.) She said that her first European PostgreSQL conference was notable because people made her feel like she really belonged, "which just, like, boggled my mind". The community has many meetups and conferences to build in-person, trust-based relationships that are important to the project.

Another thing that works for the community, Giordano said, is "just the joy of working with Postgres". She displayed a slide from Bruce Momjian's FOSDEM 2023 talk, "Building Open Source Teams", that described the fun of programming. Not only is programming fun, she said, but PostgreSQL in particular is fun to work on "because this database is so big and has so many different subsystems" that developers have choices about what kind of problems they work on. If a developer doesn't want to work on database backup, for example, they can work on other parts of the system. That's true for companies funding developers too; organizations can choose to fund the work that is important to their business.

The annual PGConf.dev developer's conference is also working well to feed the "soul" of the project, she said. It is now in its third year, and its organizers have been doing "a lot of experimentation with different formats and how to make it easier for new contributors to get involved with the project".

Another thing that is working well is PostgreSQL's recognized contributors policy, which is managed by the project's contributor's committee. It's the committee's job to figure out who to recognize on the contributor profiles page, which lists the project's core team, major contributors, and significant contributors. "Most people like being recognized for good work. And it feels good. And you're more likely to do more of the work that feels good."

But the project struggles in other areas, such as its committer pipeline. Fast-forward five to seven years, Giordano said, and it would not be surprising if some of the project's current maintainers stepped back or retired. Someone had recently joked that the new rule is "you cannot retire until you have nominated and fully trained your successor". That just might work.

One question that comes up regularly is: "do we accidentally overlook people for promotion?" Is it really a lack of people in the pipeline, or is the project failing to recognize and advocate for people to be "promoted"? She suggested there may be a "desert" in the middle of the path from new contributors to committer. Major contributors who have been working on the project for a long time may not be getting the mentoring they need to move up to committer level.

Welcome to the machine

With the soul covered, Giordano wanted to move on to the machine; "I'll start again with the positives because, you know, it's always good to start with the positives". One of the nice things about PostgreSQL is the predictable lifecycle that is easy to understand: "annual major releases, quarterly patch releases, and there's a CommitFest five times a year in July, September, November, January, and March". LWN covered the PostgreSQL CommitFest process in June 2024.

The CommitFest emphasizes regular review of patches that have been submitted to the project. Giordano cited PostgreSQL committer Tom Lane, who had said to her that patch review encourages people to read the code and not only the patch. That, in turn, helps contributors increase their knowledge of the entire system. "So it's not only improving the quality of the patches, but it's helping to skill up reviewers within the community." PostgreSQL's continuous-integration (CI) system has also seen "pretty dramatic improvements" in the past few years. She said she imagined it is much nicer now to get automated testing on patches rather than worrying about breaking the build farm—especially for drive-by contributors.

The skill and dedication of the maintainers, she said, is also working in favor of the project:

In fact, as I talk to more junior developers, they tell me that one of the big magnets of joining the project and working on Postgres is the chance to get to work with some of these pretty amazing, brilliant, talented luminaries. Now I know. Every project has brilliant people, and we don't corner the market on that. But if you're a database person and you care about Postgres, it's pretty cool.

But it's not all positive. While the patch-review process may be useful, the project struggles due to lack of bandwidth to review patches. She used an air-traffic-control analogy, describing patches as planes circling an airport. "There's a lot of planes up there, and not that many are landing". That was a bit of hyperbole, since there were more than 3,000 commits that made it into the last release, but she said that it can feel that way when an author of a patch is waiting on patch-review feedback. "If you've submitted it and you haven't heard anything, I can imagine that can be very frustrating". Likewise, people who have a lot of reviews to do may feel pressured. "So this is definitely, I think, an area where we're struggling".

One possible answer to the review problem is having component committers. Currently the project's committers have commit access to the entire project; it may be that the project should consider trusting some contributors with commit access to specific subsystems. It has been discussed, Giordano said, and no one has felt comfortable enough to do it—yet. "There's pros and cons, but it's going to keep coming up. And who knows? Maybe someday there will be an attempt to experiment with it".

Intimidation

Like a number of older open-source projects, PostgreSQL has a strong mailing-list culture. That has its positives, such as the project's invaluable mail archive and the fact that people tend to be more thoughtful in email replies, but it has its drawbacks as well. In particular, it is "a little bit intimidating" for new contributors to ask a simple question when they know it's going to be archived forever. Newer contributors are more familiar, and more comfortable, with tools like Slack, Discord, and other text-messaging applications.

To solve this, she said that Robert Haas started a program in 2024 for mentoring new developers in a Discord channel (link to invite). "If you want to be mentored in Postgres, you should definitely reach out to Robert and apply for one of the next cohorts". There are monthly hacking workshops as well.

PostgreSQL is written in C, which has come up as another possible barrier to entry for new developers. She said that she didn't know if C was still a requirement for computer-science degrees, though it was required when she got her degree. In fact, PostgreSQL has its own flavor of C, which "I have heard from some newer developers is a bit intimidating". Add that to the fact that the project's getting-started information is fragmented and out of date, she said, and it can be a problem for new people.

The topic of AI slop overwhelming maintainers was almost omnipresent at FOSDEM. This presentation was no exception; Giordano said that PostgreSQL has been seeing an increase in slop contributions over the last year, and it is creating a burden for its maintainers. In turn, it makes them reluctant to look at contributions from new people "because they don't know if it's real or if it's just AI slop". The prevalence of AI slop may well burn out maintainers and clog the machine that keeps PostgreSQL running.

The fuel

PostgreSQL is hot right now, which helps to fuel the fire for PostgreSQL development. It is in use by plenty of organizations, developers love it, students are learning it, and there are many jobs related to it. "So this commercial gravity, if you will, is attracting people to learn the technology and work on the project." It also inspires corporate funding for contributors to work on the project, which means that not only are the committers paid to work on PostgreSQL full time, but many other people are paid to work on the project part-time.

Where the project is struggling, however, is in the realm of "committer paranoia". This is a topic that Haas spoke about at PGConf.dev (slides) in 2025. Essentially, there is a lot of stress around submitting patches and accepting them. Committers are responsible for the patches they commit, whether they authored them or not. This responsibility extends beyond the immediate need to make sure that the patch works; it includes fixing bugs and so forth years down the road. "If there is a problem, you're on the hook to get it fixed and get it fixed right away. Get it fixed yesterday."

She returned again to the theme of committers retiring; the project is talking about that problem, but it doesn't have it figured out yet.

tl;dr

So, Giordano asked as time had nearly elapsed, what are the takeaways from the talk? The good news is that "Postgres has already proven that we can evolve". The key thing is that the project has to keep evolving, and to embrace the needs of its future developers without compromising the needs of the present. That includes fixing the project's patch-review throughput and running experiments to discourage fewer prospective contributors and, instead, get them more engaged with the project. "And hopefully then we can also avoid burnout amongst our own people".

She followed up by inviting people, if they had the skills (or were willing to build them) described during her talk, to consider becoming a PostgreSQL hacker. She closed with a slide thanking a number of individuals and all the hackers at the FOSDEM 2026 PostgreSQL developer meeting for their inspiration and reviews of her talk.

[Thanks to the Linux Foundation, LWN's travel sponsor, for funding my travel to Brussels to attend FOSDEM.]

Comments (1 posted)

The second half of the 7.0 merge window

By Daroc Alden
February 23, 2026

The 7.0 merge window closed on February 22 with 11,588 non-merge commits total, 3,893 of which came in after the article covering the first half of the merge window. The changes in the second half were weighted toward bug fixes over new features, which is usual. There were still a handful of surprises, however, including 89 separate tiny code-cleanup changes from different people for the rtl8723bs driver, a number that surprised Greg Kroah-Hartman. It's unusual for a WiFi-chip driver to receive that much attention, especially a staging driver that is not yet ready for general use.

The most important changes included in this release were:

Architecture-specific

Core kernel

Filesystems and block I/O

Hardware support

  • Clock: Qualcomm Kaanapali clock controllers, Qualcomm SM8750 camera clock controllers, Qualcomm MSM8940 and SDM439 global clock controllers, Google GS101 DPU clock controllers, SpacemiT K3 clock controllers, Amlogic t7 clock controllers, Aspeed AST2700 clock controllers, and Loongson-2K0300 real-time clocks.
  • GPIO and pin control: Spacemit K3 pin controllers, Atmel AT91 PIO4 SAMA7D65 pin controllers, Exynos9610 pin controllers, Qualcomm Mahua TLMM pin controllers, Microchip Polarfire MSSIO pin controllers, and Ocelot LAN965XF pin controllers.
  • Graphics: Mediatek MT8188 HDMI PHYs, Mediatek Dimensity 6300 and 9200 DMA controllers, Qualcomm Kaanapali and Glymur GPI DMA engines, Synopsis DW AXI Agilex5 DMA devices, Atmel microchip lan9691-dma devices, and Tegra ADMA tegra264 devices.
  • Industrial I/O: AD18113 amplifiers, AD4060 and AD4052 analog-to-digital converters (ADCs), AD4134 24-bit 4-channel simultaneous sampling ADCs, ADAQ767-1 ADCs, ADAQ7768-1 ADCs, ADAQ7769-1 ADCs, Honeywell board-mount pressure and temperature sensors, mmc5633 I2C/I3C magnetometers, Microchip MCP47F(E/V)B(0/1/2)(1|2|4|8) buffered-voltage-output digital-to-analog converters (DACs), s32g2 and s32g3 platform ADCs, ADS1018 and ADS1118 SPI ADCs, and ADS131M(02/03/04/06/08) 24-bit simultaneous sampling ADCs.
  • Input: FocalTech FT8112 touchscreen chips, FocalTech FT3518 touchscreen chips, and TWL603x power buttons.
  • Media: MediaTek MT8196 video companion processors and external memory interfaces.
  • Miscellaneous: Foresee F35SQB002G chips, TI LP5812 4x3 matrix RGB LED drivers, Osram AS3668 4-channel I2C LED controllers, Qualcomm Interconnect trace network on chip (TNOC) blocks, and DiamondRapids non-transparent PCI bridges.
  • Networking: Glymur bandwidth monitors, Glymur PCIe Gen4 2-lane PCIe PHYs, SC8280xp QMP UFS PHYs, Kaanapali PCIe PHYs, and TI TCAN1046 PHYs.
  • Power: ROHM BD72720 power supplies, Rockchip RK801 power management integrated circuits (PMICs), ROHM BD73900 PMICs, Delta Networks TN48M switch complex programmable logic devices (CPLDs), sama7d65 XLCD controllers, and Congatec Board Controller backlights.
  • USB: AST2700 SOCs, USB UNI PHY and SMB2370 eUSB2 repeaters, QCS615 QMP USB3+DP PHYs, SpacemiT PCIe/combo PHY and K1 USB2 PHYs, Renesas RZ/V2H(P) and RZ/V2N USB3 devices, Google Tensor SoC USB PHYs, and Apple Type-C PHYs.

Miscellaneous

Networking

Virtualization and containers

  • KVM has added a mask to correctly report CPUCFG bits on LoongArch, which provides LoongArch guests with the correct information about the CPU's capabilities.
  • Some AMD CPUs support Enhanced Return Address Predictor Security (ERAPS) — a feature that removes the need for some security-related flushes of CPU state when a guest exits back to the host operating system. KVM added support for using ERAPS, and for advertising that support to guests.
  • There's a new user-space control to configure KVM's end-of-interrupt (EOI) broadcast suppression (which prevents an EOI signal from being sent to all interrupt controllers in a system, making interrupts more efficient). Previously, KVM erroneously advertised support for EOI broadcast suppression, even though it wasn't fully implemented. Unfortunately, the flaw persisted long enough that some user-space applications came to depend on the behavior, so now that the feature has been implemented correctly, user-space programs will have to opt in when configuring KVM virtual machines.
  • Guests can now request full ownership of performance monitoring unit (PMU) hardware, which provides more accurate profiling and monitoring than the existing emulated PMU.
  • The kernel's Hyper-V driver has added a debugfs interface to view various statistics about the Microsoft hypervisor.

Internal kernel changes

  • The kernel has been almost entirely switched over to kmalloc_obj() through the use of Coccinelle. Allocations of structure types and union types have all been converted, but allocations of scalar types, which need manual checking, have been left alone. Linus Torvalds followed up with a handful of fixes for the problems that inevitably crop up after this kind of large change. The new interface allocates memory based on the size of the provided type, which should mean fewer mistakes with manual size calculations. Where one would previously write one of these:
        ptr = kmalloc(sizeof(*ptr), gfp);
        ptr = kmalloc(sizeof(struct some_obj_name), gfp);
        ptr = kzalloc(sizeof(*ptr), gfp);
        ptr = kmalloc_array(count, sizeof(*ptr), gfp);
        ptr = kcalloc(count, sizeof(*ptr), gfp);
        ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
    
    One can now write:
        ptr = kmalloc_obj(*ptr, gfp);
        ptr = kmalloc_obj(*ptr, gfp);
        ptr = kzalloc_obj(*ptr, gfp);
        ptr = kmalloc_objs(*ptr, count, gfp);
        ptr = kzalloc_objs(*ptr, count, gfp);
        ptr = kmalloc_flex(*ptr, flex_member, count, gfp);
    
    GFP_KERNEL is also now the default, and can be left out if that is the only memory allocation flag that one wishes to set.

The second half of the merge window is often quieter, and this one was no exception. There were a number of debugging features added, however, which is always nice to see. At this point, the kernel will go through the usual seven or eight release candidates as people chase down bugs introduced in this merge window. The final 7.0 kernel should be expected around April 12 or 19.

Comments (55 posted)

An effort to secure the Network Time Protocol

February 25, 2026

This article was contributed by Joab Jackson


FOSDEM

The Network Time Protocol (NTP) debuted in 1985; it is a universally used, open specification that is deeply important for all sorts of activities we take for granted. It also, despite a number of efforts, remains stubbornly unsecured. Ruben Nijveld presented work at FOSDEM 2026 to speed adoption of the thus-far largely ignored standard for securing NTP traffic: the IETF's RFC 8915, which specifies Network Time Security (NTS) for NTP.

I was not able to attend FOSDEM this year, but I watched the video for the talk in order to put together this article.

According to Nijveld, NTP is "fundamentally a broken protocol. It is a protocol that is fundamentally insecure". His employer, the Dutch nonprofit Trifecta Tech Foundation, is testing technologies that would make it easier for pool.ntp.org, and other large-scale time servers, to adopt NTS. That work is receiving additional funding from ICANN and other interested parties, he said. The foundation's specialty is improving open-source infrastructure software, and Nijveld himself is an expert on time-synchronization software, having worked on or with ntpd-rs and Statime, an implementation of the IEEE's Precision Time Protocol.

The universal need for time

NTP was originally authored by the University of Delaware's David Mills, who died in 2024. It is now almost universally used to synchronize everything else on the Internet with Coordinated Universal Time (UTC) by referencing time data from high-precision timekeeping devices such as atomic clocks. Those are referred to as "Stratum 0" time sources in the hierarchy of time sources.

NTP timestamps are essential for computer-related synchronization tasks such as issuing Kerberos tickets and generating time-based one-time password (TOTP) tokens. "If your clock is off by a minute, good luck getting a proper TOTP token, because they are only valid for a minute, generally." Database synchronization, distributed-systems logging, and in fact almost all of distributed computing, are built on precise synchronization, as are cellular networks, any sort of media streaming, high-frequency trading, power-grid load balancing, and astronomical research, Nijveld noted.

The issue with NTP, Nijveld said, is that it can be easily spoofed, which would cause havoc for any of the other above-listed uses. In a typical man-in-the-middle attack, one could capture a response, change the timestamp, and resend it to the requester, and no one would be the wiser.

Securing NTP is, in principle, a simple task, because the payload does not need to be encrypted, Nijveld said. "Time is everywhere, so who cares if it is encrypted or not?" Instead, only proof of authentication is needed to secure a timestamp packet. "We want to be sure that every time I send a request, the response comes from the party that I wanted it to come from." NTS provides exactly that.

NTS is an extension of NTP that includes a TLS key exchange, using the same key-exchange infrastructure used for secure web browsing. The client first contacts the server over a TCP connection secured with TLS (using port 4460). The server's existing digital certificate can be used as long as it's signed by a certificate authority trusted by the client.

After a successful handshake, the server sends the client eight unique cookies, each containing the symmetric key established in the handshake. The client includes one of these cookies in a separate NTP request packet (sent via UDP port 123). Multiple cookies are used in case of dropped packets, with the server supplying a new one in each response. When the server receives the time request, it decodes the cookie to retrieve the symmetric key, and verifies the packet's message authentication code (MAC). If the verification is successful, the server sends back an NTP packet with the current timestamp (and a new cookie). Once the key exchange is established, the client can then poll the server at regular intervals using the stream of cookies; no additional handshakes are needed.

Slow adoption for NTS

Despite NTS becoming an IETF standard in 2020, uptake has been slow. There's been some success: Canonical, which maintains its own time servers, switched them to NTS in Ubuntu 25.10. But the NTP reference implementation, ntpd, does not support NTS. A number of clients, such as NTPsec, chrony, and ntpd-rs, do support NTS. They are not, however, widely adopted.

The Simple Network Time Protocol (SNTP) is widely used by embedded devices because of its small footprint. It uses a single time source, rather than multiple time sources, and messages are sent over UDP. Nijveld said that he is not aware of a single implementation of SNTP that supports NTS.

He suspected that the NTS-supporting implementations are not widely adopted because device designers may prefer the lighter SNTP clients, even though the other clients are not that heavy. To help spur wider adoption of NTS on Linux distributions, other Trifecta staff are also working on a patch for systemd-timesyncd.

Learning how to scale NTS

Trifecta surmised that one roadblock thwarting the wider use of NTS is the lack of practices to deploy NTS for large-scale usage. In particular, NTS needs to work nicely with pool.ntp.org. This service is run by a virtual cluster of about 5,000 servers. Today, most servers and other machines that use NTP probably have a client configuration pointing to pool.ntp.org. When a request is made to pool.ntp.org, the pool's authoritative DNS server, which runs GeoDNS, maps the client to a nearby time server. That time server will then fulfill requests for the current time for that client.

Grafting TLS onto pool.ntp.org would require either issuing certificates to all 5,000 servers or sharing a single certificate across all of the servers. Issuing individual certificates is not feasible as they all share the same domain name, Nijveld said, and sharing a single certificate would be a security nightmare. The whole setup could be compromised by one leaked certificate. "We need alternative approaches."

Trifecta is currently experimenting with two different approaches. One rethinks the server-side service and the other changes the behavior of the client. The proposed server-side change is a load-balancing proxy, with a single certificate, that handles all the server-side encryption. It terminates the client connections, giving the client an encrypted cookie, which is passed to the time server for quick validation. This way, the time servers don't get bogged down with key exchanges. The other approach requires modifying the client to do DNS Service Record (SRV) lookups, which would point to specific self-registering servers, each with its own unique domain name—and its own certificate. This approach requires DNSSEC validation, otherwise addresses could be spoofed "because DNS itself is also not encrypted".

There are still unresolved issues, and open questions, with those approaches. Nijveld didn't say this explicitly, but equipping a pool of 5,000 servers with digital certificates, as the SRV approach requires, could be prohibitively costly. But if either or both architectures work, they will be submitted to the IETF for ratification.

Currently, Trifecta is looking for volunteers to add their own clients and servers to Trifecta's experimental NTS-only pool. To participate, a client or server needs to be online 24/7 with a fully qualified domain name, and the ability to accept certificates (NTS software can act both as a server and a client). The documentation says that it is possible to use chrony, ntpd-rs, or NTPsec; chrony, however, requires modification for this.

Those who register with the program will get a token to validate their client to the pool, which needs to be added to the software's configuration file. The client will be given a set of upstream sources to connect with. If set up correctly, the participating time client/server gets a score, which increments each time it successfully executes. The score is used as part of the monitoring system.

Readers who would like to follow along with the work-in-progress management system for NTS servers can find the code on GitHub, and watch the Trifecta Tech Foundation blog for updates.

Comments (66 posted)

Page editor: Joe Brockmeier

Brief items

Security

Security quote of the week

These things are not trustworthy, and yet they are going to be widely trusted.
Bruce Schneier on the ease of poisoning LLM data

Comments (4 posted)

Kernel development

Kernel release status

The current development kernel is 7.0-rc1, released on February 22. Linus said: "You all know the drill by now: two weeks have passed, and the kernel merge window is closed."

Stable updates: 6.19.3, 6.18.13, 6.12.74, 6.6.127, 6.1.164, 5.15.201, and 5.10.251 were released on February 19.

The large 6.19.4 and 6.18.14 updates are in the review process; they are due on February 27.

Comments (none posted)

Support period lengthened for the 6.6, 6.12, and 6.18 kernels

The stated support periods for the 6.6, 6.12, and 6.18 kernels have been extended. The 6.6 kernel will be supported with stable updates through the end of 2027 (for four years of support in total), while 6.12 and 6.18 will get updates through the end of 2028, for four and three years of support, respectively.

Comments (7 posted)

Quote of the week

With our normal release schedule of 5-6 releases per year and my antipathy to big version numbers, you should basically expect us to bump the major number roughly every 3.5 years.

And yeah, I don't have a solid plan for when the major number itself gets big. But doing the math - by that time, I expect that we'll have somebody more competent in charge who isn't afraid of numbers past the teens. So I'm not going to worry about it.

Linus Torvalds

Comments (none posted)

Distributions

openSUSE governance proposal advances

Douglas DeMaio has announced that Jeff Mahoney's new governance proposal for openSUSE, which was published in January, is moving forward. The new structure would have three governance bodies: a new technical steering committee (TSC), a community and marketing committee (CMC), and the existing openSUSE board.

During the meeting, it was proposed that the technical steering committee should begin with five members, with a chair elected by the committee. The group would establish clear processes for reviewing and approving technical changes, drawing inspiration from Fedora's FESCo model. TSC decisions would use a voting system of +1 to approve, 0 for neutral, or -1 to block. A proposal passes if there are no objections; a -1 vote would require a dedicated meeting, where a majority of attendees would decide the outcome. Objections must include a clear, documented rationale.

Discussions related to the Community and Marketing Committee would focus on outreach, advocacy, and community growth. It could also serve as an initial escalation point for disputes. If consensus cannot be reached at that level, matters would advance to the Board.

[...] No timeline for final adoption was announced. Project contributors will continue discussions through the GitLab repository and future community meetings.

Comments (none posted)

Distributions quotes of the week

What if the goal of technical progress is NOT to remove physical limits on the amount of the bad quality work humans can produce?
Aleksandra Fedorova

I want to promote fully free hardware and firmware. However, given that very little hardware that does not require non-free firmware exists at present, I don't believe Debian is all that useful of a vehicle for promoting fully free hardware and firmware at the current stage.

That movement is still at the phase of trying to make such hardware and get people to buy it, and Debian is not a hardware company so we cannot contribute to that effort beyond ensuring that Debian supports what hardware does exist. I certainly support the latter effort, and I wish the free hardware movement the best of luck, but it's not mature enough for Debian to, for instance, recommend it to users who expect a normal computing experience.

Russ Allbery

Comments (1 posted)

Development

Firefox 148.0 released

Version 148 of Firefox has been released. The most notable change in this release is the addition of a "Block AI enhancements" option that allows turning off "new or current AI enhancements in Firefox, or pop-ups about them" with a single toggle.

With this release, Firefox now supports the Trusted Types API to help prevent cross-site scripting attacks as well as the Sanitizer API that provides new methods for HTML manipulation. See the release notes for developers for changes that may affect web developers or those who create Firefox add-ons.

Comments (2 posted)

GNU Awk 5.4.0 released

Version 5.4.0 of GNU awk (gawk) has been released. This is a major release that changes gawk's default regular-expression matcher: it now uses MinRX as the default regular-expression engine.

This matcher is fully POSIX compliant, which the current GNU matchers are not. In particular, it follows the POSIX rules for finding the longest leftmost submatches. It is also stricter about regular-expression syntax, but primarily in a few corner cases that normal, correct regular-expression usage should not encounter.

Because regular expression matching is such a fundamental part of awk/gawk, the original GNU matchers are still included in gawk. In order to use them, give a value to the GAWK_GNU_MATCHERS environment variable before invoking gawk.

[...] The original GNU matchers will eventually be removed from gawk. So, please take the time to notice and report any issues in the MinRX matcher, so that they can be ironed out sooner rather than later.
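The "longest leftmost" rule is the visible difference between POSIX matching semantics (which MinRX implements) and the Perl-style semantics used by engines such as Python's re module; a small sketch shows the contrast:

```python
# Perl-style alternation (Python's re): the first alternative that
# matches wins. POSIX semantics instead pick the longest match at the
# leftmost starting position.
import re

# Python's re returns "a" because it is listed first...
assert re.match(r"a|ab", "ab").group() == "a"
# ...even though a longer match starts at the same position. A
# POSIX-compliant engine such as MinRX would match "ab" here.

# Reordering the alternatives is the usual Perl-style workaround:
assert re.match(r"ab|a", "ab").group() == "ab"
```

Scripts that rely on the order of alternatives in this way are one place the matcher change could be observable.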

See the release announcement for additional changes.

Comments (1 posted)

GNU Octave 11.1.0 released

Version 11.1.0 of the GNU Octave scientific programming language has been released.

This major release contains many new and improved functions. Among other things, it brings better support for classdef objects and arrays, broadcasting for special matrix types (like sparse, diagonal, or permutation matrices), updates for Matlab compatibility (notably support for the nanflag, vecdim and other parameters for many basic math and statistics functions), and performance improvements in many functions.

See the release notes for details.

Comments (none posted)

The Ladybird browser project shifts to Rust

The Ladybird browser project has announced a move to the Rust programming language:

When we originally evaluated Rust back in 2024, we rejected it because it's not great at C++ style OOP. The web platform object model inherits a lot of 1990s OOP flavor, with garbage collection, deep inheritance hierarchies, and so on. Rust's ownership model is not a natural fit for that.

But after another year of treading water, it's time to make the pragmatic choice. Rust has the ecosystem and the safety guarantees we need. Both Firefox and Chromium have already begun introducing Rust into their codebases, and we think it's the right choice for Ladybird too.

Large language models are being used to translate existing code.

Comments (12 posted)

Restarting LibreOffice Online

LibreOffice Online is a web-based version of the LibreOffice suite that can be hosted on anybody's infrastructure. The project was put into stasis back in 2022, a move marked by some tension with Collabora, a major LibreOffice developer that has its own online offering. Now, the Document Foundation has announced a new effort to breathe life into the project.

We plan to reopen the repository for LibreOffice Online at The Document Foundation for contributions, but provide warnings about the state of the repository until TDF's team agrees that it's safe and usable – while at the same time encourage the community to join in with code, technologies and other contributions that can be used to move forward.

Meanwhile, this post from Michael Meeks suggests that the tension around online versions of LibreOffice has not abated.

Comments (12 posted)

The Book of Remind

Dianne Skoll, creator and maintainer of the command-line calendar and alarm program Remind, has announced the release of The Book of Remind. As the name suggests, it is a step-by-step guide to learning how to use Remind, and a useful supplement to the extensive remind(1) man page. The book is free to download.

Comments (53 posted)

Vlad: Weston 15.0 is here: Lua shells, Vulkan rendering, and a smoother display stack

Over on the Collabora blog, Marius Vlad has an overview of Weston 15.0, which was released on February 19. Weston is the reference implementation of a Wayland compositor. The new release comes with a new shell that can be programmed using the Lua language, a new experimental Vulkan renderer, smoother media playback, color-management additions, and more.

One of Weston's fundamental pillars has always been making the most efficient use of display hardware. Over time, all the work we did to track and offload as much work as possible to this efficient fixed-function hardware has come at the cost of eating CPU time. In the last couple of release cycles, we've focused really hard on improving performance on even the most low-end of devices, so not only do we make the most efficient use of the GPU and display hardware, but we're also really kind on your CPU now. As part of that and to improve our tooling, Weston 15 now comes with support for the Perfetto profiler.

Comments (6 posted)

Development quote of the week

AI models are extremely bad at original thinking, so any thinking that is offloaded to an LLM is, as a result, usually not very original, even if they're very good at treating your inputs to the discussion as amazing genius level insights.

This may be a feature if you are exploring a topic you are unfamiliar with, but it's a fatal flaw if you are writing a blog post or designing a product or trying to do some other form of original work.

Some will argue that this is why you need a human in the loop to steer the work and do the high level thinking. That premise is fundamentally flawed. Original ideas are the result of the very work you're offloading on LLMs. Having humans in the loop doesn't make the AI think more like people, it makes the human thought more like AI output.

The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn't happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Ideas are then further refined when you try to articulate them. This is why we make students write essays. It's also why we make professors teach undergraduates.

Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It's the work that matters.

You don't build muscle using an excavator to lift weights. You don't produce interesting thoughts using a GPU to think.

Viktor Löfgren

Comments (none posted)

Miscellaneous

MetaBrainz mourns the loss of Robert Kaye

The MetaBrainz Foundation has announced the unexpected passing of its founder and executive director, Robert Kaye:

Robert's vision and leadership shaped MetaBrainz and left a lasting mark on the music industry and open source movement. His contributions were significant and his loss is deeply felt across our global community.

The Board is actively overseeing a smooth leadership transition and has measures in place to ensure that MetaBrainz continues to operate without interruption. Further updates will be shared in due course.

Comments (1 posted)

Page editor: Daroc Alden

Announcements

Newsletters

Distributions and system administration

Development

Emacs News February 23
This Week in GNOME February 20
GNU Tools Weekly News February 22
Golang Weekly February 20
LLVM Weekly February 16
LLVM Weekly February 23
This Week in Matrix February 20
Perl Weekly February 23
This Week in Plasma February 21
PyCoder's Weekly February 24
Weekly Rakudo News February 23
Ruby Weekly News February 19
This Week in Rust February 18
Wikimedia Tech News February 23

Calls for Presentations

CFP Deadlines: February 26, 2026 to April 27, 2026

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline   Event Dates          Event                                Location
March 2    June 8 - June 12     RISC-V Summit Europe 2026            Bologna, Italy
March 10   May 2                22nd Linux Infotag Augsburg          Augsburg, Germany
March 13   August 6 - August 9  FOSSY 2026                           Vancouver, Canada
March 15   May 21 - May 22      Linux Security Summit North America  Minneapolis, Minnesota, US
March 15   May 30 - May 31      Journées du Logiciel Libre 2026      Lyon, France
March 29   May 29               Yocto Project Developer Day          Nice, France
April 20   July 20 - July 25    DebConf 26                           Santa Fe, Argentina
April 20   July 13 - July 19    DebCamp 26                           Santa Fe, Argentina

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: February 26, 2026 to April 27, 2026

The following event listing is taken from the LWN.net Calendar.

Date(s)              Event                                        Location
March 5 - March 8    SCALE                                        Pasadena, CA, US
March 9 - March 10   FOSSASIA Summit                              Bangkok, Thailand
March 16 - March 17  FOSS Backstage                               Berlin, Germany
March 19             Open Tech Day 26: OpenTofu Edition           Nuremberg, Germany
March 23 - March 26  KubeCon + CloudNativeCon Europe              Amsterdam, Netherlands
March 28             Central Pennsylvania Open Source Conference  Lancaster, Pennsylvania, US
March 28 - March 29  Chemnitz Linux Days                          Chemnitz, Germany
April 10 - April 11  Grazer Linuxtage                             Graz, Austria
April 20 - April 21  SambaXP                                      Göttingen, Germany
April 23             OpenSUSE Open Developers Summit              Prague, Czech Republic
April 25 - April 26  Sesja Linuksowa (Linux Session)              Wrocław, Poland

If your event does not appear here, please tell us about it.

Security updates

Alert summary February 19, 2026 to February 25, 2026

Dist. ID Release Package Date
AlmaLinux ALSA-2026:2776 9 edk2 2026-02-18
AlmaLinux ALSA-2026:2786 9 glibc 2026-02-18
AlmaLinux ALSA-2026:2719 10 gnupg2 2026-02-18
AlmaLinux ALSA-2026:2706 10 golang 2026-02-18
AlmaLinux ALSA-2026:2914 10 grafana 2026-02-20
AlmaLinux ALSA-2026:3188 8 grafana 2026-02-24
AlmaLinux ALSA-2026:2920 9 grafana 2026-02-19
AlmaLinux ALSA-2026:3187 8 grafana-pcp 2026-02-24
AlmaLinux ALSA-2026:2720 8 kernel 2026-02-24
AlmaLinux ALSA-2026:3083 8 kernel 2026-02-24
AlmaLinux ALSA-2026:2821 8 kernel-rt 2026-02-23
AlmaLinux ALSA-2026:3110 8 kernel-rt 2026-02-24
AlmaLinux ALSA-2026:3032 8 munge 2026-02-24
AlmaLinux ALSA-2026:2781 9 nodejs:24 2026-02-18
AlmaLinux ALSA-2026:3042 8 openssl 2026-02-23
AlmaLinux ALSA-2026:2799 9 php 2026-02-18
Debian DLA-4485-1 LTS ca-certificates 2026-02-20
Debian DSA-6146-1 stable chromium 2026-02-20
Debian DLA-4487-1 LTS gegl 2026-02-21
Debian DSA-6142-1 stable gegl 2026-02-19
Debian DLA-4483-1 LTS gimp 2026-02-18
Debian DLA-4491-1 LTS glib2.0 2026-02-23
Debian DLA-4492-1 LTS gnutls28 2026-02-25
Debian DSA-6144-1 stable inetutils 2026-02-19
Debian DSA-6141-1 stable kernel 2026-02-18
Debian DLA-4489-1 LTS libvpx 2026-02-22
Debian DSA-6143-1 stable libvpx 2026-02-19
Debian DLA-4488-1 LTS modsecurity-crs 2026-02-22
Debian DLA-4486-1 LTS nova 2026-02-20
Debian DSA-6145-1 stable nova 2026-02-19
Debian DLA-4490-1 LTS openssl 2026-02-24
Debian DSA-6147-1 stable pillow 2026-02-20
Debian DLA-4484-1 LTS python-django 2026-02-19
Fedora FEDORA-2026-3beebfc8ff F42 azure-cli 2026-02-20
Fedora FEDORA-2026-45e69bddb9 F43 azure-cli 2026-02-20
Fedora FEDORA-2026-583eef79a8 F42 chromium 2026-02-21
Fedora FEDORA-2026-443f9ace49 F43 chromium 2026-02-20
Fedora FEDORA-2026-18d617b2e5 F43 chromium 2026-02-25
Fedora FEDORA-2026-439af2cc95 F42 fvwm3 2026-02-19
Fedora FEDORA-2026-adbfebd04b F43 fvwm3 2026-02-19
Fedora FEDORA-2026-85ee8cb2a2 F42 microcode_ctl 2026-02-20
Fedora FEDORA-2026-60e8919a4a F43 microcode_ctl 2026-02-20
Fedora FEDORA-2026-2af2f2fa9d F42 mingw-libpng 2026-02-21
Fedora FEDORA-2026-4366b8d2d8 F42 mupdf 2026-02-22
Fedora FEDORA-2026-c06fd97a53 F43 mupdf 2026-02-23
Fedora FEDORA-2026-c06fd97a53 F43 python-PyMuPDF 2026-02-23
Fedora FEDORA-2026-3beebfc8ff F42 python-azure-core 2026-02-20
Fedora FEDORA-2026-45e69bddb9 F43 python-azure-core 2026-02-20
Fedora FEDORA-2026-ddafe1357a F42 python-pyasn1 2026-02-22
Fedora FEDORA-2026-0179c9b8ac F43 python-pyasn1 2026-02-22
Fedora FEDORA-2026-086a367966 F42 python-uv-build 2026-02-22
Fedora FEDORA-2026-6ee987bce2 F43 python3.13 2026-02-22
Fedora FEDORA-2026-9ad2d11c1f F42 python3.14 2026-02-20
Fedora FEDORA-2026-c06fd97a53 F43 qpdfview 2026-02-23
Fedora FEDORA-2026-d684b372f1 F42 roundcubemail 2026-02-20
Fedora FEDORA-2026-547e298156 F43 roundcubemail 2026-02-20
Fedora FEDORA-2026-086a367966 F42 rust-ambient-id 2026-02-22
Fedora FEDORA-2026-086a367966 F42 uv 2026-02-22
Fedora FEDORA-2026-d86b88630b F43 yt-dlp 2026-02-25
Fedora FEDORA-2026-c06fd97a53 F43 zathura-pdf-mupdf 2026-02-23
Mageia MGASA-2026-0046 9 freerdp 2026-02-22
Mageia MGASA-2026-0047 9 gegl 2026-02-23
Mageia MGASA-2026-0045 9 gnutls 2026-02-20
Mageia MGASA-2026-0044 9 libvpx 2026-02-20
Mageia MGASA-2026-0043 9 microcode 2026-02-18
Mageia MGASA-2026-0042 9 vim 2026-02-18
Oracle ELSA-2026-3208 OL10 389-ds-base 2026-02-24
Oracle ELSA-2026-3189 OL9 389-ds-base 2026-02-24
Oracle ELSA-2026-2776 OL9 edk2 2026-02-18
Oracle ELSA-2026-2231 OL7 firefox 2026-02-23
Oracle ELSA-2026-3068 OL10 freerdp 2026-02-23
Oracle ELSA-2026-3067 OL9 freerdp 2026-02-23
Oracle ELSA-2026-2786 OL9 glibc 2026-02-18
Oracle ELSA-2026-1677 OL7 gnupg2 2026-02-23
Oracle ELSA-2026-3092 OL10 golang-github-openprinting-ipp-usb 2026-02-23
Oracle ELSA-2026-2914 OL10 grafana 2026-02-23
Oracle ELSA-2026-2920 OL9 grafana 2026-02-23
Oracle ELSA-2026-3035 OL10 grafana-pcp 2026-02-23
Oracle ELSA-2026-3040 OL9 grafana-pcp 2026-02-23
Oracle ELSA-2026-0847 OL7 java-11-openjdk 2026-02-23
Oracle ELSA-2026-0755 OL7 kernel 2026-02-23
Oracle ELSA-2026-2720 OL8 kernel 2026-02-18
Oracle ELSA-2026-50113 OL8 kernel 2026-02-18
Oracle ELSA-2026-50113 OL9 kernel 2026-02-18
Oracle ELSA-2026-2722 OL9 kernel 2026-02-18
Oracle ELSA-2026-50113 OL9 kernel 2026-02-18
Oracle ELSA-2026-50112 OL9 kernel 2026-02-23
Oracle ELSA-2026-3066 OL9 kernel 2026-02-24
Oracle ELSA-2026-3031 OL9 libpng15 2026-02-23
Oracle ELSA-2026-3033 OL10 munge 2026-02-23
Oracle ELSA-2026-3032 OL8 munge 2026-02-24
Oracle ELSA-2026-3034 OL9 munge 2026-02-23
Oracle ELSA-2026-2783 OL9 nodejs:20 2026-02-23
Oracle ELSA-2026-2782 OL9 nodejs:22 2026-02-23
Oracle ELSA-2026-2781 OL9 nodejs:24 2026-02-18
Oracle ELSA-2026-3042 OL8 openssl 2026-02-24
Oracle ELSA-2026-2799 OL9 php 2026-02-18
Oracle ELSA-2026-3094 OL10 protobuf 2026-02-23
Oracle ELSA-2026-3095 OL9 protobuf 2026-02-23
Oracle ELSA-2026-50112 uek-kernel 2026-02-23
Red Hat RHSA-2026:3297-01 EL10 buildah 2026-02-25
Red Hat RHSA-2026:3298-01 EL9 buildah 2026-02-25
Red Hat RHSA-2026:3053-01 EL9.4 butane 2026-02-23
Red Hat RHSA-2026:3341-01 EL9 containernetworking-plugins 2026-02-25
Red Hat RHSA-2026:2914-01 EL10 grafana 2026-02-20
Red Hat RHSA-2026:2920-01 EL9 grafana 2026-02-20
Red Hat RHSA-2026:3035-01 EL10 grafana-pcp 2026-02-23
Red Hat RHSA-2026:3040-01 EL9 grafana-pcp 2026-02-23
Red Hat RHSA-2026:3288-01 EL10.0 opentelemetry-collector 2026-02-25
Red Hat RHSA-2026:3287-01 EL9.4 opentelemetry-collector 2026-02-25
Red Hat RHSA-2026:3289-01 EL9.6 opentelemetry-collector 2026-02-25
Red Hat RHSA-2026:2687-01 EL8.6 osbuild-composer 2026-02-20
Red Hat RHSA-2026:2685-01 EL8.8 osbuild-composer 2026-02-20
Red Hat RHSA-2026:2686-01 EL9.0 osbuild-composer 2026-02-20
Red Hat RHSA-2026:2688-01 EL9.2 osbuild-composer 2026-02-20
Red Hat RHSA-2026:3336-01 EL10 podman 2026-02-25
Red Hat RHSA-2026:2911-01 EL7 python-s3transfer 2026-02-19
Red Hat RHSA-2026:3291-01 EL9 runc 2026-02-25
Red Hat RHSA-2026:3343-01 EL10 skopeo 2026-02-25
Slackware SSA:2026-055-01 mozilla 2026-02-24
SUSE SUSE-SU-2026:0576-1 SLE-m5.3 oS15.4 abseil-cpp 2026-02-18
SUSE SUSE-SU-2026:0580-1 SLE15 oS15.6 apptainer 2026-02-19
SUSE SUSE-SU-2026:0577-1 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 avahi 2026-02-18
SUSE openSUSE-SU-2026:10211-1 TW azure-cli-core 2026-02-18
SUSE openSUSE-SU-2026:10234-1 TW chromedriver 2026-02-22
SUSE openSUSE-SU-2026:20258-1 oS16.0 chromium 2026-02-21
SUSE openSUSE-SU-2026:0062-1 osB15 chromium 2026-02-25
SUSE openSUSE-SU-2026:20251-1 oS16.0 cockpit-repos 2026-02-21
SUSE openSUSE-SU-2026:10235-1 TW cosign 2026-02-24
SUSE openSUSE-SU-2026:10219-1 TW dnsdist 2026-02-19
SUSE SUSE-SU-2026:0602-1 SLE12 firefox 2026-02-24
SUSE SUSE-SU-2026:0611-1 SLE15 oS15.6 firefox 2026-02-25
SUSE openSUSE-SU-2026:10225-1 TW firefox 2026-02-20
SUSE SUSE-SU-2026:20435-1 SLE16 fontforge 2026-02-18
SUSE SUSE-SU-2026:0621-1 SLE15 oS15.4 freerdp 2026-02-25
SUSE SUSE-SU-2026:0604-1 SLE15 oS15.4 oS15.6 gimp 2026-02-24
SUSE SUSE-SU-2026:20429-1 SLE16 go1.24 2026-02-18
SUSE SUSE-SU-2026:20428-1 SLE16 go1.25 2026-02-18
SUSE openSUSE-SU-2026:20239-1 oS16.0 golang-github-prometheus-prometheus 2026-02-18
SUSE openSUSE-SU-2026:10236-1 TW heroic-games-launcher 2026-02-24
SUSE openSUSE-SU-2026:10220-1 TW istioctl 2026-02-19
SUSE SUSE-SU-2026:0617-1 MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 kernel 2026-02-24
SUSE SUSE-SU-2026:0587-1 SLE15 kernel 2026-02-20
SUSE openSUSE-SU-2026:10237-1 TW libopenssl-3-devel 2026-02-24
SUSE SUSE-SU-2026:0575-1 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 libpcap 2026-02-18
SUSE SUSE-SU-2026:0598-1 SLE12 libpng12 2026-02-23
SUSE SUSE-SU-2026:0599-1 SLE15 oS15.6 libpng12 2026-02-24
SUSE SUSE-SU-2026:0583-1 SLE12 libpng16 2026-02-20
SUSE SUSE-SU-2026:0596-1 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 libpng16 2026-02-23
SUSE SUSE-SU-2026:0597-1 SLE15 oS15.6 libpng16 2026-02-23
SUSE SUSE-SU-2026:0579-1 SLE15 oS15.4 libsoup 2026-02-19
SUSE SUSE-SU-2026:0574-1 SLE15 oS15.6 libsoup2 2026-02-18
SUSE openSUSE-SU-2026:10213-1 TW libxml2-16 2026-02-18
SUSE SUSE-SU-2026:0606-1 SLE-m5.3 SLE-m5.4 oS15.4 libxml2 2026-02-24
SUSE SUSE-SU-2026:0605-1 SLE15 libxml2 2026-02-24
SUSE SUSE-SU-2026:0603-1 SLE15 libxslt 2026-02-24
SUSE openSUSE-SU-2026:20260-1 oS16.0 mosquitto 2026-02-24
SUSE openSUSE-SU-2026:10214-1 TW mupdf 2026-02-18
SUSE SUSE-SU-2026:20436-1 SLE16 nodejs22 2026-02-18
SUSE SUSE-SU-2026:0581-1 SLE12 openCryptoki 2026-02-20
SUSE SUSE-SU-2026:20434-1 SLE16 openCryptoki 2026-02-18
SUSE openSUSE-SU-2026:0060-1 osB15 openQA, openQA-devel-container, os-autoinst 2026-02-24
SUSE SUSE-SU-2026:20422-1 SLE16 openjpeg2 2026-02-18
SUSE openSUSE-SU-2026:20261-1 oS16.0 openqa, os-autoinst, openqa-devel-container 2026-02-24
SUSE SUSE-SU-2026:0619-1 SLE12 openvswitch 2026-02-24
SUSE SUSE-SU-2026:20431-1 SLE16 patch 2026-02-18
SUSE openSUSE-SU-2026:0061-1 osB15 phpunit 2026-02-24
SUSE SUSE-SU-2026:0616-1 SLE12 postgresql14 2026-02-24
SUSE SUSE-SU-2026:0615-1 SLE12 postgresql15 2026-02-24
SUSE SUSE-SU-2026:0614-1 SLE12 postgresql16 2026-02-24
SUSE SUSE-SU-2026:0588-1 SLE15 postgresql16 2026-02-20
SUSE SUSE-SU-2026:0586-1 SLE15 postgresql17 2026-02-20
SUSE SUSE-SU-2026:0585-1 SLE12 postgresql18 2026-02-20
SUSE SUSE-SU-2026:0584-1 SLE15 postgresql18 2026-02-20
SUSE SUSE-SU-2026:0618-1 MP4.3 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 protobuf 2026-02-24
SUSE SUSE-SU-2026:0590-1 SLE15 oS15.6 python 2026-02-20
SUSE SUSE-SU-2026:20425-1 SLE16 python-aiohttp, python-Brotli 2026-02-18
SUSE openSUSE-SU-2026:0057-1 osB15 python-nltk 2026-02-19
SUSE openSUSE-SU-2026:0056-1 osB15 python-nltk 2026-02-19
SUSE SUSE-SU-2026:20423-1 SLE16 python-pip 2026-02-18
SUSE SUSE-SU-2026:0613-1 SLE15 oS15.4 oS15.6 python310 2026-02-24
SUSE openSUSE-SU-2026:10238-1 TW python311-PyPDF2 2026-02-24
SUSE openSUSE-SU-2026:10221-1 TW python311 2026-02-19
SUSE openSUSE-SU-2026:10216-1 TW python311-asgiref 2026-02-18
SUSE openSUSE-SU-2026:10226-1 TW python311-nltk 2026-02-20
SUSE openSUSE-SU-2026:10223-1 TW python313 2026-02-19
SUSE SUSE-SU-2026:0612-1 SLE12 python36 2026-02-24
SUSE openSUSE-SU-2026:10224-1 TW rclone 2026-02-19
SUSE SUSE-SU-2026:20426-1 SLE16 rust1.93 2026-02-18
SUSE SUSE-SU-2026:0620-1 SLE15 snpguest 2026-02-25
SUSE SUSE-SU-2026:0582-1 SLE15 oS15.6 snpguest 2026-02-20
SUSE openSUSE-SU-2026:10218-1 TW thunderbird 2026-02-19
SUSE openSUSE-SU-2026:10217-1 TW traefik 2026-02-18
SUSE openSUSE-SU-2026:10229-1 TW ucode-intel-20260210 2026-02-20
SUSE SUSE-SU-2026:0592-1 oS15.6 vexctl 2026-02-20
SUSE openSUSE-SU-2026:10239-1 TW warewulf4 2026-02-24
SUSE openSUSE-SU-2026:10240-1 TW weblate 2026-02-24
SUSE SUSE-SU-2026:0589-1 SLE15 xen 2026-02-20
Ubuntu USN-8062-1 22.04 24.04 25.10 curl 2026-02-25
Ubuntu USN-8054-1 16.04 18.04 20.04 22.04 24.04 djvulibre 2026-02-23
Ubuntu USN-8055-1 22.04 24.04 25.10 evolution-data-server 2026-02-23
Ubuntu USN-8057-1 16.04 18.04 20.04 22.04 24.04 gimp 2026-02-23
Ubuntu USN-7992-2 16.04 18.04 20.04 inetutils 2026-02-18
Ubuntu USN-8061-1 14.04 kernel 2026-02-24
Ubuntu USN-8051-2 16.04 18.04 20.04 libssh 2026-02-23
Ubuntu USN-8051-1 22.04 24.04 25.10 libssh 2026-02-18
Ubuntu USN-8053-1 22.04 24.04 25.10 libvpx 2026-02-19
Ubuntu USN-8060-1 20.04 22.04 linux, linux-gcp, linux-gke, linux-gkeop, linux-intel-iotg, linux-intel-iotg-5.15, linux-kvm, linux-lowlatency, linux-lowlatency-hwe-5.15, linux-nvidia-tegra, linux-oracle, linux-xilinx-zynqmp 2026-02-24
Ubuntu USN-8059-1 22.04 24.04 linux, linux-gkeop, linux-hwe-6.8, linux-lowlatency, linux-lowlatency-hwe-6.8, linux-oracle, linux-raspi 2026-02-24
Ubuntu USN-7990-5 18.04 20.04 linux-azure, linux-azure-5.4, linux-azure-fips 2026-02-19
Ubuntu USN-8029-3 25.10 linux-azure 2026-02-24
Ubuntu USN-8059-5 24.04 linux-fips, linux-gcp-fips 2026-02-25
Ubuntu USN-8060-4 22.04 linux-fips 2026-02-25
Ubuntu USN-8059-3 22.04 24.04 linux-gcp, linux-gcp-6.8, linux-gke, linux-oracle-6.8 2026-02-25
Ubuntu USN-8031-3 24.04 linux-gcp, linux-gke 2026-02-19
Ubuntu USN-8060-3 22.04 linux-gcp-fips 2026-02-24
Ubuntu USN-8028-6 22.04 linux-hwe-6.8, linux-lowlatency-hwe-6.8 2026-02-19
Ubuntu USN-8028-8 22.04 24.04 linux-ibm, linux-ibm-6.8 2026-02-24
Ubuntu USN-8060-2 22.04 linux-intel-iot-realtime, linux-realtime 2026-02-24
Ubuntu USN-8033-8 22.04 linux-intel-iotg 2026-02-19
Ubuntu USN-8033-7 20.04 22.04 linux-intel-iotg-5.15, linux-xilinx-zynqmp 2026-02-19
Ubuntu USN-8015-5 24.04 linux-lowlatency, linux-xilinx 2026-02-20
Ubuntu USN-8052-1 24.04 linux-lowlatency 2026-02-19
Ubuntu USN-8028-7 24.04 linux-nvidia-lowlatency 2026-02-19
Ubuntu USN-8059-2 24.04 linux-raspi-realtime 2026-02-24
Ubuntu USN-8059-4 22.04 24.04 linux-realtime, linux-realtime-6.8 2026-02-25
Ubuntu USN-8052-2 24.04 linux-xilinx 2026-02-24
Ubuntu USN-8050-1 22.04 24.04 25.10 trafficserver 2026-02-18
Ubuntu USN-8056-1 22.04 24.04 u-boot 2026-02-23
Full Story (comments: none)

Kernel patches of interest

Kernel releases

Linus Torvalds Linux 7.0-rc1 Feb 22
Greg Kroah-Hartman Linux 6.19.3 Feb 19
Greg Kroah-Hartman Linux 6.18.13 Feb 19
Greg Kroah-Hartman Linux 6.12.74 Feb 19
Greg Kroah-Hartman Linux 6.6.127 Feb 19
Clark Williams 6.6.127-rt69 Feb 20
Greg Kroah-Hartman Linux 6.1.164 Feb 19
Clark Williams 6.1.164-rt60 Feb 24
Greg Kroah-Hartman Linux 5.15.201 Feb 19
Joseph Salisbury 5.15.201-rt92 Feb 20
Greg Kroah-Hartman Linux 5.10.251 Feb 19
Luis Claudio R. Goncalves 5.10.250-rt145 Feb 19

Architecture-specific

Core kernel

Development tools

Device drivers

Amit Sunil Dhamne via B4 Relay Introduce MAX77759 charger driver Feb 18
Abhinaba Rakshit Enable ICE clock scaling Feb 19
Pavitrakumar Managutte crypto: spacc - Add SPAcc Crypto Driver Feb 19
Ian Ray hwmon: INA234 Feb 19
Derek J. Clark HID: Add Legion Go and Go S Drivers Feb 20
Alexis Czezar Torreno Add support for AD5706R DAC Feb 20
Rodrigo Alencar via B4 Relay AD9910 Direct Digital Synthesizer Feb 20
Caleb Sander Mateos nvme: set discard_granularity from NPDG/NPDA Feb 20
Manikandan Muralidharan Add LVDS display support and PLL handling Feb 23
Larysa Zaremba libeth and full XDP for ixgbevf Feb 23
Ankit Agrawal Add virtualization support for EGM Feb 23
Maciek Machnikowski Implement PTP support in netdevsim Feb 22
Marcelo Schmitt Add SPI offload support to AD4030 Feb 23
Luca Leonardo Scorcia Add support for mt6392 PMIC Feb 23
Manivannan Sadhasivam via B4 Relay Add support for handling PCIe M.2 Key E connectors in devicetree Feb 24
Ratheesh Kannoth NPC HW block support for cn20k Feb 24
Yemike Abhilash Chandra Add support for DS90UB954-Q1 Feb 24
Vishnu Saini Add lontium lt8713sx bridge driver Feb 24
Leon Romanovsky Add support for TLP emulation Feb 25
Russell King (Oracle) net: stmmac: improve PCS support Feb 25
Chris Morgan Add Invensense ICM42607 Feb 24

Device-driver infrastructure

Documentation

Filesystems and block layer

Memory management

Networking

Security-related

Eric Biggers AES-CMAC library Feb 18
Christian Brauner bpf: add a few hooks for sandboxing Feb 20

Virtualization and containers

Miscellaneous

Page editor: Joe Brockmeier


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds