
LWN.net Weekly Edition for February 26, 2026

Welcome to the LWN.net Weekly Edition for February 26, 2026

This edition contains the following feature content:

  • As ye clone(), so shall ye AUTOREAP: two proposed clone3() flags for automatic reaping and kill-on-close semantics.
  • Open-source Discord alternatives: a survey of open-source chat platforms for projects considering a move away from Discord.
  • Modernizing swapping: virtual swap spaces: decoupling swapped-out pages from specific swap devices.
  • No hardware memory isolation for BPF programs: a proposal to protect kernel memory from BPF programs with memory protection keys.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

As ye clone(), so shall ye AUTOREAP

By Jonathan Corbet
February 24, 2026
The facilities provided by the kernel for the management of processes have evolved considerably in the last few years, driven mostly by the advent of the pidfd API. A pidfd is a file descriptor that refers to a process; unlike a process ID, a pidfd is an unambiguous handle for a process; that makes it a safer, more deterministic way of operating on processes. Christian Brauner, who has driven much of the pidfd-related work, is proposing two new flags for the clone3() system call, one of which changes the kernel's security model in a somewhat controversial way.

The existing CLONE_PIDFD flag was added (by Brauner) for the 5.2 kernel release; it causes clone3() to create and return a pidfd for the newly created process (or thread). That gives the parent process a handle on its child from the outset. This pidfd can be used to, among other things, detect when the child has exited and obtain its exit status.

The classic Unix way of obtaining exit-status information, though, is with one of the wait() family of system calls. When a process exits, it will go into the "zombie" state until its parent calls a form of wait() to reap the zombie's exit information and clean up the process entry itself. If a parent creates a lot of children and fails to reap them, it will fill the system with zombie processes — a classic beginner's process-management mistake. Even if the parent does not care about what happens to its children after they are created, it normally needs to carefully clean up after those children when they exit.

There are ways of getting out of that duty. The kernel will send a SIGCHLD signal to the parent of an exited process, but that signal is normally ignored. If the parent explicitly sets that signal to SIG_IGN, though, its children will be automatically cleaned up without bothering the parent. This setting applies to all children, though; if a parent does care about some of them, it cannot use this approach. Among other things, that means that library code that creates child processes cannot make the cleanup problem go away by setting SIGCHLD to SIG_IGN without creating problems for the rest of the program.

One of the new flags that Brauner is proposing is CLONE_AUTOREAP, which would cause the newly created process to be automatically cleaned up once it exits. There would be no need (or ability) to reap the child with wait(), and no SIGCHLD signal would be sent. The flag applies only to the just-created process, so it does not affect process management elsewhere in the program. It also is not inherited by grandchild processes. If the child is created with CLONE_PIDFD as well, the resulting pidfd can be used to obtain the exit status without the need to explicitly reap the child. Otherwise, that status will be unavailable; as Brauner noted, this mode "provides a fire-and-forget pattern".

The CLONE_AUTOREAP flag seems useful and uncontroversial. The other proposed flag, CLONE_PIDFD_AUTOKILL, may also be useful, but it has raised some eyebrows. This flag ties the life of the new process to the pidfd that represents it; if that pidfd is closed, the child will be immediately and unceremoniously killed with a SIGKILL signal. The use of CLONE_PIDFD is required with CLONE_PIDFD_AUTOKILL; otherwise, the pidfd that keeps the child alive would never exist in the first place. CLONE_PIDFD_AUTOKILL also requires CLONE_AUTOREAP, since the kill is likely to happen when the parent process exits and can no longer clean up after its children.

The intended use case for this feature is "container runtimes, service managers, and sandboxed subprocess execution - any scenario where the child must die if the parent crashes or abandons the pidfd". In other words, this flag is meant to be used in cases where processes are created under the supervision of other processes, and they should not be allowed to keep on running if the supervisor goes away.

Linus Torvalds quickly pointed out a potential problem after seeing version 3 of the patch series: this feature would allow the killing of setuid processes that the creator would otherwise not have the ability to affect. A suitably timed kill might interrupt a privileged program in the middle of a sensitive operation, leaving the system in an inconsistent (and perhaps exploitable) state. "If I'm right", Torvalds said, "this is completely broken. Please explain." Brauner responded that he was aware that he was proposing a significant change to how the kernel's security model works, but was insistent that this capability is needed; he wanted a discussion on how it could be safely implemented:

My ideal model for kill-on-close is to just ruthlessly enforce that the kernel murders anything once the file is released. I would value input under what circumstances we could make this work without having the kernel magically unset it under magical circumstances that are completely opaque to userspace.

Jann Horn questioned whether this change was actually problematic. There are various ways to kill setuid processes now, he said, including the use of SIGHUP signals or certain systemd configurations. "I agree that this would be a change to the security model, but I'm not sure if it would be that big a change." He suggested that, if this change is worrisome, it could be restricted to situations where the "no new privileges" flag has been set; that is how access to seccomp() is regulated now. Brauner indicated that this option might be workable, and the current version of the series (version 4 as of this writing) automatically sets "no new privileges" in the newly created child when CLONE_PIDFD_AUTOKILL is used.

Meanwhile, Ted Ts'o suggested that, rather than killing the child with SIGKILL, the kernel could instead send a sequence of signals, starting with SIGHUP, then going to SIGTERM, with a delay between them. That would give a setuid program the ability to catch the signal and clean up properly before exiting. Only if the target process doesn't take the hint would SIGKILL be sent. Even then, he was not convinced that this feature could be made secure: "I bet we'll still see some zero days coming out of this, but we can at least mitigate likelihood of security breach."

Brauner did not respond to that last suggestion, and the conversation wound down, relatively quickly, without any definitive conclusions having been reached. There is clearly a bit of tension between the need for supervisors to have complete control over the applications they manage and the need to keep existing security boundaries intact. Whether that tension is enough to keep relatively new process-management approaches from being implemented remains to be seen.

Comments (8 posted)

Open-source Discord alternatives

By Daroc Alden
February 20, 2026

The closed-source chat platform Discord announced on February 9 that it would soon require some users to verify their ages in order to access some content — although the company quickly added that the "vast majority" of users would not have to. That reassurance has to contend with the fact that the UK and other countries are implementing increasingly strict age requirements for social media. Discord's age verification would be done with an AI age-judging model or with a government photo ID. A surprising number of open-source projects use Discord for support or project communications, and some of those projects are now looking for open-source alternatives. Mastodon, for example, has moved discussion to Zulip. There are some alternatives out there, all with their own pros and cons, that communities may want to consider if they want to switch away from Discord.

Matrix

Centralized, federated, decentralized

Chat applications generally fall into three distinct groups: centralized services (where one server manages all communication), federated services (where there are multiple servers hosting different users that can all talk to one another), and decentralized services (where clients talk to each other directly without servers). In a centralized system, all usernames live in the same namespace. In a federated system, each server forms a separate namespace, so users have to pick a specific server to join. Decentralized naming schemes are more complicated — and also out of this article's scope, since none of the options listed here are decentralized.

The Matrix project started in 2014; it is an attempt to build a federated messaging and synchronization protocol that gives people full control over their own communication. The project's manifesto says: "The ability to converse securely and privately is a basic human right." The project has been welcoming former Discord users, after seeing a huge spike of sign-ups to the matrix.org "homeserver" — the server that the Matrix.org Foundation runs as a way for people to easily start chatting on the network.

Many users do eventually move to another homeserver, self-hosted or otherwise; there are many independent server implementations, fitting a variety of different needs. Seven are listed as actively maintained, with eleven more that have been abandoned for one reason or another. There are even more client applications, including a handful that act as web front-ends for people who don't wish to download and install another application.

This variety of implementations is both a blessing and a curse for Matrix. On the one hand, it is a clear sign of a vibrant community and a diverse software ecosystem. On the other hand, many of the clients are missing specific features, meaning that users will need to shop around a little to find a client that fits their needs. Generally, all clients support joining rooms (analogous to Discord channels) and doing basic chatting. The way the Matrix protocol is structured also means that they universally support synchronized use of the same account on multiple devices, even across clients. When a new client signs into an account, there is a short authentication dance involving the previous clients to ensure that it is authorized. (Or a recovery procedure, for those who have lost their previous device.) Some security researchers have raised concerns about the cryptographic implementation in existing libraries (although the Matrix Foundation has published a rebuttal), so that is another factor that varies across implementations.

Most clients support organizing rooms into "spaces" (groups of related rooms that you can invite other people to, analogous to Discord "servers" or "guilds"), adding and viewing emoji reactions, and splitting side conversations into separate "threads". Custom emoji reactions are also well supported. Having multiple accounts signed in on the same client, on the other hand, is not well-supported; the advice seems to be to run multiple separate clients if one wants to keep work and personal accounts separate, for example.

On the protocol level, Matrix does a good job of abstracting over the federated nature of the network. Rooms and spaces can be transparently shared with users on other homeservers, spaces can include rooms from multiple servers, and so on. One can even create a room on one server, have a user on another server join, and then have the first server leave the room, passing off the responsibility for operating the room to the new server. Individual servers going down, therefore, has little impact on the network as a whole.

At a lower level, Matrix communication is sent over HTTP or HTTPS. This adds some overhead to the protocol compared to operating directly over a TCP socket, but it makes it easy to run a Matrix server from a residential connection and put an HTTPS proxy in front of it. The protocol also supports using DNS SRV records to separate the domain name of a homeserver from its actual hosting location.

Overall, Matrix is a reasonable alternative, especially for users who care about seamless federation. Server software is relatively easy to set up for projects wishing to self-host. The main downside is that some of the software involved is a little unpolished; of the clients I tried, several of them had small bugs or user-interface glitches. Element is probably the most polished, but its support for threads wasn't as nice as some of the other clients.

Jabber

Jabber, also known as XMPP, is a federated open standard for online communication and messaging. It's considerably older than Matrix, dating back to 1999. Despite that, it supports a number of modern chat features through protocol extensions, such as emoji reactions, typing notifications, and so on. Its long history also means that it has a large number of client and server implementations. Those clients run the full range from simple proofs of concept to well-designed applications, and from normal desktop and mobile applications to special-purpose tools. The libpurple C library has a generic XMPP implementation that makes it possible to write personal tools that use the protocol. Each server has its own ways to register, but the project maintains a list of public instances for people to try.

Jabber was historically designed for one-on-one chats, with multi-user chats (MUCs) being a later addition to the protocol. Consequently, it lacks the server/channel/thread hierarchical organization that Discord has. On the other hand, the protocol has stood the test of time, and seems unlikely to be going anywhere. For its core use case of chatting with friends, it is well-polished and usable. Jabber clients also tend to have good support for multi-account usage, and there are a number of bridges that can facilitate communication between Jabber accounts and other chat services. Some Jabber clients, such as Pidgin, even have native support for other protocols as well.

Jabber's federation system lets users from different servers chat seamlessly with one another, but it doesn't make migrating MUCs around quite as transparent as migrating Matrix rooms: MUCs are created on a specific server, and rely on that server staying up to remain usable. That isn't normally too much of a problem, however; if one's Jabber server goes down, one can't participate in chats anyway.

On a protocol level, Jabber does its own communication over TCP, so one needs a handful of externally accessible ports to self-host a server. That is just about the only requirement, however. Projects like Snikket or Openfire provide self-contained Jabber servers requiring minimal setup. For people who just want to chat with their friends with minimal hassle, using mature tools that work reliably, Jabber is a good choice. Projects that want to make it easy for new contributors to join their discussions might find Jabber's flat organization of standalone group chats less helpful, however.

Zulip

Zulip is a centralized chat platform that focuses on making it easy to follow multiple threads of conversation. Zulip is primarily developed and maintained by a company called Kandra Labs, which offers paid hosting for people who want a solution where someone else does the relevant server administration. The software itself is completely open source under an Apache 2.0 license.

Since Zulip is centralized, users on one server cannot easily and directly talk to users on another server. The project does expose an API for bridging software to use, and specifically recommends matterbridge as a way to connect Zulip channels to chats on other protocols.

Zulip organizes channels on a given server into folders, and then supports "conversations" as a subdivision of an individual channel. The user interface, which comes in both web and desktop versions, sports a combined activity feed that makes it easy to see what has changed in the conversations that one has actively participated in. That makes it handy for large projects that have a number of discussions going on at the same time. Zulip boasts a large number of moderation and organization features as well.

Self-hosting is fairly straightforward, although the installer assumes that Zulip is the only software running on the server and will freely install and configure additional dependencies, so it's best to install in a container or virtual machine. The requirements are also a bit heftier than those of Matrix or Jabber. Zulip can be hosted behind an HTTPS proxy, although using the optional email integration requires access to incoming port 25.

The one difference between the self-hosted version and the paid version, which are otherwise completely feature equivalent, is that the self-hosted version is limited in the number of mobile devices it can push notifications to. Open-source projects, academic researchers, and non-profits can have that limit raised for free. The limitation comes from the fact that Google and Apple require a single server to send all push notifications to the Zulip mobile apps, so self-hosted installations have to relay them via the central Zulip server.

The desktop and mobile clients support signing into multiple Zulip servers, but the web interface requires the use of separate tabs for separate servers. The interface is generally clean and usable, which matters because there are no alternative clients to fall back on. Overall, Zulip would be a good choice for larger projects that want a modern, polished experience and care less about federation. Even though Zulip uses a centralized architecture, its open-source nature and ability to export history significantly reduce the risk of lock-in.

Stoat

Stoat chat is another centralized, self-hostable chat platform. It has a smaller development community than Zulip, but it has an interface and structure that more resembles Discord. It also has a single main server, stoat.chat, with little interest so far in any kind of federation or interoperability. The project's documentation doesn't completely rule out the possibility, however — it just indicates that it's not on the project roadmap for now.

Users who want a Discord-like experience, including having a single central server, but who want the possibility of self-hosting in the future, might like Stoat. It is licensed under the AGPL v3.0, and hosting costs are currently covered by donations. In the future, the project may look into charging on a usage basis to defray costs, but it promises not to charge for premium features the way Discord does. Stoat's lack of import/export support and interoperability makes it a poorer choice for projects that don't want to put all of their eggs in one basket, however.

Spacebar

Ultimately, people often don't want to stop using the software that works for them. For people who really miss Discord and want the exact same experience — down to the level of API compatibility — there is Spacebar. This project is an attempt to reverse-engineer Discord and create a self-hostable, AGPL-licensed version. The project was started in 2020, and the server software is mostly working. The client implementation, however, is still new and experimental; the project doesn't recommend using it yet, and bugs should be expected.

The server software is designed to implement the exact same API as Discord itself does — although, since the Discord client is closed-source, it can't be pointed at a Spacebar server. That identical API means that bots and integrations written for Discord can be pointed at a Spacebar server, however, and theoretically run without ever knowing that they've been redirected.

Spacebar is under active development, with a healthy development community. People who just want "Discord, but open-source" and are willing to pitch in, or to wait for Spacebar's client implementation to catch up, might find Spacebar the simplest replacement.

Conclusion

This list is by no means exhaustive. There are so many other open-source technologies and protocols with overlapping use cases that a complete list would be impossible to construct. But these alternatives seem like the completely open-source applications best positioned, at the moment, to replace Discord for the things that people use it for. For everyone who doesn't want or need a Discord-like "modern" chat experience — IRC will always be an option.

Comments (68 posted)

Modernizing swapping: virtual swap spaces

By Jonathan Corbet
February 19, 2026
The kernel's unloved but performance-critical swapping subsystem has been undergoing multiple rounds of improvement in recent times. Recent articles have described the addition of the swap table as a new way of representing the state of the swap cache, and the removal of the swap map as the way of tracking swap space. Work in this area is not done, though; this series from Nhat Pham addresses a number of swap-related problems by replacing the new swap table structures with a single, virtual swap space.

The problem with swap entries

As a reminder, a "swap entry" identifies a slot on a swap device that can be used to hold a page of data. It is a 64-bit value split into two fields: the device index (called the "type" within the code), and an offset within the device. When an anonymous page is pushed out to a swap device, the associated swap entry is stored into all page-table entries referring to that page. Using that entry, the kernel can quickly locate a swapped-out page when that page needs to be faulted back into RAM.

The "swap table" is, in truth, a set of tables, one for each swap device in the system. The transition to swap tables has simplified the kernel considerably, but the current design of swap entries and swap tables ties swapped-out pages firmly to a specific device. That creates some pain for system administrators and designers.

As a simple example, consider the removal of a swap device. Clearly, before the device can be removed, all pages of data stored on that device must be faulted back into RAM; there is no getting around that. But there is the additional problem of the page-table entries pointing to a swap slot that no longer exists once the device is gone. To resolve that problem, the kernel must, at removal time, scan through all of the anonymous page-table entries in the system and update them to the page's new location. That is not a fast process.

This design also, as Pham describes, creates trouble for users of the zswap subsystem. Zswap works by intercepting pages during the swap-out process and, rather than writing them to disk, compresses them and stores the result back into memory. It is well integrated with the rest of the swapping subsystem, and can be an effective way of extending memory capacity on a system. When the in-memory space fills, zswap is able to push pages out to the backing device.

The problem is that the kernel must be able to swap those pages back in quickly, regardless of whether they are still in zswap or have been pushed to slower storage. For this reason, zswap hides behind the index of the backing device; the same swap entry is used whether the page is in RAM or on the backing device. For this trick to work, though, the slot in the backing device must be allocated at the beginning, when a page is first put into zswap. So every zswap usage must include space on a backing device, even if the intent is to never actually store pages on disk. That leads to a lot of wasted storage space and makes zswap difficult or impossible to use on systems where that space is not available to waste.

Virtual swap spaces

The solution that Pham proposes, as is so often the case in this field, is to add another layer of indirection. That means the replacement of the per-device swap tables with a single swap table that is independent of the underlying device. When a page is added to the swap cache, an entry from this table is allocated for it; a swap entry is now just a single integer offset into that table. The table itself is an array of swp_desc structures:

    struct swp_desc {
        union {
            swp_slot_t         slot;
            struct zswap_entry * zswap_entry;
        };
        union {
            struct folio *     swap_cache;
            void *             shadow;
        };
        unsigned int               swap_count;
        unsigned short             memcgid:16;
        bool                       in_swapcache:1;
        enum swap_type             type:2;
    };

The first union tells the system where to find a swapped-out page; it either points to a device-specific swap slot or an entry in the zswap cache. It is the mapping between the virtual swap slot and a real location. The second union contains either the location of the page in RAM (or, more precisely, its folio) or the shadow information used by the memory-management subsystem to track how quickly pages are faulted back in. The swap_count field tracks how many page-table entries refer to this swap slot, while in_swapcache is set when a page is assigned to the slot. The control group (if any) managing this allocation is noted in memcgid.

The type field tells the kernel what type of mapping is currently represented by this swap slot. If it is VSWAP_SWAPFILE, the virtual slot maps to a physical slot (identified by the slot field) on a swap device. If, instead, it is VSWAP_ZERO, it represents a swapped-out page that was filled with zeroes that need not be stored anywhere. VSWAP_ZSWAP identifies a slot in the zswap subsystem (pointed to by zswap_entry), and VSWAP_FOLIO is for a page (indicated by swap_cache) that is currently resident in RAM.

The big advantage of this arrangement is that a page can move easily from one swap device to another. A zswap page can be pushed out to a storage device, for example, and all that needs to change is a pair of fields in the swp_desc structure. The slot in that storage device need not be assigned until a decision to push the page out is made; if a given page is never pushed out, it will not need a slot in the storage device at all. If a swap device is removed, a bunch of swp_desc entries will need to be changed, but there will be no need to go scanning through page tables, since the virtual swap slots will be the same.

The cost comes in the form of increased memory usage and complexity. The swap table is one 64-bit word per swap entry; the swp_desc structure triples that size. Pham points out that the added memory overhead is less than it seems, since this structure holds other information that is stored elsewhere in current kernels. Still, it is a significant increase in memory usage in a subsystem whose purpose is to make memory available for other uses. This code also shows performance regressions on various benchmarks, though those have improved considerably from previous versions of the patch set.

Still, while the value of this work is evident, it is not yet obvious that it can clear the bar for merging. Kairui Song, who has done the bulk of the swap-related work described in the previous articles, has expressed concerns about the memory overhead and how the system performs under pressure. Chris Li also worries about the overhead and said that the series is too focused on improving zswap at the expense of other swap methods. So it seems likely that this work will need to see a number of rounds of further development to reach a point where it is more widely considered acceptable.

Postscript: swap tiers

There is a separate project that appears to be entirely independent from the implementation of the virtual swap space, but which might combine well with it: the swap tiers patch set from Youngjun Park. In short, this series allows administrators to configure multiple swap devices into tiers; high-performance devices would go into one tier, while slower devices would go into another. The kernel will prefer to swap to the faster tiers when space is available. There is a set of control-group hooks to allow the administrator to control which tiers any given group of processes is allowed to use, so latency-sensitive (or higher-paying) workloads could be given exclusive access to the faster swap devices.

A virtual swap table would clearly complement this arrangement. Zswap is already a special case of tiered swapping; Park's infrastructure would make it more general. Movement of pages between tiers would become relatively easy, allowing cold data to be pushed in the direction of slower storage. So it would not be surprising to see this patch series and the virtual swap space eventually become tied together in some way, assuming that both sets of patches continue to advance.

In general, the kernel's swapping subsystem has recently seen more attention than it has received in years. There is clearly interest in improving the performance and flexibility of swapping while making the code more maintainable in the long run. The days when developers feared to tread in this part of the memory-management subsystem appear to have passed.

Comments (24 posted)

No hardware memory isolation for BPF programs

By Daroc Alden
February 25, 2026

On February 12, Yeoreum Yun posted a suggestion for an improvement to the security of the kernel's BPF implementation: use memory protection keys to prevent unauthorized access to memory by BPF programs. Yun wanted to put the topic on the list for discussion at the Linux Storage, Filesystem, Memory Management, and BPF Summit in May, but the lack of engagement makes that unlikely. They also have a patch set implementing some of the proposed changes, but have not yet shared it with the mailing list. Yun's proposal does not seem likely to be accepted in its current form, but the kernel has added hardware-based hardening options in the past, sometimes after substantial discussion.

When a modern CPU needs to turn a virtual address into a physical address, it does so by consulting a page table. This table also dictates whether the memory in question is readable, writable, executable, accessible by user space, etc. Page tables have a multi-level structure, requiring several pointer indirections to find the actual entry for a page of memory. To avoid the overhead of following these indirections on every memory access, the CPU keeps a cache of recently accessed entries called the translation lookaside buffer (TLB).

When the kernel wants to change the access permissions of a given area of memory, it needs to update the page table and then flush the TLB (causing an inevitable performance hit as it refills). Worse, if the area of memory is large, it may need to update many page-table entries. Even just keeping track of which page-table entries need to be updated and iterating through them all can be a time-consuming operation. Memory protection keys are a hardware feature that helps avoid the overhead of changing large sections of the page table, making it practical to change the permissions of memory as part of a routine operation.

Memory protection keys use four bits in each page-table entry to associate the page with one of sixteen keys; a special CPU register associates each key with read and write permissions. This avoids the need to actually change individual page-table entries: just change the permission bits for the corresponding key and the memory becomes inaccessible, without the need to walk the page table or flush the TLB.

The kernel has had support for memory protection keys since 2016, but that support has only extended to user space. The related system calls allow user-space applications to use memory protection keys to implement a faster version of mprotect(). There is no reason, in theory, that memory protection keys couldn't be used within the kernel as well. In practice, there have been a number of attempts to integrate them into the kernel that have not come to fruition.

Yun suggested adding a new set of kmalloc_pkey() and vmalloc_pkey() functions to allocate kernel objects in parts of memory protected by a key. That would make it practical to give BPF programs access to only a subset of kernel memory. Specifically, memory that is owned by BPF programs, or that a subsystem specifically intends to share with a BPF program, could be allocated with a separate memory protection key from other kernel allocations. Then, when entering BPF code, access to general kernel memory could be swiftly disabled. Yun's message described how this would work in some depth, but did not include any of the actual code to implement it — although they intend to share their work-in-progress code soon — so it's hard to judge how invasive a change this would be.

Dave Hansen thought that plan might be feasible for subsystems such as the scheduler that have a relatively limited amount of writable data, but that other areas of the kernel would have more problems:

Networking isn't my strong suit, but packet memory seems rather dynamically allocated and also needs to be written to by eBPF programs. I suspect anything that slows packet allocation down by even a few cycles is a non-starter.

IMNHO, any approach to solving this problem that starts with: we just need a new allocator or modification to existing kernel allocators to track a new memory type makes it a dead end. Or, best case, a very surgical, targeted solution.

Alexei Starovoitov, on the other hand, did not just think the suggestion would be difficult to pull off, but also pointless. Yun had listed several CVEs from between 2020 and 2023 as a way of showing that the BPF verifier alone was not enough to ensure security. Starovoitov disagreed: "None of them are security issues. They're just bugs."

Yun agreed that they were bugs, but disagreed that they have no security implications. Yun is of the opinion that the existence of bugs in the BPF verifier that have led to memory corruption in the past is enough justification to put another barrier between running a BPF program and having an exploitable vulnerability.

Considering the previous unsuccessful attempts to use memory protection keys in the kernel and the difficulty of implementation, Yun's proposal to introduce memory protection keys to the BPF subsystem seems unlikely to go anywhere. On the other hand, the kernel has slowly been adding hardening measures such as per-call-site slab caches — perhaps associating these caches with memory protection keys is a logical next step. Only time will tell whether memory protection keys are a useful addition to the kernel's various hardening tools, or whether they're a cumbersome distraction. Either way, they will likely have to make their way into the kernel via some other subsystem.

Comments (7 posted)

Lessons on attracting new contributors from 30 years of PostgreSQL

By Joe Brockmeier
February 23, 2026

FOSDEM

The PostgreSQL project has been chugging along for decades; in that time, it has become a thriving open-source project, and its participants have learned a thing or two about what works in attracting new contributors. At FOSDEM 2026, PostgreSQL contributor Claire Giordano shared some of the lessons learned and where the project is still struggling. The lessons might be of interest to others who are thinking about how their own projects can evolve.

Giordano said that her first open-source project was while working at Sun Microsystems; she worked on the effort to release Solaris as open-source software, "which was quite a wild ride". From there, she joined Citus Data, which provided an extension for PostgreSQL to add distributed-database features; that company was acquired by Microsoft in 2019. She said she now heads Microsoft's open-source community initiatives around PostgreSQL and that the talk is based on her upstream work with that project; as one might expect, though, she noted that the talk would represent her views and not those of her employer.

PostgreSQL: the next generation

[Claire Giordano]

The challenge that she wanted to talk about, "which might keep some of you awake at night", is how to attract the next generation of open-source developers. If a project is successful enough to stick around, how does it ensure there are people to keep it afloat when the older crop of contributors starts to retire? When Giordano refers to contributors, she means all of them—not only those who contribute code. Even though she started off her career as a developer many years ago, "I have never committed or contributed a single lick of code to this project", but she works on PostgreSQL in other ways.

PostgreSQL is one of the projects that has, indeed, been successful and long-lived enough that some of its contributors may be pondering retirement soon. Giordano noted that PostgreSQL, as a technology, had been around for about 40 years. It has been an open-source project for nearly 30 years: its first open-source release was on July 8, 1996. Since that release, the number of PostgreSQL committers has grown from five to 31 people. The project has many more contributors, but the committers are those who can actually push code directly to the project's Git repository.

There is a new major version of PostgreSQL released each year. Version 18 was released in September 2025; it contained more than 3,000 commits, nearly 4,000 files changed, and more than 410,000 lines of code changed by 279 people. During that cycle, 83 of the people were first-time contributors. Those statistics only consider code and documentation, however: Giordano estimated that there were more than 360 people who contributed "in some way, code or beyond code" in 2025.

Contributor funnel

PostgreSQL's contributor community is global. Giordano observed that two of the project's 31 committers were based in New Zealand, which meant that the country had more PostgreSQL committers per capita than any other country. But what types of contributors does the project have, and how do they become involved? She noted that the project does not formally track contributors, but she had created a "funnel", shown below, that visualized how a person might progress from casual contributor to major contributor or even committer.

[How do people get involved in Postgres (slide)]

Every contributor has their own motivations, and they may not be altruistic. She hearkened back to a FOSDEM 2020 talk by James Bottomley, "The Selfish Contributor Explained", which posited that many people contribute for selfish reasons. But, she said with a reference to the classic novel Anna Karenina, every contributor is selfish in their own way.

In addition to her day job, Giordano is also the host of a monthly podcast called "Talking Postgres", where she talks to developers who work on or with PostgreSQL. Through that she has learned that each contributor has a different "origin story"; some start out working on PostgreSQL simply because it interests them, others begin contributing because they need the database for other projects, and of course some contributors are hired to do the work. She said that it is especially interesting to talk to people who are new to PostgreSQL and to think about whether the project is growing in the right way.

The soul

To frame the next section of the talk, she had taken inspiration from a book by Tracy Kidder, The Soul of a New Machine. The book, published in 1981, is a nonfiction account of the development of the Data General Eclipse MV/8000 minicomputer. In reference to the book, Giordano said she had structured the next section of the talk into three buckets: the soul, the machine, and the fuel:

For the soul, we'll be looking at the human ecosystem, the people, the community, and the culture. For the machine it's more about developer workflows, the processes, the tooling that we use, and the governance. Then for the fuel, it's about the economics and how people get paid. [...]

Let's start with the soul, with the human bits, because at the end of the day, without people on these projects, none of it would exist.

She said she heard the phrase once, "come for the code, stay for the community". (The actual phrase, "came for the language, stayed for the community" was originally coined by Brett Cannon about Python.) She said that her first European PostgreSQL conference was notable because people made her feel like she really belonged, "which just, like, boggled my mind". The community has many meetups and conferences to build in-person, trust-based relationships that are important to the project.

Another thing that works for the community, Giordano said, is "just the joy of working with Postgres". She displayed a slide from Bruce Momjian's FOSDEM 2023 talk, "Building Open Source Teams", that described the fun of programming. Not only is programming fun, she said, but PostgreSQL in particular is fun to work on "because this database is so big and has so many different subsystems" that developers have choices about what kind of problems they work on. If a developer doesn't want to work on database backup, for example, they can work on other parts of the system. That's true for companies funding developers too; organizations can choose to fund the work that is important to their business.

The annual PGConf.dev developer's conference is also working well to feed the "soul" of the project, she said. It is now in its third year, and its organizers have been doing "a lot of experimentation with different formats and how to make it easier for new contributors to get involved with the project".

Another thing that is working well is PostgreSQL's recognized contributors policy, which is managed by the project's contributors committee. It's the committee's job to figure out who to recognize on the contributor profiles page, which lists the project's core team, major contributors, and significant contributors. "Most people like being recognized for good work. And it feels good. And you're more likely to do more of the work that feels good."

But the project struggles in other areas, such as its committer pipeline. Fast-forward five to seven years, Giordano said, and it would not be surprising if some of the project's current maintainers stepped back or retired. Someone had recently joked that the new rule is "you cannot retire until you have nominated and fully trained your successor". That just might work.

One question that comes up regularly is: "do we accidentally overlook people for promotion?" Is it really a lack of people in the pipeline, or is the project failing to recognize and advocate for people to be "promoted"? She suggested there may be a "desert" in the middle of the path from new contributor to committer. Major contributors who have been working on the project for a long time may not be getting the mentoring they need to move up to the committer level.

Welcome to the machine

With the soul covered, Giordano wanted to move on to the machine; "I'll start again with the positives because, you know, it's always good to start with the positives". One of the nice things about PostgreSQL is the predictable lifecycle that is easy to understand: "annual major releases, quarterly patch releases, and there's a CommitFest five times a year in July, September, November, January, and March". LWN covered the PostgreSQL CommitFest process in June 2024.

The CommitFest emphasizes regular review of patches that have been submitted to the project. Giordano cited PostgreSQL committer Tom Lane, who had said to her that patch review encourages people to read the code and not only the patch. That, in turn, helps contributors increase their knowledge of the entire system. "So it's not only improving the quality of the patches, but it's helping to skill up reviewers within the community." PostgreSQL's continuous-integration (CI) system has also seen "pretty dramatic improvements" in the past few years. She said she imagined it is much, much nicer now to get automated testing on patches rather than worrying about breaking the build farm—especially for drive-by contributors.

The skill and dedication of the maintainers, she said, is also working in favor of the project:

In fact, as I talk to more junior developers, they tell me that one of the big magnets of joining the project and working on Postgres is the chance to get to work with some of these pretty amazing, brilliant, talented luminaries. Now I know. Every project has brilliant people, and we don't corner the market on that. But if you're a database person and you care about Postgres, it's pretty cool.

But it's not all positive. While the patch-review process may be useful, the project struggles due to lack of bandwidth to review patches. She used an air-traffic-control analogy, describing patches as planes circling an airport. "There's a lot of planes up there, and not that many are landing". That was a bit of hyperbole, since there were more than 3,000 commits that made it into the last release, but she said that it can feel that way when an author of a patch is waiting on patch-review feedback. "If you've submitted it and you haven't heard anything, I can imagine that can be very frustrating". Likewise, people who have a lot of reviews to do may feel pressured. "So this is definitely, I think, an area where we're struggling".

One possible answer to the review problem is having component committers. Currently the project's committers have commit access to the entire project; it may be that the project should consider trusting some contributors with commit access to specific subsystems. It has been discussed, Giordano said, and no one has felt comfortable enough to do it—yet. "There's pros and cons, but it's going to keep coming up. And who knows? Maybe someday there will be an attempt to experiment with it".

Intimidation

Like a number of older open-source projects, PostgreSQL has a strong mailing-list culture. That has its positives, such as the project's invaluable mail archive and the fact that people tend to be more thoughtful in email replies, but it has its drawbacks as well. In particular, it is "a little bit intimidating" for new contributors to ask a simple question when they know it's going to be archived forever. Newer contributors are more familiar with, and more comfortable using, tools like Slack, Discord, and other text-messaging applications.

To solve this, she said that Robert Haas started a program in 2024 for mentoring new developers in a Discord channel (link to invite). "If you want to be mentored in Postgres, you should definitely reach out to Robert and apply for one of the next cohorts". There are monthly hacking workshops as well.

PostgreSQL is written in C, which has come up as another possible barrier to entry for new developers. She said that she didn't know if C was still a requirement for computer-science degrees, though it was required when she got her degree. In fact, PostgreSQL has its own flavor of C, which "I have heard from some newer developers is a bit intimidating". Add that to the fact that the project's getting-started information is fragmented and out of date, she said, and it can be a problem for new people.

The topic of AI slop overwhelming maintainers was almost omnipresent at FOSDEM. This presentation was no exception; Giordano said that PostgreSQL has been seeing an increase in slop contributions over the last year, and it is creating a burden for its maintainers. In turn, it makes them reluctant to look at contributions from new people "because they don't know if it's real or if it's just AI slop". The prevalence of AI slop may well burn out maintainers and clog the machine that keeps PostgreSQL running.

The fuel

PostgreSQL is hot right now, which helps to fuel the fire for PostgreSQL development. It is in use by plenty of organizations, developers love it, students are learning it, and there are many jobs related to it. "So this commercial gravity, if you will, is attracting people to learn the technology and work on the project." It also inspires corporate funding for contributors to work on the project, which means that not only are the committers paid to work on PostgreSQL full time, but many other people are paid to work on the project part-time.

Where the project is struggling, however, is in the realm of "committer paranoia". This is a topic that Haas spoke about at PGConf.dev (slides) in 2025. Essentially, there is a lot of stress around submitting patches and accepting them. Committers are responsible for the patches they commit, whether they authored them or not. This responsibility extends beyond the immediate need to make sure that the patch works; it includes fixing bugs and other problems years down the road. "If there is a problem, you're on the hook to get it fixed and get it fixed right away. Get it fixed yesterday."

She returned again to the theme of committers retiring; the project is talking about that problem, but it doesn't have it figured out yet.

tl;dr

So, Giordano asked as her time was nearly up, what are the takeaways from the talk? The good news is that "Postgres has already proven that we can evolve". The key thing is that the project has to keep evolving, and to embrace the needs of its future developers without compromising the needs of the present. That includes fixing the project's patch-review throughput and running experiments to make sure that the project discourages fewer people and, instead, gets them more engaged. "And hopefully then we can also avoid burnout amongst our own people".

She followed up by inviting people, if they had the skills (or were willing to build them) described during her talk, to consider becoming a PostgreSQL hacker. She closed with a slide thanking a number of individuals and all the hackers at the FOSDEM 2026 PostgreSQL developer meeting for their inspiration and reviews of her talk.

[Thanks to the Linux Foundation, LWN's travel sponsor, for funding my travel to Brussels to attend FOSDEM.]

Comments (1 posted)

The second half of the 7.0 merge window

By Daroc Alden
February 23, 2026

The 7.0 merge window closed on February 22 with 11,588 non-merge commits total, 3,893 of which came in after the article covering the first half of the merge window. The changes in the second half were weighted toward bug fixes over new features, which is usual. There were still a handful of surprises, however, including 89 separate tiny code-cleanup changes from different people for the rtl8723bs driver, a number that surprised Greg Kroah-Hartman. It's unusual for a WiFi-chip driver to receive that much attention, especially a staging driver that is not yet ready for general use.

The most important changes included in this release were:

Architecture-specific

Core kernel

Filesystems and block I/O

Hardware support

  • Clock: Qualcomm Kaanapali clock controllers, Qualcomm SM8750 camera clock controllers, Qualcomm MSM8940 and SDM439 global clock controllers, Google GS101 DPU clock controllers, SpacemiT K3 clock controllers, Amlogic t7 clock controllers, Aspeed AST2700 clock controllers, and Loongson-2K0300 real-time clocks.
  • GPIO and pin control: Spacemit K3 pin controllers, Atmel AT91 PIO4 SAMA7D65 pin controllers, Exynos9610 pin controllers, Qualcomm Mahua TLMM pin controllers, Microchip Polarfire MSSIO pin controllers, and Ocelot LAN965XF pin controllers.
  • Graphics: Mediatek MT8188 HDMI PHYs, Mediatek Dimensity 6300 and 9200 DMA controllers, Qualcomm Kaanapali and Glymur GPI DMA engines, Synopsys DW AXI Agilex5 DMA devices, Atmel microchip lan9691-dma devices, and Tegra ADMA tegra264 devices.
  • Industrial I/O: AD18113 amplifiers, AD4060 and AD4052 analog-to-digital converters (ADCs), AD4134 24-bit 4-channel simultaneous sampling ADCs, ADAQ767-1 ADCs, ADAQ7768-1 ADCs, ADAQ7769-1 ADCs, Honeywell board-mount pressure and temperature sensors, mmc5633 I2C/I3C magnetometers, Microchip MCP47F(E/V)B(0/1/2)(1|2|4|8) buffered-voltage-output digital-to-analog converters (DACs), s32g2 and s32g3 platform ADCs, ADS1018 and ADS1118 SPI ADCs, and ADS131M(02/03/04/06/08) 24-bit simultaneous sampling ADCs.
  • Input: FocalTech FT8112 touchscreen chips, FocalTech FT3518 touchscreen chips, and TWL603x power buttons.
  • Media: MediaTek MT8196 video companion processors and external memory interfaces.
  • Miscellaneous: Foresee F35SQB002G chips, TI LP5812 4x3 matrix RGB LED drivers, Osram AS3668 4-channel I2C LED controllers, Qualcomm Interconnect trace network on chip (TNOC) blocks, and DiamondRapids non-transparent PCI bridges.
  • Networking: Glymur bandwidth monitors, Glymur PCIe Gen4 2-lane PCIe PHYs, SC8280xp QMP UFS PHYs, Kaanapali PCIe PHYs, and TI TCAN1046 PHYs.
  • Power: ROHM BD72720 power supplies, Rockchip RK801 power management integrated circuits (PMICs), ROHM BD73900 PMICs, Delta Networks TN48M switch complex programmable logic devices (CPLDs), sama7d65 XLCD controllers, and Congatec Board Controller backlights.
  • USB: AST2700 SOCs, USB UNI PHY and SMB2370 eUSB2 repeaters, QCS615 QMP USB3+DP PHYs, SpacemiT PCIe/combo PHY and K1 USB2 PHYs, Renesas RZ/V2H(P) and RZ/V2N USB3 devices, Google Tensor SoC USB PHYs, and Apple Type-C PHYs.

Miscellaneous

Networking

Virtualization and containers

  • KVM has added a mask to correctly report CPUCFG bits on LoongArch, which provides LoongArch guests with the correct information about the CPU's capabilities.
  • Some AMD CPUs support Enhanced Return Address Predictor Security (ERAPS) — a feature that removes the need for some security-related flushes of CPU state when a guest exits back to the host operating system. KVM added support for using ERAPS, and for advertising that support to guests.
  • There's a new user-space control to configure KVM's end of interrupt (EOI) broadcast suppression (which prevents an EOI signal from being sent to all interrupt controllers in a system, making interrupts more efficient). Previously, KVM erroneously advertised support for EOI broadcast suppression, even though it wasn't fully implemented. Unfortunately, the flaw persisted long enough that some user-space applications came to depend on the behavior, so now that the feature has been implemented correctly, user-space programs will have to opt in when configuring KVM virtual machines.
  • Guests can now request full ownership of performance monitoring unit (PMU) hardware, which provides more accurate profiling and monitoring than the existing emulated PMU.
  • The kernel's Hyper-V driver has added a debugfs interface to view various statistics about the Microsoft hypervisor.

Internal kernel changes

  • The kernel has been almost entirely switched over to kmalloc_obj() through the use of Coccinelle. Allocations of structure types and union types have all been converted, but allocations of scalar types, which need manual checking, have been left alone. Linus Torvalds followed up with a handful of fixes for the problems that inevitably crop up after this kind of large change. The new interface allocates memory based on the size of the provided type, which should mean fewer mistakes with manual size calculations. Where one would previously write one of these:
        ptr = kmalloc(sizeof(*ptr), gfp);
        ptr = kmalloc(sizeof(struct some_obj_name), gfp);
        ptr = kzalloc(sizeof(*ptr), gfp);
        ptr = kmalloc_array(count, sizeof(*ptr), gfp);
        ptr = kcalloc(count, sizeof(*ptr), gfp);
        ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
    
    One can now write:
        ptr = kmalloc_obj(*ptr, gfp);
        ptr = kmalloc_obj(*ptr, gfp);
        ptr = kzalloc_obj(*ptr, gfp);
        ptr = kmalloc_objs(*ptr, count, gfp);
        ptr = kzalloc_objs(*ptr, count, gfp);
        ptr = kmalloc_flex(*ptr, flex_member, count, gfp);
    
    GFP_KERNEL is also now the default, and can be left out if that is the only memory allocation flag that one wishes to set.

The second half of the merge window is often quieter, and this one was no exception. There were a number of debugging features added, however, which is always nice to see. At this point, the kernel will go through the usual seven or eight release candidates as people chase down bugs introduced in this merge window. The final v7.0 kernel should be expected around April 12 or 19.

Comments (58 posted)

An effort to secure the Network Time Protocol

February 25, 2026

This article was contributed by Joab Jackson


FOSDEM

The Network Time Protocol (NTP) debuted in 1985; it is a universally used, open specification that is deeply important for all sorts of activities we take for granted. It also, despite a number of efforts, remains stubbornly unsecured. Ruben Nijveld presented work at FOSDEM 2026 to speed adoption of the thus-far largely ignored standard for securing NTP traffic: the IETF's RFC 8915, which specifies Network Time Security (NTS) for NTP.

I was not able to attend FOSDEM this year, but I watched the video for the talk in order to put together this article.

According to Nijveld, NTP is "fundamentally a broken protocol. It is a protocol that is fundamentally insecure". His employer, the Dutch nonprofit Trifecta Tech Foundation, is testing technologies that would make it easier for pool.ntp.org, and other large-scale time servers, to adopt NTS. That work is receiving additional funding from ICANN and other interested parties, he said. The foundation's specialty is improving open-source infrastructure software, and Nijveld himself is an expert on time-synchronization software, having worked on or with ntpd-rs, as well as Statime, an implementation of the IEEE's Precision Time Protocol.

The universal need for time

NTP was originally authored by the University of Delaware's David Mills, who died in 2024. It is now almost universally used to synchronize everything else on the Internet with Coordinated Universal Time (UTC) by referencing time data from high-precision timekeeping devices such as atomic clocks. Those are referred to as "Stratum 0" time sources in the hierarchy of time sources.

NTP timestamps are essential for computer-related synchronization tasks such as issuing Kerberos tickets and generating time-based one-time-password (TOTP) tokens. "If your clock is off by a minute, good luck getting a proper TOTP token, because they are only valid for a minute, generally." Database synchronization, distributed-systems logging, and in fact almost all of distributed computing, are built on precise synchronization, as are cellular networks, any sort of media streaming, high-frequency trading, power-grid load balancing, and astronomical research, Nijveld noted.

The issue with NTP, Nijveld said, is that it can be easily spoofed, which would cause havoc for any of the other above-listed uses. In a typical man-in-the-middle attack, one could capture a response, change the timestamp, and resend it to the requester, and no one would be the wiser.

Securing NTP is a trivial task, because the payload does not need to be encrypted, Nijveld said. "Time is everywhere, so who cares if it is encrypted or not?" Instead, only proof of authentication is needed to secure a timestamp packet. "We want to be sure that every time I send a request, the response comes from the party that I wanted it to come from." NTS does this work.

NTS is an extension of NTP that includes a TLS key exchange, using the same key exchange infrastructure used for secure web browsing. The client first contacts the server over a TCP connection secured with TLS (using port 4460). The server's existing digital certificate can be used so long as it's signed by a certificate authority trusted by the client.

After a successful handshake, the server sends the client eight unique cookies, each containing the symmetric key established in the handshake. The client includes one of these cookies in a separate NTP request packet (sent via UDP port 123). Multiple cookies are used in case of dropped packets, with the server supplying a new one in each response. When the server receives the time request, it decodes the cookie to retrieve the symmetric key, and verifies the packet's checksum using the Message Authentication Code (MAC). If the operation is successful, the server sends back an NTP packet with the current timestamp (and a new cookie). Once the key exchange is complete, the client can then poll the server at regular intervals using the stream of cookies. No additional handshakes are needed.
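The cookie flow can be sketched as a toy model. The Python below is purely illustrative: real NTS derives its keys from the TLS handshake and wraps cookies with an AEAD cipher (AES-SIV-CMAC in practice), whereas this model stands in with HMAC-SHA256 and an XOR keystream, and all names are invented. What it captures is the key property: the cookie lets a stateless server recover the session key, and a tampered request fails the MAC check.

```python
import hmac, hashlib, secrets

MASTER = secrets.token_bytes(32)  # server-side secret; never leaves the server

def _pad(nonce):
    # Keystream for the toy cookie "encryption" (real NTS uses AES-SIV).
    return hashlib.sha256(MASTER + nonce).digest()

def make_cookie(key):
    """Wrap a 32-byte session key so only the server can unwrap it; this
    lets the server keep no per-client state between requests."""
    nonce = secrets.token_bytes(32)
    return nonce + bytes(a ^ b for a, b in zip(key, _pad(nonce)))

def open_cookie(cookie):
    nonce, blob = cookie[:32], cookie[32:]
    return bytes(a ^ b for a, b in zip(blob, _pad(nonce)))

def server_respond(cookie, request, mac):
    """Recover the key from the cookie, check the request MAC, and reply
    with an authenticated timestamp plus a fresh cookie for the same key."""
    key = open_cookie(cookie)
    if not hmac.compare_digest(mac, hmac.new(key, request, hashlib.sha256).digest()):
        return None                       # spoofed or corrupted request
    reply = b"timestamp-payload"          # stand-in for the NTP time fields
    return reply, make_cookie(key), hmac.new(key, reply, hashlib.sha256).digest()

# Handshake (really TLS on port 4460): the client ends up with a key and cookies.
session_key = secrets.token_bytes(32)
cookie = make_cookie(session_key)

# One authenticated time request (really NTP over UDP port 123).
req = b"ntp-request"
reply, next_cookie, reply_mac = server_respond(
    cookie, req, hmac.new(session_key, req, hashlib.sha256).digest())
assert hmac.compare_digest(
    reply_mac, hmac.new(session_key, reply, hashlib.sha256).digest())
```

Because each response carries a fresh cookie, the client can keep polling indefinitely without another TLS handshake, which is what makes NTS cheap after the initial key exchange.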

Slow adoption for NTS

Despite NTS becoming an IETF standard in 2020, uptake has been slow. There's been some success: Canonical, which maintains its own time servers, switched to NTS in Ubuntu 25.10. But the NTP reference implementation, ntpd, does not support NTS. A number of clients, such as NTPsec, chrony, and ntpd-rs, do support NTS. They are not, however, widely adopted.

The Simple Network Time Protocol (SNTP) is widely used by embedded devices because of its small footprint. It uses a single time source, rather than multiple time sources, and messages are sent over UDP. Nijveld said that he is not aware of a single implementation of SNTP that supports NTS.

He suspected that the NTS-supporting implementations are not widely adopted because device designers prefer the lighter SNTP clients, even though the alternatives are not that heavy. To help spur wider adoption of NTS on Linux distributions, other Trifecta staff are also working on a patch for systemd-timesyncd.

Learning how to scale NTS

Trifecta surmised that one roadblock thwarting the wider use of NTS is the lack of practices to deploy NTS for large-scale usage. In particular, NTS needs to work nicely with pool.ntp.org. This service is run by a virtual cluster of about 5,000 servers. Today, most servers and other machines that use NTP probably have a client configuration pointing to pool.ntp.org. When a request is sent to pool.ntp.org, the authoritative DNS server, which runs GeoDNS, maps the client to the nearest time server. That time server will then fulfill requests for the current time for that client.

Grafting TLS onto pool.ntp.org would require either issuing certificates to all 5,000 servers or sharing a single certificate across all of the servers. Issuing individual certificates is not feasible as they all share the same domain name, Nijveld said, and sharing a single certificate would be a security nightmare. The whole setup could be compromised by one leaked certificate. "We need alternative approaches."

Trifecta is currently experimenting with two different approaches. One rethinks the server-side service and the other changes the behavior of the client. The proposed server-side change is a load-balancing proxy, with a single certificate, that handles all the server-side encryption. It terminates the client connections, giving the client an encrypted cookie, which is passed to the time server for quick validation. This way, the time servers don't get bogged down with key exchanges. The other approach requires modifying the client to do DNS Service Record (SRV) lookups, which would point to specific self-registering servers, each with its own unique domain name—and its own certificate. This approach requires DNSSEC validation, otherwise addresses could be spoofed "because DNS itself is also not encrypted".

There are still unresolved issues, and open questions, with those approaches. Nijveld didn't say this explicitly, but equipping a pool of 5,000 servers with digital certificates, as the SRV approach requires, could be prohibitively costly. But if either or both architectures work, they will be submitted to the IETF for ratification.

Currently, Trifecta is looking for volunteers to add their own clients and servers to Trifecta's experimental NTS-only pool. To participate, a client or server needs to be online 24/7 with a fully qualified domain name, and the ability to accept certificates (NTS software can act both as a server and a client). The documentation says that it is possible to use chrony, ntpd-rs, or NTPsec. Chrony, however, requires modification for this.

Those who register with the program will get a token to validate their client to the pool, which needs to be added to the software's configuration file. The client will be given a set of upstream sources to connect with. If set up correctly, the participating time client/server gets a score, which increments each time it successfully executes. The score is used as part of the monitoring system.

Readers who would like to follow along with the work-in-progress management system for NTS servers can find the code on GitHub, and watch the Trifecta Tech Foundation blog for updates.

Comments (66 posted)

Page editor: Joe Brockmeier

Inside this week's LWN.net Weekly Edition

  • Briefs: OpenSUSE governance; Firefox 148.0; GNU Awk 5.4.0; GNU Octave 11.1.0; Rust in Ladybird; LibreOffice Online; Weston 15.0; RIP Robert Kaye; Quotes; ...
  • Announcements: Newsletters, conferences, security updates, patches, and more.
Next page: Brief items>>

Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds