Leading items
Welcome to the LWN.net Weekly Edition for December 18, 2025
This edition contains the following feature content:
- The Civil Infrastructure Platform after (nearly) ten years: creating a platform for industrial Linux deployments.
- Going boldly into the COSMIC desktop environment: a look at the first stable release of the Rust-based desktop.
- Calibre adds AI "discussion" feature: the popular ebook-management program gets a controversial feature.
- The 2025 Maintainers Summit: reporting from the annual gathering of top kernel subsystem maintainers:
- Toward a policy for machine-learning tools in kernel development: what role—if any—should machine-learning tools play in the kernel development process?
- Best practices for linux-next: making the kernel development process run even more smoothly.
- The state of the kernel Rust experiment: it is official—Rust is here to stay in the Linux kernel.
- Better development tools for the kernel: a look at the tooling side of the kernel-development process.
- 2025 Maintainers Summit development process discussions: succession planning for the kernel, and more.
- The rest of the 6.19 merge window: notable changes that will appear in the next major kernel release.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
The Civil Infrastructure Platform after (nearly) ten years
The Civil Infrastructure Platform (CIP) first launched in that form in April 2016, so it has a tenth-anniversary celebration in its near future. At the 2025 Open Source Summit Japan, Yoshitake Kobayashi talked about the goals of this project and where it is headed in the future. Supporting a Linux system for even one year is a challenging task; maintaining that support for a decade or more is rather more so, and a changing regulatory environment complicates the task further.
The mission of CIP is to provide "industrial-grade Linux" as an open-source
base layer, Kobayashi began. CIP has run up a few achievements in its
first ten years, starting with the "super long-term support" (SLTS)
kernels, which are supported for a minimum of ten years. CIP has been
working toward alignment with industry standards, and IEC 62443 (which is concerned with "requirements and processes for implementing and maintaining electronically secure industrial automation and control systems") in particular. The project has also made
significant upstream contributions to projects like Debian and KernelCI.
The longevity gap
Civilization, he said, runs on Linux. There are vast numbers of hidden
industrial systems running Linux in many settings, including energy,
transportation, building automation, and more. When CIP had its beginnings
in the early 2010s, industries using Linux in this way were contending with a "longevity gap"; kernel releases are frequent, but even the kernel's
long-term support runs out after six years in the longest (and
discontinued) case. Power plants, railways, and other systems with Linux
inside can run for ten, 20, or even 50 years, though. That has led
companies to create — and to have to maintain — their own proprietary Linux
forks, with the usual costs and security risks.
The CIP concept was first presented (slides) at LinuxCon Japan in 2015. The requirements at that time included a minimum of ten years of support, ongoing security updates, and a kernel with realtime capabilities. Over time those requirements have evolved, but they remain focused on industrial-grade reliability, functional safety, and realtime response. To get there, CIP has created an open-source base layer, a sort of minimal Linux distribution. This layer includes the CIP kernel and a small set of core packages. This base layer, when used by companies in their projects, can bring about a 70% reduction in the effort required to create and maintain the resulting system, he said.
CIP's history can be split into three phases, he said. The first, through 2017, was mostly focused on defining policies for the project. From 2018 to 2021, the effort went into the creation of working groups and the implementation of the base layer. Since 2022, the focus has been on compliance and resilience work.
The pillars
The project's working groups comprise the pillars that hold the whole thing up. The first is the kernel effort, which, he said again, seeks to provide a minimum of ten years of support. That work necessarily involves backporting a lot of patches, but the project's policy requires that any backported patches must first land in the mainline kernel. While most backports are fixes of one type or another, there is also a certain amount of work done to support newer hardware in older kernels.
The first SLTS kernel was 4.4, which was released in 2016; it was first adopted by CIP in 2017. Initially the support work was done by Ben Hutchings, but then it moved over to the CIP kernel team. This kernel will hit end of life in January 2027. It was intended to be a proof-of-concept showing that extended support of a kernel in an open setting can work; now, the project is supporting five SLTS kernels (the others are 4.19, 5.10, 6.1, and 6.12). For as long as those kernels have normal long-term support, CIP does not have much work to do; the project will take the kernels over once the regular support ends.
The kernel team, he said, is currently reviewing over 1,000 patches per month for backport consideration. The team also looks at about 2,000 CVE entries per month, just for the 6.1 kernel. There have been 466 SLTS releases to date. There are 11 boards supported by the five-person team working on the SLTS kernel. As an example of how this support has worked, he put up a slide showing each 4.4 release, indicating how many patches were backported by the CIP project itself; those comprise all of the patches applied, of course, once the community support for 4.4 ended.
The security process involves reviewing huge numbers of CVE entries, many of which are not applicable to the CIP kernel. The first review step is automated; it takes the CIP kernel configuration into account to weed out the CVEs that cannot be applicable. The remainder must then be manually reviewed.
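The automated first pass can be thought of as a simple predicate: if none of the code a CVE touches is built under the CIP kernel configuration, the CVE cannot apply and never reaches a human reviewer. A minimal sketch of that idea follows; the CVE record format and the notion of "guard options" here are invented for illustration and are not CIP's actual tooling:

```python
# Hypothetical sketch of config-aware CVE triage: a CVE whose affected
# code is not enabled in the target kernel configuration is filtered out
# before manual review. Data formats are invented for illustration.

def enabled_options(config_text):
    """Return the set of CONFIG_* options enabled in a kernel .config."""
    enabled = set()
    for line in config_text.splitlines():
        line = line.strip()
        # Disabled options appear as comments: "# CONFIG_FOO is not set"
        if line.startswith("CONFIG_") and not line.endswith("is not set"):
            enabled.add(line.split("=", 1)[0])
    return enabled

def cve_may_apply(cve, enabled):
    """A CVE may apply only if at least one config option guarding the
    affected code is enabled in this configuration."""
    return any(opt in enabled for opt in cve["guard_options"])

config = """\
CONFIG_NET=y
CONFIG_USB=y
# CONFIG_INFINIBAND is not set
"""
cves = [
    {"id": "CVE-2025-0001", "guard_options": {"CONFIG_INFINIBAND"}},
    {"id": "CVE-2025-0002", "guard_options": {"CONFIG_USB"}},
]
enabled = enabled_options(config)
needs_review = [c["id"] for c in cves if cve_may_apply(c, enabled)]
# needs_review == ["CVE-2025-0002"]
```

Only the entries that survive this kind of filter would go on to the manual-review step described above.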
The CIP Core Working Group is charged with providing the reference base image — the kernel with the core utilities on top of it. There was no desire within the project to create an entirely new distribution, so CIP chose to work with the Debian project instead. CIP's efforts help with Debian's long-term support, and continue after Debian moves on. There are two system profiles — "tiny" and "generic" — maintained by CIP, but the tiny profile is being phased out. As the capabilities of embedded systems have grown, the need for an extra-small base image has decreased. There are currently five Debian releases supported by this group.
Kobayashi pointed out that the reference images are created with a reproducible build process; there is a strong desire to keep the process transparent and ensure that the result can be trusted.
The Testing Working Group has put together a system called Board At Desk (B@D), which allows developers to connect boards to the central continuous-integration (CI) system. Developers can use B@D to test changes on real hardware from their own desktops. The working group has been building a centralized testing infrastructure, using GitLab runners, that is integrated with the KernelCI project. The results from CIP testing can be seen on the KernelCI site.
The Security Working Group is focused on the requirement that the CIP core image needs to be a secure reference image. There is an emphasis on IEC 62443 compliance; the hope is that a compliant base image will be helpful to users seeking their own compliance certification. The project has also put together some guidance for its users to help them obtain that certification. The IEC 62443-4-1 assessment of the base was completed in August 2024; the IEC 62443-4-2 assessment is underway now, with a hoped-for completion in 2026.
The Software Update Working Group is charged with the creation of a robust update framework for CIP-based systems. This work involves integrating with systems like SWUpdate, TUF, and wfx. The project has implemented A/B updating (maintaining two independent images allowing fallback to a working system if an update fails) and delta updates. Integration of TUF is done now; wfx integration is in progress.
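The A/B scheme can be pictured as a small state machine: the updater writes the new image to the inactive slot and requests a trial boot, and the boot logic falls back to the known-good slot if the trial never gets confirmed. The sketch below is a generic illustration of that pattern, not CIP's or SWUpdate's actual implementation; the state dictionary stands in for bootloader environment variables:

```python
# Generic sketch of A/B update slot selection with fallback. All names
# are illustrative; real frameworks store this state in the bootloader
# environment (e.g. boot counters and slot flags).

def apply_update(state):
    """Write the new image to the inactive slot and request a trial boot."""
    inactive = "B" if state["active"] == "A" else "A"
    # (the new image would be written to the inactive slot here)
    state["trial"] = inactive
    state["boot_attempts"] = 0
    return state

def select_boot_slot(state, max_attempts=3):
    """Boot the trial slot until it is confirmed good or the attempt
    budget is exhausted; then fall back to the known-good slot."""
    if state.get("trial") and state["boot_attempts"] < max_attempts:
        state["boot_attempts"] += 1
        return state["trial"]
    return state["active"]

def confirm_boot(state):
    """Called by the booted system once it is healthy: promote the
    trial slot to be the active, known-good slot."""
    if state.get("trial"):
        state["active"] = state.pop("trial")
    return state

state = {"active": "A"}
state = apply_update(state)      # new image lands in slot B
slot = select_boot_slot(state)   # first trial boot -> "B"
state = confirm_boot(state)      # system came up healthy; B is now active
```

If `confirm_boot()` is never reached (the update fails to boot), `select_boot_slot()` exhausts its attempts and returns the old slot, which is the fallback behavior described above.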
The road ahead
Kobayashi concluded with a brief look forward — which actually started with the recent past. CIP first integrated the realtime preemption patches in 2017; the feature has been officially supported since late 2024, shortly after the completion of the realtime preemption merge.
The near-term future of CIP, beyond maintaining all those images, appears to be dominated by the coming "regulatory wave". That wave takes the form of the European Cyber Resilience Act (CRA). The CIP base system, he said, will serve as a sort of shelter for manufacturers, helping them to provide the updates mandated by the CRA. The plan, he said at the end, is to evolve CIP into a "compliance base" maintained as an open-source project.
The slides from this presentation are available.
[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting my travel to this event.]
Going boldly into the COSMIC desktop environment
After three years of development, Linux hardware provider System76 has declared the COSMIC desktop environment stable. It shipped COSMIC Epoch 1 as part of the long-awaited Pop!_OS 24.04 LTS release on December 11, just in time for Linux enthusiasts to have something to tinker with over the end-of-year holidays. With the stable release out the door, it seemed like a good time to check back in on COSMIC and see how it has evolved since the first alpha. For a first stable release of a new desktop environment, COSMIC shows a lot of promise and room to grow.
System76 is, first and foremost, a provider of Linux laptops, desktops, servers, and other hardware. It originally shipped its hardware with Ubuntu preinstalled. It created the Ubuntu-based Pop!_OS distribution in 2017 after Canonical discontinued work on the Unity desktop. Rather than trying to maintain Unity alone, the company offered GNOME as its default desktop instead. Eventually, System76 introduced a GNOME shell extension to add tiling features, but maintaining the extension in the face of GNOME changes proved to be difficult. Ultimately, System76 decided that it would build its own Rust-based, Wayland-only desktop environment.
As the version number indicates, this release is based on Ubuntu's 24.04 LTS, which has been out for about 18 months now. Pop!_OS users have had a longer-than-usual wait for the release because System76 decided to focus on its new COSMIC desktop environment rather than putting out a 24.04 release last year with the old GNOME-based desktop it had also called COSMIC.
The first alpha of the Rust-based COSMIC, released in August 2024, showed a great deal of potential, but there were many missing features and applications needed before it could be considered a suitable desktop. In the 15 months since the alpha, the development team has managed to move things along quite a bit—some of the applications are still a bit bare-bones, but overall the result is a usable desktop distribution with a few rough edges. Note that COSMIC has also been packaged for quite a few Linux distributions, so one does not need to use Pop!_OS to get it, but I wanted to get the stable version directly from System76 to ensure it was set up as they envisioned it.
There are two x86_64 builds of Pop!_OS available via the downloads page, one with the proprietary drivers for systems with NVIDIA graphics, the other for systems with AMD or Intel graphics. Users are instructed to disable secure boot to install Pop!_OS.
System76 has introduced Arm builds for the 24.04 release; these are primarily for the company's Arm hardware that uses Ampere Altra CPUs. As with x86_64, there is one for Arm computers with NVIDIA graphics and another for those without it. The description indicates that the builds may work with other devices supported by the Tow-Boot firmware (an "opinionated distribution of U-Boot").
The Pop!_OS installation has not changed much in the interim. Beyond choosing the system language and keyboard layout, pointing the installer at the right storage device, supplying user credentials, and so forth, there is not much for the user to do other than accept options and wait for the process to finish.
Even though Pop!_OS is based on Ubuntu, there are a few differences worth noting. One is that System76 has yanked support for Snap packaging out of the distribution, and adds support for Flatpaks instead. Software like Firefox, which is shipped as a snap package for Ubuntu, is shipped by System76 as a Debian package instead. The distribution also includes extra APT repositories for some proprietary software, such as Google Chrome, Plex Media Server, and Steam for games.
System76 ships its own kernel, based on Ubuntu's kernel sources; currently Pop!_OS includes Linux 6.17.9, which is available for Ubuntu 25.10, but not for the 24.04 releases. Ubuntu 24.04 offers 6.8 or 6.14, depending on the point release. The kernel does not seem to be heavily customized beyond any changes from Ubuntu; a quick skim of the changelog and some poking with the git-who utility shows fewer than 40 commits from System76 employees. Most of those are to make adjustments for System76 hardware.
There is also a recovery partition created when a user installs Pop!_OS. This contains the installation media for the distribution and tools that allow users to perform a "refresh install" that is supposed to preserve user data. (I have not tested it myself.)
For the most part, Pop!_OS is Ubuntu-like enough that Ubuntu users will probably feel comfortable with it; software created for Ubuntu (which is abundant) will run just fine.
Desktop
COSMIC has two modes for window management, floating and tiling. What's particularly nice about COSMIC is that users do not have to choose one or the other; each workspace can be set to floating or tiling, independently. A person might use tiling mode for a workspace with terminal windows, but use floating for a workspace with a graphics application. Windows can be set to floating mode on tiling workspaces, as well. That can be useful for applications like the calculator or COSMIC's settings utility.
The tiling mode seems like it would be ideal for users who are new to tiling window management; some tiling window managers require users to do extensive up-front configuration and manage everything via shortcuts. This is fine for some more technical users, but a bit of a headache and blocker for others. COSMIC has the benefit of discoverability; users can start off with the desktop in traditional floating mode, and dip a toe into tiling to see if they like it.
It is possible to do everything with COSMIC via shortcuts, but not required. All of the window-management operations can be done by dragging and dropping, or by right-clicking the mouse on a window's title bar and selecting the option one wants. The shortcuts for operations are displayed when right-clicking, too, so users can learn the shortcuts gradually.
The Alt+Tab behavior for switching between windows has improved since the early alphas, but is still unique to COSMIC. Alt+Tab brings up a floating dialog box with a list of applications and a shortcut next to each application, Ctrl+1 through Ctrl+0. In the first iterations of COSMIC, Alt+Tab would only display eight windows; any additional windows were ignored. Now, hitting Alt+Tab while the dialog is up cycles through the full list of open windows, or a user can type Ctrl+N to switch to the desired window. If there are 11 or more windows open, the remainder simply do not get shortcuts. This method feels a bit weird to me, but other users may like it.
The only tiling bug, or at least unwanted surprise, that I discovered is that window properties do not persist when changing workspaces. That is, if a window on a tiling workspace is set to floating, that setting does not persist if the window is moved to another workspace. Instead, the window is automatically tiled, and the user has to toggle it to floating again.
While COSMIC's tiling mode is quite usable on a large widescreen
monitor, it quickly becomes cramped when working on a smaller canvas
such as a laptop screen. After using PaperWM on GNOME
and then moving to the
niri Wayland compositor, I've gotten hooked on the scrolling model
for tiling window management. There is a feature
request that was opened in May 2024 that asked for an option
to add scrolling window management. COSMIC developer Victoria
Brekenfeld responded
that it was unlikely the team would add additional window-management
concepts at that point in development, but "we might (long-term) try to provide an api to have external programs add new window management options". That doesn't seem to exist yet, but perhaps
the team will get to it now that the stable release is out.
Brekenfeld said that the team also wanted to allow changing the cosmic-comp compositor used with the desktop, which would allow swapping in niri to get its scrolling features. That they have done; she has published a cosmic-ext-extra-sessions repository with scripts and instructions for using COSMIC with Sway, miracle-wm, or niri. I've encountered no problems while using niri as COSMIC's compositor on Fedora 42 and Fedora 43. Unfortunately, niri is only packaged for Ubuntu 25.10 and later; users who want to use niri on Pop!_OS 24.04 will have to build it from source.
By default, COSMIC has a macOS-ish layout: a top panel with a clock and assorted widgets, and a dock with buttons for the application launcher, workspaces, as well as open applications. These can be adjusted to better match one's preferences; for example, the panel and dock can be made larger or smaller, moved to the sides of the screen, set to automatically hide, and so forth. The dock can be disabled entirely if a user does not feel a need for it.
All of the features on the panel and dock are implemented as applets that can be moved around or turned off. At the moment, the applets themselves have no configuration options. For example, the "App Library Button" applet displays icons for each application that is currently open. Clicking the Firefox button displays each of its open windows. However, it is not possible to ungroup application windows so each one is shown separately, which is typically an option for similar taskbar applets on other desktops. It seems probable that the System76-provided applets will receive additional features and polish over time, now that the core desktop is considered stable.
The code for the applets shipped with COSMIC is in a repository on GitHub, but there is not much in the way of documentation. Bryan Hyland has written a tutorial, though, for anyone looking to develop their own. There is already a COSMIC Utils site with a small collection of community-created applets available.
COSMIC applications
Desktop environments typically include a selection of basic software that users need right away; a file manager, text editor, terminal emulator, etc. COSMIC includes all of these, though some are more complete than others.
The terminal emulator, simply named "COSMIC Terminal", has a good selection of basic features; it has tabs, split windows, profiles, and so on. It is not as full-featured as something like Alacritty or Ghostty, but it is quite usable. It has more than enough features if one assumes that the target audience for Pop!_OS is less likely to spend a lot of time at the command line.
The COSMIC Text Editor is also good, but basic. It offers a clean
interface, syntax highlighting, word-wrap, Vim keybindings, and
more. It also seems slightly unfinished: there is a "Git management" menu item that simply brings up a sidebar that says "Git management is a developer tool used for version control operations". Presumably, a later version of the editor will
include some functionality that ties it in with Git in some way. It's
no substitute for Emacs or Vim, but it's perfectly suitable for basic
editing of configuration files and such.
COSMIC includes a bare-bones media player that can open audio or video files and play them back. There is no playlist or queue. Just open a file, play it back. That's it. It has no frills whatsoever, not even the ability to play audio or video files back at slower or faster speeds. It's unclear whether the media player was developed primarily as a proof-of-concept for the desktop or if there are plans to build it out into a more full-featured application. However, there is no real pressing need for the COSMIC developers to prioritize enhancing the media player; there are plenty of alternatives for Linux, and users are likely to have a preferred option already, which can probably be found in the desktop's application store.
The COSMIC Files application is a decent enough file manager, but a little cumbersome to use compared to Dolphin or Nautilus. The reason for that is the design scheme used for all of COSMIC's applications; there is no toolbar as one might expect with a file manager, just the menu bar with the "File", "Edit", "View", and "Sort" entries. Where one might usually click an icon to toggle between list and thumbnail view of files, the COSMIC file manager requires clicking the View menu then selecting grid view. Oddly, toggling between list and thumbnail view is not an option in the right-click menu. If one learns the shortcuts (Ctrl+2 for grid, Ctrl+1 for list) it's less cumbersome, but that's just one operation.
The desktop does not have an official tool that makes it easy to
pick themes, but there is a Tweaks application available
as a Flatpak that does so. In addition to themes, Tweaks lets users
customize some hidden settings for the dock and panel, such as padding
between items displayed in both. It enables saving desktop layouts,
too, so users can tinker with settings and return to a previous layout
without having to undo each change separately. The tagline for Tweaks is "Personalize your COSMIC desktop beyond infinity", which seems to be overselling the program's capabilities quite a bit, but it is a handy tool if one likes to customize the desktop.
Managing software
COSMIC Store is a front-end for installing and managing a variety of software for Pop!_OS. It showcases desktop software from the Pop!_OS repositories, Flatpak applications from Flathub and other repositories, as well as a collection of third-party applets for the desktop. It replaces the Pop!_Shop software-management application that was included in 22.04 and prior versions of Pop!_OS. The older application was, in my experience, slow and unstable; when I used Pop!_OS, I tended to avoid it if possible. COSMIC Store, on the other hand, is responsive and stable.
There are a few gaps where Flatpak handling is concerned. The Store does not indicate whether an application is verified through Flathub or not; in fact, it shows the developer of a Flatpak application as the upstream developer, regardless of whether that upstream had a hand in packaging the application for Flathub. For instance, the Spotify entry in the Store shows Spotify as the developer even though the Flatpak is unverified. Other app stores for Linux link back to the Flathub homepage for Flatpaks, where one can find the developer information and manifest for the package; the COSMIC Store simply lists the application's homepage, even if the upstream or vendor has nothing to do with the Flatpak.
The Store also has a collection of third-party applets, such as the Dictionary Applet or Classic Menu for the desktop. This is a different selection of applets than what is available on the Cosmic Utils site. System packages and Flatpak updates are managed through the Store as well. Oddly, it shows updates to COSMIC desktop applications separately from other system packages, even though those are also installed as Debian packages. No doubt there is some reasoning behind that, but it isn't immediately obvious what that might be.
Before the Bazzite project became an option, Pop!_OS was my default recommendation to users who wanted a distribution to play games on Linux. In my experience, it worked well on systems with NVIDIA GPUs, and Steam was packaged for the distribution and ran my selection of games well. My experience with Steam games on 24.04, so far, has not gone as well. One game, Quake III Arena, launches in a small window that only displays part of the game. Another, Prodeus, opens in a full-screen view as it should, but mouse input does not work properly. Stray, on the other hand, seems to work just fine.
Native Linux games installed as Debian packages or from Flatpak fared better, though I also ran into some weirdness when trying to play games on an external monitor. If I stuck to using the laptop screen, games ran fine.
Documentation and community
One of the other weak spots for the distribution, and for COSMIC as a desktop environment, is a lack of documentation. There is no link to documentation from the main landing page for Pop!_OS, with the exception of a pointer to the keyboard shortcuts. There are a number of good support articles provided by System76, but some are out of date.
Right now, for example, there is no apparent way for users to contribute documentation for COSMIC. Since COSMIC has been packaged and adopted by other distributions, there is a good chance that it will be well-documented for some distributions and not others. The ArchWiki COSMIC documentation is already off to a good start.
There is not a forum or mailing list for COSMIC developers or users, but there is a chat platform based on the Mattermost open-source collaboration software. It is refreshing to see a project, especially one backed by a company, deliberately using an open-source project for communications rather than pointing people to proprietary services like Discord.
There is also a COSMIC Epoch 2 project board that interested users and contributors can keep an eye on to see what work is planned for the next major release. Some of the features on the to-do list include new Alt+Tab options, per-application volume control, and additional window-management settings. The next major release of Pop!_OS will be based on Ubuntu 26.04 LTS, which is expected in April 2026, but it's not clear if Epoch 2 is planned for that release.
COSMIC Epoch 1 is a solid and usable desktop; it is not perfect, but it's an impressive 1.0 release. It is clear that the developers have a vision and are working on realizing it. How COSMIC evolves from here will be interesting to watch. Will COSMIC attract Linux users from other desktops, or (even better) attract new users to Linux?
Calibre adds AI "discussion" feature
Version 8.16.0 of the calibre ebook-management software, released on December 4, includes a "Discuss with AI" feature that can be used to query various AI/LLM services or local models about books, and ask for recommendations on what to read next. The feature has sparked discussion among human users of calibre as well, and more than a few are upset about the intrusion of AI into the software. After much pushback, it looks as though users will get the ability to hide the feature from calibre's user interface, but LLM-driven features are here to stay and more will likely be added over time.
Amir Tehrani proposed adding an LLM query feature directly to calibre in August 2025:
I have developed and tested a new feature that integrates Google's Gemini API (which can be abstracted to any compatible LLM) directly into the Calibre E-book Viewer. My aim is to empower users with in-context AI tools, removing the need to leave the reading environment. The results: capability of instant text summarization, clarification of complex topics, grammar correction, translation, and more, enhancing the reading and research experience.
Kovid Goyal, creator and maintainer of calibre, quickly voiced approval. He dismissed the idea that it might bother some calibre users and suggested that Tehrani submit a pull request for the feature. On August 10, Tehrani submitted the patches, and Goyal later merged them into mainline after refactoring the code. He provided a description of the additional LLM features he had in mind as well:
There are likely going to be new APIs added to all backends to support things like generating covers, finding what to read next, TTS [text-to-speech], grammar and style fixing in the editor and possibly metadata download.
Goyal did promise that calibre would "never ever use any third party service without explicit opt-in".
Discuss removing the feature
It did not take long after the Discuss feature was released for users to start asking for its removal. User "msr" on the Mobileread forum started a thread to ask if there was a way to block or hide all AI features:
I generally find the AI-push to be morally repugnant (among other things, I am an author whose work has been stolen for training) and I hate to see these features creep into software I use. I have zero interest in ever using so-called AI for anything.
Goyal replied that the features do nothing unless they are enabled. "The worst you get is a few menu entries. Simply ignore them."
Other users echoed the anti-AI sentiment. "Quoth" said they would not update calibre until the feature was scrapped. "It's a thin end of a wedge and encouraging people to use these over-hyped LLMs, even though off by default." Goyal replied
that it is in calibre to stay:
It's not going to be scrapped, so good bye, I guess. You are more than welcome to not use AI if you don't want to. calibre very nicely makes that easy for you by having it off by default to the extent that the AI code is not even loaded unless you enable it. What you DO NOT get to do is try to make that choice for other people.
What's added so far
The feature is displayed in the calibre user interface by default; it shows up in the View menu as "Discuss selected books with AI". The naming is unfortunate on its own. Calling the process of sending queries to an LLM provider a discussion encourages people to anthropomorphize the tools and furthers the misconception that these tools "think" in the way that people do. Whatever value the responses may have, they do not reflect actual thought.
As Goyal pointed out, though, the Discuss feature does not work until an LLM provider is configured. If a user attempts to use it without doing so, calibre displays a dialog that directs the user to configure a provider first. Each provider is supplied as a separate plugin. Currently, calibre users have a choice of commercial providers, or running models locally using LM Studio or Ollama.
The Discuss feature shows up as a plugin as well. It is located in the calibre preferences in the "User interface action" category. However, it is a plugin that cannot be disabled or removed; nor can any of the other alleged plugins in that category. It seems fair to question whether something is actually a "plugin" if it cannot be unplugged. The separate provider plugins, in the "AI provider" category, can be disabled or removed, though. The provider plugins are enabled by default, but they do nothing until a user supplies credentials of some kind.
Users do not need to worry about accidentally enabling a feature that sends data off to a provider, because it is impossible to accidentally configure the plugins. For example, the GitHub AI provider requires an access token before it will work, and Google's AI provider needs an API key to function. Using a local provider requires the user to actually have LM Studio or Ollama set up, and then jump through a couple of hoops to enable them.
Even if a user wants to query an LLM about a book, they may encounter problems. I tried setting calibre up to use GitHub AI, but even after appearing to have successfully configured it as provider with the token, I had no luck. I could send queries, but received no reply. I was able to get calibre working with Ollama, though the experience was not particularly compelling.
Responses from GitHub AI or Ollama about books are of little interest to me; a model may have ingested a million or more books as it was trained, but it hasn't read a single one, nor had any life experience that could spark an insight or reaction. Thoughtful discussions of books with well-read people with real perspectives, on the other hand, would be delightful—but beyond calibre's capabilities to provide.
Hide AI
Despite dismissing complaints about the addition of AI, Goyal has grudgingly accepted a pull request to hide AI features. He said that anyone offended by a few menu entries is not worth worrying about but, "I don't particularly mind having a tweak just to hide the menu entries, but that is all it should do". He added that someone would need to supply patches to hide additional AI functionality in the future. "That someone isn't going to be me as I don't have the patience to waste my time catering to insanity."
A "remove slop" pull request from "Ember-ruby" that would have stripped out AI features from calibre was rejected without comment. The repository with those patches may be of interest, however, to anyone considering a fork of calibre.
At least two forks have been announced so far; one seems to have only gotten so far as the name, clbre, "because the AI is stripped out". To date the only work that has shown up in that repository is to update the README. Xandra Granade announced rereading on December 9; that project is currently working on a fork called arcalibre, but its goals are limited to a snapshot of calibre "with all AI antifeatures removed" that can be used for future forks of calibre. No new features are planned for arcalibre.
The rereading draft charter suggests that the project will develop additional applications based on arcalibre. It is, of course, far too early to say whether the project will produce anything interesting in the long term. Any future forkers should note that the name "Excalibre" is right there for the taking.
Resistance seems futile
No doubt part of calibre's audience is pleased to see the feature, but it has proven to be an unwelcome addition for some of calibre's users. It is not surprising that those users have asked for it to be removed, or at least changed in such a way that it can be hidden.
It has been a disappointing year overall for Linux and open-source enthusiasts who object to the seemingly relentless AI-ification of everything. It is fairly commonplace at this point to see companies shoving AI features into proprietary software whether the features actually make sense or not. However, an open-source project like calibre has no shareholders to please by ticking off the "AI inside" box, so few people would have had "adds AI" on their calibre bingo card for 2025.
An AI feature landing in calibre seems a fitting coda to the recurrent theme of AI and open source in 2025; whether users want to engage with AI or not, it is seemingly inescapable. If AI has come even to calibre, a project with no commercial incentive to add it, one might wonder whether there is any refuge to be had from it at all.
Bitwarden, which makes an open-source password manager and server, is now accepting AI-generated contributions, as is the KeePassXC password-manager project. Even projects like Fedora and the Linux kernel are accepting or leaning toward accepting LLM-assisted contributions; Mozilla is all-in on AI and pushing it into Firefox as well. This is not an exhaustive list of AI-friendly projects, of course; it would be exhausting to try to compile one at this point.
In most cases, though, users still have options without LLM features. When it comes to calibre, there is no alternative to turn to. Then again, there was no real alternative to calibre before it adopted "Discuss with AI", either. There are many open-source programs that handle reading ebooks; that is well-covered territory. Some, like Foliate, are arguably better than calibre at that task.
But there is no other ebook-management software (open source or otherwise) that has all of calibre's conversion features and support for exporting to such a wide variety of ebook readers. Evan Buss attempted a calibre alternative, called 22, in 2019. Buss threw in the towel after learning "ebook managers are much more difficult to get right than I had previously imagined", and maintaining compatibility with calibre "proved near impossible". Phil Denhoff started the Citadel project in late 2023. It looked like a promising calibre-compatible ebook-library manager, but its last release was in October 2024. Denhoff continues to make commits to the repository, though, so one might still hold out hope for the project.
While the lack of alternatives is frustrating for some, it is not Goyal's fault. The fact that the open-source community, to date, has not produced anything else that can fill in for calibre is not his problem. It is not his responsibility to take the program in any particular direction, nor is he obliged to entertain user complaints. Whether users love or loathe seeing calibre adding LLM features, it's up to its maintainer to decide what gets in and what doesn't.
For now, AI-objectors on Linux have a few options. They can live with the lurking LLM features, or they can stick with calibre versions before 8.16.0. Goyal has made it easy to revert to an older version; the download.calibre-ebook.com site seems to have all prior releases of calibre going back to its pre-1.0 days. The Download for Linux page has instructions on reverting to previous versions, too. Those who get calibre from their Linux distribution may be LLM-free for some time without taking any action. Debian 13 ("trixie") users, for example, should be on the 8.5.0 branch for the remainder of the release's lifetime. Fedora 42 is still on the 8.0 branch, and Fedora 43 is on 8.14. Rawhide has 8.16.2, though, so users are likely to get the Discuss feature in Fedora 44.
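For those installing upstream binaries directly, calibre's standalone installer script can be pinned to a particular release. The invocation below follows the pattern documented on the Download for Linux page, but treat it as an illustrative sketch; the version= argument and the choice of 8.5.0 here are examples, and the page itself is the authoritative reference:

```
# Hypothetical example: install a specific pre-Discuss release of
# calibre using the official installer script. Check the Download
# for Linux page before running anything like this.
sudo -v && wget -nv -O- https://download.calibre-ebook.com/linux-installer.sh \
    | sudo sh /dev/stdin version=8.5.0
```

Distribution packages will, of course, eventually catch up to upstream, so pinning is only a temporary refuge.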
The strong reaction against calibre's Discuss feature may seem more emotional than logical. It is also understandable. Books are a human endeavor, even those that are in electronic format. AI models have often been trained by plundering a corpus of books, without respect for the authors' wishes or copyrights. Suggesting that readers now turn to the very technologies that seek to replace human authors in order to supplement their reading experience is, for some at least, deeply offensive. It is a little puzzling that Goyal, who has catered to a large audience of book lovers for nearly 20 years, seems not to understand that.
The 2025 Maintainers Summit
Once each year, a small group of kernel maintainers meets to discuss important process-oriented concerns that may not lend themselves well to a public mailing-list discussion. The 2025 gathering was held on December 10 in Tokyo, Japan, alongside the Open Source Summit Japan and the Linux Plumbers Conference.
LWN's coverage from this gathering is now complete; the topics discussed at the 2025 Maintainers Summit were:
- Toward a policy for machine-learning tools in kernel development: what sort of role should large-language models play in the development process, and how should that process change, if at all, to accommodate them?
- Best practices for linux-next: how can the community's integration repository be made to work better?
- The state of the kernel Rust experiment: the discussion on removing the "experimental" label for Rust in the kernel and what comes next.
- Better development tools for the kernel: an update on work being done within and around kernel.org.
- Development-process discussions: what happens if Linus Torvalds disappears, and what other topics are developers concerned about?
Group photo
Acknowledgment
Thanks to the Linux Foundation, LWN's travel sponsor, for supporting our travel to this event.
Toward a policy for machine-learning tools in kernel development
The first topic of discussion at the 2025 Maintainers Summit has been in the air for a while: what role — if any — should machine-learning-based tools have in the kernel development process? While there has been a fair amount of controversy around these tools, and concerns remain, it seems that the kernel community, or at least its high-level maintainership, is comfortable with these tools becoming a significant part of the development process.
Sasha Levin began the discussion by pointing to a summary he had sent to the mailing lists a few days before. There is some consensus, he said, that human accountability for patches is critical, and that use of a large language model in the creation of a patch does not change that. Purely machine-generated patches, without human involvement, are not welcome. Maintainers must retain the authority to accept or reject machine-generated contributions as they see fit. And, he said, there is agreement that the use of tools should be disclosed in some manner.
Just tools?
But, he asked the group: is there agreement in general that these tools are, in the end, just more tools? Steve Rostedt said that LLM-generated code may bring legal concerns that other tools do not raise, but Greg Kroah-Hartman answered that the current developer's certificate of origin ("Signed-off-by") process should cover the legal side of things. Rostedt agreed that the submitter is ultimately on the hook for the code they contribute, but he wondered about the possibility of some court ruling that a given model violates copyright years after the kernel had accepted code it generated. That would create the need for a significant cleanup effort.
Ted Ts'o said that people worry about the copyright problems, but those same problems exist even in the absence of these tools. Developers could, for example, submit patches without going through the processes required by their employer — patches which, as a result, they have no right to submit. We do not worry about that problem now, he said, and it has almost never actually come up. Jiri Kosina said that these tools make code creation easy enough that the problem could become larger over time.
Dave Airlie asked whether it makes sense to keep track of which models people are using. But, he said, any copyrighted code put into a patch by an LLM is likely to have come from the kernel itself.
Levin mentioned that there had been some ethical concerns raised about LLM use and its effects on the rest of the world. Arnd Bergmann said that it could make sense to distinguish between which types of models are in use. Running one's own model locally is different from using a third party's tool.
Linus Torvalds jumped in to say that he thought the conversation was overly focused on the use of LLMs to write code, but there has not, yet, been much of that happening for the kernel. So any problems around LLM-written code are purely hypothetical. But these tools are being used for other purposes, including identifying CVE candidates and stable-backport candidates, and for patch review. Andrew Morton, Torvalds said, had recently shown an example of a machine-reviewed patch that was "stunning"; it found all of the complaints that Torvalds had raised with the patch in question, and a few more as well.
Alexei Starovoitov said that, within Meta, automated tools have been producing good reviews about 60% of the time, with another 20% having some good points. Less than 20% of the review comments have been false positives. Jens Axboe added that he has been testing with older patches and seeing similar results. He passed one five-line patch with a known problem to three human reviewers, and none of them found the bug. But the automated tool did find the problem (a reversed condition in a test); "AI always catches that stuff".
Christian Brauner asked the group how many people use LLMs for coding; about four developers raised their hands. Shuah Khan expressed concern about access to LLMs; most of this work is being done behind corporate walls. Ts'o said that he has been using the review prompts posted by Chris Mason, originally written for Claude, with Gemini, with generally good results and at a relatively low cost.
Torvalds, though, pointed out that developers have long been complaining about a lack of code review; LLMs may just solve that problem. They are not writing code at this point, he said, though that will likely happen at some point too. Once these systems start submitting kernel code, we will truly need automated systems to review all that code, he said.
Proprietary systems
Konstantin Ryabitsev said that he had tried using some of these systems, but found them to be far too expensive; he also was worried about depending on proprietary technology. Brauner said that this usage had to be supported by employers, or perhaps the Linux Foundation could attempt to provide an automated review service. Ts'o said that the expense depends on how the system is used. One can pull in the entire kernel, using a lot of tokens; that will be expensive. The alternative is to create a set of review rules, reducing the token use by a factor of at least five. Khan repeated that not all developers will have equal access to this technology.
Mark Brown was concerned about requiring submitters to run their patches through proprietary tools; some will surely object to that. Axboe suggested that the review tools should be run by subsystem maintainers, not submitters.
I pointed out that, 20 years ago, the kernel community abruptly lost access to BitKeeper, highlighting the hazards of depending on proprietary tools. If the kernel community becomes dependent on these systems, development will suffer when the inevitable rug-pull happens. At some point, the cost of using LLMs will have to increase significantly if the companies behind them are to have a chance at reaching their revenue targets.
Torvalds, though, called that concern a "non-argument". We do not have those tools today, he said; if they go away tomorrow, the community will just be back where it is now. Meanwhile, he said, we should take advantage of the technology industry's willingness to waste billions of dollars to get people to use these tools. Even if it only lasts a couple of years, it can help the community.
Starovoitov said that he loves the reviews that the BPF community gets from the LLM systems. They ask good questions even when the reviews are wrong. Even better, developers respond to the questions, despite the fact that they are answering a bot; those answers can be used to help the models learn to do better in the future. But he acknowledged a recent three-day outage caused by some problems at GitHub; it "felt devastating". He was just waiting for the service to come back, since it does a better job of reviewing than he does.
Disclosure
Levin shifted the discussion to disclosure requirements. There have been proposals for an Assisted-by tag that would name the specific tool used; should that tag be required for all tools, or just for LLMs? Torvalds said that he would like to see links to LLM-generated reviews, but that there is no need for a special review tag. Ts'o agreed, saying that people need to look at the reviews to determine whether they make sense, but he pointed out that a lot of reviews are not posted publicly. Starovoitov answered that the reviews in the BPF subsystem are posted as email responses to the patches.
Kees Cook said that he didn't care about which specific tag is used, he just wants to know what he should use; Torvalds answered that there does not need to be a tag at all. The information could just be put into the changelog instead. Ryabitsev suggested putting it after the "---" marker so that it doesn't appear in the Git changelog, but Bergmann said he would prefer to have that information in the changelog. Torvalds accused the group of over-thinking the problem, saying that it was better to experiment and see what works. The community should encourage disclosure of tool use, but not make hard rules about how that disclosure should be done.
Ts'o said that, in any case, it is not possible to count on submitters disclosing their tool use; some people may want to lie about it. Dan Williams said that disclosure rules would make it clear that the community values transparency in this area. Levin added that the nice thing about these tools is that they listen; if a disclosure rule is added to the documentation, the models will comply. Williams suggested a rule that all changelogs should mention leprechauns.
As the session moved toward a close, Levin said he would post a documentation patch asking LLM tools to add an Assisted-by tag, but would not make an effort to enforce the rule. There was some final discussion on the details of that tag, which seems sure to evolve over time.
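As a rough illustration of the kind of disclosure discussed (the tag name, its placement, and everything in this example are hypothetical; the session deliberately left the details to evolve), a patch submission might carry the information in either of the two places mentioned:

```
subsystem: fix a hypothetical locking bug

Changelog text, which permanently records whatever disclosure
the author chooses to put in the Git history.

Assisted-by: some-llm-tool (model name and version)
Signed-off-by: Some Developer <dev@example.org>
---
Notes placed after the "---" marker, as Ryabitsev suggested,
reach reviewers in the email but are dropped when the patch is
applied, so they never appear in the Git changelog.
```

The disagreement in the room was essentially over which of these two locations the disclosure belongs in, and whether a dedicated tag is needed at all.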
Best practices for linux-next
One of the key components in the kernel's development process is the linux-next repository. Every day, a large number of branches, each containing commits intended for the next kernel development cycle, is pulled into linux-next and integrated. If there are conflicts between branches, the linux-next process will reveal them. In theory, many other types of problems can be found as well. Some developers feel that linux-next does not work as well as it could, though. At the 2025 Maintainers Summit, Mark Brown, who helps to keep linux-next going, led a session on how it could be made to work more effectively.
The idea for the session, he said, began when he heard complaints about some types of "odd fixes" not appearing in linux-next before landing in the mainline. Maintainers may manage many branches, and some of those branches are not being pulled into linux-next. Additional problems come about when maintainers cherry-pick commits before sending them to the mainline, making it harder to track those commits as they appeared in linux-next. There is also, he said, one maintainer who refuses to put fixes into linux-next because he does not want them tested next to feature-oriented changes.
He asked the group whether more pressure should be applied to maintainers to put their repositories into linux-next. Nobody seemed to disagree with that idea.
Linus Torvalds had a different complaint: fixes that do get into linux-next, but then are not forwarded on to him for merging into the mainline. There are cases where the kernel has a known bug, there is a fix that is ready, but he does not have it.
Another problem for Torvalds is buggy patches that break linux-next for everybody, interfering with its primary purpose. He has repeatedly asked for problematic repositories to be simply removed from linux-next, and to be notified when that happens.
Steve Rostedt said that he will occasionally rebase his repositories that appear in linux-next, mostly to add tags to commits. That can have an effect similar to cherry-picking, where commits will change their IDs. Torvalds said that habit, too, can cause trouble in linux-next, but Brown said that rebasing is really only a problem if others have built on the repository that has been rebased. The most problematic tree when it comes to cherry-picking, he said, is the DRM (graphics) subsystem, which does extensive cherry-picking of patches. DRM maintainer Dave Airlie answered that the scale of that subsystem requires cherry-picking. All other subsystems have the same problems, he said, but they haven't yet gotten big enough to make that clear. He expressed willingness to have the assembled maintainers develop a better process for DRM, but did not believe that they could do it.
Torvalds repeated that he would like to see consequences when buggy commits break linux-next, that the guilty repository needs to be kicked out — at least temporarily. Once the relevant maintainer has acknowledged and fixed the problem, the repository would be allowed back in. If the problem is not fixed, though, then the repository should be kept out and not pulled during the next merge window.
Avoiding linux-next bugs
Ted Ts'o said that the filesystem developers had often run into problems testing linux-next, caused by bugs introduced by other subsystems. In response, linux-next maintainer Stephen Rothwell set up a process where the filesystem trees are pulled in first, and an fs-next tag is set once that process is complete, before other repositories are pulled. That creates a sort of limited linux-next containing only filesystem trees and a few others that filesystem subsystems depend on, but without most of the rest of the work being done in the kernel community. Linux-next as a whole is too flaky, Ts'o said, but fs-next is useful for testing.
Arnd Bergmann said that all subsystems are not equal, and that handling the filesystem repositories first makes sense. The cost of mistakes in that area is especially high. Miguel Ojeda suggested a similar process for other higher-level subsystems, merging them into a side branch before merging the result into linux-next. That would create a sort of two-level process, integrating broad areas of the kernel before throwing everything together.
Airlie said that breaking linux-next (and mainline) around the rc1 release is expected, but breaking it every day is expensive. That leads to the wrong people finding problems, he said. Torvalds said that real development teams should be running continuous-integration testing on their own branches and only test linux-next occasionally. It is not reasonable, he said, to expect developers to debug problems introduced by other subsystems. The real problem, he added, is that some repositories clearly are not seeing any testing at all. That often happens with the smaller subsystems, he said, and linux-next is not really helping in this case.
Brown agreed that maintainers should care most about their own repositories, but said that it is still worthwhile to keep an eye on linux-next as a whole; that improves the chances that the culprit behind a problem will fix it before the buggy commit makes it into the mainline. Ojeda said that he has to test linux-next every day to find problems that affect the Rust build.
Brown asked about what tests could be run by linux-next itself to help with Rust problems, and got some ideas, but wondered about which specific configurations should be tested. Torvalds worried that, if some specific configurations are tested, others will only test with those, and he will still get "the weird bugs" that show up with other configurations. Ojeda said that he did not want people testing with canned configurations; rather, they should use their own and see which problems, in particular, appear.
Part of the problem, Airlie said, is that a lot of testing is done in continuous-integration systems, but nobody actually boots their laptop with their new code. So the parts of the system that actually show text on the screen aren't checked, and inevitably break.
Build problems
Jiri Kosina asked how the build problems that Torvalds encounters when pulling repositories happen, given that those repositories should already be pulled into linux-next and build-tested there. Torvalds answered that ordering issues are one source of problems; one repository will implicitly depend on commits in another, and the order in which linux-next pulls the repositories hides the problem. More often, though, Torvalds discovers that basic configurations just haven't been tested. He added that "allmodconfig" builds (which build the entire kernel as loadable modules) cause some code to be disabled, and can thus hide problems. As a result, there are some problems that he only encounters when he builds a kernel with his own personal configuration.
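Torvalds's point about configuration coverage can be seen with the kernel's standard build targets; the commands below are generic kernel build procedure, not anything specific from the session:

```
# "allmodconfig" enables everything buildable as a module, but code
# that can only be built-in (=y) may end up disabled, so its
# breakage goes unnoticed.
make allmodconfig && make -j"$(nproc)"

# Starting from a different configuration can expose different
# problems, which is why a maintainer's personal config still
# catches bugs that allmodconfig builds miss.
make defconfig && make -j"$(nproc)"
```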
Ts'o expressed a concern that dropping repositories from linux-next would cause a loss of test coverage; it would be necessary, he said, to take that action in a public way. Torvalds said he would like to get an automatic email just before the merge window telling him which repositories have been disabled.
During the merge window, Torvalds said, he typically does about 30 merges each day. Build failures are relatively rare during this time; they will happen maybe two or three times during the merge window, but they are still annoying and unnecessary.
Jakub Kicinski said that he often finds merge conflicts before linux-next does, and wondered if he should send resolutions. Torvalds said that he doesn't look at conflict resolutions done by maintainers, at least not before he does the resolution himself. But having the resolution to check his work afterward can be helpful. He does depend on resolutions from others for conflicts in Rust code, where he is less confident of his own abilities.
At the end of the session, Rostedt asked how long maintainers should keep a bug fix in linux-next before sending it to Torvalds. The answer was that, if it is a bug that affects others, the fix should be sent before the next rc release — for obvious fixes, at least. If a fix is not obvious, though, more time should be taken to be sure that the fix is correct.
The state of the kernel Rust experiment
The ability to write kernel code in Rust was explicitly added as an experiment — if things did not go well, Rust would be removed again. At the 2025 Maintainers Summit, a session was held to evaluate the state of that experiment, and to decide whether the time had come to declare the result to be a success. The (arguably unsurprising) conclusion was that the experiment is indeed a success, but there were some interesting points made along the way.
Headlines
Miguel Ojeda, who led the session, started with some headlines. The Nova driver for NVIDIA GPUs is coming, with pieces already merged into the mainline, and the Android binder driver was merged for 6.18. Even bigger news, he said, is that Android 16 systems running the 6.12 kernel are shipping with the Rust-written ashmem module. So there are millions of real devices running kernels with Rust code now.
Meanwhile, the Debian project has, at last, enabled Rust in its kernel builds; that will show up in the upcoming "forky" release. The amount of Rust code in the kernel is "exploding", having grown by a factor of five over the last year. There has been an increase in the amount of cooperation between kernel developers and Rust language developers, giving the kernel project significant influence over the development of the language itself. The Rust community, he said, is committed to helping the kernel project.
The rust_codegen_gcc effort, which grafts the GCC code generator onto the rustc compiler, is progressing. Meanwhile the fully GCC-based gccrs project is making good progress. Gccrs is now able to compile the kernel's Rust code (though, evidently, compiling it to correct runnable code is still being worked on). The gccrs developers see building the kernel as one of their top priorities; Ojeda said to expect some interesting news from that project next year.
With regard to Rust language versions, the current plan is to ensure that the kernel can always be built with the version of Rust that ships in the Debian stable release. The kernel's minimum version would be increased 6-12 months after the corresponding Debian release. The kernel currently specifies a minimum of Rust 1.78, while the current version is (as of the session) 1.92. Debian is shipping 1.85, so Ojeda suggested that the kernel move to that version, which would enable the removal of a number of workarounds.
Jiri Kosina asked how often the minimum language version would be increased; Ojeda repeated that it would happen after every Debian stable release, though that could eventually change to every other Debian release. It is mostly a matter of what developers need, he said. Linus Torvalds said that he would be happy to increase the minimum version relatively aggressively as long as it doesn't result in developers being shut out. Distributors are updating Rust more aggressively than they have traditionally updated GCC, so requiring a newer version should be less of a problem.
Arnd Bergmann said that the kernel could have made GCC 8 the minimum supported version a year earlier than it did, except that SUSE's SLES was running behind. Kosina answered that SUSE is getting better and shipping newer versions of the compiler now. Dave Airlie worried that problems could appear once the enterprise distributors start enabling Rust; they could lock in an ancient version for a long time. Thomas Gleixner noted, though, that even Debian is now shipping GCC 14; the situation in general has gotten better.
Still experimental?
Given all this news, Ojeda asked, is it time to reconsider the "experimental" tag? He has been trying to be conservative about asking for that change, but said that, with Android shipping Rust code, the time has come. Airlie suggested making the announcement on April 1 and saying that the experiment had failed. More seriously, he said, removing the "experimental" tag would help people argue for more resources to be directed toward Rust in their companies.
Bergmann agreed with declaring the experiment over, worrying only that Rust still "doesn't work on architectures that nobody uses". So he thought that Rust code needed to be limited to the well-supported architectures for now. Ojeda said that there is currently good support for x86, Arm, LoongArch, RISC-V, and user-mode Linux, so the main architectures are in good shape. Bergmann asked about PowerPC support; Ojeda answered that the PowerPC developers were among the first to send a pull request adding Rust support for their architecture.
Bergmann persisted, asking about s390 support; Ojeda said that he has looked into it and concluded that it should work, but he doesn't know the current status. Airlie said that IBM would have to solve that problem, and that it will happen. Greg Kroah-Hartman pointed out that Rust upstream supports that architecture. Bergmann asked if problems with big-endian systems were expected; Kroah-Hartman said that some drivers were simply unlikely to run properly on those systems.
With regard to adding core-kernel dependencies on Rust code, Airlie said that it shouldn't happen for another year or two. Kroah-Hartman said that he had worried about interactions between the core kernel and Rust drivers, but had seen far fewer than he had expected. Drivers in Rust, he said, are indeed proving to be far safer than those written in C. Torvalds said that some people are starting to push for CVE numbers to be assigned to Rust code, proving that it is definitely not experimental; Kroah-Hartman said that no such CVE has yet been issued.
The DRM (graphics) subsystem has been an early adopter of the Rust language. It was still perhaps surprising, though, when Airlie (the DRM maintainer) said that the subsystem is only "about a year away" from disallowing new drivers written in C and requiring the use of Rust.
Ojeda returned to his initial question: can the "experimental" status be ended? Torvalds said that, after nearly five years, the time had come. Kroah-Hartman cited the increased support from compiler developers as a strong reason to declare victory. Steve Rostedt asked whether function tracing works; Alice Ryhl was quick to answer that it does indeed work, though "symbol demangling would be nice".
Ojeda concluded that the ability to work in Rust has succeeded in bringing in new developers and new maintainers, which had been one of the original goals of the project. It is also inspiring people to do documentation work. There are a lot of people wanting to review Rust code, he said; he is putting together a list of more experienced developers who can help bring the new folks up to speed.
The session ended with Dan Williams saying that he could not imagine a better person than Ojeda to have led a project like this and offered his congratulations; the room responded with strong applause.
Better development tools for the kernel
Despite depending heavily on tools, the kernel project often seems to under-invest in the development of those tools. There has been progress in that area, though. At the 2025 Maintainers Summit, Konstantin Ryabitsev, who is (among other things) the author of b4, led a session on ways in which the kernel's tools could be improved to make the development process more efficient and accessible.
He started with a plea for developers to let him know what is needed, since that is likely to work better than leaving him to figure it out on his own. He continues to work, slowly, on a b4 review command that would assist with the application of review tags to commits. He has also spent some time trying to integrate large language models (LLMs) with b4, without a huge amount of success.
The LLM work is somewhat ironic, he said, since he has had to put a lot of
time into protecting kernel.org from scraper
attacks run by companies seeking training material for their models.
So he is simultaneously trying to make LLMs work while trying to block them
from the site. On kernel.org, a number of services have been decoupled
onto separate servers in an attempt to shield the lore archive from these
attacks. He noted that the scrapers have started solving the challenges
needed to get past Anubis, so he
has had to dial up the difficulty of those challenges.
Kernel.org sends a lot of email, he said; that mail is often marked as spam at the receiving end even though he has jumped through all of the requisite hoops. The email that the kernel community generates is sufficiently different from the norm that it looks strange to a system that is increasingly focused on commercial email. Linus Torvalds suggested that the problem could be addressed by adding more emojis to patch postings. Ryabitsev, though, has become increasingly interested in solutions that deliver messages directly to lore, without sending them as email at all. Pieces of that puzzle are already in place; developers are using lei now to follow discussions without having to subscribe to mailing lists, for example.
The systems behind kernel.org have been moved over to hosting at Akamai. He has been trying to keep kernel.org decentralized, with copies of the data behind kernel development widely distributed. If somebody wanted to take kernel.org off the net, he said, they likely would succeed, but developers, with local copies of everything they need, would be able to continue working. Still, more thought needs to go into how the project would continue if its provider goes out for an extended period. He wants to get to a point where developers can communicate even if lore is gone.
He has also been working on a new ring of trust that is more robust than the current solutions; it is not ready yet. Torvalds noted that he left home for this meeting without all of the keys for developers he pulls from on his laptop, and those keys were not present in the kernel's key repository. He put out a plea for developers to ensure that Ryabitsev has their GPG keys so that he can pull from them.
The kernel bugzilla server, Ryabitsev said, is "semi-dead", and has been for several years. He suggested that the time has come to simply get rid of it. That server is running bugzilla 5.2; upstream is up to 5.9, but there is no upgrade path to get there. If the bugzilla server is removed, he said, he would find a way to keep the existing history around, but it would not be possible to create new entries. There did not seem to be any opposition to removing the bugzilla server (which has never been all that extensively used in the kernel community), but it will not happen immediately.
Patchwork, he said, is used extensively. He is working on getting it to use lei queries to see when specific files have been touched. Torvalds said that the emails he gets from the pull-request tracker arrive within five minutes, and he loves that, but email from patchwork can take days. He was wondering what was going on, but nobody seemed to have an answer.
Jakub Kicinski suggested that it is time to move on from patchwork. Ryabitsev asked what the replacement would be; Kicinski responded that it should be possible to "vibe-code something in a day". Ted Ts'o said that he is "utterly reliant" on patchwork to keep track of the outstanding patches. He doesn't need patchwork specifically, though, as long as something provides patch tracking; a system that was integrated with lore would be nice. Ryabitsev said that some of that functionality could maybe be incorporated into public-inbox (the email archive system behind lore and LWN's email archive); the Linux Foundation has been sponsoring work on public-inbox for a while now.
The session ended there; Ryabitsev said that he would post a summary of what was discussed. That summary duly arrived shortly thereafter.
2025 Maintainers Summit development process discussions
The final part of the 2025 Maintainers Summit was devoted to the kernel's development process itself. There were two sessions, one on continuity and succession planning, and the traditional discussion, led by Linus Torvalds, on any pain points that the community is experiencing. There was not a lot that developers were unhappy about, and there are now more explicit plans in the works to provide a process should Torvalds abruptly become unable to fill his role.
Succession planning
The succession topic was addressed first in a session led by Dan Williams; he described it as an uplifting subject tied to "our eventual march toward death", and offered to talk about Link tags instead if it got to be too much. More seriously, he pointed out that people do worry about what might ensue if something were to happen to Torvalds without a designated successor in place, and wanted to talk about a potential way to address those concerns.
The details of this discussion will mostly be left out; it is sufficient to say that there was not a lot of disagreement in the room. There were two noteworthy outcomes at the end.
The first is that some provisions for disaster have been made, in that there are multiple people who have the ability to commit to Torvalds's repository, and there is redundancy for the stable repository as well. There is not a single point of failure there that would force a move to another repository just to keep the kernel releases coming.
In the most likely scenario, Torvalds will eventually decide that it is time to move on, and will arrange for a smooth transition to his replacement. He let it be known, though, that he has recently signed a new contract with the Linux Foundation and does not intend to go away anytime soon.
It was agreed that there should be a process in place to decide on a path forward should something happen that prevents a smooth transition. As I put it in the discussion, in the absence of an agreed-upon process, the community would find itself playing Calvinball at an awkward time.
Williams had a proposal for that process: should the need arise, the attendees of the most recent Maintainers Summit would be brought together to collectively decide what happens next. Those attendees are a group that is trusted by both Torvalds and the community as a whole, and would have the breadth of vision needed to make a good decision. That decision could be the appointment of a new benevolent dictator, or it could be a transition to some sort of group maintainership. Dave Airlie suggested that the group should be locked in a room with the ability to send out white smoke when a decision has been reached.
The plan is for Williams to write up a proposed process document, which would first go through review by the Technical Advisory Board and (presumably) Torvalds. After that, it will be posted for discussion within the community as a whole.
The state of the development process
The traditional title for the final session is "Is Linus happy?". This time around, he began by saying that he has not been seriously unhappy about the process for a long time. When he is unhappy, people tend to know about it, and the technical press writes about it. But he does not like it when other developers are unhappy with the process. For that reason, he said, the events in August and September (he was referring obliquely to the removal of bcachefs from the kernel) were not fun for him. Williams described that episode as a "perfect storm" that hadn't happened before in the community and, hopefully, will not happen again.
Torvalds said that he sometimes gets complaints about the kernel community's use of email and "other old ways", but that people generally think that the model is working. Greg Kroah-Hartman said that the kernel project's rate of change still exceeds that of any other project; when projects start to approach the kernel's scale, they come and ask how to put together a process that works as well. Torvalds said he didn't really see any big changes that he would want to make.
Konstantin Ryabitsev said that it is not possible to avoid using email. Even developers using the Git-forge systems depend on email for notifications and such. He is hoping that he can eventually develop the lore archive into a sort of "message bus" that might enable the kernel community to leave email behind and "leapfrog the forge stage" entirely.
Airlie said that email is no longer a reliable message transport mechanism, and that corporate email systems often cannot be used for kernel development. Kroah-Hartman pointed out, though, that each kernel release features the work of at least 200 first-time contributors, so people are figuring out how to make email work for them. Ryabitsev observed that email is not the hardest part of kernel programming.
There was talk of wanting better continuous-integration systems, and some equivalent to GitHub's Actions. Alexei Starovoitov complained that he frequently runs into the limits of what GitHub is willing to provide. Christian Brauner said that the community needs a good test infrastructure that is not tied to any specific employer.
Airlie said that freedesktop.org has managed to solve a lot of these problems, but it was not easy. The first step is to put together high-quality tooling that solves problems; once that has been done, the painful process of finding sponsors to support the system becomes more tractable. It worked out for freedesktop.org, but would probably be harder to do for the whole kernel, he said.
The session was winding down when the suggestion was made that it would be nice to have some sort of continuous-integration capability within kernel.org. Ryabitsev said that kernel.org is a charitable organization with a specific mission, and that providing that sort of service would fall outside the bounds, so he thinks that KernelCI would be a better home.
The rest of the 6.19 merge window
Linus Torvalds released 6.19-rc1 and closed the 6.19 merge window on December 14 (Japan time), after having pulled 12,314 non-merge commits into the mainline. Over 8,000 of those commits came in after our first 6.19 merge-window summary was written. The second part of the merge window was focused on drivers, but brought in a number of other changes as well.
The most significant changes pulled in the latter part of this merge window include:
Architecture-specific
- User-mode Linux has gained support, finally, for multiple processors. This support is limited in 6.19, though, in that threads within a single process cannot run concurrently.
- Support for the LoongArch32 subarchitecture has been merged, but it cannot actually be built until the toolchains catch up.
Core kernel
- There is now generic support for the management of page tables for I/O memory-management units; see Documentation/driver-api/generic_pt.rst for more information.
- System-call trace events are now able to read user-space buffers (file names, for example) and include them in the trace output.
- Guard pages are now specially marked in the /proc/PID/smaps file; see this commit for details.
- The kernel is now able to manage transparent huge pages in device-private memory. See this commit for more information.
- The zram device has gained support for writeback batching, improving performance.
- The live update orchestrator, which allows the kernel to be replaced on a running system, has been merged. See this changelog and Documentation/core-api/liveupdate.rst for more information.
Filesystems and block I/O
- Caching of data from direct-I/O operations on NFS filesystems can now be disabled, further reducing the client-side cost of large I/O operations. This documentation commit contains more information and details on how to use the new mode.
Hardware support
- Clock: Rockchip RV1126B and RK3506 clock controllers, Qualcomm IPQ5424 NSS clock controllers, Qualcomm SM8750 video clock controllers, Andes ATCRTC100 realtime clocks, NVIDIA VRS10 realtime clocks, and Apple Mac system management controller realtime clocks.
- GPIO and pin control: NXP QIXIS FPGA GPIO controllers, Intel Elkhart Lake PSE GPIO controllers, MediaTek MT6878 pin controllers, Qualcomm Kaanapali pin controllers, Microchip pic64gx gpio2 pin controllers, Microchip Polarfire pin controllers, and Cix Sky1 pin controllers.
- Graphics: Freescale i.MX8MP HDMI PAI bridges, Sharp LQ079L1SX01 panels, Synopsys DesignWare QP CEC interfaces, Arm Ethos-U65/U85 NPUs, Samsung S6E3FC2X01 DSI panels, Synaptics TDDI display panels, and LG LD070WX3 MIPI DSI panels.
- Hardware monitoring: MPS VR mp9945, mp2925 and mp2929 monitoring controllers, Analog Devices MAX17616/MAX17616A current limiters, Aosong dht20 temperature and humidity sensors, ST Microelectronics TSC1641 16-bit high-precision power monitors, and Apple system management controllers.
- Industrial I/O: Renesas RZ/T2H / RZ/N2H analog-to-digital converters, InvenSense ICM-456xx I2C, IC3, and SPI interfaces, Analog Devices MAX14001/MAX14002 analog-to-digital converters, Bosch SMI330 I2C and SPI interfaces, Aosong adp810 differential pressure and temperature sensors, and Renesas RZ/N1 analog-to-digital converters.
- Media: Sony IMX111 sensors, ARM Mali-C55 image signal processors, Renesas RZ/V2H(P) input video control blocks, and Rockchip camera interfaces.
- Miscellaneous: Airoha pulse-width modulators, T-HEAD TH1520 pulse-width modulators (Rust driver), MediaTek MT6316 and MT6363 SPMI PMIC regulators, FitiPower FP9931/JD9930 EPD regulators, NXP PF1550 PMICs, Microchip FPGA CoreSPI controllers, MediaTek MFlexGraphics power-domain controllers, Awinic AW99706 backlight controllers, ROHM BD71828 and BD71815 PMIC charger controllers, SpacemiT P1 poweroff and reset controllers, Richtek RT9756 smart cap divider chargers, Renesas RZ/G3S PCIe host controllers, NXP S32G PCIe host controllers, SpacemiT K1 PCIe host controllers, Qualcomm TC9563 PCIe switch power controllers, Broadcom next generation 50/100/200/400/800 gigabit RoCE HCAs, ESWIN SoC reset controllers, HiSilicon Hydra home agents, AMD Versal Gen 2 UFS controllers, Renesas Window WWDT watchdog timers, Qualcomm KAANAPALI interconnects, QNAP MCU EEPROMs, KEBA 8250 UARTs, Loongson 8250 based serial ports, and Ayaneo embedded controllers.
- Sound: Intel Novalake audio subsystems, Spacemit K1 I2S controllers, Cirrus Logic CS530x analog to digital converters, and CIX IPBLOQ HD audio interfaces.
- USB: Apple Silicon DWC3 platform controllers and Renesas RZ/G3E USB 3.0 PHYs.
Miscellaneous
- The perf tool has, among other things, gained support for unified event and metric descriptions in the JSON format and deferred unwinding of user-space call stacks. See this merge message for more information.
Virtualization and containers
- The guest_memfd() implementation has gained support for NUMA policies, allowing hypervisors to set policies on where memory should be allocated. See this commit for a little more information.
- The kernel has gained support for PCIe link encryption and
device authentication; this allows confidential-computing guests to
maintain encrypted communications with PCI devices and to ensure that
they are talking to the devices they think they are. From this
merge message:
Linux gets a link encryption facility which has practical benefits along the same lines as memory encryption. It authenticates devices via certificates and may protect against interposer attacks trying to capture clear-text PCIe traffic.
- The HyperV "confidential VMBus" mechanism is another mechanism for confidential communication between guests and devices; see this documentation commit for more information.
Internal kernel changes
- The new UT=1 build parameter will cause a warning to be emitted for each tracepoint that is declared but never used. Since each tracepoint consumes about 5KB of memory, there is value in removing the ones that are not actually useful. The intent is to make this warning the default once all of the existing unused tracepoints have been cleaned up.
- The internals of the vmalloc() allocator have been reworked to enable allocations to be safely made in atomic contexts (GFP_ATOMIC and such). As always, atomic allocations have a higher probability of failing.
- There is now support for module parameters for loadable modules written in Rust.
- The "Terminus 10x18" console font, meant to improve readability on mid-resolution (1440x900) laptop screens, has been added.
- This development cycle removed 98 exported symbols and added 483 new ones. See this page for the full list.
- There are seven new kfuncs in 6.19: __scx_bpf_dsq_insert_vtime(), __scx_bpf_select_cpu_and(), bpf_dynptr_from_file(), scx_bpf_dsq_insert___v2(), scx_bpf_dsq_peek(), scx_bpf_task_set_dsq_vtime(), and scx_bpf_task_set_slice().
The time has now come to stabilize all of this work for the 6.19 release, which can be expected on February 1, 2026.
Page editor: Joe Brockmeier

![Group photo](https://static.lwn.net/images/conf/2025/ossjp/ms-group-sm.jpg)