LWN.net Weekly Edition for December 25, 2025
Welcome to the LWN.net Weekly Edition for December 25, 2025
This edition contains the following feature content:
- A 2025 retrospective: looking back on the past year and how well our predictions held up.
- Episode 29 of the Dirk and Linus show: an informal conversation with the kernel creator himself.
- Tools for successful documentation projects: lessons from six years of Google's Season of Docs initiative.
- Reporting from the 2025 Linux Plumbers Conference:
- Verifier-state pruning in BPF: making it easier to diagnose BPF verification errors.
- A high-memory elimination timeline for the kernel: the future of 32-bit support in the kernel.
- A visualizer for BPF program state: a look at tools to simplify diagnosing verification errors.
- What's new in systemd v259: a look at some of the noteworthy features from a shorter systemd development cycle.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
This is the last LWN.net Weekly Edition for 2025. As is customary, we will take the last week of the year off to rest and ready ourselves for 2026. The Weekly Edition will return on January 8. We wish all of our readers a fine holiday season and joyous new year.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
A 2025 retrospective
Another year has reached its conclusion. That can only mean one thing: the time has come to take a look back at the predictions we made in January and evaluate just how badly they turned out. Much to our surprise, not all of our predictions were entirely accurate. It has been a wild year in the Linux community and beyond, to say the least.
Evaluating the predictions
The lead prediction last year was that the extensible scheduling class would be "a game changer"; the reality has been a bit more subdued. Development on sched_ext itself continues apace, and there is definitely interesting work happening on specific schedulers. The scx_lavd gaming-oriented scheduler continues to advance, and was the subject of multiple sessions at the recently concluded series of conferences in Tokyo. But it is not clear that ideas from sched_ext are filtering back into the mainline scheduler, and the use of sched_ext schedulers is not, yet, widespread. At least, it is not widespread in any public way; it seems that private use is on the rise.
On the other hand, the prediction that Rust code would enter the kernel at an increasing rate has been borne out nicely. In hindsight, the removal of the "experimental" tag from Rust in the kernel was also somewhat predictable, but we missed that one.
We predicted that another XZ-like backdoor attempt would come to light; that did not quite happen, though we did see the usual malicious uploads to various language-specific and distribution repositories. There is little doubt that such attempts are ongoing, but they have not yet been discovered. There are signs that single-maintainer projects are being seen as carrying more risk, as predicted.
The prediction that a major project would discover that it has accepted a lot of LLM-generated code has not really come to pass. One major project (the kernel) did discover that it had accepted small amounts of such code, though, and not everybody was happy about it. What has come to pass is that many projects have put a lot of effort into developing policies around such code. Efforts to create truly free generative-AI systems, meanwhile, have not reached the predicted level.
The launch of foundations to support maintainers was predicted; the announcement of the Rust Foundation Maintainers Fund qualifies as a fulfillment of that one, as does the netdev foundation announcement. Unfortunately, the prediction that free-software foundations in general would struggle to raise funds has proved to be true.
The bricking of cloud-based products was a fairly easy prediction; Google deciding to kill older Nest thermostats was just one of many examples of that coming true (the No Longer Evil project is working to keep the Nest devices working). We predicted the arrival of more truly open hardware, but it is not clear that is happening in any significant way. It does seem that there is an increase in interest in distributions for mobile devices, with GrapheneOS perhaps leading the pack.
Finally, we predicted that global belligerence would make itself felt within our community. There are plenty of examples, alas, of that happening. Various sanction regimes are affecting who can participate in our projects. Changes within the United States have, among other things, nearly guaranteed that few significant technical conferences will happen there for some time — and that many people working in the US will be unable to attend events elsewhere out of fear of being unable to return. Digital-sovereignty efforts within Europe are still nascent, but may lead to significant changes in the coming years.
What was missed
The other part of prediction evaluation, of course, is looking at what was missed. What did we fail to predict that, in retrospect, we should perhaps have foreseen?
While we predicted some of the ways in which the growth of generative AI would affect free-software development, we missed one important area: patch review. For all their faults, these systems do appear to be good at finding bugs that humans often miss. Expect to see more projects incorporating automated patch review going forward, but also hope that they keep the risks of becoming dependent on proprietary systems in mind as they do so.
One thing we definitely did not foresee was the onset of AI scraper attacks, which have caused problems for free-software projects across the net (and for LWN as well). Many have responded by installing systems like Anubis or retreating behind proprietary protective screens. LWN has, so far, resisted such moves, but the problem appears to be getting worse. There may come a time when it is simply impossible to maintain a public resource on the Internet without heavy screening. That does not bode well for the free net as a whole.
At the end of 2024, the prognosis for gccrs, the Rust front-end for GCC, seemed poor. Creating a new Rust compiler is not a small project, and gccrs seemed to lack the resources to get the job done. Over the course of 2025, though, gccrs has shown a great burst of progress, and is now able to compile the kernel's Rust code (though correctly compiling it is still a work in progress). It would have been nice to have predicted this change, but we did not.
We certainly didn't foresee the hostile takeover of the Groklaw site. That episode has made it clear, again, that history on the Internet is an ephemeral thing unless overt efforts are made to preserve it. We will see this kind of thing happen again, unfortunately.
The kernel port to WebAssembly never showed up in our crystal ball.
Of course, we cannot (and would not want to) predict the loss of developers from our community. In 2025, we mourned, among others, Bill Gianopoulos, Dave Täht, Helen Borrie, John Young, Paolo Mantegazza, and Steve Langasek, all of whom are deeply missed.
Finishing another year
LWN has not been immune to the trends felt by many in 2025. Subscriptions are down somewhat from one year ago, presumably as a result of a number of factors. The rise of LLM-generated content has been hard on the publishing industry in general, and we are not immune to it; we nonetheless intend to continue creating 100% human-written news for the Linux and free-software communities. Economic uncertainty will be making it harder for some people to subscribe. There are also people who have decided to do no more business with US-based companies. All of these factors are likely to have impeded subscription sales.
Complicating this situation is a rise in costs. Health insurance for LWN staff is the largest expense after payroll; this year, our provider informed us that the cost of this coverage will be increasing by 14% — just another in a series of double-digit increases. Other costs are increasing as well.
The end result is not threatening for LWN at this time, but it is not the best situation either. We have not increased subscription prices for four years, and hope to avoid doing so now. The best way to help us to do that, and to benefit LWN in general, is to be sure to subscribe or, even better, get your employer to buy a group subscription. Reversing the subscription trend will help to ensure that we stay on a stable footing going forward.
Be assured that LWN is not going anywhere; our readers have sustained us since 1998, and we have every reason to hope that they will continue to do so. On our side, we will do what we have always done: work to provide the best coverage of our community that we can. Thanks to all of you for a great 2025, and we're looking forward to the coming year.
Episode 29 of the Dirk and Linus show
Linus Torvalds is famously averse to presenting prepared talks, but the wider community is always interested in what he has to say about the condition of the Linux kernel. So, for some time now, his appearances have been in the form of an informal conversation with Dirk Hohndel. At the 2025 Open Source Summit Japan, the pair followed that tradition for the 29th time. Topics covered include the state of the development process, what Torvalds actually does, and how machine-learning tools might fit into the kernel project.
Hohndel began by noting that Torvalds is now a video star. He was referring to the "Linus x Linus" video that was published at the beginning of December, which is rapidly approaching four million views. Torvalds said that he enjoys being able to "do these strange things" on occasion. He hastened to add that once was enough, though.
Release cycles
The 6.18 kernel had just been released, Hohndel said; what are the highlights from that release? There are few highlights, according to Torvalds, who described the release as "just more of the same". It is boring, and he likes it that way. When one is working on a kernel that large numbers of people rely on, you do not want excitement. He has been doing this work for 35 years now, and the code word is simply "solid progress". There were a lot of cleanups in this release, he added.
The 6.18 kernel will be the next long-term-support kernel, so cleanups may
be appropriate, Hohndel said. Was the focus on cleanups by design?
Torvalds said that any semblance of design is just something that has
evolved over the years.
In the past, Hohndel said, there has been a rush to cram features into the long-term kernels; he wondered if that is still happening. Torvalds acknowledged that it has happened in the past, and it caused problems, leading to "stable" kernels that were only stable in name. But people have gotten used to how the development process works, and there is a lot less pressure to push changes in quickly. Developers can plan around the release cycle, and they know that there will be another long-term-support kernel before too long.
Hohndel noted that, at the time of the talk, the 6.19 merge window was in its second half, and asked Torvalds to describe what the merge window is. Torvalds said that it's the period when he takes in new code that maintainers feel is ready. Since merge windows happen so often (every nine or ten weeks), there is little pressure on developers to land code in any specific one; there will be another one shortly. This process, he said, takes a lot of the stress out of kernel development. There is stress for him during this time, as he tries to make all those changes work together, though.
What, exactly, does Torvalds do? Torvalds answered that he doesn't do
coding anymore; he mostly takes code that was written by others and is maintained
by others. He'll merge around 12,000 commits during a typical merge
window, over the course of about 200 pull requests. So he is doing merges
constantly, especially during the first week of the merge window. He likes
the process to calm down during the second week, giving him time to deal
with merges that require a closer look. After the merge window closes,
there are seven weeks of stabilization work. The community has been using
this process for long enough that it all goes pretty smoothly.
Hohndel observed that it can be difficult to get Torvalds to talk about the interesting things he does. Merging all those patches is not a "click and it works" process, he said; inevitably there must be challenges?
Torvalds answered that he can do merges in his sleep at this point; he
handles conflicts all the time. He has gotten so good at it, he said, that
he asks maintainers not to resolve conflicts for him. They know their
subsystems better than he does, but he knows merges better than they do.
Seeing the conflicts also lets him know about where clashes are happening.
How often does Torvalds actively review the code that he merges? When a conflict happens, Torvalds said, he wants to understand the code to be sure that he has resolved the conflict correctly. That is when he looks most closely, and sometimes he finds code that is overly wrong while doing so. It happens during every merge window.
Pet peeves
When prompted by Hohndel to talk about his pet peeves, Torvalds cited late pull requests. The code being pushed into the mainline during the merge window is supposed to be ready before the merge window starts, so there should be no reason for a pull request to arrive during the latter part of that time. That is when he is trying to calm things down, and he does not appreciate getting major pull requests then; sometimes he will respond by making that pull request wait for the next cycle. Developers know better than to send these requests, but sometimes they try anyway.
Torvalds does a lot of test builds during the merge window; he also runs the resulting kernels on his own systems. That leads him to find bugs during every release cycle, he said; this should not happen, but it does. He is not doing anything particularly strange on his systems, so finding a bug indicates that somebody did not test their code well. If he is the first person to find a problem, he will be bitter about it, especially if the code comes from somebody who has been part of the kernel community for decades and should know better.
What really gets him mad, he said, is when developers will not acknowledge problems in their code. Mistakes will happen, that is life, and he will not get upset about them, but when a bug is identified, the responsible parties need to step up to deal with it. Claiming that a buggy patch is acceptable because it fixes a different bug will not fly, given the kernel's strict "no regressions" policy.
Large language models
Hohndel asked about the possibility of new tools that might make kernel development easier, and machine-learning-based tools in particular. Torvalds said that these tools can help; the kernel is a tool-heavy development environment, and better tools are welcome. He hates the hype around machine learning and large language models (LLMs), but believes in the tools themselves.
LLMs are not all that interesting for writing kernel code at this point, he said, but they are more interesting for maintenance. There are a number of projects in companies where large language models are being used to check patches for problems, and some of them are promising. He hopes that these tools will become an important part of the development process in the coming years.
When Torvalds started working on Linux, the available compilers were bad, he said; now they can do "magic things". Compilers have brought about a "1000x" improvement in development speed. Machine learning might add another 10x, he said, but it is not a revolution; it's just another tool.
Hohndel suggested that these tools could help new developers get started in the community; there is a lot that has to be learned at the outset. Torvalds answered that the kernel might not be the place for new developers to get started; it is highly interesting, but it is special. Starting with a smaller project might be more rewarding, he said.
Few projects have a "no regressions" rule like the kernel does, Hohndel observed; he wondered why that is. Torvalds said that living up to such a rule is difficult technically. Not all kernel developers love the rule, and it can be hard to maintain over time. Sometimes regressions are found long after their introduction; by the time they come to light, other applications have come to rely on the new behavior. At that point, the correct fix is not obvious, and the kernel project has occasionally "done outrageous things" to try to resolve the situation.
When a regression happens, he said, it can be tempting to tell users to simply fix and recompile their applications. But there are a lot of applications that rely on old and unmaintained code; they can be hard or impossible to fix. Since, Torvalds said, he is "not yet king of the world", he cannot make rules for others, and the kernel simply needs to live with it. Hohndel said that "not yet king" was a fitting place to end the conversation.
Tools for successful documentation projects
At Open Source Summit Japan 2025, Erin McKean talked about the challenges to producing good project documentation, along with some tooling that can help guide the process toward success. It is a problem that many projects struggle with and one that her employer, Google, gained a lot of experience with from its now-concluded Season of Docs initiative. Through that program, more than 200 case studies of documentation projects were gathered and mined for common problems and solutions, which led to the tools and techniques that McKean described.
She introduced herself as a developer-relations engineer in the Google open-source-programs office; part of her job—"and it's a fun job"—is to "help open-source projects have better docs". She was also an honorary fellow of the "late, lamented" Society for Technical Communication and runs the online, non-profit Wordnik English-language-dictionary web site. Beyond all of that, she runs the Semicolon Appreciation Society; some of us here at LWN should probably join said society.
She worked on the Season of Docs project, which was set up along the lines of the Summer of Code initiative as an opportunity for open-source projects to mentor documentation writers. Season of Docs was started in 2019 by Sarah Maddox and Andrew Chen; fairly quickly, those involved realized that the maintainers of open-source projects "were not able to mentor technical writers because they were software developers". So it switched to making grants to more than 130 open-source projects, which wrote over 200 case studies that can be found on the Season of Docs web site.
Nobody wants to read 200+ case studies all at once, she said, but there is a lot of valuable information in them. So those working on the initiative decided to use the studies as the basis for "some tools for doc maintainers". Given that those tools are the focus of the talk, she expected that those attending cared about open-source documentation and were looking for help to produce better documentation for their project. Beyond that, she believed that attendees were also empowered to make changes to the documentation process of their projects and were willing to accept help in doing so.
The "docs impulse" comes out regularly for those who care about documentation. It often happens due to noticing a problem of some kind: outdated documentation, something that has to be repeated frequently in pull-request or mailing-list comments, something that is obviously missing from the documentation, and so on. Those situations lead to the feeling that "I need docs", but that idea has lots of assumptions built in, such as what the software does, what the users expect, and what project members are capable of with regards to documentation.
Pulling the assumptions out of it leads to a recognition that there is a user or project need that might be solved through documentation. "You don't just want some words, those words have to do something, they have to help someone." Looking deeper, perhaps a process is needed to get there. "Docs don't happen by magic; somebody has to do something in a planned way to make docs." The fact that it has not happened yet indicates that it is real work; "if it were easy, it would be done by now".
It may turn out that documentation is not really what is needed for the identified problem; instead, fixing a bug, rather than documenting a way around it, may make more sense. Or, perhaps, the project's culture is not particularly welcoming, so changing the culture, rather than creating more documentation about conduct, is the right path forward.
Advisor
There are four essential questions that need to be answered for any documentation project, McKean said. "Why are we doing it? And, then, what are we actually doing?" After that, figuring out what process will be used to accomplish it; "How are we going to get it done? And, then, who is going to do it?" Getting a grip on all of that is far more complicated than the simple "I need docs" thought, which is why projects need tools to help.
Based on the case studies, Google has created the Docs Advisor, which is a guide to help answer those questions and more. It can assist projects in picking the right path, learning about their users and what they need, and figuring out what documentation already exists and what is missing. Once that is determined, the guide can help "figure out how to do the actual work of writing docs and maintaining them". The Season of Docs folks teamed up with Erin Kissane to produce the guide.
The first step is to determine the level of resources available and the urgency of the work. Projects with limited resources and no urgency should try a planless iteration approach, which applies continuous iterative improvements to the documentation. If the project has limited resources and tight timelines, the mini-overhaul is a good choice; it applies some focused effort to documentation in a particular area or format, such as a new version of the software or a video tutorial, for example. For the project with ample resources and a tight timeline, the heroic overhaul, which tries to address all of the project's documentation woes in one focused effort, may be indicated. The links to the guide give more information about these paths, including the upsides and downsides of each. Those rare projects that have lots of resources and relaxed timelines can do anything they want, McKean said, because they lack any constraints.
Next up is figuring out the project's users and their needs; determining users' range of expertise and the conceptual hurdles they have faced is part of that process. Within the project, identifying the contributors and, "more importantly", the potential contributors is crucial; beyond just helping with writing documentation, they may have documentation needs as well.
Taking notes is a vital part of the process, starting with things "off the top of your head" that are known or suspected about the documentation needs of various users.
There are some techniques for filling in any missing information about
where the documentation problems lie. For example, friction
logging—observing someone doing a specific task using the project and
writing down everything that thwarts their efforts—can be used to find
areas that need attention. Tracking their reactions and feelings as they
go through the exercise will also help show the worst problem areas.
Gathering up complaints from bug reports, email or forum discussions, and other sources can help fill in gaps, as can simply asking users specific questions about the project and their usage of it. "Resist the urge to use an LLM [large language model] to summarize this for you; you want exact data." As a last resort, doing a survey is a possibility, but it is difficult to design one that can provide the qualitative feedback needed to direct documentation efforts, she said.
Using that information, it is time to "assess and plan" by prioritizing the needs for documentation and deciding on the overall goals. As part of that, making an inventory of the existing documentation and gauging the level of resources available for writing and editing will help in choosing a structure to work toward for the documentation.
Archetypes
The team worked with Daniel Beck on a set of documentation project archetypes that can help guide different kinds of work. Each of them answers some of the questions that might arise with respect to a given documentation task. She gave "The Migration" as an example, asking attendees if they had ever done a migration of documentation from one platform to another—and whether they would ever want to do so again. "Probably not, right?" she said with a laugh.
There are a number of reasons why a project might want to migrate to a new platform, including that users and contributors cannot find the information they need, people who want to contribute to the documentation cannot do so, or that the content-management system is old and out of date. The archetype can also help show when a specific type of project should not be done. A migration should not be undertaken when only a single vocal contributor wants to switch to their favorite tool, for example. That is simply solving one person's problem, when the goal should be to address problems that many are experiencing. The archetype will also help define what is out of scope, set the end goals, and describe some of the failure risks for that kind of project.
She listed and briefly described the dozen other archetypes. Each of them has a name and accompanying illustration (see "The Manual" at left), a description of the audience it is intended to assist, when to do it (and when not to), the key people, how to figure out what skills will be needed, and so on. For a migration to a static site, for example, a technical writer will need some experience using static sites, but will also need change-management experience. The archetypes refer to each other as possible precursor or add-on tasks; for migration, perhaps "The Prototype" makes sense in order to test whether the new platform provides all of the benefits that are hoped for.
McKean had some caveats about the archetypes, as well. She warned against tunnel vision and immediately over-focusing on a single archetype. "There is a risk of getting overcommitted to a certain kind of docs project, but good outcomes don't happen by force, they happen by lining up your goals, and your resources, and your key abilities with what you want to achieve." Another thing to watch out for is the "Abilene paradox", which is a form of groupthink where the decision that is made is an outcome that no one really wants, which happens because participants mistakenly think that everyone else does want it. "Make sure you get real, enthusiastic buy-in."
Never-ending projects should be avoided, so building in rest stops along the way is important. Pick a point where there will be a tangible intermediate outcome for the project and pause to check in with participants. If only one person wants to continue, though, it makes sense to put the project aside rather than have it rest on the work of a single person.
Another good tool is a pre-mortem, where participants brainstorm all of the ways that the project could fail and work out what might be done to prevent or fix those things. It only makes sense to consider problems that are under the project's control, however; alien invasion, for example, is not something that can (or should be) prepared for.
Onboarding
Bringing technical writers onto a project—assuming they can be found in the first place—can be difficult. "80% of onboarding is tooling", but, as with development, there are countless tools and configurations used for producing documentation. Writers are more technical than might be guessed, she said; they are "often building or duct-taping tools". A good starting point for a new writer on a project is to have them revise the README and project-setup documentation, possibly just as an exercise to get them to the point of being able to make their first change and commit.
The other 20% of onboarding is learning who to ask, "usually about tools". Tech writers thrive in an "informal network", she said; when introducing them to someone, ask that person who the next two people to talk to are. The same questions about the norms will often elicit different responses from various parts of the project, so it is important for the writer to learn about that. "Because, if a technical writer is going to annoy you, and they are absolutely going to annoy you, you want them to annoy you in the right way."
Due to "Cunningham's Law", attributed to wiki creator Ward Cunningham, a writer's first draft of a new document may deliberately be wrong: "because they want you to look at it, they want you to have a reaction, and they want you to point them in the right direction". If that draft were mostly right, "your eyes would skim over it", and they would not get the feedback that they need. Tech writers are students of psychology, she said, "they're going to trick you into helping them".
For some help with onboarding, she recommended the onboarding toolkit that had been developed as a companion to Nicola Yap's presentation at the 2021 Write the Docs conference (YouTube video). "If you're lucky enough to have a tech writer to onboard into your community, definitely look at that."
The project will probably want to use some kind of metric to gauge its progress; her only rule for metrics is that if they change in one direction or another, that needs to cause some change in the project or else it is simply a "vanity metric". If GitHub stars is the metric and having it go up just means that "you pat yourself on the back a little bit harder", it is not a good metric. If the documentation is completely changed and the number of downloads or installs is being used as the metric, what will the project do if those numbers go way down (or up)?
Once a documentation project is underway, the project can consider getting writers together for a sprint. Maddox has a blog post about running such a sprint that McKean recommended. She also suggested the Write the Docs organization—"they're such friendly people, they have great conferences". The organization runs a free Slack network for connecting with others in the community, including looking for contributors on a dedicated channel for open-source projects. There are lots of other documentation-project resources on the web site as well.
Q&A
The first question asked about the lack of semicolons in the presentation and McKean admitted to "hoarding them"—to laughter. The second was, inevitably, about using LLMs instead of documentation writers, and, in particular, how to convince management that human writers are needed. One part of the problem is training, McKean said; if the project is new or the documentation is outdated, where will the LLM get the information it needs for a summary?
Another problem is that the output of LLMs tends to be "well-formed English text", which makes it harder for people to spot errors even when they know the subject matter. "It just flows smoothly into your eyeballs and kind of bypasses your brain sometimes." It is "intended to be statistically average text", so unless management wants statistically average documentation, as opposed to "great docs", it will take extra work to create documentation that is engaging for users. LLMs can be useful for smoothing out difficult parts, or assisting writers who are not entirely comfortable using English, but the old "garbage in, garbage out" saying is clearly applicable to LLMs.
For projects (or managers) that want to experiment with LLMs, she suggested using "The Prototype" archetype to test the output. The goals and criteria should be established ahead of time, so that the output can be judged fairly. "Test it on real users, don't just test it with management"; people in management may have a hard time seeing past the money that can be saved.
Speaking of money, her next answer largely consisted of an explanation of why tech writers should be paid for contributions to open-source projects. The question was about finding tech writers for projects and she suggested the Slack channel at the Write the Docs site. She noted that, while tech writing is one of the highest paid writing jobs out there, "it's not the highest paid tech job you can have". Tech writers are often also developers who could make more doing that, but choose writing because they like it more. They do not get the same boost on their resume that a developer gets from contributing to an open-source project; "if you can pay a technical writer, please do". There are tech writing students who are looking for portfolio projects, who can be found on that channel; they may not require payment, but they will likely need much more onboarding and mentoring than a professional tech writer will.
The final question was about projects lacking a tech writer looking to "muddle through" and get the best documentation that they can. McKean noted that most developers may not be technical writers, but they "are almost certainly a technical explainer"; she suggested getting out of the documentation mindset and instead creating an explanation as if it were for an email to a friend. If it needs smoothing out from there, finding someone to read it and make suggestions or using an LLM may help. Instead of starting out with the idea of creating "capital-D docs", simply describe what the project is and does—and why people would want to use it—as project developers have probably already done in email with their friends.
The slides from McKean's talk are available; a video of the talk should appear sometime over the next few weeks on the Linux Foundation YouTube channel. [Update: That video has now been posted.]
[ I would like to thank the Linux Foundation, LWN's travel sponsor, for assistance with traveling to Tokyo for Open Source Summit Japan. ]
Verifier-state pruning in BPF
The BPF verifier works, on a theoretical level, by considering every possible path that a BPF program could take. As a practical matter, however, it needs to do that in a reasonable amount of time. At the 2025 Linux Plumbers Conference, Mahé Tardy and Paul Chaignon gave a detailed explanation (slides; video) of the main mechanism that it uses to accomplish that: state pruning. They focused on two optimizations that help reduce the number of paths the verifier needs to check, and discussed some of the complications the optimizations introduced to the verifier's code.
Tardy began by giving an example of the simplest kind of branching control flow: a program with a single if statement in it. This program has two potential execution paths. Adding another (non-nested) if statement makes it four, then eight, and by the time one reaches a realistic program, the number of possible paths is completely intractable. Sometimes, however, a conditional branch doesn't actually result in any changes that the verifier cares about:
int index = 3;
if (condition) {
    // Some code that doesn't change the value of index
    ...
}
// The validity of index doesn't depend on whether the branch was taken
int foo = array[index];
The core question that state pruning asks, Tardy said, is: "Can we skip some of the other paths?" To determine that, the verifier uses special "pruning points" in a program's execution where it knows that it might be able to cut out redundant paths. Pruning points are inserted at the sources and targets of conditional jumps, places where unconditional jumps rejoin a different series of instructions, and function calls. In the above example, pruning points are added at the conditional jump corresponding to the if statement, and at the end of the if statement.
When the verifier reaches a pruning point during verification, it saves a copy of its current state for later reference. If the pruning point is a conditional branch, it also pushes that copy of the state to a stack to come back and explore later. The state includes everything that affects the execution of the BPF program, including the current instruction pointer, so no information is lost when backtracking. When the verifier reaches a subsequent pruning point, it compares its current state against the saved states; if the current state is equivalent to a previously observed state, the verifier knows that it can stop exploring the current state, since it won't find anything new.
That explanation puts a lot of weight on the word "equivalent", however. In theory, Tardy said, two states are equivalent if the current state has a subset of the possible values of the saved state (and, in particular, if they both occur at the same position in the program). So two states that are identical, except that the saved state has an unknown value in register r1 and the current state has the specific value 4 in r1, are considered equivalent.
When state pruning was introduced in kernel version 3.18, the check was that simple. But, since then, the implementation of state pruning has gained an increasing number of complexities to allow the verifier to efficiently prune more states. For example, recent kernels use a least-recently-used cache for seen states, to reduce the memory footprint of state pruning.
In practice, Chaignon said, real programs rarely feature states that are entirely included in one another. But the verifier can find more overlapping states if it only compares parts of the state that turn out to actually matter for verification. "Also, the less we compare, the more efficient state pruning is."
To illustrate this principle, Chaignon went over the two most important state-pruning optimizations in the verifier today. The first is to consider only "live" registers when comparing states. A register is live if its value is used in the future by the program, and dead if it is not used again before it is overwritten. If two states are the same other than the contents of a dead register, the verifier can infer that exploring further wouldn't result in any different program behaviors, since the dead register is by definition not used. Therefore, the state can safely be pruned.
That does require the verifier to know when registers are live or dead; it computes that information in a pre-pass over the program before the start of the main verification logic. Since which registers a program uses can be seen purely by inspecting the individual instructions, that pass is relatively simple. Stack slots, however, are not so simple. The same concept of liveness can be applied to stack slots, but because of the potential for pointer arithmetic, the verifier can't actually tell which stack slots are used at which points in the program without simulating it. So the verifier's equivalent logic for stack-slot-based pruning is interwoven with the main verification pass, which "makes the whole implementation quite different and more complicated".
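As a rough illustration of the liveness idea (an example of ours, not code from the talk), consider a value that differs between two branches but is overwritten before it is ever read again; at the point where the paths rejoin, that value is dead, so states differing only in it can be treated as equivalent:
int liveness_example(int condition, int *array)
{
    int tmp;
    int index = 3;

    if (condition)
        tmp = 1;    // path A: tmp == 1 at the join point
    else
        tmp = 2;    // path B: tmp == 2 at the join point

    // tmp is overwritten before it is ever read: it was dead at the
    // join point, so the verifier states from paths A and B differ
    // only in a dead value and one of them can be pruned.
    tmp = index + 1;

    return array[tmp];
}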
The second optimization that Chaignon covered was a bit more specialized. It was born from the observation that the verifier often doesn't care about the exact value of a register. If a register is used as an index into an array, it needs to care about whether the value falls within bounds. But if a value is just stored into a BPF map for later use, the verifier doesn't care what the exact value is.
So if two states are equivalent except for the value in a live register or stack slot, but that value is never used in a way that requires the verifier to care about it, the state can be safely pruned. To track this, every time a value is used for verification (such as being used as an array index), it is marked as "precise". That mark is propagated backward to previous states and all of the other registers and stack slots that contributed to that value. When two states are being compared for equivalence, the verifier only checks values that have been marked as precise.
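A small, hypothetical BPF program can make the distinction concrete; this is only a sketch of the idea (the map, array, and section names are made up), not code from the presentation. The value stored into the map never needs to be known exactly, while the masked copy used as an array index does, so only the latter is marked precise:
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} counters SEC(".maps");

__u64 scratch[16];

SEC("tracepoint/syscalls/sys_enter_getpid")
int precision_demo(void *ctx)
{
    __u32 key = 0;
    __u64 val = bpf_get_prandom_u32();  // an unknown scalar to the verifier

    // Stored into a map: the exact value does not affect safety, so the
    // verifier does not need to mark it precise.
    bpf_map_update_elem(&counters, &key, &val, BPF_ANY);

    // Used as an array index: the bounds matter, so the masked value is
    // marked precise and that mark is propagated backward.
    val &= 0xf;
    scratch[val] = 1;
    return 0;
}

char LICENSE[] SEC("license") = "GPL";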
The overall implementation of state pruning in the verifier has changed a lot over time, Chaignon said. There are more details than would fit in the presentation, so he and Tardy intend to publish a series of blog posts about the other things they learned in the course of preparing the talk. At the time of writing, those posts are not yet up, but they will presumably appear on either Tardy's blog or Chaignon's blog.
One member of the audience asked whether the verifier ever unions two states together. That would lose precision, but be better than the verifier failing to prove the program safe in a reasonable amount of time, they said. Chaignon answered that the Linux kernel verifier doesn't do that, but the Windows implementation of eBPF does. That kind of operation is called widening, and doesn't always help. Widening replaces a specific state with a more general state, in the hopes of causing more state pruning. It's hard to know exactly when widening will actually help, and when it will result in the verifier rejecting programs that are actually safe. Another member of the audience jumped in to clarify that the Linux implementation does actually do widening in one specific place: on the second iteration of a loop, if a value wasn't marked as precise, it gets widened (i.e. the verifier assumes that register or stack slot could take on any value). That helps loops to reach a fixed point, where future iterations of the loop can be pruned since they wouldn't add any new information, which is important for fast verification in practice.
[ Thanks to the Linux Foundation, LWN's travel sponsor, for supporting my travel to the Linux Plumbers Conference. ]
A high-memory elimination timeline for the kernel
Arnd Bergmann began his 2025 Linux Plumbers Conference session on the future of 32-bit support in the Linux kernel by saying that it was to be a followup to his September talk on the same topic. The focus this time, though, was on the kernel's "high memory" abstraction, and when it could be removed. It seems that the kernel community will need to support 32-bit systems for some time yet, even if it might be possible to remove some functionality, including support for large amounts of memory on those systems, more quickly.
The high-memory problem
High memory, he began, is needed to support 32-bit systems with more than 768MB of installed physical memory with the default kernel configuration; it can enable the use of up to 16GB of physical memory. The high-memory abstraction, though, is a maintenance burden and "needs to die". Interested readers can learn more about high memory, why it is necessary, and how it works in this article.
A 32-bit system has a 4GB virtual address space (the physical address space
is larger), which is split between
user and kernel space. There are various kernel-configuration options that
can change where that split happens; the most common configuration is
VMSPLIT_3G, which allocates the bottom 3GB of the address space to
user space, while reserving 1GB for the kernel. The problem is that the
kernel's "linear map" (or "direct map"), which maps all of physical memory,
must fit in the kernel's part of the address space; with
VMSPLIT_3G, that limits the kernel to directly mapping 768MB of
physical memory. Any memory beyond that is managed as high memory, which
must be explicitly mapped before every use (and unmapped afterward).
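For kernel code, that mapping step looks roughly like the following sketch (illustrative only, not taken from any real driver); on configurations without high memory, kmap_local_page() simply returns the page's address in the linear map:
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/string.h>
#include <linux/errno.h>

static int touch_one_page(void)
{
    struct page *page = alloc_page(GFP_HIGHUSER);   // may come from high memory
    void *addr;

    if (!page)
        return -ENOMEM;

    // A highmem page has no permanent kernel mapping, so it must be
    // mapped temporarily before the kernel can touch its contents.
    addr = kmap_local_page(page);
    memset(addr, 0, PAGE_SIZE);
    kunmap_local(addr);

    __free_page(page);
    return 0;
}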
Configuration options that increase the size of kernel space, and thus the size of the linear mapping, exist, but they do so at the cost of reducing the address space available to applications. For example, the VMSPLIT_2G options support up to nearly 2GB of physical memory without using high memory, but limit the virtual address space to 2GB.
There are a number of reasons to want to get rid of high memory, Bergmann said. Embedded developers dislike it because it tends to be the source of regressions on updates and complicates the code significantly. While 32-bit CPUs are still used for embedded applications, they normally do not have large amounts of memory installed, so high memory is not particularly helpful there. The memory-management maintainers dislike high memory for the same reasons; they would also like to see it disappear before anybody starts to think that 64-bit high memory might be a good idea.
There are reasons to keep high memory too, of course. Removing it increases the chances of exposing driver bugs and can force changes to user-space code. Any 32-bit system with more than 2GB of installed memory cannot actually use that memory without the high-memory abstraction. Even smaller amounts of memory cannot be supported without reducing the size of the user-space address space as described above, which would break applications that use a lot of virtual memory. If high memory were removed from the kernel, there would be no hope of supporting 32-bit systems with more than 4GB of memory; even 2GB systems would suffer significant limitations. There would be no impact at all, instead, on systems with less than 1GB installed.
There are still 32-bit systems being made, but almost all of them have 1GB or less of installed memory. The only reason to use a 32-bit CPU in a new system, he said, is extreme cost sensitivity, so there is unlikely to be a budget for larger amounts of memory. Anything with more than 1GB is thus almost certainly an older system. The 1GB systems are relatively easy to support, except that some of them have discontiguous memory, which makes it impossible to map into the kernel's linear mapping. There are workarounds, but some of them might require user-space changes.
The 2GB case is becoming rare, he said, but people do still have these systems, so support for them may have to be maintained as long as support for 1GB systems. The 2GB VMSPLIT option may work for some, but it results in a 1.75GB virtual address space, which is too small to run Firefox. David Hildenbrand asked whether "support" means that new kernels have to work on these systems; the answer was "yes", at least for systems where users still want to be able to update.
Then, there are systems with more than 2GB of memory. These can be old (pre-2007) x86 laptops or Arm Chromebooks from 2012 or 2013. There are also evidently in-flight entertainment systems, fire-alarm systems, and digital signs that fall into this category. These are systems where the cost of the board is a tiny part of the total cost, he said, so the manufacturers go ahead and put in more memory. Jason Gunthorpe asked whether people were really upgrading these systems to current kernels; Bergmann said they were, and Gunthorpe responded that he was shocked to hear that.
There are a number of known 32-bit systems with more than 4GB of memory; Bergmann said they were all "dead systems". They include Amazon Annapurna Alpine boxes, Calxeda Midway systems, and HiSilicon HiP04 systems, among others. Evidently there are also SPARC systems with up to 2GB of memory that are still getting updates.
Proposals
So what is to be done about high memory? Bergmann said that he had held out a lot of hope for a VMSPLIT_4G option, which would separate kernel and user space entirely, giving the full 4GB to each. That would allow systems with up to nearly 4GB of physical memory to be supported without high memory and would have solved a lot of problems, but this option has never been pushed to the point where it actually works, and nobody is funding that work now. So this option will probably never happen, he said, but it also probably will not be necessary.
What may be needed is an option called "densemem", which uses some mapping trickery to close up holes in the physical address space. Densemem is needed to replace the SPARSEMEM model in any case, and it could enable support for systems with up to 2GB without high memory. This option would reduce the address space available to user space, though. It also is not working yet; Bergmann is looking for developers who want to help finish it.
Another thing that is likely to happen is "reduced-feature high memory", where high-memory support would be dropped from one subsystem at a time. That would reduce the complexity of the system, and would also reduce the impact of an eventual high-memory removal. So, for example, support might be removed for page tables, DMA buffers, filesystem metadata, and more in high memory. Other users, such as file-backed and anonymous memory mappings, would need to continue to use high memory for now.
Years ago, low memory was a relatively scarce resource, so developers were
advised to provide the __GFP_HIGHMEM flag (indicating that the
request could be satisfied from high memory) whenever possible. Bergmann
suggested changing that policy so that memory allocations would only
include __GFP_HIGHMEM where high memory is truly required. He said
that it might be possible to start phasing out 32-bit desktop use cases in
particular, though he acknowledged that such a move might be controversial.
Gunthorpe wanted to confirm that no such desktops are being made anymore; Bergmann said that is the case. The users of 32-bit desktop systems are hobbyists and people who actively seek out old hardware. Dave Hansen described those users as "a vocal minority", and suggested that they might be better off supporting a computer-history museum.
Eventually, Bergmann said, high memory could be placed behind the CONFIG_EXPERT configuration option, making it inaccessible to many users.
Hildenbrand asked what the impact on maintenance would be for the partial removals that had been proposed; Bergmann said that it was small, but it would help with the following removal stages. Gunthorpe worried about the possibility of exposing driver bugs, and that removing high-memory support from them may not be worth the trouble. He suggested targeting the areas where high memory creates a lot of complexity instead. Matthew Wilcox said that removal could break kernel code that has to map large folios; Bergmann suggested making large-folio support incompatible with high memory. Hansen said that the __GFP_HIGHMEM flag could simply be removed for large-folio allocations, but Bergmann said that would break the page cache. Gunthorpe said that, when high-memory support is removed, systems that need it for the page cache simply will not work anymore.
Bergmann went on, suggesting that a separate configuration option could be created for each high-memory user. That would allow the creation of statistics for each usage. High-memory support for the page cache, he said, would need to be retained for at least five more years.
The timeline
He concluded with a suggested timeline for the future of 32-bit support; it looked like this:
- The high-memory feature-set reduction would begin in 2026.
- Also in 2026, the VMSPLIT_2G_OPT configuration would become the default for Arm, PowerPC, and x86 systems.
- In 2027, high-memory support could be removed entirely for lesser-used architectures like Arc, Microblaze, MIPS, SPARC, and xtensa.
- With luck, 2027 will also see the addition of densemem support for Armv7 systems.
- In 203x, support for the page cache in high memory would go away.
- In 204x, support for the last 32-bit architectures would be removed.
Hansen reacted by saying that many of the removals could be done more quickly. He also suggested a configuration option to only use low memory, even on systems with high-memory support, until forced by an out-of-memory situation to use the high memory too. Will Deacon said there might yet be hope for VMSPLIT_4G, which may become more attractive as high-memory support is removed. He asked if it would be useful if somebody completed the work; Bergmann answered that it would, but whether that work will be done is unclear.
After the conference, Bergmann posted a patch series implementing parts of his 2026 timeline items.
The slides and video of this talk are available.
[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting my travel to this event.]
A visualizer for BPF program state
The BPF verifier is complicated. It needs to check every possible path that a BPF program's execution could take. The fact that its determination of whether a BPF program is safe is based on the whole lifetime of the program, instead of simple local factors, means that the cause of a verification failure is not always obvious. Ihor Solodrai and Jordan Rome gave a presentation (slides) at the 2025 Linux Plumbers Conference in Tokyo about the BPF verifier visualizer that they have been building to make diagnosing verification failures easier.
When the verifier rejects a BPF program, it produces a verification log with a mixture of different information: the exact BPF instructions executed on the failing path, calls to any kernel functions or BPF subprograms, line numbers from the debugging information in the program, and information about the contents of different registers and stack slots. This technically contains all of the information needed to understand the failure, but in an "incomprehensible" form, Solodrai said. The logs don't include information about the previous states of registers and stack slots, for example, so tracing through a log could involve remembering context from a million instructions ago, which humans cannot do.
The solution is a tool that tracks the state of a program during verification and shows it to the programmer in a useful way. Solodrai and Rome have been working on the BPF verifier visualizer, a BSD-licensed tool written in TypeScript that does just that. It is a web-based application that lets one upload a verifier log for viewing. It then presents a three-panel view, showing the reconstructed C source code, the annotated verifier log, and the current state of the BPF virtual machine (as seen below). Solodrai demonstrated how clicking on a line in the C source code or in the verifier trace would automatically highlight the corresponding line in the other pane.
The pane showing the current state of the BPF program also uses colors to indicate which registers and stack slots were read from or written to, and includes visualizations for various different kinds of data, such as scalars and values from BPF maps. The BPF verifier uses a kind of static analysis based on abstract interpretation, so a register could hold a specific value such as "4", but it could also hold "an unknown number that is a multiple of 4 between 12 and 340". The visualizer does its best to show the simplest form of the value in a register.
Clicking on a register shows all of the instructions that influenced the value in that register, so one can trace back how a particular value was obtained. Execution in a BPF subprogram (i.e., a subroutine call) is indented to show the separation from the main program. Altogether, the demonstration proved compelling, with the attendees agreeing that the tool would be helpful for debugging.
Solodrai demonstrated using the visualizer to investigate a particular bug that Andrii Nakryiko had run into — one that wasn't obvious from the C source code of the BPF program. The problem happened in a function implementing stack unwinding:
int i = 0;
bpf_for(i, 1, MAX_STACK_DEPTH) {
    ...
    // Verifier failure: "R2 unbounded memory access"
    stack[i] = frame.ret_addr;
}
The verifier was rejecting the program on reaching the marked line, even though i is kept within bounds by the call to the bpf_for() macro. Investigating the generated BPF assembly code showed the problem: the compiler was emitting the loop bounds check using register r1, and then calling a function that clobbered r1 before reloading the value for the access to stack. The verifier wasn't able to infer the connection between the check and the second load of the same value into r1, so it saw the access as using an index that had not been bounds-checked. Using the visualizer made that chain of circumstances a good deal easier to follow. The "solution" was to add a (redundant) bounds check and hope that the compiler doesn't manage to notice that the check is redundant and take it out.
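In rough form (a sketch of the approach, not the actual patch), the workaround amounts to restating the bound immediately before the access:
bpf_for(i, 1, MAX_STACK_DEPTH) {
    ...
    // Redundant check: 'i' is already bounded by bpf_for(), but
    // restating the bound right before the access gives the verifier
    // a check it can connect to the load, as long as the compiler
    // does not optimize it away.
    if (i < 0 || i >= MAX_STACK_DEPTH)
        break;
    stack[i] = frame.ret_addr;
}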
Solodrai finished the talk by discussing some of the architectural decisions that he and Rome had made to ensure the visualizer could handle large verifier traces. He also included a shout out to the project's dependencies, including Vite, react-window, LocalData, and others.
One member of the audience asked whether the visualizer could also handle debugging dumps of a BPF program. Rome answered that it could not; it relies on parsing the verifier log. Solodrai suggested that programmers could dump the verifier log for successful programs with increased verbosity, although this could be somewhat unwieldy because it would include every possible path through the program in the output.
Another person asked which kernel versions were supported. "Good question. We don't know," Solodrai answered. They're only testing against the most recent kernel version, but in practice the verifier's log format is pretty stable, so it should work for "most modern versions" of the kernel. Daniel Borkmann asked whether they had any plans to expose parts of the verifier's internal state, which might be less stable. Solodrai replied that they had discussed the idea of making some kind of binary format, but had ultimately decided against it, since that would involve creating an entirely new format for debugging information, which would be a lot of work for marginal benefit. The session wrapped up there.
[ Thanks to the Linux Foundation, LWN's travel sponsor, for enabling me to travel to Tokyo to cover the Linux Plumbers Conference. ]
What's new in systemd v259
The systemd v259 release was announced on December 17, just three months after v258. It is a more modest release, but it still includes a number of important changes, such as a new option for the run0 command (an alternative to sudo), the ability to mount user home directories from the host in virtual machines, under-the-hood changes to use dlopen() for library loading, the ability to compile systemd with musl libc, and more.
Systemd v258 was something of a mammoth release; it took more than ten months to develop and included an unusually large number of new features and changes, which we covered in two installments (part one, part two). When it was released on September 17, Lennart Poettering said the project hoped to speed up its release cycle and push out smaller, more frequent releases—so far, so good.
Empowering run0
With v259 run0 has gained a new feature to retain a user's UID/GID while taking on the capabilities of root. This feature was implemented by Daan De Meyer, who said that he was inspired to work on it "when I was playing around with bpftrace and systing and got annoyed that the files written by these tools were owned by root instead of my own user".
The feature is invoked with the --empower option. When it is used, the user can run all privileged operations (and thus any executable on the system), but any files written by processes run under "run0 --empower" will carry the UID and GID of the invoking user rather than those of root.
As a simple example, running "run0 touch filename" would run touch as root and create filename with root as the owner of the file. Using "run0 --empower touch filename" would run touch as the regular user but with all capabilities, and create filename with the user as the owner of the file rather than root.
Users can also run "run0 --empower" and get a session with root capabilities but retain their UID and GID. The --empower option can be combined with --user to run a command or start a session with the capabilities of the user specified, but retain the UID and GID of the user running run0. Note that if a tool checks for UID rather than capabilities, then --empower will not work.
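To illustrate that caveat with a hypothetical example (not anything from systemd): a tool that decides whether it is privileged by looking at its effective UID will conclude that it is not root under "run0 --empower", while a tool that checks for the capability it actually needs should find that it has it. The first check below fails in an empowered session, the second should succeed; the program uses libcap, and CAP_SYS_ADMIN is chosen purely as an example:

#include <stdio.h>
#include <unistd.h>
#include <sys/capability.h>     /* libcap; build with -lcap */

int main(void)
{
        /* UID-based check: under "run0 --empower" the invoking user's UID is
         * retained, so this does not look like root. */
        printf("euid == 0: %s\n", geteuid() == 0 ? "yes" : "no");

        /* Capability-based check: the empowered process carries root's
         * capabilities, so this should report "yes". */
        cap_t caps = cap_get_proc();
        cap_flag_value_t have = CAP_CLEAR;

        if (caps && cap_get_flag(caps, CAP_SYS_ADMIN, CAP_EFFECTIVE, &have) == 0)
                printf("CAP_SYS_ADMIN effective: %s\n",
                       have == CAP_SET ? "yes" : "no");
        cap_free(caps);
        return 0;
}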
The other caveat to the --empower feature is that a user's other processes can interact with the process being managed by run0; De Meyer acknowledges, for example, that a user could attach a debugger to a process started with "run0 --empower", and that the option "gives malicious processes a vector to infiltrate the system". He noted that the same is true of various uses of sudo, too.
Libraries
In the wake of the XZ backdoor in 2024, the systemd project began to explore using dlopen() to load libraries only when required. With v259, seven more libraries are loaded with dlopen() rather than being linked as shared libraries; systemd now only links directly against its own shared libraries, the GCC and GNU C library (glibc) runtime libraries, the password-hashing library libxcrypt (libcrypt), OpenSSL (libcrypto), and the zlib compression library (libz). Poettering said that the plan is to turn libcrypt and libcrypto into dlopen() dependencies as well in the v260 release.
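The underlying pattern is ordinary dlopen()/dlsym() lazy loading; the following is a generic, minimal illustration of that pattern (not systemd's actual code), using zlib's crc32() as a stand-in for an optional dependency:

#include <dlfcn.h>      /* dlopen(), dlsym(); link with -ldl on older glibc */
#include <stdio.h>

static void *zlib_handle;
static unsigned long (*sym_crc32)(unsigned long, const unsigned char *, unsigned int);

/* Resolve the library and the symbol only when a feature first needs them;
 * real code caches the handle and every resolved function pointer. */
static int ensure_zlib(void)
{
        if (zlib_handle)
                return 0;

        zlib_handle = dlopen("libz.so.1", RTLD_NOW);
        if (!zlib_handle) {
                fprintf(stderr, "zlib not available: %s\n", dlerror());
                return -1;
        }

        sym_crc32 = dlsym(zlib_handle, "crc32");
        return sym_crc32 ? 0 : -1;
}

int main(void)
{
        if (ensure_zlib() == 0)
                printf("crc32(\"abc\") = %lx\n",
                       sym_crc32(sym_crc32(0, NULL, 0),
                                 (const unsigned char *)"abc", 3));
        return 0;
}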
Note that the project has also added a feature to systemd-analyze to display libraries that are used with dlopen(), since they are not displayed by ldd. The command "systemd-analyze dlopen-metadata filename" will display a table containing all of the libraries accessed via dlopen(), as well as a description of the features that each library enables.
Historically, systemd only worked with glibc, but there have been requests for systemd to work with the musl libc implementation for some time. With v259, systemd now compiles with musl libc, but with a number of limitations. Poettering said that the largest limitation is the lack of Name Service Switch (NSS) support, which systemd uses to look up domain names, user names, and group names. Without that feature, he notes, a good chunk of systemd's infrastructure is gone or half-broken.
One might wonder why the project has bothered with musl libc if it results in a substandard systemd; Poettering said that it was in part to accommodate the postmarketOS project, which has adopted musl libc "and now they are stuck with it". A lot of other people have asked for it as well, but he continues to recommend "just use glibc, the pain and limitations musl brings are really not worth it".
Miscellaneous
The systemd-vmspawn command is used to start a virtual machine from an operating system image. With v259 it has a new option, --bind-user; this makes a user's home directory on the host available in the virtual machine using the virtiofs shared filesystem, and passes the user's credentials to the virtual machine as well. This makes it possible for a user to log into a virtual machine using the same account information as the host, as long as the targeted virtual machine is running systemd v258 or later. This option was made available in systemd v249 for systemd-nspawn, which is used to run containers.
In addition, there is a new --bind-user-group option that works with --bind-user for systemd-vmspawn and systemd-nspawn. As the name suggests, this specifies additional groups (such as wheel) that the user should be added to in the virtual machine or container.
Kernel-module loading via systemd-modules-load.service is now done in parallel. While kernel modules are usually auto-loaded, Poettering noted that the service is still popular "for certain commercial offerings", and parallelizing it should improve boot times quite a bit. To see how many modules this would affect on a system, run this command:
$ systemd-analyze cat-config modules-load.d
Poettering also suggested that this work may pay dividends in other ways later:
For example, in certain fixed-function usecases it might make sense to load modules via this infrastructure during boot, and then "blow a fuse" for security reasons to disallow any further kmod loading during later boot. Because of that I think this parallelization work has been worthwhile, even though I personally might not be too sympathetic to those commercial offerings I mentioned.
Removals, deprecations, and incompatible changes
With systemd v259, the control group version 2 filesystem (cgroupfs) is mounted with the memory_hugetlb_accounting option, which means that hugetlb memory usage will be counted toward a control group's overall memory usage. The systemd-networkd network manager and systemd-nspawn utility now only support creating NAT rules with nftables; support for iptables and the libiptc API have been dropped.
Support for TPM 1.2 has been removed from systemd-boot and systemd-stub; TPM 2.0 support is retained. According to the release notes, this may not be much of a loss:
The security value of TPM 1.2 support is questionable in 2025, and because we never supported it in userspace, it was always quite incomplete to the point of uselessness.
System V service scripts have had a good run, but their time is nearing an end, at least with systemd. The project plans to pull the plug on them in v260, and will be removing several components used to create or work with those scripts.
The project plans to raise the minimum required versions of a number of components with the next release; v260 will require Linux 5.10 or later (with 5.14 or later recommended), and a minimum of Python 3.9.0, libxcrypt 4.4.0, libseccomp 2.4.0, OpenSSL 3.0.0, util-linux 2.37, and glibc 2.34.
There were 70 issues closed on GitHub for the v259 release, compared with 227 for v258. The milestone tracker shows 84 open issues for v260, and three that are already closed. Most of the issues are bugs and regressions that need addressing. It will be interesting to see what develops during this cycle and if the project keeps to the three-month cadence.
Brief items
Security
Security quote of the week
When an entire class of technology states on the packaging that it was made in China but intended "for overseas use only," this should really give you pause before plugging it into your network. You will find this verbiage on a lot of Android TV streaming boxes for sale at the major retailers. There's a very good reason the country that makes this crap doesn't want it on their own networks. My advice: If you have one of these Android streaming boxes on your network or get one as a gift, toss it in the trash.
— Brian Krebs
Kernel development
Kernel release status
The current development kernel is 6.19-rc2, released on December 21. Linus said: "I obviously expect next week to be even quieter, with people being distracted by the holidays. So let's all enjoy taking a little break, but maybe break the boredom with some early rc testing?"
Stable updates: 6.18.2, 6.17.13, and 6.12.63 were released on December 18. Note that the 6.17.x series ends with 6.17.13.
A change of maintainership for linux-next
Stephen Rothwell, who has maintained the kernel's linux-next integration tree from its inception, has announced his retirement from that role:
I will be stepping down as Linux-Next maintainer on Jan 16, 2026. Mark Brown has generously volunteered to take up the challenge. He has helped in the past filling in when I have been unavailable, so hopefully knows what he is getting in to. I hope you will all treat him with the same (or better) level of respect that I have received. It has been a long but mostly interesting task and I hope it has been helpful to others. It seems a long time since I read Andrew Morton's "I have a dream" email and decided that I could help out there - little did I know what I was heading for.
Over the last two decades or so, the kernel's development process has evolved from an unorganized mess with irregular releases to a smooth machine with a new release every nine or ten weeks. That would not have happened without linux-next; thanks are due to Stephen for helping to make the current process possible.
Results from the 2025 TAB election
The 2025 election for members of the Linux Foundation Technical Advisory Board has concluded; the winners are Greg Kroah-Hartman, Steven Rostedt, Julia Lawall, David Hildenbrand, and Ted Ts'o.
Quotes of the week
Linus made an interesting observation: he enjoys doing merges in C and has become exceptionally good at it through decades of experience - he can "do them in his sleep". But he also observed that merges in Rust are more difficult as he's not familiar enough with the language. He tries to resolve them himself, then refers back to linux-next's resolution. When his resolution doesn't match, he uses it as a teaching moment.
This observation points to something fundamental about merge conflict resolution: it is the epitome of understanding code. To resolve a conflict, one must understand why the divergence occurred, what the developers on each side were trying to accomplish, and then unify the divergence in a way that makes the final code equal to or better than the sum of both parts.
LLMinus is a tool designed to support a maintainer's decision making around merge conflict resolution by learning from past merges as well as investigating into the different branches, trying to understand the underlying reason behind a conflict.
— Sasha Levin
The Linux kernel's mm system weighs in at about 200KLoC, and Lorenzo [Stoakes] wrote a book on its design that weighs in at about 1300 pages, or about 150 LoC/page. This suggests that the Linux-kernel scheduler, which weighs in at about 70KLoC and has similar heuristics/workload challenges as does mm, would require a 430-page textbook to provide a similar level of design detail. By this methodology, RCU would require "only" 190 pages, presumably substituting its unfamiliarity for sched's and mm's deeply heuristic and workload-dependent nature.
Sadly, this data does not support the hypothesis that we can create comments that will provide understanding to people taking random dives into the Linux kernel's source code. In contrast to code that is closely associated with a specific type of mechanical device, Linux-kernel code requires the reader to possess a great deal of abstract and global conceptual/workload information.
This is not to say that the Linux kernel's internal documentation (including its comments) cannot or should not be improved. They clearly should. It instead means that a necessary part of any instant-understanding methodology for the Linux kernel include active software assistance, for example, Anthropic's Claude LLM or IBM's (rather older but less readily accessible) Analysis and Renovation Catalyst (ARC). I am not denigrating other options, but rather restricting myself to tools with which I have personal experience.
— Paul McKenney
In the interest of full disclosure, this review came from a local version of my prompts that told AI to review as though it was a kernel developer who preferred vi over emacs.
— Chris Mason
Distributions
Jackson: Debian’s git transition
Ian Jackson (along with Sean Whitton) has posted a manifesto and status update to the effect that, since Git repositories have become the preferred method to distribute source, that is how Debian should be distributing its source packages.
Everyone who interacts with Debian source code should be able to do so entirely in git. That means, more specifically:
- All examination and edits to the source should be performed via normal git operations.
- Source code should be transferred and exchanged as git data, not tarballs. git should be the canonical form everywhere.
- Upstream git histories should be re-published, traceably, as part of formal git releases published by Debian.
- No-one should have to learn about Debian Source Packages, which are bizarre, and have been obsoleted by modern version control.
This is very ambitious, but we have come a long way!
Loong64 is now an official Debian architecture
John Paul Adrian Glaubitz has announced that loong64 is now an official architecture for Debian, and will be part of the Debian 14 ("forky") release "if everything goes along as planned". This is a bit more than two years after the initial bootstrap of the architecture.
So far, we have manually built and imported an initial set of 112 packages with the help of the packages in Debian Ports. This was enough to create an initial chroot and set up the first buildd which is now churning through the build queue. Over night, the currently single buildd instance already built and uploaded 300 new packages.
Elementary OS 8.1 released
Version 8.1 of elementary OS has been released. Notable changes in this release include making the Wayland session the default, changes to window management and multitasking, as well as a number of accessibility improvements. The 8.1 release is the first to be made available for Arm64 devices, which should allow users to run elementary on Apple M-series hardware or other Arm devices that can load UEFI-supporting firmware, such as some Raspberry Pi models. See the blog post for a full list of changes.
FreeBSD laptop progress
The FreeBSD Foundation has a blog post about the progress it has made in 2025 on the Laptop Support & Usability Project for FreeBSD. The foundation committed $750,000 to the project in 2025 and has made progress on graphics drivers, Wi-Fi 4 and 5 support, audio improvements, sleep states, and more.
The installer for FreeBSD has gained a couple of new features that benefit laptop users. In 15.0 the installer now supports downloading and installing firmware packages after the FreeBSD base system installation is complete. Coming in 15.1 it will be possible to install the KDE graphical desktop environment during the installation process. Grateful thanks to Bjoern Zeeb and Alfonso Siciliano respectively. [...]
The project continues into 2026 with a similar sized investment and scope. Key targets include completing work on sleep states (modern standby and hibernate), adding support for graphics drivers up to Linux 6.18, Wi-Fi 6 support, USB4 and Thunderbolt support, HDMI improvements, UVC webcam support, and Bluetooth improvements.
A substantial testing program will also start in January, aiming to test all the functionality together across a range of hardware. Community testers are very welcome to help out, the Foundation will release a blog post and send an invite to help to the Desktop mailing list some time in January 2026.
Qubes OS 4.3.0 released
Version 4.3.0 of the security-oriented Qubes OS distribution has been released. Changes include more recent distribution templates, preloaded disposable virtual machines, and the reintroduction of the Qubes Windows Tools set. See the release notes for more information.
Development
GDB 17.1 released
Version 17.1 of the GDB debugger is out. Changes include shadow-stack support, info threads improvements, a number of Python API improvements, and more, including: "Warnings and error messages now start with an emoji (warning sign, or cross mark) if supported by the host charset. Configurable." See the NEWS file for more information.
Incus 6.20 released
Version 6.20 of the Incus container and virtual-machine management system has been released. Notable changes in this release include a new standalone command to add IncusOS servers to a cluster, qcow2-formatted volumes for clustered LVM, and reverse DNS records in OVN. See the announcement for a full list of changes.
Systemd v259 released
Systemd v259 has been released. Notable changes include a new "--empower" option for run0 that provides elevated privileges to a user without switching to root, ability to propagate a user's home directory into a VM with systemd-vmspawn, and more. Support for System V service scripts has been deprecated, and will be removed in v260. See the release notes for other changes, feature removals, and deprecated features.
Development quote of the week
LibreOffice these days is sufficiently compatible with Microsoft Office documents that I can exchange edited change-tracked books with >3000 tracked changes and several hundred comments with my production editor without them blinking. (DOCX is an output format as far as Scrivener is concerned; it's an input format as far as publishing is concerned: the real work gets done in Adobe InDesign, which I'm not touching with a barge-pole.)— Charlie Stross
Page editor: Daroc Alden
Announcements
Newsletters
Distributions and system administration
Development
Calls for Presentations
CFP Deadlines: December 25, 2025 to February 23, 2026
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| December 31 | April 28-April 29 | stackconf 2026 | Munich, Germany |
| January 12 | March 28-March 29 | Chemnitz Linux Days | Chemnitz, Germany |
| January 12 | March 28 | Central Pennsylvania Open Source Conference | Lancaster, Pennsylvania, US |
| January 31 | April 10-April 11 | Grazer Linuxtage | Graz, Austria |
| February 7 | April 20-April 21 | SambaXP | Göttingen, Germany |
| February 9 | May 18-May 20 | Open Source Summit North America | Minneapolis, Minnesota, US |
| February 14 | April 23 | OpenSUSE Open Developers Summit | Prague, Czech Republic |
| February 15 | July 13-July 19 | EuroPython | Kraków, Poland |
| February 15 | April 27-April 28 | foss-north | Gothenburg, Sweden |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: December 25, 2025 to February 23, 2026
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| January 21-January 23 | Everything Open | Canberra, Australia |
| January 29-January 30 | CentOS Connect | Brussels, Belgium |
| January 31-February 1 | Free and Open source Software Developers' European Meeting | Brussels, Belgium |
| February 2 | OpenEmbedded Workshop 2026 | Brussels, Belgium |
| February 2-February 4 | Config Management Camp | Ghent, Belgium |
| February 17 | AlpOSS 2026 | Échirolles, France |
If your event does not appear here, please tell us about it.
Security updates
Alert summary December 18, 2025 to December 24, 2025
| Dist. | ID | Release | Package | Date |
|---|---|---|---|---|
| AlmaLinux | ALSA-2025:23306 | 10 | binutils | 2025-12-23 |
| AlmaLinux | ALSA-2025:23382 | 8 | binutils | 2025-12-22 |
| AlmaLinux | ALSA-2025:23343 | 9 | binutils | 2025-12-23 |
| AlmaLinux | ALSA-2025:23543 | 8 | container-tools:rhel8 | 2025-12-24 |
| AlmaLinux | ALSA-2025:23383 | 8 | curl | 2025-12-22 |
| AlmaLinux | ALSA-2025:23336 | 9 | gcc-toolset-13-binutils | 2025-12-23 |
| AlmaLinux | ALSA-2025:23667 | 10 | git-lfs | 2025-12-23 |
| AlmaLinux | ALSA-2025:23745 | 8 | git-lfs | 2025-12-22 |
| AlmaLinux | ALSA-2025:23744 | 9 | git-lfs | 2025-12-23 |
| AlmaLinux | ALSA-2025:23948 | 8 | grafana | 2025-12-24 |
| AlmaLinux | ALSA-2025:23932 | 10 | httpd | 2025-12-23 |
| AlmaLinux | ALSA-2025:23732 | 8 | httpd:2.4 | 2025-12-22 |
| AlmaLinux | ALSA-2025:22865 | 9 | kernel | 2025-12-17 |
| AlmaLinux | ALSA-2025:23201 | 10 | keylime | 2025-12-23 |
| AlmaLinux | ALSA-2025:23210 | 9 | keylime | 2025-12-17 |
| AlmaLinux | ALSA-2025:23484 | 10 | libssh | 2025-12-23 |
| AlmaLinux | ALSA-2025:23483 | 9 | libssh | 2025-12-23 |
| AlmaLinux | ALSA-2025:23738 | 10 | mod_md | 2025-12-23 |
| AlmaLinux | ALSA-2025:23739 | 9 | mod_md | 2025-12-23 |
| AlmaLinux | ALSA-2025:23111 | 9 | mysql:8.4 | 2025-12-17 |
| AlmaLinux | ALSA-2025:23479 | 10 | openssh | 2025-12-23 |
| AlmaLinux | ALSA-2025:23481 | 8 | openssh | 2025-12-22 |
| AlmaLinux | ALSA-2025:23480 | 9 | openssh | 2025-12-23 |
| AlmaLinux | ALSA-2025:23664 | 10 | opentelemetry-collector | 2025-12-23 |
| AlmaLinux | ALSA-2025:23729 | 9 | opentelemetry-collector | 2025-12-24 |
| AlmaLinux | ALSA-2025:23309 | 9 | php:8.3 | 2025-12-23 |
| AlmaLinux | ALSA-2025:23295 | 10 | podman | 2025-12-23 |
| AlmaLinux | ALSA-2025:23325 | 9 | podman | 2025-12-23 |
| AlmaLinux | ALSA-2025:23940 | 10 | python3.12 | 2025-12-23 |
| AlmaLinux | ALSA-2025:23323 | 9 | python3.12 | 2025-12-23 |
| AlmaLinux | ALSA-2025:23342 | 9 | python3.9 | 2025-12-23 |
| AlmaLinux | ALSA-2025:23530 | 8 | python39:3.9 | 2025-12-22 |
| AlmaLinux | ALSA-2025:23294 | 10 | skopeo | 2025-12-23 |
| AlmaLinux | ALSA-2025:23326 | 9 | skopeo | 2025-12-23 |
| AlmaLinux | ALSA-2025:23856 | 9 | thunderbird | 2025-12-24 |
| AlmaLinux | ALSA-2025:23050 | 10 | tomcat | 2025-12-23 |
| AlmaLinux | ALSA-2025:23049 | 9 | tomcat | 2025-12-17 |
| AlmaLinux | ALSA-2025:23052 | 10 | tomcat9 | 2025-12-23 |
| AlmaLinux | ALSA-2025:23663 | 8 | webkit2gtk3 | 2025-12-22 |
| AlmaLinux | ALSA-2025:23700 | 9 | webkit2gtk3 | 2025-12-23 |
| Debian | DSA-6084-1 | stable | c-ares | 2025-12-18 |
| Debian | DSA-6089-1 | stable | chromium | 2025-12-21 |
| Debian | DSA-6086-1 | stable | dropbear | 2025-12-19 |
| Debian | DSA-6085-1 | stable | mediawiki | 2025-12-19 |
| Debian | DSA-6088-1 | stable | php8.4 | 2025-12-21 |
| Debian | DLA-4418-1 | LTS | python-mechanize | 2025-12-22 |
| Debian | DSA-6090-1 | stable | rails | 2025-12-21 |
| Debian | DLA-4415-1 | LTS | roundcube | 2025-12-18 |
| Debian | DSA-6087-1 | stable | roundcube | 2025-12-19 |
| Debian | DLA-4417-1 | LTS | usbmuxd | 2025-12-22 |
| Debian | DLA-4414-1 | LTS | webkit2gtk | 2025-12-18 |
| Debian | DSA-6083-1 | stable | webkit2gtk | 2025-12-18 |
| Debian | DSA-6091-1 | stable | wordpress | 2025-12-21 |
| Fedora | FEDORA-2025-27f16898ba | F42 | NetworkManager | 2025-12-19 |
| Fedora | FEDORA-2025-ceeda3c40d | F43 | NetworkManager | 2025-12-18 |
| Fedora | FEDORA-2025-9e233a4e22 | F42 | brotli | 2025-12-18 |
| Fedora | FEDORA-2025-7605ca0d7d | F42 | cef | 2025-12-21 |
| Fedora | FEDORA-2025-6e776254bf | F43 | cef | 2025-12-21 |
| Fedora | FEDORA-2025-909f303a85 | F42 | checkpointctl | 2025-12-19 |
| Fedora | FEDORA-2025-ebfdef0115 | F43 | checkpointctl | 2025-12-19 |
| Fedora | FEDORA-2025-0805619c28 | F42 | chromium | 2025-12-20 |
| Fedora | FEDORA-2025-cd7567466d | F43 | chromium | 2025-12-20 |
| Fedora | FEDORA-2025-bab8cb971e | F42 | containernetworking-plugins | 2025-12-19 |
| Fedora | FEDORA-2025-294d534170 | F43 | containernetworking-plugins | 2025-12-19 |
| Fedora | FEDORA-2025-c09b980696 | F42 | cups | 2025-12-18 |
| Fedora | FEDORA-2025-58e2bb0f1e | F42 | fonttools | 2025-12-20 |
| Fedora | FEDORA-2025-36b3527937 | F42 | gobuster | 2025-12-22 |
| Fedora | FEDORA-2025-723b7f2990 | F43 | gobuster | 2025-12-22 |
| Fedora | FEDORA-2025-b8d9bd75d2 | F42 | golang-github-facebook-time | 2025-12-18 |
| Fedora | FEDORA-2025-6e8c819299 | F43 | golang-github-facebook-time | 2025-12-18 |
| Fedora | FEDORA-2025-447e38400e | F42 | gosec | 2025-12-20 |
| Fedora | FEDORA-2025-6ad9ed1275 | F43 | gosec | 2025-12-20 |
| Fedora | FEDORA-2025-b2df36b70a | F42 | mingw-glib2 | 2025-12-23 |
| Fedora | FEDORA-2025-ecdc29aa34 | F43 | mingw-glib2 | 2025-12-23 |
| Fedora | FEDORA-2025-dbd70402f4 | F42 | mingw-libpng | 2025-12-22 |
| Fedora | FEDORA-2025-da6d092209 | F43 | mingw-libpng | 2025-12-19 |
| Fedora | FEDORA-2025-6c78aad721 | F42 | mingw-libsoup | 2025-12-23 |
| Fedora | FEDORA-2025-5a82449616 | F43 | mingw-libsoup | 2025-12-23 |
| Fedora | FEDORA-2025-34626c05f6 | F42 | mingw-python3 | 2025-12-23 |
| Fedora | FEDORA-2025-883181272e | F43 | mingw-python3 | 2025-12-23 |
| Fedora | FEDORA-2025-2f6ca95a74 | F42 | moby-engine | 2025-12-22 |
| Fedora | FEDORA-2025-d39f46567c | F43 | moby-engine | 2025-12-22 |
| Fedora | FEDORA-2025-34b0986502 | F42 | mqttcli | 2025-12-20 |
| Fedora | FEDORA-2025-89758d1b13 | F43 | mqttcli | 2025-12-20 |
| Fedora | FEDORA-2025-bf07d21f3e | F43 | nebula | 2025-12-18 |
| Fedora | FEDORA-2025-519240c972 | F42 | nextcloud | 2025-12-21 |
| Fedora | FEDORA-2025-86c0829159 | F43 | nextcloud | 2025-12-21 |
| Fedora | FEDORA-2025-9e233a4e22 | F42 | perl-Alien-Brotli | 2025-12-18 |
| Fedora | FEDORA-2025-b08763f674 | F42 | pgadmin4 | 2025-12-22 |
| Fedora | FEDORA-2025-c7fd6acdf6 | F43 | pgadmin4 | 2025-12-22 |
| Fedora | FEDORA-2025-ce8a4096e7 | F42 | php | 2025-12-19 |
| Fedora | FEDORA-2025-7e9290d67f | F43 | php | 2025-12-19 |
| Fedora | FEDORA-2025-b1379d950d | F42 | python-django4.2 | 2025-12-18 |
| Fedora | FEDORA-2025-45ee190318 | F42 | python-django5 | 2025-12-18 |
| Fedora | FEDORA-2025-24dfd3b072 | F43 | python-django5 | 2025-12-18 |
| Fedora | FEDORA-2025-58e2bb0f1e | F42 | python-unicodedata2 | 2025-12-20 |
| Fedora | FEDORA-2025-7ec743931c | F42 | python3-docs | 2025-12-19 |
| Fedora | FEDORA-2025-7ec743931c | F42 | python3.13 | 2025-12-19 |
| Fedora | FEDORA-2025-bf69e91bda | F42 | uriparser | 2025-12-21 |
| Fedora | FEDORA-2025-5c12420f33 | F43 | uriparser | 2025-12-20 |
| Fedora | FEDORA-2025-fc18ab1e37 | F42 | util-linux | 2025-12-21 |
| Fedora | FEDORA-2025-107641b428 | F42 | vips | 2025-12-18 |
| Fedora | FEDORA-2025-d9707059b7 | F43 | vips | 2025-12-18 |
| Fedora | FEDORA-2025-96a708ea95 | F43 | webkitgtk | 2025-12-19 |
| Mageia | MGASA-2025-0330 | 9 | php | 2025-12-21 |
| Mageia | MGASA-2025-0332 | 9 | roundcubemail | 2025-12-23 |
| Mageia | MGASA-2025-0331 | 9 | webkit2 | 2025-12-21 |
| Oracle | ELSA-2025-23306 | OL10 | binutils | 2025-12-20 |
| Oracle | ELSA-2025-23382 | OL8 | binutils | 2025-12-19 |
| Oracle | ELSA-2025-23343 | OL9 | binutils | 2025-12-20 |
| Oracle | ELSA-2025-23383 | OL8 | curl | 2025-12-19 |
| Oracle | ELSA-2025-23336 | OL9 | gcc-toolset-13-binutils | 2025-12-20 |
| Oracle | ELSA-2025-22866 | OL7 | gimp | 2025-12-20 |
| Oracle | ELSA-2025-23667 | OL10 | git-lfs | 2025-12-20 |
| Oracle | ELSA-2025-23745 | OL8 | git-lfs | 2025-12-22 |
| Oracle | ELSA-2025-23744 | OL9 | git-lfs | 2025-12-22 |
| Oracle | ELSA-2025-23279 | OL10 | kernel | 2025-12-18 |
| Oracle | ELSA-2025-21063 | OL7 | kernel | 2025-12-18 |
| Oracle | ELSA-2025-23241 | OL9 | kernel | 2025-12-20 |
| Oracle | ELSA-2025-23201 | OL10 | keylime | 2025-12-18 |
| Oracle | ELSA-2025-23210 | OL9 | keylime | 2025-12-18 |
| Oracle | ELSA-2025-23484 | OL10 | libssh | 2025-12-18 |
| Oracle | ELSA-2025-23483 | OL9 | libssh | 2025-12-18 |
| Oracle | ELSA-2025-23738 | OL10 | mod_md | 2025-12-22 |
| Oracle | ELSA-2025-23739 | OL9 | mod_md | 2025-12-22 |
| Oracle | ELSA-2025-23479 | OL10 | openssh | 2025-12-20 |
| Oracle | ELSA-2025-23481 | OL8 | openssh | 2025-12-20 |
| Oracle | ELSA-2025-23480 | OL9 | openssh | 2025-12-20 |
| Oracle | ELSA-2025-23309 | OL9 | php:8.3 | 2025-12-20 |
| Oracle | ELSA-2025-23295 | OL10 | podman | 2025-12-20 |
| Oracle | ELSA-2025-23325 | OL9 | podman | 2025-12-20 |
| Oracle | ELSA-2025-22982 | OL7 | python-kdcproxy | 2025-12-20 |
| Oracle | ELSA-2025-23323 | OL9 | python3.12 | 2025-12-20 |
| Oracle | ELSA-2025-23342 | OL9 | python3.9 | 2025-12-20 |
| Oracle | ELSA-2025-23294 | OL10 | skopeo | 2025-12-20 |
| Oracle | ELSA-2025-23326 | OL9 | skopeo | 2025-12-20 |
| Oracle | ELSA-2025-23663 | OL8 | webkit2gtk3 | 2025-12-19 |
| Oracle | ELSA-2025-23700 | OL9 | webkit2gtk3 | 2025-12-20 |
| Red Hat | RHSA-2025:23306-01 | EL10 | binutils | 2025-12-18 |
| Red Hat | RHSA-2025:23405-01 | EL10.0 | binutils | 2025-12-18 |
| Red Hat | RHSA-2025:23382-01 | EL8 | binutils | 2025-12-18 |
| Red Hat | RHSA-2025:23343-01 | EL9 | binutils | 2025-12-18 |
| Red Hat | RHSA-2025:23232-01 | EL9.0 | binutils | 2025-12-18 |
| Red Hat | RHSA-2025:23233-01 | EL9.2 | binutils | 2025-12-18 |
| Red Hat | RHSA-2025:23400-01 | EL9.6 | binutils | 2025-12-18 |
| Red Hat | RHSA-2025:22012-01 | EL10 | buildah | 2025-12-18 |
| Red Hat | RHSA-2025:22011-01 | EL9 | buildah | 2025-12-18 |
| Red Hat | RHSA-2025:21964-01 | EL9.6 | buildah | 2025-12-18 |
| Red Hat | RHSA-2025:23383-01 | EL8 | curl | 2025-12-18 |
| Red Hat | RHSA-2025:23126-01 | EL9.0 | curl | 2025-12-18 |
| Red Hat | RHSA-2025:23127-01 | EL9.2 | curl | 2025-12-18 |
| Red Hat | RHSA-2025:23125-01 | EL9.4 | curl | 2025-12-18 |
| Red Hat | RHSA-2025:23043-01 | EL9.6 | curl | 2025-12-18 |
| Red Hat | RHSA-2025:22668-01 | EL8 | go-toolset:rhel8 | 2025-12-18 |
| Red Hat | RHSA-2025:21779-01 | EL10.0 | golang | 2025-12-18 |
| Red Hat | RHSA-2025:22899-01 | EL9.0 | golang | 2025-12-18 |
| Red Hat | RHSA-2025:22181-01 | EL9.2 | golang | 2025-12-18 |
| Red Hat | RHSA-2025:21856-01 | EL9.4 | golang | 2025-12-18 |
| Red Hat | RHSA-2025:21778-01 | EL9.6 | golang | 2025-12-18 |
| Red Hat | RHSA-2025:23088-01 | EL10 | grafana | 2025-12-18 |
| Red Hat | RHSA-2025:23001-01 | EL10.0 | grafana | 2025-12-18 |
| Red Hat | RHSA-2025:23087-01 | EL9 | grafana | 2025-12-18 |
| Red Hat | RHSA-2025:23002-01 | EL9.6 | grafana | 2025-12-18 |
| Red Hat | RHSA-2025:23789-01 | EL9.6 | kernel | 2025-12-24 |
| Red Hat | RHSA-2025:21816-01 | EL10 | multiple packages | 2025-12-18 |
| Red Hat | RHSA-2025:21815-01 | EL9 | multiple packages | 2025-12-18 |
| Red Hat | RHSA-2025:23309-01 | EL9 | php:8.3 | 2025-12-18 |
| Red Hat | RHSA-2025:23295-01 | EL10 | podman | 2025-12-18 |
| Red Hat | RHSA-2025:23347-01 | EL10.0 | podman | 2025-12-18 |
| Red Hat | RHSA-2025:23325-01 | EL9 | podman | 2025-12-18 |
| Red Hat | RHSA-2025:22030-01 | EL9.6 | podman | 2025-12-18 |
| Red Hat | RHSA-2025:23323-01 | EL9 | python3.12 | 2025-12-18 |
| Red Hat | RHSA-2025:23530-01 | EL8 | python39:3.9 | 2025-12-18 |
| Red Hat | RHSA-2025:23416-01 | EL6 | rsync | 2025-12-22 |
| Red Hat | RHSA-2025:23415-01 | EL7 | rsync | 2025-12-22 |
| Red Hat | RHSA-2025:23842-01 | EL8.2 | rsync | 2025-12-22 |
| Red Hat | RHSA-2025:23853-01 | EL8.4 | rsync | 2025-12-22 |
| Red Hat | RHSA-2025:23854-01 | EL8.6 | rsync | 2025-12-22 |
| Red Hat | RHSA-2025:23858-01 | EL8.8 | rsync | 2025-12-22 |
| Red Hat | RHSA-2025:23407-01 | EL9.0 | rsync | 2025-12-22 |
| Red Hat | RHSA-2025:23140-01 | EL9.4 | ruby:3.3 | 2025-12-18 |
| Red Hat | RHSA-2025:23648-01 | EL9.6 | ruby:3.3 | 2025-12-18 |
| Red Hat | RHSA-2025:23294-01 | EL10 | skopeo | 2025-12-18 |
| Red Hat | RHSA-2025:23348-01 | EL10.0 | skopeo | 2025-12-18 |
| Red Hat | RHSA-2025:23394-01 | EL9.6 | skopeo | 2025-12-18 |
| Slackware | SSA:2025-353-01 | | php | 2025-12-19 |
| SUSE | SUSE-SU-2025:4429-1 | SLE12 | ImageMagick | 2025-12-17 |
| SUSE | SUSE-SU-2025:4427-1 | SLE15 | ImageMagick | 2025-12-17 |
| SUSE | SUSE-SU-2025:4428-1 | SLE15 SES7.1 | ImageMagick | 2025-12-17 |
| SUSE | SUSE-SU-2025:21211-1 | SLE16 | ImageMagick | 2025-12-18 |
| SUSE | openSUSE-SU-2025:15830-1 | TW | alloy | 2025-12-20 |
| SUSE | SUSE-SU-2025:4488-1 | SLE12 | apache2 | 2025-12-19 |
| SUSE | SUSE-SU-2025:4421-1 | SLE15 oS15.5 oS15.6 | buildah | 2025-12-17 |
| SUSE | openSUSE-SU-2025:15834-1 | TW | busybox | 2025-12-21 |
| SUSE | openSUSE-SU-2025:20177-1 | oS16.0 | cheat | 2025-12-23 |
| SUSE | openSUSE-SU-2025:15831-1 | TW | chromedriver | 2025-12-20 |
| SUSE | openSUSE-SU-2025:0475-1 | osB15 | chromium | 2025-12-19 |
| SUSE | openSUSE-SU-2025:0476-1 | osB15 | chromium | 2025-12-19 |
| SUSE | openSUSE-SU-2025:15823-1 | TW | clair | 2025-12-18 |
| SUSE | SUSE-SU-2025:4483-1 | SLE12 | colord | 2025-12-18 |
| SUSE | openSUSE-SU-2025:15825-1 | TW | coredns-for-k8s | 2025-12-19 |
| SUSE | openSUSE-SU-2025:15826-1 | TW | coredns-for-k8s | 2025-12-19 |
| SUSE | SUSE-SU-2025:4425-1 | SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.6 | cups | 2025-12-17 |
| SUSE | openSUSE-SU-2025:15835-1 | TW | duc | 2025-12-21 |
| SUSE | SUSE-SU-2025:4424-1 | SLE15 SES7.1 oS15.6 | firefox | 2025-12-17 |
| SUSE | openSUSE-SU-2025:15833-1 | TW | firefox | 2025-12-21 |
| SUSE | openSUSE-SU-2025:0474-1 | osB15 | flannel | 2025-12-18 |
| SUSE | SUSE-SU-2025:4504-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 | glib2 | 2025-12-23 |
| SUSE | SUSE-SU-2025:4441-1 | SLE12 | glib2 | 2025-12-18 |
| SUSE | SUSE-SU-2025:4442-1 | SLE15 SLE-m5.2 SES7.1 | glib2 | 2025-12-18 |
| SUSE | SUSE-SU-2025:21222-1 | SLE-m6.1 | gnutls | 2025-12-18 |
| SUSE | SUSE-SU-2025:4481-1 | MP4.3 SLE15 oS15.3 oS15.4 oS15.5 oS15.6 | golang-github-prometheus-alertmanager | 2025-12-18 |
| SUSE | SUSE-SU-2025:4444-1 | | grafana | 2025-12-18 |
| SUSE | SUSE-SU-2025:4457-1 | SLE12 | grafana | 2025-12-18 |
| SUSE | SUSE-SU-2025:4446-1 | SLE15 | grafana | 2025-12-18 |
| SUSE | SUSE-SU-2025:4482-1 | SLE15 oS15.6 | grafana | 2025-12-18 |
| SUSE | SUSE-SU-2025:21223-1 | SLE-m6.2 | grub2 | 2025-12-18 |
| SUSE | SUSE-SU-2025:21212-1 | SLE16 | grub2 | 2025-12-18 |
| SUSE | openSUSE-SU-2025:20163-1 | oS16.0 | grub2 | 2025-12-17 |
| SUSE | SUSE-SU-2025:21221-1 | SLE-m6.1 | helm | 2025-12-18 |
| SUSE | SUSE-SU-2025:4437-1 | SLE15 SLE-m5.5 oS15.6 | helm | 2025-12-17 |
| SUSE | openSUSE-SU-2025:0473-1 | osB15 | icinga-php-library, icingaweb2 | 2025-12-18 |
| SUSE | openSUSE-SU-2025:20162-1 | oS16.0 | imagemagick | 2025-12-17 |
| SUSE | SUSE-SU-2025:4507-1 | SLE11 | kernel | 2025-12-23 |
| SUSE | SUSE-SU-2025:4506-1 | SLE15 SLE-m5.5 oS15.5 | kernel | 2025-12-23 |
| SUSE | SUSE-SU-2025:4422-1 | SLE15 oS15.6 | kernel | 2025-12-17 |
| SUSE | SUSE-SU-2025:4505-1 | SLE15 oS15.6 | kernel | 2025-12-23 |
| SUSE | openSUSE-SU-2025:15836-1 | TW | kernel-devel | 2025-12-21 |
| SUSE | SUSE-SU-2025:4432-1 | SLE15 oS15.6 | libpng12 | 2025-12-17 |
| SUSE | SUSE-SU-2025:4436-1 | MP4.3 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 | libpng16 | 2025-12-17 |
| SUSE | SUSE-SU-2025:21217-1 | SLE-m6.0 | libpng16 | 2025-12-18 |
| SUSE | SUSE-SU-2025:21220-1 | SLE-m6.1 | libpng16 | 2025-12-18 |
| SUSE | SUSE-SU-2025:4494-1 | SLE15 oS15.6 | libpng16 | 2025-12-19 |
| SUSE | openSUSE-SU-2025:15828-1 | TW | libruby3_4-3_4 | 2025-12-19 |
| SUSE | SUSE-SU-2025:4514-1 | MP4.3 SLE15 oS15.4 | libsoup | 2025-12-23 |
| SUSE | SUSE-SU-2025:4493-1 | MP4.3 SLE15 oS15.4 | mariadb | 2025-12-19 |
| SUSE | SUSE-SU-2025:4438-1 | SLE15 | mariadb | 2025-12-17 |
| SUSE | SUSE-SU-2025:4491-1 | SLE15 SES7.1 oS15.3 | mariadb | 2025-12-19 |
| SUSE | SUSE-SU-2025:4502-1 | SLE15 oS15.6 | mariadb | 2025-12-22 |
| SUSE | openSUSE-SU-2025:20175-1 | oS16.0 | mariadb | 2025-12-23 |
| SUSE | SUSE-SU-2025:4512-1 | SLE15 oS15.6 | mozjs52 | 2025-12-23 |
| SUSE | SUSE-SU-2025:4489-1 | SLE15 oS15.6 | netty | 2025-12-19 |
| SUSE | openSUSE-SU-2025:15824-1 | TW | netty | 2025-12-18 |
| SUSE | SUSE-SU-2025:21224-1 | SLE-m6.2 | openssl-3 | 2025-12-18 |
| SUSE | SUSE-SU-2025:21213-1 | SLE16 | openssl-3 | 2025-12-18 |
| SUSE | openSUSE-SU-2025:20164-1 | oS16.0 | openssl-3 | 2025-12-17 |
| SUSE | openSUSE-SU-2025:15837-1 | TW | php8 | 2025-12-21 |
| SUSE | SUSE-SU-2025:4439-1 | SLE15 | poppler | 2025-12-17 |
| SUSE | SUSE-SU-2025:4434-1 | SLE15 oS15.6 | poppler | 2025-12-17 |
| SUSE | SUSE-SU-2025:4486-1 | SLE15 SES7.1 | postgresql13 | 2025-12-18 |
| SUSE | SUSE-SU-2025:4485-1 | MP4.3 SLE15 SES7.1 | postgresql14 | 2025-12-18 |
| SUSE | SUSE-SU-2025:4484-1 | SLE15 oS15.6 | postgresql15 | 2025-12-18 |
| SUSE | openSUSE-SU-2025:15839-1 | TW | python310 | 2025-12-23 |
| SUSE | openSUSE-SU-2025:15838-1 | TW | python311-tornado6 | 2025-12-21 |
| SUSE | openSUSE-SU-2025:15840-1 | TW | python315 | 2025-12-23 |
| SUSE | SUSE-SU-2025:4487-1 | SLE12 | python36 | 2025-12-18 |
| SUSE | SUSE-SU-2025:4433-1 | oS15.3 oS15.6 | python39 | 2025-12-17 |
| SUSE | SUSE-SU-2025:21230-1 | SLE-m6.2 | qemu | 2025-12-22 |
| SUSE | SUSE-SU-2025:21233-1 | SLE16 | qemu | 2025-12-23 |
| SUSE | SUSE-SU-2025:4511-1 | SLE15 oS15.6 | rsync | 2025-12-23 |
| SUSE | openSUSE-SU-2025:15827-1 | TW | rsync | 2025-12-19 |
| SUSE | SUSE-SU-2025:4476-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 | salt | 2025-12-18 |
| SUSE | SUSE-SU-2025:21227-1 | SLE-m6 SLE-m6.0 SLE-m6.1 SLE-m6.2 | salt | 2025-12-18 |
| SUSE | SUSE-SU-2025:21216-1 | SLE-m6.0 | salt | 2025-12-18 |
| SUSE | SUSE-SU-2025:21218-1 | SLE-m6.1 | salt | 2025-12-18 |
| SUSE | SUSE-SU-2025:4478-1 | SLE15 | salt | 2025-12-18 |
| SUSE | SUSE-SU-2025:4475-1 | SLE15 SLE-m5.2 SES7.1 oS15.3 | salt | 2025-12-18 |
| SUSE | SUSE-SU-2025:4477-1 | SLE15 SLE-m5.5 oS15.5 oS15.6 | salt | 2025-12-18 |
| SUSE | SUSE-SU-2025:4501-1 | SLE15 oS15.6 | taglib | 2025-12-22 |
| SUSE | SUSE-SU-2025:4435-1 | SLE12 | usbmuxd | 2025-12-17 |
| SUSE | SUSE-SU-2025:4458-1 | SLE15 SLE-m5.0 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.3 oS15.4 oS15.5 oS15.6 | uyuni-tools | 2025-12-18 |
| SUSE | SUSE-SU-202511:15318-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-202511:15317-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-202511:15319-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4451-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4474-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4445-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4450-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4449-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4448-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4453-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4452-1 | | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4471-1 | Debian 12 | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4467-1 | MP4.3 SLE15 SLE-m5.0 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.3 oS15.4 oS15.5 oS15.6 | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4466-1 | SLE12 | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4447-1 | SLE15 | venv-salt-minion | 2025-12-18 |
| SUSE | SUSE-SU-2025:4423-1 | SLE12 | webkit2gtk3 | 2025-12-17 |
| SUSE | SUSE-SU-2025:4440-1 | SLE15 oS15.6 | wireshark | 2025-12-17 |
| SUSE | SUSE-SU-2025:4490-1 | SLE-m5.5 oS15.5 | xen | 2025-12-19 |
| SUSE | SUSE-SU-2025:4426-1 | SLE15 oS15.6 | xkbcomp | 2025-12-17 |
| SUSE | openSUSE-SU-2025:15841-1 | TW | zk | 2025-12-23 |
| Ubuntu | USN-7940-1 | 24.04 | linux-azure-fips | 2025-12-17 |
| Ubuntu | USN-7922-3 | 18.04 | linux-oracle-5.4 | 2025-12-19 |
| Ubuntu | USN-7928-4 | 22.04 | linux-raspi | 2025-12-19 |
| Ubuntu | USN-7921-2 | 24.04 | linux-realtime-6.14 | 2025-12-19 |
| Ubuntu | USN-7931-4 | 24.04 | linux-xilinx | 2025-12-19 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Joe Brockmeier
