LWN.net Weekly Edition for March 12, 2026
Welcome to the LWN.net Weekly Edition for March 12, 2026
This edition contains the following feature content:
- The relicensing of chardet: an AI-assisted rewrite and relicensing of a Python module has raised a number of questions.
- California's Digital Age Assurance Act and Linux distributions: an age-verification law seems to require urgent action by operating-system providers.
- Debian decides not to decide on AI-generated contributions: after much debate, the project shelves (for now) a General Resolution on allowing AI-assisted contributions.
- Disabling Python's lazy imports from the command line: a discussion about the API used to control lazy imports.
- Inspecting and modifying Python types during type checking: a look at a PEP that would add new capabilities to Python's type system.
- HTTPS certificates in the age of quantum computing: an IETF working group investigates how to reduce the overhead of large post-quantum signatures.
- Reconsidering the multi-generational LRU: memory-management developers explore improving, or removing, the multi-generational LRU algorithm.
- Fedora shares strategy updates and "weird research university" model: an update from the Fedora Project Leader and council on recent strategy meetings.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
The relicensing of chardet
Chardet is a Python module that attempts to determine which character set was used to encode a text string. It was originally written by Mark Pilgrim, who is also the author of a number of Python books; the 1.0 release happened in 2006. For many years, the module has been maintained by Dan Blanchard. Chardet has always been licensed under the LGPL but, with the 7.0.0 release, Blanchard changed the terms to the permissive MIT license. That has led to an extensive (and ongoing) discussion on when code can be relicensed against the wishes of its original author, and whether using a large language model to rewrite code is a legitimate way to strip copyleft requirements from it.

The fact that chardet is LGPL-licensed has indeed caused some unhappiness in the past. That license is incompatible with the requirements for the Python standard library, frustrating those who would like to see chardet become one of the "batteries" that are included with Python; the licensing has also blocked the inclusion of some other modules that use chardet. Blanchard bemoaned his inability to relicense the code back in 2021:
Unfortunately, because the code that chardet was originally based on was LGPL, we don't really have a way to relicense it. Believe me, if we could, I would. There was talk of chardet being added to the standard library, and that was deemed impossible because of being unable to change the license.
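For readers unfamiliar with the module, the task chardet performs can be illustrated with a toy sketch. This naive trial-decode approach is not chardet's actual algorithm (chardet builds statistical models of byte distributions); it is only a minimal illustration of the problem, and the function name is invented for this example:

```python
# Toy illustration of the problem chardet solves: given raw bytes of
# unknown encoding, guess which character set decodes them sensibly.
# NOT chardet's algorithm; a naive trial-decode sketch only.

def guess_encoding(data: bytes,
                   candidates=("ascii", "utf-8", "utf-16", "latin-1")):
    """Return the first candidate encoding that decodes without error."""
    for enc in candidates:
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

# "naïve" encoded as UTF-8 is not valid ASCII, so utf-8 is reported.
print(guess_encoding("naïve".encode("utf-8")))  # utf-8
```

Note that latin-1 accepts any byte sequence, so it acts as a catch-all here; real detectors like chardet instead score candidate encodings by how statistically plausible the decoded text is, which is why they also report a confidence value.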
In 2026, though, that inability has, according to Blanchard, been overcome by virtue of a complete rewrite — done using Anthropic's Claude LLM — of the source. Pilgrim did not see it that way:
However, it has been brought to my attention that, in the release 7.0.0, the maintainers claim to have the right to "relicense" the project. They have no such right; doing so is an explicit violation of the LGPL. Licensed code, when modified, must be released under the same LGPL license. Their claim that it is a "complete rewrite" is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a "clean room" implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights.
Blanchard, unsurprisingly, disagreed.
A clean-room reimplementation, he said, "is a means to an end, not the end itself"; there are other ways to reach that end, including an LLM rewrite. He pointed to results from a code-comparison tool showing that there was almost no similarity between version 7.0 and the previous versions, and concluded:
I then started in an empty repository with no access to the old source tree, and explicitly instructed Claude not to base anything on LGPL/GPL-licensed code. I then reviewed, tested, and iterated on every piece of the result using Claude. You can see the history of all the design and implementation plans that were used to create 7.0.0 here. I did not write the code by hand, but I was deeply involved in designing, reviewing, and iterating on every aspect of it.

I understand this is a new and uncomfortable area, and that using AI tools in the rewrite of a long-standing open source project raises legitimate questions. But the evidence here is clear: 7.0 is an independent work, not a derivative of the LGPL-licensed codebase. The MIT license applies to it legitimately.
Simon Willison has observed, though, that the LLM did indeed access the LGPL-licensed source at one point. Beyond that, as others have pointed out, it is easy to ask an LLM to reimplement a body of code in a style different from the original, with the result that similarity checkers will see something entirely new. That does not necessarily break the derived-work link, though. Had an LLM been employed to translate chardet to, say, Lisp, the level of similarity would be quite low, but most would agree that the new code was derived from the original. The fact that the training corpus for Claude surely included all previous versions of chardet also muddies the picture.
A lot of people who are not lawyers have offered opinions on whether chardet 7.0 is derived from previous versions. I, too, am not a lawyer, and will not add to that pile. But it is worth saying that, if instructing an LLM to rewrite an existing body of code is sufficient to strip copyleft requirements from that code, then the future of copyleft looks even dimmer than it did before. But, then, the future of any sort of software licensing scheme could be threatened. The death of copyleft could, ironically, be part of its real goal: the end of copyright.
Meanwhile, of course, had Blanchard simply shown up with a new Python module, let's call it "detectchar", that implemented the same API as chardet, the overall level of eyebrow elevation would have been considerably lower. Replacing the existing code, under the same name but with a different license, drew a lot more attention to this move than it would have otherwise attracted.
Nobody involved in the current discussion is showing any sign of backing down. That means the license change seems likely to stand, unless Pilgrim decides to bring in real lawyers, which would be an expensive and uncertain prospect at best. But if the change stands, it would not be surprising to see a lot more people engaging in this sort of license-stripping exercise. That may eventually lead to a court decision (or, more likely, a series of conflicting decisions) on whether an LLM can be used to launder source code in this way. The old Chinese curse — may you live in interesting times — would certainly appear to be upon us.
California's Digital Age Assurance Act and Linux distributions
A recently enacted law in California imposes an age-verification requirement on operating-system providers beginning next year. The language of the Digital Age Assurance Act does not restrict its requirements to proprietary or commercial operating systems; projects like Debian, FreeBSD, Fedora, and others seem to be on the hook just as much as Apple or Microsoft. There is some hope that the law will be amended, but there is no guarantee that it will be. This means that the developer communities behind Linux distributions are having to discuss whether and how to comply with the law with little time and even less legal guidance.
The law requires operating-system providers to provide a form of age verification that can be queried by any web site, application, or online service "that distributes and facilitates the download of applications from third-party developers" for computers, mobile devices, or other general-purpose computing devices. The law goes into effect on January 1, 2027, which leaves less than ten months for distributions to determine if the law applies to them and then implement a solution if it does.
The law was introduced in February 2025 and passed into law in October 2025. Unlike other legislation, such as the European Union's Cyber Resilience Act (CRA), it seems to have slipped in under the radar without raising any real protest from the open-source projects it affects. It gathered widespread attention in the Linux community after Aaron Rainbolt started a discussion about the new law by cross-posting a message about "the unfortunate need for an 'age verification' API" to Debian, Fedora, and Ubuntu mailing lists on March 1. He provided a pointer to the California law as well as a similar bill that is working its way through the Colorado legislature.
Requirements
The bill is short and, unfortunately, leaves a great deal unspecified. The preamble (digest) for the bill explains that existing California law, such as the Age-Appropriate Design Code Act, requires businesses that provide online services which are likely to be accessed by children to estimate the age of their users. This is in order to apply privacy and data protection, as well as to prohibit the use of "dark patterns to lead or encourage children" to provide more personal information than necessary, or to forgo privacy protections. One might wonder why the state of California wouldn't extend such courtesies to all users.
In order for businesses to comply with the Age-Appropriate Design Code Act and other laws, the Digital Age Assurance Act compels operating-system providers to "provide an accessible interface at account setup that requires an account holder [...] to indicate the birth date, age, or both, of the user of that device". This is to allow third parties to query the user's age bracket to determine, for example, if they are old enough to access certain content or applications. Requiring online platforms to perform age verification, though, means that they are handling personally identifiable information (PII); the law is positioned as a privacy-friendly alternative that keeps those services from collecting or retaining PII beyond what is necessary to provide the service. For example, rather than a site asking the user for their birthday to ascertain their age, the site is supposed to send a query using an API asking for the age bracket of the user instead. So the site does not collect data that indicates that user "CAGamerPerson7" was born on June 7, 2014; it only gets a signal attesting that the user is in a certain age bracket.
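The bracket-signal idea can be sketched in a few lines. The bracket boundaries below (under 13 / 13-15 / 16-17 / 18 and over) are an assumption for illustration, not language taken from the statute, and the function name is invented:

```python
from datetime import date

# Hypothetical sketch of the "age bracket" signal the law describes:
# the OS stores a self-attested birth date and hands applications only
# a coarse bracket, never the date itself. Bracket cutoffs here are an
# assumption for illustration, not taken from the bill text.

def age_bracket(birth_date: date, today: date) -> str:
    years = today.year - birth_date.year
    # Subtract one if this year's birthday has not happened yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    if years < 13:
        return "under-13"
    if years < 16:
        return "13-15"
    if years < 18:
        return "16-17"
    return "18-or-over"

# The hypothetical user born June 7, 2014 is 12 when the law takes effect.
print(age_bracket(date(2014, 6, 7), date(2027, 1, 1)))  # under-13
```

The privacy claim rests entirely on the output being this coarse: a querying service learns only the bracket string, nothing that could reconstruct the birth date.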
In the US, various state legislatures have been passing laws that require sites to verify the age of people who attempt to access "adult" content, or that put age restrictions on social-media platforms like TikTok and Instagram. For example, a number of states now require adult web sites to verify a user's age. This is usually done by asking the user to show government-issued ID or to use a third-party service that verifies age (also, usually, by reviewing the person's ID). Several states also have either banned minors from having social-media accounts, or require parental consent to have an account. There are also laws that try to make online services "safer" for children, such as California's SB-976 ("Protecting Our Kids from Social Media Addiction Act"). That law, passed in 2024, makes it "unlawful for the operator of an addictive internet-based service or application [...] to provide an addictive feed to a user, unless the operator does not have actual knowledge that the user is a minor".
There is more age-verification legislation on the horizon. User "aaronsb" on Reddit dug into the age-verification bills being introduced in the US. California's law is a version that is being pushed by Common Sense Media, a nonprofit organization advocating for laws it says will "hold tech accountable, and put children's well-being at the center of the digital world". Another version, called the "App Store Accountability Act", has been introduced in many other states. It is being pushed by a group called the Digital Childhood Alliance. According to aaronsb, the purpose of that legislation is to shift age verification from providers like Meta or Epic Games to the app stores. That legislation does not appear to impact providers of open-source operating systems.
The methods of age verification have been a nightmare for users who care about privacy. The implementations have often required users to provide legal ID to a third party in order to prove their age; these providers are ripe targets for attackers, and a number of them have already exposed that information via data breaches of one form or another. California's law, at least, allows the user to self-supply their age range without sharing data with a third party; we can breathe easy knowing that no 13-year-old would ever fib about their age in order to access "forbidden" content.
The push for age-verification laws has not been restricted to the US, of course. In 2023, France passed a law requiring age verification for minors using social media, and the UK enacted the Online Safety Act. Australia passed the Online Safety Amendment in 2024. No doubt there are many more that have either passed or are under consideration.
The California law is overbroad and makes no exceptions for open-source operating systems. It defines an operating-system provider as "a person or entity that develops, licenses, or controls the operating system software on a computer, mobile device, or any other general purpose computing device". The penalty for non-compliance is $2,500 "per affected child for each negligent violation", but not more than $7,500 per child. That seems to leave the door open for any operating-system provider, including projects like Debian or Fedora, to be sued by the state if the distributions do not have a mechanism to comply with this by next year.
Distribution discussion
Rainbolt said that, since operating-system providers will need to provide an API for age verification, he was looking into implementing one for the Kicksecure and Whonix distributions. He threw out a few ideas about how to implement the required functionality, such as using the D-Bus service AccountsService. However, this would pose a problem for long-term-support (LTS) distributions; California's law requires that the age-attestation interface be available even on systems that were installed, with accounts set up, before January 1, 2027, as long as the device is still getting updates. That means the requirement would apply to some fairly old Linux releases. To account for it, he suggested that distributions take a hybrid approach by introducing a new D-Bus interface, "org.freedesktop.AgeVerification1", that could be implemented in AccountsService or via another application as a stop-gap solution.
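No specification for such an interface has been published; purely as a sketch, a D-Bus introspection description for a service of this kind might look something like the following. The method and argument names are invented for illustration; only the interface name comes from Rainbolt's suggestion:

```xml
<!-- Hypothetical sketch only: no such interface has been specified.
     Method and argument names below are invented for illustration. -->
<node>
  <interface name="org.freedesktop.AgeVerification1">
    <!-- Returns a coarse bracket string such as "under-13" or
         "18-or-over" for the given user; never a birth date. -->
    <method name="GetAgeBracket">
      <arg name="uid" type="u" direction="in"/>
      <arg name="bracket" type="s" direction="out"/>
    </method>
  </interface>
</node>
```

Exposing only a bracket string over the bus would match the law's stated goal of keeping the underlying birth date on the device.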
There was some discussion about the details of how age attestation could be implemented to comply with the California law as well as age-attestation requirements in other jurisdictions. One might think that California's law would provide more details about implementation, but it does not. It simply specifies that an operating-system provider must provide developers with "a signal with respect to a particular user with a digital signal via a reasonably consistent real-time application programming interface that identifies" the user's age bracket.
Danielle Foré, founder and CEO of elementary OS, weighed in with some ideas and pointers to documentation of Apple's Declared Age Range API. In a private message, she said that the implementation being discussed would be modeled after that API:
It's entirely on-device, self-attested, and does a decent job providing the least information to developers we possibly can while still following the law to the best of our understanding.
I think the general consensus among folks participating here is that we don't think age declaration is the best way to empower parents and we all are very interested in asking for as little data as possible, storing it on your device only, and giving only the bare minimum data as required by law to app developers. It's being discussed begrudgingly. Nobody is eager about this and we're all hoping the laws get overturned before the implementation deadlines
Legal analysis
A number of people active in the discussion Rainbolt started said either that they thought the law did not apply to open-source operating systems, or suggested that the projects should ignore the law. For example, attorney Vincent F. Heuser Jr. said he doubted that California "can actually succeed in applying the law to Debian, Ubuntu" and others. Debian developer Soren Stoutner said that distributions should "do nothing towards implementing this dangerous legislation", as he expected it would be overturned or unenforced. There has also been much discussion on Fedora's Discourse forum and elsewhere, but there is something of a vacuum when it comes to official legal guidance. I reached out to a number of legal experts and organizations that might be well-positioned to comment. The Software Freedom Conservancy (SFC) and the Electronic Frontier Foundation (EFF) responded.
Bradley Kühn, policy fellow and hacker-in-residence for the SFC, replied with some observations about the law, including the fact that Governor Gavin Newsom had included a signing statement that urged the legislature to amend the law to address some concerns expressed by video-game developers and streaming services. That might be an opportunity to also exempt open-source operating systems. Kühn said that it was "not a disaster for FOSS" even if it did go into effect as written: "DRM, vendor-restricted boot, other copyleft-violating technologies are not required for implementation". He added that the SFC is only focused on the impact on FOSS itself and copyleft licensing, as that is its area of expertise. The SFC is still analyzing the bill, and he said that it would likely issue a comprehensive statement in about a month.
Samantha Baldwin, a policy and research staff technologist on EFF's Public Interest Technology team, said that the bills were "technologically ignorant". The only carve-out in the bill is for broadband internet access services, "which we suppose is meant to exempt routers and modems from needing to implement age bracketing". The EFF released a statement in March 2025 detailing its concerns about the bill at the time, including worries about platforms censoring protected speech as well as the impact of age-verification laws on all users from a privacy and security standpoint. The EFF does hold that the law is enforceable for operating systems produced by FOSS projects:
The bill drafters seem to only be thinking about general purpose operating systems from corporate vendors, but almost any digital device runs an operating system of some kind. It is completely nonsensical technically. It is not feasible to have your headphones, your insulin pump, your ebike, your oven, your kerosene powered cheese grater implement age bracketing, yet all of these run operating systems.
These bills strike at the heart of digital liberty, at our ability to have control of our own devices. They seek to restrict our ability to run open platforms composed of software that is both free as in speech and as in beer.
Nothing in the bill language exempts noncommercial projects, meaning open source research operating systems like the BSDs, Plan 9, OpenSolaris, etc. are all affected.
These laws should be challenged on their constitutionality in court.
Status
How Linux distributions and other open operating systems will choose to react or implement this is still largely up in the air. MidnightBSD has declared on its download page that residents of countries, states, or territories that require age verification "are not authorized to use" the operating system. Fedora Project Leader Jef Spaleta noted that the age-verification law was "fully in the realm of requiring legal advice". Jon Seager, VP of engineering for Canonical, said that the company is aware of the legislation and is reviewing it with legal counsel. "There are currently no concrete plans on how, or even whether, Ubuntu will change in response".
System76, which is based in Colorado, produces the Ubuntu-based Pop!_OS distribution. Its CEO, Carl Richell, said that he has met with Colorado senator Matt Ball, who is the co-author of that state's age-attestation bill. He said that Ball suggested excluding open-source software from that bill, which "appears to be a real possibility". In addition, he expected there would be amendments to California's law.
It's my hope we can move fast enough to influence excluding open source in the CA bill amendments.
No illusions, it's an uphill battle, but we have an open door to advocate for the open source community.
If we are lucky, open-source operating systems will be exempted before California's law goes into effect and before Colorado's bill is passed into law (if it is). That does not mean that such laws are actually good policy, however, just that open-source projects won't bear the brunt of having to implement functionality to be compliant with bad policy. At best, the Digital Age Assurance Act seems to be a futile attempt at "protecting" children while actually accomplishing nothing more than adding compliance headaches for operating-system providers and application developers.
Debian decides not to decide on AI-generated contributions
Debian is the latest in an ever-growing list of projects to wrestle (again) with the question of LLM-generated contributions; the latest debate started in mid-February, after Lucas Nussbaum opened a discussion with a draft general resolution (GR) on whether Debian should accept AI-assisted contributions. It seems to have, mostly, subsided without a GR being put forward or any decisions being made, but the conversation was illuminating nonetheless.
Nussbaum said that Debian probably needed to have a discussion "to understand where we stand regarding AI-assisted contributions to Debian" based on some recent discussions, though it was not clear what discussions he was referring to. Whatever the spark was, Nussbaum put forward the draft GR to clarify Debian's stance on allowing AI-assisted contributions. He said that he would wait a couple of days to collect feedback before formally submitting the GR.
His proposal would allow "AI-assisted contributions (partially or fully generated by an LLM)" if a number of conditions were met. For example, it would require explicit disclosure if "a significant portion of the contribution is taken from a tool without manual modification", and labeling of such contributions with "a clear disclaimer or a machine-readable tag like '[AI-Generated]'". It also spells out that contributors should "fully understand" their submissions and would be accountable for the contributions, "including vouching for the technical merit, security, license compliance, and utility of their submissions". The GR would also prohibit using generative-AI tools with non-public or sensitive project information, including private mailing lists or embargoed security reports.
AI is a marketing term
It is fair to say that it is difficult to have an effective conversation about a technology when pinning down accurate terminology is like trying to nail Jell-O to a tree. AI is the catch-all term, but much (not all) of the technology in question is actually tooling around large language models (LLMs). When participants have differing ideas of what is being discussed, deciding whether the thing should be allowed may pose something of a problem.
Russ Allbery asked for people to be more precise in their descriptions of the technologies that their proposals might affect. He asserted that it has become common for AI, as a term, "to be so amorphously and sloppily defined that it could encompass every physical object in the universe". If the project is going to make policy, he said, it needed to be very specific about what it was making policy about:
An LLM has some level of defined meaning, although even there it would be nice if people were specific. Reinforcement learning is a specific technique with some interesting implications, such as the existence of labeled test data used to train the algorithm. "AI" just means whatever the person writing a given message wants it to mean and often changes meaning from one message to the next, which makes it not useful for writing any sort of durable policy.
Gunnar Wolf agreed with Allbery, but Nussbaum claimed that the specific technology did not matter. The proposal boiled down to the use of automated tools for code analysis and generation:
I see the problem we face as similar to the historical questions surrounding the use of BitKeeper by Linux (except that the choice of BitKeeper imposed its use by other contributors). It is also similar to the discussions about proprietary security analysis tools: since those tools are proprietary, should we ignore the vulnerability reports they issue?
If we were to adopt a hard-line "anti-tools" stance, I would find it very hard to draw a clear line.
Drawing clear lines, however, is something that a number of Debian developers felt was important. Sean Whitton proposed that the GR should not only say "LLM" rather than "AI", but it should also distinguish between the uses of LLMs, such as code review, generating prototypes, or generating production code. He envisioned ballot options that could allow some, but not all, of those uses. Distinguishing between the various so-called AI technologies would help in that regard. He urged Nussbaum "not to argue too hard for something that is more general than LLMs because that might alienate the people you want to agree to disagree with".
Andrea Pappacoda said that the specific technology mattered a lot; he wanted the proposal to have clear boundaries and avoid broad terms like AI. He was uncomfortable with the idea of banning LLMs, and not sure where to draw the line. "What I can confidently say, though, is that a project like Claude's C Compiler should not have a place in Debian."
Beyond terminology
The conversation did not focus solely on the terminology, of course. Simon Richter had questions about the implications of allowing AI-driven contributions from the standpoint of onboarding new contributors to Debian. An AI agent, he said, could take the place of a junior developer. Both could perform basic tasks under guidance, but the AI agent would not learn anything from the exchange; the project resources spent in guiding such a tool do not result in long-lasting knowledge transfer.
AI use presents us (and the commercial software world as well) with a similar problem: there is a massive skill gap between "gets some results" and "consistently and sustainably delivers results", bridging that gap essentially requires starting from scratch, but is required to achieve independence from the operators of the AI service, and this gap is disrupting the pipeline of new entrants.
He called that the onboarding problem, and said that an AI policy needed to solve it; he did not want to discourage people by rejecting contributions or expend resources on mentoring people who did not want to be mentored. Accepting AI-assisted drive-by contributions is harmful because it is a missed opportunity to onboard a new contributor. "The best-case outcome is that a trivial problem got solved without actually onboarding a new contributor, and the worst-case outcome is that the new contributor is just proxying between an AI and the maintainer". He also expressed concerns about the costs associated with such tools, and speculated that they might discourage contributions from users who cannot afford for-pay tools.
Nussbaum agreed that the cost could be a problem in the future. For now, he said, it is not an issue because there are vendors providing access for free, but that could change. He disagreed that Debian was likely to run out of tasks suitable for new contributors, even if it does accept AI-driven contributions, and suggested that AI may make harder tasks more accessible. He pointed to a study, written by an Anthropic employee and a participant in the company's fellows program, about how the use of AI impacts skill formation: "A takeaway is that there are very different ways to interact with AI, that produce very different results both in terms of speed and of understanding". He did not seem to be persuaded that use of AI tools would be a net negative in onboarding new contributors.
Ted Ts'o argued against the idea that AI would have a negative impact:
Some anti-AI voices are concerned that use of AI will decrease the ability to gain seasoned contributors, with the implied concern that this is self-defeating because it restricts the ability to gain new members in the future. And you are now saying we should gate keep contributors that might be using AI as being unworthy of contributing to Debian? I'd say that is even more self-defeating.
Matthew Vernon said that the proposed GR minimized the ethical dimension of using generative AI. The organizations that are developing and marketing tools like ChatGPT and Claude are behaving unethically, he said, by systematically damaging the wider commons in the form of automated scraping and doing as they like with others' intellectual property. "They hoover up content as hard as they possibly can, with scant if any regard to its copyright or licensing". He also cited environmental concerns and other harms that are attributed to generative-AI tools, "from non-consensual nudification to the flooding of free software projects with bogus security reports". He felt that Debian should take a clear stand against those tools and encourage other projects to do the same:
At its best, Debian is a group of people who come together to make the world a better place through free software. I think we should be centering the appalling behaviour of the organisations who are pushing genAI on everyone, and the real harms they are causing; and we should be pushing back on the idea that genAI is either a social good or inevitable.
There was also debate around the question of copyright, both in terms of the licenses of material used to train models, as well as the output of LLM tools. Jonathan Dowland thought that it might be better to forbid some contributions now, since some see risks in accepting such contributions, and then relax the project's position later on when the legal situation is clearer.
Thorsten Glaser took a particularly harsh stance against LLM-driven contributions, going so far as to suggest that some upstream projects should be forced out of Debian's main archive into non-free unless "the maintainers revert known slop commits". Ansgar Burchardt pointed out that would have the effect of banning the Linux kernel, Python, LLVM, and others. Glaser's proposal did not seem particularly popular. He had taken a similar stance in 2025, when the project discussed a GR about AI models and the Debian Free Software Guidelines (DFSG); he argued then that most models should be kept outside the main archive. That GR never came to a vote, in part because it was unclear whether its language would forbid anti-spam technologies, since one could not include the corpus of spam used as training data along with the filters.
Allbery did not want to touch on copyright issues, but had a few words to say about the quality of AI-assisted code. It is common for people to object to code generated by LLMs on quality grounds, but he said that argument does not make sense. Humans are capable of producing better code than LLMs, but they are capable of producing worse code too. "Writing meaningless slop requires no creativity; writing really bad code requires human ingenuity."
Bdale Garbee seconded that notion, and said that he was reluctant to take a hard stance one way or the other: "I see it as just another evolutionary stage we don't really understand the longer term positive and negative impacts of yet." He wanted to focus on long-term implications and questions such as "what is the preferred form of modification for code written by issuing chat prompts?" Nussbaum answered that it would be "the input to the tool, not the generated source code".
That may not be an entirely satisfying answer, however, given that LLM output is not deterministic and the various providers of LLM tools retire models with some frequency. A user may have the exact prompt and other materials fed to an LLM to generate a result at a specific point in time, but the same vendor's tools, or even the same models run locally, might generate a very different result later on.
Debian isn't ready
It is clear from the discussion that Debian developers are not of one mind on the question of accepting AI-generated contributions; the developers have not yet even converged on a shared definition of what constitutes an AI-generated contribution.
What many do seem to agree on is that Debian is not quite ready to vote on a GR about AI-generated contributions. On March 3, Nussbaum said that he had proposed the GR "in response to various attacks against people using AI in the context of Debian"; he felt then it was something that needed to be dealt with urgently. However, the GR discussion had been civil and interesting. As long as the discussions around AI remained calm and productive, the project could just continue exploring the topic in mailing-list discussions. He guessed that, if there were a GR, "the winning option would probably be very nuanced, allowing AI but with a set of safeguards".
The questions of what to do about AI models in the archive, how to handle upstream code generated with LLMs, and LLM-generated contributions written specifically for Debian remain unanswered. For now, it seems, they will continue to be handled on a case-by-case basis by applying Debian's existing policies. Given the complexity of the questions, diverse opinions, and rapid rate of change of technologies lumped in under the "AI" umbrella, that may be the best possible, and least disruptive, outcome for now.
Disabling Python's lazy imports from the command line
The advent of lazy imports in the Python language is upon us, now that PEP 810 ("Explicit lazy imports") has been accepted by the steering council; the feature will appear in the upcoming Python 3.15 release in October. There are a number of good reasons, performance foremost, for wanting to defer spending—perhaps wasting—the time to do an import until a needed symbol is actually used. However, there are also good reasons not to want that behavior, at least in some cases. The tension between those two positions is what led to an earlier PEP rejection, but it is also playing into a recent discussion of the API used to control lazy imports.
We looked at the PEP shortly before its acceptance and there is quite a bit of history of the idea going much further back than the 2022 rejection of a different PEP that would have made all imports lazy by default. PEP 810 adds a new "lazy" soft keyword that can be used to indicate a module (or symbol) that should not be imported immediately. Instead, proxy objects are created for the symbols that are resolved (instantiated or "reified") when they are needed. An example from our earlier article helps illustrate:
lazy import abc               # abc is now bound to a lazy proxy object
lazy from foo import bar, baz # foo, bar, baz all proxies

abc.ABC()                     # resolves abc, which loads module abc
bar()                         # resolves bar, which loads foo; baz still a proxy
baz()                         # resolves baz, does not reload foo
There are various restrictions on lazy imports, such as that wildcard imports (e.g. from foo import *) cannot be lazy and that they can only appear at the module level, so not inside functions or classes. In addition, a lazy import is really only potentially lazy as there are a few ways that users or programs can alter the processing of imports. It is one of those settings that started the recent discussion.
In mid-February, Peter Bierma posted his concerns about the "-X lazy_imports=none" command-line setting, which turns off all lazy imports for a program and any of its dependencies, resulting in what is often called "eager" imports. That flag can also be set via an environment variable or the sys.set_lazy_imports() call. Eager imports work the way imports always have, but forcing them overrides any explicit uses of lazy that modules may have made. Some of those lazy imports may have been added to avoid circular-dependency loops, so lazy_imports=none potentially breaks them.
Bierma referred to a pull request from Pablo Galindo Salgado, one of the PEP's authors, to convert multiple uses of "old-style" lazy imports (generally placed in the functions where the symbols would be needed) in the standard library to the explicit version. Galindo Salgado eventually closed the request after David Ellis pointed out that multiple modules would fail with an ImportError if they were run in eager mode. Ellis generally preferred the new form, but thought that it needed "to be done with care (are there already tests to make sure the stdlib is still importable under lazy_imports=none?)".
One of the reasons given for having a way for libraries to turn off lazy imports is that the pip package installer needs to be able to prevent imports to avoid executing code from the package being installed. Currently, the pip developers ensure that any old-style lazy imports are resolved before code from the wheel gets installed. While pip installs files into the environment it is running in, the tool has always promised not to run any of the code in a wheel at install time, so it does any importing before that step in the installation process. If lazy imports become more widespread, particularly in the standard library, some mechanism to control those imports will be needed by pip and lazy_imports=none looked like it would do what was needed. Bierma said that a pip issue describes the problem, but he did not think lazy_imports=none actually solves it, and if using the flag causes problems it could lead to one of two outcomes:
- Most libraries ignore the existence of -X lazy_imports=none, so disabling lazy imports results in circular import errors, making the flag effectively unusable anywhere outside the standard library. This seems most likely to me.
- Or, people see this as too frustrating to reason about, so they continue to use the old system of using an eager import to mimic lazy imports, preventing widespread adoption of the new syntax.

Either way, I think it would be very unfortunate if the standard library couldn't use the new syntax because of this flag.
He suggested perhaps using the Python audit facility to prevent pip from importing anything after files from a wheel have been installed. Pip developer Damian Shaw agreed that using lazy_imports=none may not be the right approach. While it may be unfortunate that pip alters the environment it is running in, doing so is a longstanding pip "feature" that is deeply wired into its design—it cannot really be changed at this point.
Several in the thread agreed with Bierma that the lazy_imports=none option should simply be removed. Another pip developer, Paul Moore, also thought that pip should probably not use lazy_imports=none to guard against arbitrary code execution from the installation of a wheel; the risk is that an import done after installation could pick up newly installed files. He had no opinion on whether the option should be removed, "but I don't think pip should be used as a reason to claim that it needs to stay".
There was some discussion of alternatives to removing lazy_imports=none in the thread, but most seemed to agree on removal. As Moore noted, the lazy imports filter provides a way for those who really want to disable the feature. It allows a program to pass a function that is run on each potential lazy import; if it returns False, the import is processed eagerly:
Anyone who really wants to force all imports to be eager can still do sys.set_lazy_imports_filter(lambda *args: False) - and that form makes it very clear what you'd change if you needed to add an exclusion list of modules that could be imported lazily.
Part of the problem with lazy_imports=none is that it is a big hammer, while filters provide a more fine-grained approach. Meanwhile, though, neither addresses the old-style lazy imports, which pip needs to disable as well. The current thinking seems to be to use a call to something like the resolve_all_lazy_imports() function described by Ellis to explicitly reify any pending imports and then disable further imports after that, which should make pip safe.
So one of the main intended users of the lazy_imports=none mode probably should not use it, but it turns out that another possible user would also be better served with other techniques. Back in the discussion of PEP 810, there was concern expressed about latency-sensitive programs being disrupted by a "surprise" lazy import. Michael Hall was one of those concerned, but said that he no longer thought the lazy_imports=none flag was the right approach for handling that either. An approach using resolve_all_lazy_imports() or similar should be sufficient, he thought.
After Donald Stufft wondered about the real security concern with pip importing modules during installation, Shaw explained that users might install a wheel to inspect it and would not expect that to execute code. Since the thread had gotten long, Moore summarized the key points for pip and lazy loading, including why pip must avoid running the new code:
Users have a reasonable expectation that running pip install <some_wheel> will not execute arbitrary code. That's a deliberate design feature of the wheel format.
While removing the flag (at least at run time with -X lazy_imports=none) seemed to be popular, it is notable that none of the PEP authors were part of the discussion. But Cornelius Krupp was concerned that the PEP authors would need to be involved in making this change and it would perhaps need to be run past the steering council again:
The only thing removing the flag would do is to signal that it's ok and expected for libraries to have a strict reliance on lazy imports for e.g. optional dependencies or circular dependencies, which is a significant change from the conditions under which the PEP was originally accepted, directly contradicting it's text and the expressed intentions of the authors.
Oscar Benjamin noted that the text of the PEP has changed over time, especially with regard to circular imports and the guards often placed around typing imports, which are only needed when a type checker is being used; the current text is a little contradictory in that respect. The final message (as of this writing) is from Brénainn Woodsend, who reiterated a real problem with the existence of the flag: he and other library developers may be loath to remove their existing old-style lazy imports:
As long as -X lazy_imports=none exists, I'm reluctant to port existing forms of deferred imports to lazy import knowing that I would be slowing down or breaking anyone who uses -X lazy_imports=none.
It is not clear where things go from here. As noted, the PEP authors have not weighed in, but Galindo Salgado is obviously aware of the issue due to his closed pull request. It would seem that the use cases envisioned for the flag do not actually need it and there are other ways to accomplish the same thing, though not directly at run time from the command line. Once Python 3.15 ships later this year, it may be harder to retract the command-line flag, so it would seem that some kind of decision is needed here before too long.
Inspecting and modifying Python types during type checking
Python has a unique approach to static typing. Python programs can contain type annotations, and even access those annotations at run time, but the annotations aren't evaluated by default. Instead, it is up to external programs to ascribe meaning to those annotations. The annotations themselves can be arbitrary Python expressions, but in practice usually involve using helpers from the built-in typing module, the meanings of which external type-checkers mostly agree upon. Yet the type system implicitly defined by the typing module and common type-checkers is insufficiently powerful to model all of the kinds of dynamic metaprogramming found in real-world Python programs. PEP 827 ("Type Manipulation") aims to add additional capabilities to Python's type system to fix this, but discussion of the PEP has been of mixed sentiment.
The problem
Python decorators are functions that take in a function or class as an argument, and return a modified version. A commonly used example is the dataclasses.dataclass() function that takes a class definition and automatically adds a constructor, code to print out instances of the class in human-readable form, and so on.
from dataclasses import dataclass
@dataclass
class Dog:
    name: str
    size: float
print(Dog("Rufus", 9.0))
# Prints "Dog(name='Rufus', size=9.0)"
How can a type checker, which is external to Python, know how a decorator such as dataclass() will modify the code that it is attempting to check? In the specific case of dataclasses, PEP 681 ("Data Class Transforms") specifies a decorator that can be used to annotate decorators that behave in ways similar to dataclass(), so that type checkers can recognize them and take this into account.
from typing import dataclass_transform
# Tell type checkers that this is a decorator similar to @dataclass:
@dataclass_transform()
def my_custom_transformer(function):
    ...
# Now the type checker can understand a class using it:
@my_custom_transformer
class Cat:
    name: str
    coat_color: str
That solution is far from universal, though — it doesn't apply to other kinds of decorator, let alone Python's other metaprogramming facilities such as metaclasses or context managers.
Decorators that modify function definitions mostly don't run into this problem, since they can be defined to return a Callable[..., V] (that is, something with a __call__() method, which Python will treat like a function). The type checker can rely on the return type of the decorator (as instantiated with any generic types from the function being modified) to tell it how the resulting callable can be used. Decorators that modify classes, however, run into the problem that there is currently no way to specify a type that computes modifications to another type in Python.
For example, here is a decorator that removes declared int fields from a class, which cannot currently be given a correct type in Python:
def remove_int_members(clss):
    for name, annotation in list(clss.__annotations__.items()):
        if annotation is int:
            del clss.__annotations__[name]
            if hasattr(clss, name):
                delattr(clss, name)
    return clss
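At run time, the decorator behaves as advertised; a quick demonstration (the Record class is invented for illustration):

```python
def remove_int_members(clss):
    # Drop every member whose annotation is exactly int.
    for name, annotation in list(clss.__annotations__.items()):
        if annotation is int:
            del clss.__annotations__[name]
            if hasattr(clss, name):
                delattr(clss, name)
    return clss

@remove_int_members
class Record:
    name: str
    retries: int = 3  # removed: annotated as int

print(Record.__annotations__)      # {'name': <class 'str'>}
print(hasattr(Record, "retries"))  # False
```

A type checker, which never executes this code, currently has no way to learn that Record has lost its retries member; that is the gap PEP 827 targets.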
This is — despite all appearances — not a niche problem. There are plenty of useful Python libraries that automatically generate or adapt classes, such as object-relational mapping libraries like SQLAlchemy that use type annotations to indicate how fields correspond to database columns, or HTTP libraries like FastAPI that generate client code from an API definition. Currently, those libraries must use code generation (that adds an extra build step), go untyped (which makes them harder to use), or implement type-checker plugins (that require implementing one plugin per mutually-incompatible type checker that users of the library want to use). Even something like the contrived example above could be used to create separate database-facing and customer-facing types that remove sensitive fields, for example.
The solution
Michael Sullivan, Daniel Park, and Yury Selivanov proposed PEP 827 to address this perceived deficiency in Python's type system. It adds features to the typing module that let library authors write modified types using a set of type-level constructs inspired by TypeScript's type-level operators. These features would make it possible to correctly specify the type of a decorator that modifies a class, among other uses.
The most fundamental addition is a new type (IsAssignable[T, S]) that evaluates to a type that corresponds to True when an object of type T can be assigned to a variable of type S, and a type corresponding to False otherwise. The types that IsAssignable evaluates to are not the literal Python values True and False because the PEP authors wanted to avoid requiring type checkers to implement a full Python runtime. Instead, the specific types involved can be freely decided on by individual type checkers, as long as they conform to the interface provided in the PEP.
The True and False types, for example, must be usable in an if expression. A new Iter type would have to be usable in list comprehensions as well. The description of these types is spread throughout the PEP, but the core purpose is to bring control flow (conditionals and loops) into the type system in a way that does not require type checkers to reimplement all of Python's semantics in one go. Iter is essentially used as a signal that a tuple type should be looped over. The True and False types would let Python programmers write type annotations for functions that return different types depending on whether an input type is assignable to another type. For example, here is the type signature of a function that produces a string unless its argument is already duck-typed like a string (i.e., has an interface compatible with that of a string), in which case the argument is passed through unchanged:
def foo[T](input: T) -> T if IsAssignable[T, str] else str: ...
That ability becomes more useful when paired with the types introduced in the rest of the PEP. Members[T], for example, takes a class or typed dictionary T and evaluates to a tuple of types representing the class or dictionary's members. The NewProtocol[Ms] and NewTypedDict[Ms] types can then put a tuple of member types back together into a new protocol (Python's equivalent of an interface) or dictionary type. This allows a type annotation to destructure, modify, and reconstitute classes during type checking.
Here is the type of the example remove_int_members() decorator from above using the PEP's new types:
type WithoutInts[T] = [
    Member
    for Member in Iter[Members[T]]
    if not IsAssignable[Member, int]
]

def remove_int_members[T](clss: Class[T]) -> NewProtocol[*WithoutInts[T]]: ...
A type checker that supported the types added in the PEP could evaluate the type of this decorator to correctly check uses of the modified class, even though the modified form of the class never appears in the actual Python source program.
Discussion
The PEP includes a fairly large number of new types, including types for raising errors at type-checking time, types for manipulating function arguments and results, types for handling unions of disjoint types, and more. On seeing this complexity, a natural question might be why the PEP needs to introduce special types that act like built-in Python values, mimicking their semantics, instead of allowing normal Python functions to be used to compute modifications to types. Cornelius Krupp thought that approach would be cleaner and reduce complexity of implementation.
Selivanov disagreed, saying that requiring type checkers to implement a Python runtime in order to type check Python code would be highly non-trivial. Krupp's proposal "shifts the complexity and makes it someone else's problem, which in reality will mean that we're just not solving this problem at all".
Sullivan suggested that if type checkers were to take that approach, functions that compute types "wouldn't really be normal Python functions," since they would be interpreted by the type checker and not Python itself. This would lead to needless confusion between actual Python code and code that merely looks like Python code and is written in Python files, but which is actually executed by a separate program according to its own rules, he said.
Justine Krejcha worried that introducing this extra complexity to the type system would lead to slow type checking and cryptic error messages. She thought that judicious use of the Any type was a more reasonable approach for libraries that have highly dynamic behavior. Other participants expressed similar concerns, including the inevitable discussion of syntax.
The PEP did receive some support in its current form, however. Sebastián Ramírez said that the PEP would "enable so many features in things I've built or wanted to build." "Philipp A." said: "The functionality in this PEP is something I've been reaching for again and again".
Jelle Zijlstra thought the scale of the PEP was "a bit scary", but that it could "make the type system radically more powerful." Zijlstra and Steve Dower both asked for the PEP to be implemented in at least one type checker for people to experiment with before trying to include it in the typing module in the standard library. Dower wasn't a fan of seeing big, complicated types added to Python code.
Selivanov was dubious about the possibility of getting real-world testing out of the proposal before adding it to the standard library. Today's users rely on integrated development environments (IDEs), and those IDEs rely on their own internal type checking; implementing the PEP's ideas in a single type checker "will not give you any actionable data," he said. Users would also not necessarily need to see complicated types directly, he pointed out. As with any existing code base, maintainers can keep the code tidy by factoring out complex expressions into their own definitions — something that is actually easier with more powerful abstractions.
At the time of writing, discussion of the PEP is still ongoing. There seems to be little danger of a consensus emerging any time soon, but there are several other tangentially related proposals that could make the complexity introduced by this PEP more palatable. A draft PEP would add syntactic sugar for typed dictionaries, for example, that would make creating and manipulating types using PEP 827 types somewhat more streamlined. The Python community has also discussed the viability of introducing more existing Python syntax into type annotations, including the use of tuples and operators.
If Python did adopt the ability to use regular Python functions in type annotations, that would give it a similar ability to Zig, which lets users write functions that create new types at compile time. Even if Python doesn't go that far, however, its type system has consistently become more complex and flexible over time. It seems likely that, even if this particular PEP is not adopted as proposed, library authors will eventually enjoy the flexibility to implement static types for complex operations if they think the complexity is worth it.
HTTPS certificates in the age of quantum computing
There has been ongoing discussion in the Internet Engineering Task Force (IETF) about how to protect internet traffic against future quantum computers. So far, that work has focused on key exchange as the most urgent problem; now, a new IETF working group is looking at adopting post-quantum cryptography for authentication and certificate transparency as well. The main challenge to doing so is the increased size of certificates — around 40 times larger. The techniques that the working group is investigating to reduce that overhead could have efficiency benefits for traditional certificates as well.
Authentication
When a browser connects to LWN.net, it first establishes an ephemeral encryption key to protect the session. This is key exchange, and some browsers and servers are already using the post-quantum cryptography standardized in 2024 to avoid "store now, decrypt later" attacks. Attacks of this kind store encrypted traffic for later, in the hope that future quantum computers will be able to break the key-exchange mechanisms used. The possibility of these kinds of attacks makes it important to deploy quantum-resistant key-exchange mechanisms well in advance of quantum computers becoming practically usable.
Next, the server provides the browser with a certificate that proves it actually is LWN.net — authenticating the connection. That certificate is made up of a chain of signatures, where each signature comes from a "more trusted" organization, and verifies that the next public key in the list is valid. In our case, this means that the server will send three signatures to the browser: one from LWN.net, one from Let's Encrypt, and one from the Internet Security Research Group's (ISRG) X1 Root certificate. With traditional cryptography, this certificate is approximately 3.5KB, which is roughly one third of the entire LWN front page's HTML content, after compression. These signatures aren't subject to "store now, decrypt later" attacks in the same way encryption keys are, because compromising an authentication key later doesn't impact the correctness of the connection now. Therefore, while key-exchange mechanisms need to defend against future quantum computers to keep communications private, authentication mechanisms only need to defend against current computers.
Depending on the algorithm in question, post-quantum cryptography can produce signatures much larger than comparable traditional algorithms. ML-DSA-44, which is a standardized post-quantum signature scheme thought to have security similar to Ed25519 signatures, produces signatures 37 times larger. Naively adopting post-quantum signatures for authentication could cause certificate chains to take up more data than the actual content of the web site in question, at least for small, text-heavy web sites like LWN. To ensure that certificate authorities are issuing certificates according to their stated policies, many certificates also include signatures from certificate-transparency logs, which publish a list of all certificates being issued. Browsers will typically refuse to trust a certificate authority that does not participate in certificate transparency, since it is much harder to tell if the certificate authority is issuing certificates that it shouldn't. The extra bandwidth overhead that would be incurred by a direct switch to post-quantum signatures for authentication would have a measurable impact on the overall latency of connections.
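The arithmetic behind that claim is straightforward; the signature sizes below come from the relevant specifications (Ed25519 from RFC 8032, ML-DSA-44 from FIPS 204):

```python
ED25519_SIG = 64       # bytes per signature (RFC 8032)
ML_DSA_44_SIG = 2420   # bytes per signature (FIPS 204)

print(f"size ratio: {ML_DSA_44_SIG / ED25519_SIG:.1f}x")  # size ratio: 37.8x

# A chain of three post-quantum signatures alone would be about 7.3KB,
# roughly double today's entire ~3.5KB certificate chain.
print(f"three signatures: {3 * ML_DSA_44_SIG} bytes")     # three signatures: 7260 bytes
```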
Logging
The solution that the new working group (called "PKI, Logs, and Tree Signatures" or PLANTS) has been discussing inverts the relationship between signatures from certificate authorities and the transparency logs. Currently, a certificate authority first creates a certificate, then logs it in a certificate-transparency log, and then optionally includes the signature from the log in the certificate as a piece of additional information. This is, in some sense, redundant: the information that the certificate is valid is already present in the certificate-transparency log, so why send the client any information other than proof that it appears in the log?
The mechanism PLANTS proposes would have each certificate authority maintain its own append-only issuance log, containing a list of every certificate it has issued. The same organizations that run certificate-transparency logs today would monitor and mirror each certificate authority's log to ensure compliance. They primarily check that the log is actually append-only, so that a certificate authority can't backdate changes to issued certificates. That way, if there is a security problem caused by a misbehaving certificate authority it will be easy to prove that and revoke trust in the authority. Instead of having a chain of signatures in a certificate to represent some transitive relationship between a certificate authority and a root of trust, the third-party observers would add their signatures to a certificate authority's log as they validate it. A browser can choose its own criteria for which third-party observers it trusts, and whether it requires a quorum of them before accepting the state of an issuance log.
The certificate seen by the client would therefore no longer be a chain of signatures leading back to a root of trust: it would be a set of signatures from the certificate authority and any relevant observers attesting to the state of the issuance log, plus a proof that the web server's public key was included in the issuance log. This constitutes what PLANTS calls a "full" certificate. For an individual web site, a full certificate doesn't decrease the number of needed signatures; but since the issuance logs are append-only, if a browser has already verified the issuance log for a certificate authority up to some checkpoint, it doesn't need to see the signatures for that checkpoint again. Instead, it can ask the server to just send the proof that the server's public key appeared in the log prior to that point — a "signatureless" certificate that should be substantially smaller.
Merkle trees
Those proofs use Merkle trees, a cryptographic commitment scheme which uses a small number of hashes to show that a leaf node belongs to a binary tree. That way, a certificate authority can batch up a large number of certificates into a single tree that only needs to be signed once. The overall number of signatures to be made and verified becomes independent of the number of issued certificates. The idea is that each internal node of the tree stores the hash of its children, all of the way up to the root.
To prove that a given leaf node belongs to a tree with a given root hash, it suffices to provide the hashes of the sibling nodes "adjacent" to the path from that leaf up to the root. This lets a verifier reconstruct what the root hash would be if the leaf were included, and then check that it matches the actual root hash: the verifier hashes the leaf, combines that with the provided sibling hash to get the parent node's hash, and repeats until it reaches the root. The number of additional hashes to provide grows logarithmically with the size of the tree. And, since cryptographic hashes aren't vulnerable to the same kinds of quantum attacks as public-key cryptography, the size of a Merkle inclusion proof like this doesn't change when switching to post-quantum cryptography.
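The verification logic is compact enough to sketch in Python. The domain-separated hashing below follows the style of RFC 6962, but the helper names and power-of-two tree layout are illustrative, not taken from the PLANTS draft:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(cert: bytes) -> bytes:
    # Domain-separate leaves from interior nodes, as RFC 6962 does.
    return h(b"\x00" + cert)

def node_hash(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)

def build_tree(leaves: list[bytes]) -> bytes:
    """Root hash of a complete binary tree over the leaves."""
    level = [leaf_hash(l) for l in leaves]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes along the path from leaf `index` to the root."""
    proof, level = [], [leaf_hash(l) for l in leaves]
    while len(level) > 1:
        proof.append(level[index ^ 1])  # sibling of the current node
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(cert: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    cur = leaf_hash(cert)
    for sibling in proof:
        # Even index: current node is a left child; odd: a right child.
        cur = node_hash(cur, sibling) if index % 2 == 0 else node_hash(sibling, cur)
        index //= 2
    return cur == root

certs = [f"cert-{i}".encode() for i in range(8)]
root = build_tree(certs)
proof = inclusion_proof(certs, 5)        # three hashes for eight leaves
print(verify(certs[5], 5, proof, root))  # True
```

For eight leaves the proof is three 32-byte hashes; a per-minute batch of a few thousand certificates would need around a dozen, which is where the savings over sending full signature chains come from.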
Let's Encrypt issues around six million certificates per day (although that number is expected to go up as the standardized certificate lifetime goes down over the next several years). If it adopted the new system and created a new checkpoint every minute (meaning that a server could obtain a full certificate right away, but would need to wait up to one minute to obtain a signatureless certificate), each certificate would need to include twelve hashes (totaling 384 bytes for SHA-256 hashes). That is only 16% the size of a single ML-DSA-44 signature. Of course, servers will ideally have both: a full certificate for clients that have not seen a recent checkpoint, and a signatureless certificate for clients that have. The full certificate will be significantly larger than current certificates (around 133KB using ML-DSA-44), but it hopefully only needs to be used by a small fraction of connections.
To prevent an issuance log from growing without bound, older entries are periodically pruned as they expire. This might seem to be at odds with the append-only nature of an issuance log, but since expired certificates shouldn't validate correctly anyway, the certificate authority can delete the corresponding leaf nodes and any internal nodes that are therefore no longer usable in actual proofs. The tree maintains the same conceptual size, but the on-disk storage requirements remain proportional to the number of active certificates.
Revoking mistakenly issued certificates, which don't have the decency to expire at known times, is a little more complicated: along with the issuance log, a certificate authority also maintains a set of revoked certificates. This set of revocations is also covered by the signatures of each checkpoint, so a browser will obtain an updated set of certificate revocations every time it validates a full certificate from a given certificate authority.
Adoption
The PLANTS working group is still in the early stages of the standardization process — a draft standard exists, but it has not yet been proposed for standardization, and probably won't be within the next year, since several details remain to be worked out. Despite that, Google has announced a plan to evaluate the performance impacts of Merkle-tree-based certificates in Chrome, and to deploy an experimental post-quantum certificate-authority system based on the PLANTS draft by the end of 2027. Most likely, certificate authorities, server operators, and users won't need to update any of their configurations until 2029 or 2030.
The key question Google hopes to answer, which will impact the usability of the protocol, is whether clients will actually stay up-to-date enough (by occasionally verifying a full certificate) to benefit from signatureless certificates on average. The whole protocol only provides a bandwidth and latency advantage if actual browsers visit enough distinct web sites with certificates issued by the same certificate authority. Over the next several months, the Google Chrome team will hopefully provide empirical data on that question. Users of non-Chrome browsers will go unmeasured, at least for now. Hopefully other browser projects will either join the experiment, or have browsing patterns that are statistically similar enough to Chrome's userbase to draw reasonable conclusions.
Changes in web infrastructure often take a significant amount of time. Between running experiments, standardizing the protocol, and rolling out the changes to certificate authorities and browsers, it may be a long time before we see real connections authenticated with post-quantum cryptography. Still, even the most pro-quantum-computing estimates suggest that the system will be in place before quantum computers can pose a real threat to the security of authentication. In a world that is increasingly hectic, it's nice to occasionally have a security concern that is handled well before it becomes an actual problem.
Reconsidering the multi-generational LRU
The multi-generational LRU (MGLRU) is an alternative memory-management algorithm that was merged for the 6.1 kernel in late 2022. It brought a promise of much-improved performance and simplified code. Since then, though, progress on MGLRU has stalled, and it still is not enabled on many systems. As the 2026 Linux Storage, Filesystem, Memory-Management and BPF Summit (LSFMM+BPF) approaches, several memory-management developers have indicated a desire to talk about the future of MGLRU. While some developers are looking for ways to improve the subsystem, another has called for it to be removed entirely.
An MGLRU refresher
One of the core memory-management tasks a kernel must handle is to determine which pages belong in RAM and which should be pushed out to slower storage (or "reclaimed"). As a general rule, it is best to retain the pages that will be used the most in the near future, while reclaiming pages that will not be used again. Given the challenges involved in predicting the future, the kernel must rely heavily on information about how pages were used recently as a guide for what will happen going forward. The least-recently-used (LRU) lists are a key component of that solution.
The classic (and still default) solution in the kernel relies on two LRU lists (more correctly, numerous pairs of such lists) called the "active" and "inactive" lists. Pages that are thought to be in current use should be on the active list, while those that are seemingly unused go onto the inactive list. When the time comes to reclaim pages for other use, the inactive list will be consulted for a list of potential victims. Much of the complexity (and many of the heuristics) in this solution are focused on properly sizing the two lists and deciding when to move pages from one to the other.
The MGLRU extends that approach to multiple lists, deemed "generations". At one end, the youngest generation contains pages that are known (or at least thought) to have been used within the recent past. Each older generation tracks pages that have been idle for longer than those in the preceding generations. Various sorts of accesses will move a page from an older generation to a younger one; the oldest generation is pillaged by the kernel when the need for more free memory arises. The MGLRU is claimed to more accurately identify the truly cold pages and to use less CPU time while doing that work. See this 2021 article for more information about the design of MGLRU.
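As a conceptual illustration only (the kernel's implementation is vastly more involved, with per-memcg lruvecs, page-table walks, and tiering), the generational idea can be sketched in a few lines of Python: an access promotes a page to the youngest generation, aging shifts every generation one step older, and reclaim drains the oldest non-empty generation first:

```python
from collections import OrderedDict

class GenerationalLRU:
    """Toy model of generational aging; not the kernel's algorithm."""

    def __init__(self, num_generations: int = 4):
        # generations[0] is the youngest; higher indices are older.
        self.generations = [OrderedDict() for _ in range(num_generations)]

    def access(self, page):
        # An access promotes the page to the youngest generation.
        for gen in self.generations:
            gen.pop(page, None)
        self.generations[0][page] = True

    def age(self):
        # Shift all generations one step older; pages falling off the
        # end are absorbed into the (new) oldest generation.
        oldest = self.generations.pop()
        self.generations[-1].update(oldest)
        self.generations.insert(0, OrderedDict())

    def reclaim(self):
        # Take a victim from the oldest non-empty generation.
        for gen in reversed(self.generations):
            if gen:
                page, _ = gen.popitem(last=False)
                return page
        return None

lru = GenerationalLRU()
for p in ("a", "b", "c"):
    lru.access(p)
lru.age()              # a, b, and c are now one generation older
lru.access("a")        # "a" is hot again: promoted back to the youngest
print(lru.reclaim())   # prints "b": the coldest page goes first
```

The point of the multiple generations is visible even in this toy: "a" survives reclaim because its recent access moved it back to the young end, while "b" and "c", untouched since the aging pass, are the first victims.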
The trouble with MGLRU
Recent discussions have made it clear that MGLRU is not seen as living up to all of its promises. It all started in mid-February, when Zicheng Wang posted a request for an LSFMM+BPF discussion about MGLRU and, specifically, how it works with Android. Even though MGLRU has been in the kernel for some years, Wang said, many vendors of Android systems do not enable it. There are a number of problems that play into that decision.
One complaint (which was later echoed by others) is that MGLRU does not properly balance reclaim between anonymous and file-backed pages. The traditional LRU maintains a separate pair of lists for each of those page types (thus the comment above about "numerous pairs" of LRU lists — there are other complications as well). Reclaim from the two sets of lists is normally biased somewhat toward file-backed pages, since they do not normally need to be written back to persistent storage; the longstanding "swappiness" sysctl knob can be used to adjust how aggressively the kernel attacks each list.
With MGLRU, Wang said, anonymous pages tend to stay within the youngest two generations, causing them to never be reclaimed (and file-backed pages to be reclaimed overly aggressively). Adjusting the swappiness knob does not fix the problem. Wang's employer (an Android OEM called "Honor") addresses this problem by explicitly using memory control groups to force reclaim of anonymous pages from non-foreground apps, but there is no general solution in the mainline kernel. Kairui Song, who proposed an MGLRU session as well, also mentioned problems with the reclaim of anonymous pages.
Wang had a number of other problems to discuss. MGLRU can reclaim too aggressively from any given control group, freeing memory beyond the required amount. It's too expensive on low-end devices, especially in situations where there are not a lot of reclaimable pages. There is also a disconnect between Android's notion of hot and cold apps (designed to prioritize the app the user is interacting with at any given time) and MGLRU, which (like the rest of the kernel) lacks that distinction. Some of these problems have been addressed with vendor-specific hacks; there is, for example, a vendor hook that exempts the current foreground task from reclaim. Wang would like to discuss which of these vendor changes, if any, should find their way into the mainline kernel.
Barry Song added a separate complaint: when the system performs readahead (speculatively reading data that it thinks user space may soon request), it places all of the resulting pages into the youngest generation, even though there is no guarantee that those pages will ever be used at all. That may cause pages actually in use to be reclaimed while leaving the readahead pages in RAM. The traditional LRU, instead, puts those pages onto the inactive list, where they will be reclaimed relatively quickly if they are not referenced again. This problem, at least, should be amenable to a relatively simple solution.
Kairui Song's list of problems had a different focus, starting with the fact that MGLRU uses three page flags. These flags are in perennial short supply; patches that try to allocate even one of them tend to run into stiff resistance. The desire to increase the number of generations managed by MGLRU also implies using even more page flags. He had a proposal for shifting those flags elsewhere, making systems with up to 63 generations possible while freeing the three page flags currently used by MGLRU.
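The relationship between flag bits and generation counts is simple binary arithmetic: n bits can encode 2^n distinct values, so storing the generation number outside the page flags in a wider field is what makes larger generation counts possible. A quick sketch (the reservation of one value is an illustrative assumption, e.g. for an "untracked" state):

```python
def max_generations(bits: int, reserved: int = 1) -> int:
    # n bits encode 2**n distinct values; reserving one (for example,
    # to mean "not on any generation") leaves 2**n - reserved usable
    # generation numbers.
    return 2 ** bits - reserved

print(max_generations(3))   # prints 7
print(max_generations(6))   # prints 63, matching the proposal above
```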
Another problem is performance regressions for some workloads (while others do better). Kairui Song thinks that these problems result from the control loop that manages reclaim in MGLRU, and that they could be addressed by better tracking the usage history of file-backed pages. Doing that, though, would require three page flags, presumably those that had just been freed by shifting the generation number elsewhere.
The metrics provided by MGLRU differ from those out of the traditional LRU in a number of ways, he said, making it harder for other parts of the system to understand the memory-management state of any given page. He has a proposal for changing how the state of pages is tracked to improve that situation. Kalesh Singh also described problems with metrics, saying that they differ significantly between the two LRU implementations, and that makes life difficult for components like the Android user-space out-of-memory daemon.
In passing, Kairui Song also mentioned the idea of adding a BPF hook that would allow the customization of generation-placement decisions.
There were other problems mentioned as well but, perhaps surprisingly, most of the participants skipped over one other relevant issue: the fact that there are two competing LRU implementations in the kernel in the first place. Kairui Song did note that the problems he described were among the "many reasons MGLRU is still not the only LRU implementation in the kernel". David Rientjes added that the discussion should cover "what needs to be addressed so that MGLRU can be on a path to becoming the default implementation and we can eliminate two separate implementations". That will be a challenging thing to do; there will certainly always be workloads that do better with one implementation than the other, so removing one will cause some workloads to regress. Getting those regressions down to a tolerable level will require some work yet.
Just remove it?
A persistent fear among kernel developers (and developers in many projects, in truth) is that a developer will add a pile of complex code, then not be around to maintain it. Matthew Wilcox asserted that this is exactly what has happened with MGLRU:
To my mind, the biggest problem with MGLRU is that Google dumped it on us and ran away. Commit 44958000bada claimed that it was now maintained and added three people as maintainers. In the six months since that commit, none of those three people have any commits in mm/! This is a shameful state of affairs. I say rip it out.
The original developer of MGLRU, Yu Zhao, has, as is noted in the above-mentioned commit, "moved on to other projects these days". As can be seen in the (subscriber-only) KSDB page for Zhao, he has occasionally made improvements to MGLRU, but the last such was a handful of commits in the 6.14 kernel. While other developers are said to be working on this code, none of those who were added to the MAINTAINERS file have made any changes to MGLRU since.
So it is true, to an extent, that MGLRU was contributed to the kernel and abandoned shortly thereafter. Axel Rasmussen, one of the named maintainers of MGLRU, seemed to agree with this assessment, but said that the situation would soon change:
I acknowledge this is a big problem. We have let the community down here, and we plan to correct this starting in April, e.g. by working together with Kairui and others to address outstanding issues.
The lack of ongoing developer attention certainly has not helped MGLRU to overcome the problems that many potential users have encountered with it. Even so, there was little support expressed for the idea of removing it. Barry Song asked to keep it around so that those problems could be addressed:
It just needs more work. MGLRU has many strong design aspects, including using more generations to differentiate cold from hot, the look-around mechanism to reduce scanning overhead by leveraging cache locality, and data structure designs that minimize lock holding.
Gregory Price listed a number of perceived problems with MGLRU. He did, however, stop short of calling for it to be taken out of the kernel entirely.
So the MGLRU discussion at LSFMM+BPF in May seems unlikely to spend much time on the idea of removing it entirely. But there will be a lot of interest in understanding the work that needs to be done to bring MGLRU up to the needed level of performance and, perhaps someday, be the only LRU implementation in the kernel. If some developers are willing to commit to doing that work, MGLRU may finally make the progress that has been missing for the last few years. It seems likely to be an interesting session.
Fedora shares strategy updates and "weird research university" model
In early February, members of the Fedora Council met in Tirana, Albania to discuss and set the strategic direction for the Fedora Project. The council has published summaries from its strategy summit, and Fedora Project Leader (FPL) Jef Spaleta, as well as some of the council members, held a video meeting to discuss outcomes from the summit on February 25. Topics included a plan to experiment with Open Collective to raise funds for specific Fedora projects, tools to build image-based editions, and more. Spaleta also explained his model for Fedora governance.
The weird university
Spaleta began the meeting by explaining his mental model of the Fedora Project and its governance. He thinks of the project as a "weird research university", with himself as the university president and his primary task being to set its overall strategic vision and mission. Red Hat is its funding organization, and he likened the company's role to that of a state legislature funding higher education. The Fedora Council, he said, is much like the trustees or regents of a university, with project contributors being the university's faculty, and the Fedora Engineering Steering Committee (FESCo) acting as a faculty senate. Red Hat's open-source program office (OSPO) and community Linux engineering (CLE) team are "among staff and administration" in this metaphor, and Fedora users are the students and the public. The council "sits in the uncomfortable position" between the contributors and the funding sponsor that provides resources. Its job is to translate between cultures, he said.
Nonprofit
Part of that translation exercise has been a long-running conversation between the council and Red Hat about setting up some kind of nonprofit entity for Fedora. Spaleta said that the project is looking at Open Collective as a fiscal host to hold funds for specific Fedora activities. During the Q&A portion of the call, he expanded a bit on this to say that he wanted to start with a well-scoped, time-limited project to learn from; if that was successful, then the project could attempt bigger and better things.
To begin with, projects through Open Collective would likely be event-related, but Fedora may go beyond events if the initial tests are successful. "The sky's the limit", Spaleta said, "we just have to work the process and get better at it and work trust into it."
Council member Aleksandra Fedorova noted that some people might be disappointed because they expected "something larger" from conversations about Fedora having a nonprofit. But, for now, the goal was to collect money for specific activities rather than a big bucket of funds that would be spent by the council. The body is tasked with coming up with a framework for members of the community to propose projects, and Fedorova invited people who wanted to collaborate on that process to reach out. The expectation is that more concrete details will be presented at the Fedora contributor conference, Flock, in June.
Konflux
Fedora has been producing various image-based editions, such as Atomic Desktops, Fedora CoreOS, and Fedora IoT, for more than a decade now. However, the various groups producing those editions have not always worked together in creating or adopting tooling to create the image artifacts. Thus, the Image Mode initiative was created to put together a unified pipeline for building, delivering, and hosting artifacts for those editions. The initiative, led by council member Laura Santamaria, has been working to get Fedora to adopt Konflux to produce bootable containers (bootc) as the standard artifact type for image-based releases.
Konflux is an Apache-licensed continuous-integration and delivery (CI/CD) platform for building, testing, and releasing software artifacts, including bootc images and RPMs. It is a project led by Red Hat and is used by the company in its internal build systems. Currently, Fedora uses the Koji build system to create its artifacts, but work is underway to set up Konflux as a parallel build system for creating bootc images. At this time, however, Konflux is only being used in a proof-of-concept capacity for pre-release images; the final images for the Fedora 44 release in April will be produced by Koji.
The Fedora Council is involved in the discussion, Spaleta said, to provide an opinion statement on whether Konflux is suitable for Fedora. He stressed that using Konflux is "not a mandate" from Red Hat, but a push from the Image Mode initiative team to use the project because it solves the team's problems.

During the council summit, Fedorova gave a presentation about the project; Spaleta said, based on what he had learned, "it feels like it's the reasonable technology to move forward with for that purpose". Fedorova emphasized that the conversation right now is strictly about building bootc artifacts for the Image Mode projects, and any conversations around using Konflux for RPMs would be separate.
It may not be a mandate, but Red Hat has been expressing interest in persuading Fedora to adopt Konflux for a while now. In March 2025, Red Hat engineering manager Brendan Conoboy started a discussion to ask when it would be the right time to bring up using Konflux in Fedora. Miro Hrončok asked why Fedora would want to use Konflux rather than Koji. "It's presented as 'the new cool thing,' yet I struggle to grasp the basic motivation. Call me old fashioned if you must – I'd appreciate an elevator pitch for 'why should I want this'".
Conoboy responded with Red Hat's motivation rather than what might motivate Fedora packagers: the effort of maintaining all the disparate build systems within Red Hat is expensive. Red Hat has chosen to invest in one "secure software development pipeline" that it will use for all of its products and that might be used by other organizations for similar development needs. Since Red Hat also, ultimately, pays for the upkeep of Koji and other Fedora build systems, it is not surprising that it would want to see the project standardize on Konflux as well.
Defining membership
In January there was a debate over one of Fedora's special-interest groups (SIGs) handing out temporary membership to allow voting in FESCo elections. How to define membership was expected to be a topic of discussion at the council summit. During the Q&A period of the meeting, Michael Winters asked if there were any concrete conclusions or action items toward defining Fedora membership that had come out of the summit.
Spaleta said that the council did not manage to "get something actionable in terms of measurement" for contributions. One problem is deciding what is in scope to consider as a contribution to Fedora, then figuring out how to measure those contributions. Spaleta also said that he has to start surfacing some metrics for his role as FPL to be able to measure work in trying to increase contributions. Fedorova stressed the importance of being flexible in terms of deciding what constitutes a contribution. "We have to leave the door open for people to come and tell us how they actually contribute in ways we haven't anticipated before".
Fedora council member Justin Wheeler said that he is tasked with coming up with a proposal around membership for the council. He mentioned that there were discussions about a "vouch-based system" that would allow existing members to vouch for new ones. However, he cautioned that more conversations would be needed and said the community could expect to hear more in the lead-up to the Flock conference.
It would seem that little was decided during the annual council summit, but some progress was made while the members were face-to-face. The Fedora community should expect to see some concrete proposals for membership and experimental fundraising by the time Flock rolls around in June.
Brief items
Security
A GitHub Issue Title Compromised 4,000 Developer Machines (grith.ai)
The grith.ai blog reports on an LLM prompt-injection vulnerability that led to 4,000 installations of a compromised version of the Cline utility.
For the next eight hours, every developer who installed or updated Cline got OpenClaw - a separate AI agent with full system access - installed globally on their machine without consent. Approximately 4,000 downloads occurred before the package was pulled.

The interesting part is not the payload. It is how the attacker got the npm token in the first place: by injecting a prompt into a GitHub issue title, which an AI triage bot read, interpreted as an instruction, and executed.
Huston: Revisiting time
Geoff Huston looks at the network time protocol, and efforts to secure it, in detail.
NTP operates in the clear, and it is often the case that the servers used by a client are not local. This provides an opportunity for an adversary to disrupt an NTP session, by masquerading as an NTP server, or altering NTP payloads in an effort to disrupt a client's time-of-day clock. Many application-level protocols are time sensitive, including TLS, HTTPS, DNSSEC and NFS. Most Cloud applications rely on a coordinated time to determine the most recent version of a data object. Disrupting time can cause significant chaos in distributed network environments.

While it can be relatively straightforward to secure a TCP-based protocol by adding an initial TLS handshake and operating a TLS shim between TCP and the application traffic, it's not so straightforward to use TLS in place of a UDP-based protocol for NTP. TLS can add significant jitter to the packet exchange. Where the privacy of the UDP payload is essential, then DTLS might conceivably be considered, but in the case of NTP the privacy of the timestamps is not essential, but the veracity and authenticity of the server is important.
NTS, a secured version of NTP, is designed to address this requirement relating to the veracity and authenticity of packets passed from an NTS server to an NTS client. The protocol adds an NTS Key Establishment protocol (NTS-KE) in addition to a conventional NTPv4 UDP packet exchange (RFC 8915).
Kernel development
Kernel release status
The current development kernel is 7.0-rc3, released on March 8. Linus said: "So it's still pretty early in the release cycle, and it just feels a bit busier than I'd like. But nothing particularly stands out or looks bad."
This release, as of -rc3, has brought in 12,419 non-merge changes from 2,031 developers, 361 of whom are first-time kernel contributors. The release history looks like:
| RC | Date | Commits |
|---|---|---|
| v7.0-rc1 | 2026-02-22 | 12468 |
| v7.0-rc2 | 2026-03-01 | 434 |
| v7.0-rc3 | 2026-03-08 | 537 |
See the (subscriber only) KSDB 7.0 page for a lot more details.
Stable updates: 6.12.76, 6.6.129, and 6.1.166 were released on March 5. The 6.18.17 update is in the review process; it is due on March 12.
Distributions
Introducing Moonforge: a Yocto-based Linux OS (Igalia Blog)
Igalia has announced the Moonforge Linux distribution, based on OpenEmbedded and Yocto.

Moonforge is an operating system framework for Linux devices that simplifies the process of building and maintaining custom operating systems.
It provides a curated collection of Yocto layers and configuration files that help developers generate immutable, maintainable, and easily updatable operating system images.
The goal is to offer the best possible developer experience for teams building embedded Linux products. Moonforge handles the complex aspects of operating system creation, such as system integration, security, updates, and infrastructure, so developers can focus on building and deploying their applications or devices.
OpenWrt 25.12.0 released
Version 25.12.0 of the OpenWrt router distribution is available; this release has been dedicated to the memory of Dave Täht. Changes include a switch to the apk package manager, the integration of the attended sysupgrade method, and support for a long list of new targets.

SUSE may be for sale, again
Reuters is reporting that private-equity firm EQT may be looking to sell SUSE:
EQT has hired investment bank Arma Partners to sound out a group of private equity investors for a possible sale of the company, said the sources, who requested anonymity to discuss confidential matters. The deliberations are at an early stage and there is no certainty that EQT will proceed with a transaction, the sources said.
SUSE has traded hands a number of times over the years. Most recently it was acquired by EQT in 2018, was listed on the Frankfurt Stock Exchange in 2021, and then taken private again by EQT in August 2023.
Distributions quotes of the week
The key problem is, how do we decide whether to package something or not? We definitely don't have the capability of inspecting whatever crap upstream may be committing. Of course, that was always a risk, but with LLMs around, things are just crazy. And we definitely can't stick with old versions forever.

— Michał Górny

The other side of this is that I have very little motivation to put my human effort into dealing with random slop people are pushing to production these days, and reporting issues that are going to be met with incomprehensible slop replies.

— Morten Linderud

I don't think we can reasonably argue that Linux is not free software, and I don't think we can argue for forking Linux to remove llm generated code.
My take on this is mostly apathy. I don't think we can reasonably challenge the use in the FOSS community. The productivity boost of experienced developers using these is too appealing when we are looking at overburdened FOSS maintainers.
We've already been repeatedly DDoSed by these companies. Spending hundreds of volunteers hours keeping our services running while the companies extract the labour to sell back to the FOSS community, using their standing in the Linux Foundation to further cement their usage in our communities.
Then the FOSS communities use these models without any care of the ethical considerations.
Is this depressing? Yes.
Development
Buildroot 2026.02 released
Peter Korsgaard has announced version 2026.02 of Buildroot, a tool for generating embedded Linux systems through cross-compilation. Notable changes include added support for HPPA, use of the 6.19.x kernel headers by default, better SBOM generation, and more.
Again a very active cycle with more than 1500 changes from 97 unique contributors. I'm once again very happy to see so many "new" people next to the "oldtimers".
See the changelog for full details. Thanks to Julien Olivain for pointing us to the announcement.
digiKam 9.0.0 released
Version 9.0.0 of the digiKam photo-management system has been released. "This major version introduces groundbreaking improvements in performance, usability, and workflow efficiency, with a strong focus on modernizing the user interface, enhancing metadata management, and expanding support for new camera models and file formats." Some of the changes include a new survey tool, more advanced search and sorting options, as well as bulk editing of geolocation coordinates.
Rust 1.94.0 released
Version 1.94.0 of the Rust language has been released. Changes include array windows (an iterator for slices), some Cargo enhancements, and a number of newly stabilized APIs.

Development quote of the week
For whatever my opinion's worth, I think that at least part of our collective thinking about this question needs to be grounded in the fact that this one developer has been working on this codebase almost entirely alone, without support or funding, for at least twelve years.

And I have to ask you, I am begging you, to think about where we've heard a story like that recently, and maybe about how close we came to the brink.

As gross as I think Claude is, as dubious as I think this relicensing exercise is, I also think that if the end state of open source projects is that devs are left to work alone for years on the keystone projects of this jenga tower we're calling modern infrastructure, and then we collectively jump all over them when they turn to the kind of help that, however reprehensible it might be, actually shows up to help, then this entire FOSS project is just a popularity contest where the losers join a slow, lonely suicide pact.

We have to find a better way to do this.

— Mike Hoye on the relicensing of chardet
Page editor: Daroc Alden
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Miscellaneous
Calls for Presentations
CFP Deadlines: March 12, 2026 to May 11, 2026
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| March 13 | August 6 to August 9 | FOSSY 2026 | Vancouver, Canada |
| March 15 | May 21 to May 22 | Linux Security Summit North America | Minneapolis, Minnesota, US |
| March 15 | May 30 to May 31 | Journées du Logiciel Libre 2026 | Lyon, France |
| March 18 | June 18 to June 20 | Linux Audio Conference | Maynooth, Ireland |
| March 29 | May 29 | Yocto Project Developer Day | Nice, France |
| March 31 | June 6 | Hong Kong Open Source Conference | Hong Kong, Hong Kong |
| April 15 | May 4 to May 11 | MiniDebConf Hamburg 2026 | Hamburg, Germany |
| April 20 | July 20 to July 25 | DebConf 26 | Santa Fe, Argentina |
| April 20 | July 13 to July 19 | DebCamp 26 | Santa Fe, Argentina |
| April 23 | October 5 to October 7 | Linux Plumbers Conference 2026 | Prague, Czechia |
| April 30 | September 29 to September 30 | devopsdays Berlin 2026 | Berlin, Germany |
If the CFP deadline for your event does not appear here, please tell us about it.
Event Reports
Events: March 12, 2026 to May 11, 2026
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| March 16 to March 17 | FOSS Backstage | Berlin, Germany |
| March 19 | Open Tech Day 26: OpenTofu Edition | Nuremberg, Germany |
| March 23 to March 26 | KubeCon + CloudNativeCon Europe | Amsterdam, Netherlands |
| March 28 | Central Pennsylvania Open Source Conference | Lancaster, Pennsylvania, US |
| March 28 to March 29 | Chemnitz Linux Days | Chemnitz, Germany |
| March 28 to March 29 | InstallFest 2026 | Prague, Czechia |
| April 10 to April 11 | Grazer Linuxtage | Graz, Austria |
| April 20 to April 21 | SambaXP | Göttingen, Germany |
| April 23 | OpenSUSE Open Developers Summit | Prague, Czech Republic |
| April 25 to April 26 | Sesja Linuksowa (Linux Session) | Wrocław, Poland |
| April 27 to April 28 | foss-north | Gothenburg, Sweden |
| April 28 to April 29 | stackconf 2026 | Munich, Germany |
| April 29 to May 1 | Linaro Connect Madrid 2026 | Madrid, Spain |
| May 2 | 22nd Linux Infotag Augsburg | Augsburg, Germany |
| May 4 to May 6 | Linux Storage, Filesystem, Memory Management and BPF Summit | Zagreb, Croatia |
| May 4 to May 11 | MiniDebConf Hamburg 2026 | Hamburg, Germany |
If your event does not appear here, please tell us about it.
Security updates
Alert summary March 5, 2026 to March 11, 2026
| Dist. | ID | Release | Package | Date |
|---|---|---|---|---|
| AlmaLinux | ALSA-2026:3864 | 10 | delve | 2026-03-06 |
| AlmaLinux | ALSA-2026:3928 | 9 | git-lfs | 2026-03-06 |
| AlmaLinux | ALSA-2026:3669 | 10 | go-rpm-macros | 2026-03-05 |
| AlmaLinux | ALSA-2026:3963 | 8 | kernel | 2026-03-11 |
| AlmaLinux | ALSA-2026:3964 | 8 | kernel-rt | 2026-03-11 |
| AlmaLinux | ALSA-2026:3551 | 10 | libpng | 2026-03-05 |
| AlmaLinux | ALSA-2026:3967 | 8 | libvpx | 2026-03-11 |
| AlmaLinux | ALSA-2026:3938 | 8 | nfs-utils | 2026-03-11 |
| AlmaLinux | ALSA-2026:4235 | 9 | nginx:1.26 | 2026-03-11 |
| AlmaLinux | ALSA-2026:3898 | 8 | osbuild-composer | 2026-03-11 |
| AlmaLinux | ALSA-2026:3753 | 9 | osbuild-composer | 2026-03-11 |
| AlmaLinux | ALSA-2026:3730 | 9 | postgresql | 2026-03-11 |
| AlmaLinux | ALSA-2026:3887 | 10 | postgresql16 | 2026-03-06 |
| AlmaLinux | ALSA-2026:4064 | 8 | postgresql:12 | 2026-03-11 |
| AlmaLinux | ALSA-2026:4024 | 8 | postgresql:13 | 2026-03-11 |
| AlmaLinux | ALSA-2026:4059 | 8 | postgresql:15 | 2026-03-11 |
| AlmaLinux | ALSA-2026:3896 | 9 | postgresql:15 | 2026-03-11 |
| AlmaLinux | ALSA-2026:4063 | 8 | postgresql:16 | 2026-03-11 |
| AlmaLinux | ALSA-2026:4146 | 8 | python-pyasn1 | 2026-03-11 |
| AlmaLinux | ALSA-2026:3517 | 10 | thunderbird | 2026-03-05 |
| AlmaLinux | ALSA-2026:3515 | 8 | thunderbird | 2026-03-05 |
| AlmaLinux | ALSA-2026:3476 | 10 | udisks2 | 2026-03-05 |
| AlmaLinux | ALSA-2026:3443 | 10 | valkey | 2026-03-05 |
| Debian | DSA-6157-1 | stable | chromium | 2026-03-06 |
| Debian | DSA-6158-1 | stable | imagemagick | 2026-03-09 |
| Debian | DSA-6159-1 | stable | imagemagick | 2026-03-10 |
| Fedora | FEDORA-2026-95fffce421 | F42 | cef | 2026-03-09 |
| Fedora | FEDORA-2026-b5f8adc627 | F43 | cef | 2026-03-08 |
| Fedora | FEDORA-2026-376794abc1 | F44 | cef | 2026-03-07 |
| Fedora | FEDORA-2026-9834b25fc2 | F44 | cef | 2026-03-08 |
| Fedora | FEDORA-2026-f6901d5918 | F42 | chezmoi | 2026-03-07 |
| Fedora | FEDORA-2026-cf96901e5c | F42 | chromium | 2026-03-07 |
| Fedora | FEDORA-2026-06657d1811 | F42 | chromium | 2026-03-10 |
| Fedora | FEDORA-2026-f62db6b372 | F43 | chromium | 2026-03-10 |
| Fedora | FEDORA-2026-f9edb96182 | F44 | chromium | 2026-03-07 |
| Fedora | FEDORA-2026-845d4a7f07 | F44 | chromium | 2026-03-07 |
| Fedora | FEDORA-2026-b7b02bebba | F44 | chromium | 2026-03-10 |
| Fedora | FEDORA-2026-2a1aa1f57f | F42 | coturn | 2026-03-05 |
| Fedora | FEDORA-2026-8cb5571ddc | F43 | coturn | 2026-03-05 |
| Fedora | FEDORA-2026-379e214a37 | F44 | coturn | 2026-03-07 |
| Fedora | FEDORA-2026-e67a6f9c45 | F43 | erlang-hex_core | 2026-03-07 |
| Fedora | FEDORA-2026-e6bf22d958 | F44 | erlang-hex_core | 2026-03-07 |
| Fedora | FEDORA-2026-b5bde68630 | F44 | firefox | 2026-03-07 |
| Fedora | FEDORA-2026-a160e550ec | F44 | freerdp | 2026-03-06 |
| Fedora | FEDORA-2026-de52e7caa1 | F42 | gh | 2026-03-07 |
| Fedora | FEDORA-2026-aecd3809f1 | F42 | gimp | 2026-03-07 |
| Fedora | FEDORA-2026-b930e5c133 | F44 | gimp | 2026-03-07 |
| Fedora | FEDORA-2026-a74aa25180 | F42 | k9s | 2026-03-09 |
| Fedora | FEDORA-2026-2b8b223cf0 | F44 | keylime | 2026-03-07 |
| Fedora | FEDORA-2026-2b8b223cf0 | F44 | keylime-agent-rust | 2026-03-07 |
| Fedora | FEDORA-2026-57cd5704e9 | F42 | libsixel | 2026-03-06 |
| Fedora | FEDORA-2026-b227fad171 | F43 | libsixel | 2026-03-06 |
| Fedora | FEDORA-2026-a800d3417b | F44 | libsixel | 2026-03-07 |
| Fedora | FEDORA-2026-151bfcc2af | F43 | matrix-synapse | 2026-03-10 |
| Fedora | FEDORA-2026-3b12e49fee | F44 | microcode_ctl | 2026-03-07 |
| Fedora | FEDORA-2026-ca44fe35a9 | F42 | mingw-zlib | 2026-03-10 |
| Fedora | FEDORA-2026-0aee6ab474 | F43 | mingw-zlib | 2026-03-10 |
| Fedora | FEDORA-2026-94519b94d8 | F44 | nextcloud | 2026-03-07 |
| Fedora | FEDORA-2026-b5bde68630 | F44 | nss | 2026-03-07 |
| Fedora | FEDORA-2026-1a199d8524 | F42 | opensips | 2026-03-06 |
| Fedora | FEDORA-2026-c0123ede74 | F42 | perl-Crypt-SysRandom-XS | 2026-03-11 |
| Fedora | FEDORA-2026-7b9874a01f | F43 | perl-Crypt-SysRandom-XS | 2026-03-11 |
| Fedora | FEDORA-2026-eb6b1039eb | F44 | perl-Crypt-URandom | 2026-03-07 |
| Fedora | FEDORA-2026-baf8782c7a | F42 | perl-Net-CIDR | 2026-03-10 |
| Fedora | FEDORA-2026-2792616d35 | F44 | pgadmin4 | 2026-03-07 |
| Fedora | FEDORA-2026-d781fd2f6b | F42 | php-zumba-json-serializer | 2026-03-05 |
| Fedora | FEDORA-2026-5ff99e948e | F43 | php-zumba-json-serializer | 2026-03-05 |
| Fedora | FEDORA-2026-ce5f5c292d | F44 | php-zumba-json-serializer | 2026-03-07 |
| Fedora | FEDORA-2026-0e9ef494fc | F43 | polkit | 2026-03-10 |
| Fedora | FEDORA-2026-1ace5758de | F44 | postgresql16-anonymizer | 2026-03-07 |
| Fedora | FEDORA-2026-c9fb6d2b76 | F42 | prometheus | 2026-03-07 |
| Fedora | FEDORA-2026-ce1dd0caa0 | F43 | prometheus | 2026-03-07 |
| Fedora | FEDORA-2026-cfa488b1ac | F42 | python-asyncmy | 2026-03-07 |
| Fedora | FEDORA-2026-9d9161bac3 | F43 | python-asyncmy | 2026-03-07 |
| Fedora | FEDORA-2026-cd9be7f17c | F44 | python-asyncmy | 2026-03-07 |
| Fedora | FEDORA-2026-ef5d97522f | F42 | python3.10 | 2026-03-07 |
| Fedora | FEDORA-2026-489dc1bc1b | F43 | python3.10 | 2026-03-07 |
| Fedora | FEDORA-2026-48d2e7135b | F44 | python3.10 | 2026-03-07 |
| Fedora | FEDORA-2026-8fa5a66a49 | F42 | python3.11 | 2026-03-07 |
| Fedora | FEDORA-2026-f17f6e94ca | F43 | python3.11 | 2026-03-07 |
| Fedora | FEDORA-2026-91d3384f04 | F44 | python3.11 | 2026-03-07 |
| Fedora | FEDORA-2026-14a63ba868 | F44 | python3.9 | 2026-03-07 |
| Fedora | FEDORA-2026-151bfcc2af | F43 | rust-pythonize | 2026-03-10 |
| Fedora | FEDORA-2026-0c4838b53c | F43 | staticcheck | 2026-03-07 |
| Fedora | FEDORA-2026-c1c45c4b2d | F44 | systemd | 2026-03-11 |
| Fedora | FEDORA-2026-1d05f1d152 | F42 | valkey | 2026-03-05 |
| Fedora | FEDORA-2026-8d275f4438 | F43 | valkey | 2026-03-05 |
| Fedora | FEDORA-2026-ca1077dd2e | F44 | valkey | 2026-03-07 |
| Fedora | FEDORA-2026-651ba4626f | F43 | vim | 2026-03-08 |
| Fedora | FEDORA-2026-7d3c7180c7 | F42 | yt-dlp | 2026-03-05 |
| Fedora | FEDORA-2026-937e768833 | F44 | yt-dlp | 2026-03-05 |
| Mageia | MGASA-2026-0051 | 9 | coturn | 2026-03-09 |
| Mageia | MGASA-2026-0052 | 9 | firefox | 2026-03-09 |
| Mageia | MGASA-2026-0050 | 9 | python-django | 2026-03-06 |
| Mageia | MGASA-2026-0048 | 9 | rsync | 2026-03-06 |
| Mageia | MGASA-2026-0053 | 9 | thunderbird | 2026-03-09 |
| Mageia | MGASA-2026-0049 | 9 | vim | 2026-03-06 |
| Mageia | MGASA-2026-0054 | 9 | yt-dlp | 2026-03-10 |
| Oracle | ELSA-2026-3864 | OL10 | delve | 2026-03-09 |
| Oracle | ELSA-2026-3842 | OL9 | delve | 2026-03-09 |
| Oracle | ELSA-2026-4173 | OL9 | gimp | 2026-03-10 |
| Oracle | ELSA-2026-4164 | OL10 | git-lfs | 2026-03-10 |
| Oracle | ELSA-2026-3985 | OL8 | git-lfs | 2026-03-10 |
| Oracle | ELSA-2026-3928 | OL9 | git-lfs | 2026-03-09 |
| Oracle | ELSA-2026-3477 | OL10 | gnutls | 2026-03-09 |
| Oracle | ELSA-2026-3669 | OL10 | go-rpm-macros | 2026-03-09 |
| Oracle | ELSA-2026-3668 | OL9 | go-rpm-macros | 2026-03-09 |
| Oracle | ELSA-2026-3840 | OL10 | image-builder | 2026-03-09 |
| Oracle | ELSA-2026-3839 | OL9 | image-builder | 2026-03-09 |
| Oracle | ELSA-2026-4012 | OL10 | kernel | 2026-03-10 |
| Oracle | ELSA-2026-50142 | OL7 | kernel | 2026-03-09 |
| Oracle | ELSA-2026-1581 | OL7 | kernel | 2026-03-09 |
| Oracle | ELSA-2026-50134 | OL7 | kernel | 2026-03-09 |
| Oracle | ELSA-2026-50134 | OL8 | kernel | 2026-03-09 |
| Oracle | ELSA-2026-3464 | OL8 | kernel | 2026-03-09 |
| Oracle | ELSA-2026-50133 | OL8 | kernel | 2026-03-09 |
| Oracle | ELSA-2026-50142 | OL8 | kernel | 2026-03-09 |
| Oracle | ELSA-2026-3963 | OL8 | kernel | 2026-03-09 |
| Oracle | ELSA-2026-3488 | OL9 | kernel | 2026-03-09 |
| Oracle | ELSA-2026-50133 | OL9 | kernel | 2026-03-09 |
| Oracle | ELSA-2026-3966 | OL9 | kernel | 2026-03-10 |
| Oracle | ELSA-2026-3551 | OL10 | libpng | 2026-03-09 |
| Oracle | ELSA-2026-2628 | OL7 | libsoup | 2026-03-09 |
| Oracle | ELSA-2026-3967 | OL8 | libvpx | 2026-03-10 |
| Oracle | ELSA-2026-4162 | OL10 | mysql8.4 | 2026-03-10 |
| Oracle | ELSA-2026-3939 | OL10 | nfs-utils | 2026-03-09 |
| Oracle | ELSA-2026-3938 | OL8 | nfs-utils | 2026-03-09 |
| Oracle | ELSA-2026-3940 | OL9 | nfs-utils | 2026-03-09 |
| Oracle | ELSA-2026-3638 | OL9 | nginx:1.24 | 2026-03-09 |
| Oracle | ELSA-2026-3752 | OL10 | osbuild-composer | 2026-03-09 |
| Oracle | ELSA-2026-3898 | OL8 | osbuild-composer | 2026-03-09 |
| Oracle | ELSA-2026-3753 | OL9 | osbuild-composer | 2026-03-09 |
| Oracle | ELSA-2026-3730 | OL9 | postgresql | 2026-03-09 |
| Oracle | ELSA-2026-3887 | OL10 | postgresql16 | 2026-03-09 |
| Oracle | ELSA-2026-4064 | OL8 | postgresql:12 | 2026-03-10 |
| Oracle | ELSA-2026-4024 | OL8 | postgresql:13 | 2026-03-10 |
| Oracle | ELSA-2026-4059 | OL8 | postgresql:15 | 2026-03-10 |
| Oracle | ELSA-2026-3896 | OL9 | postgresql:15 | 2026-03-09 |
| Oracle | ELSA-2026-4063 | OL8 | postgresql:16 | 2026-03-10 |
| Oracle | ELSA-2026-4110 | OL9 | postgresql:16 | 2026-03-09 |
| Oracle | ELSA-2026-4146 | OL8 | python-pyasn1 | 2026-03-10 |
| Oracle | ELSA-2026-2713 | OL7 | python3 | 2026-03-09 |
| Oracle | ELSA-2026-4165 | OL9 | python3.12 | 2026-03-10 |
| Oracle | ELSA-2026-4168 | OL9 | python3.9 | 2026-03-10 |
| Oracle | ELSA-2026-3517 | OL10 | thunderbird | 2026-03-09 |
| Oracle | ELSA-2026-3515 | OL8 | thunderbird | 2026-03-09 |
| Oracle | ELSA-2026-3516 | OL9 | thunderbird | 2026-03-09 |
| Oracle | ELSA-2026-3476 | OL10 | udisks2 | 2026-03-09 |
| Oracle | ELSA-2026-3507 | OL9 | valkey | 2026-03-09 |
| Red Hat | RHSA-2026:3864-01 | EL10 | delve | 2026-03-05 |
| Red Hat | RHSA-2026:3813-01 | EL10.0 | go-rpm-macros | 2026-03-05 |
| Red Hat | RHSA-2026:3814-01 | EL9.6 | go-rpm-macros | 2026-03-06 |
| Red Hat | RHSA-2026:3831-01 | EL10.0 | grafana | 2026-03-10 |
| Red Hat | RHSA-2026:3841-01 | EL8.2 | grafana | 2026-03-05 |
| Red Hat | RHSA-2026:3879-01 | EL8.4 | grafana | 2026-03-05 |
| Red Hat | RHSA-2026:3880-01 | EL8.6 | grafana | 2026-03-05 |
| Red Hat | RHSA-2026:3838-01 | EL8.8 | grafana | 2026-03-10 |
| Red Hat | RHSA-2026:3854-01 | EL9.0 | grafana | 2026-03-05 |
| Red Hat | RHSA-2026:3836-01 | EL9.2 | grafana | 2026-03-05 |
| Red Hat | RHSA-2026:3835-01 | EL9.4 | grafana | 2026-03-10 |
| Red Hat | RHSA-2026:3833-01 | EL9.6 | grafana | 2026-03-10 |
| Red Hat | RHSA-2026:3816-01 | EL10.0 | grafana-pcp | 2026-03-05 |
| Red Hat | RHSA-2026:3815-01 | EL8.4 | grafana-pcp | 2026-03-05 |
| Red Hat | RHSA-2026:3812-01 | EL8.6 | grafana-pcp | 2026-03-05 |
| Red Hat | RHSA-2026:3821-01 | EL8.8 | grafana-pcp | 2026-03-05 |
| Red Hat | RHSA-2026:3822-01 | EL9.0 | grafana-pcp | 2026-03-05 |
| Red Hat | RHSA-2026:3820-01 | EL9.2 | grafana-pcp | 2026-03-05 |
| Red Hat | RHSA-2026:3818-01 | EL9.4 | grafana-pcp | 2026-03-05 |
| Red Hat | RHSA-2026:3817-01 | EL9.6 | grafana-pcp | 2026-03-05 |
| Red Hat | RHSA-2026:3840-01 | EL10 | image-builder | 2026-03-05 |
| Red Hat | RHSA-2026:3839-01 | EL9 | image-builder | 2026-03-10 |
| Red Hat | RHSA-2026:4174-01 | EL10 | opentelemetry-collector | 2026-03-10 |
| Red Hat | RHSA-2026:3752-01 | EL10 | osbuild-composer | 2026-03-05 |
| Red Hat | RHSA-2026:3898-01 | EL8 | osbuild-composer | 2026-03-06 |
| Red Hat | RHSA-2026:3753-01 | EL9 | osbuild-composer | 2026-03-05 |
| Red Hat | RHSA-2026:3730-01 | EL9 | postgresql | 2026-03-05 |
| Slackware | SSA:2026-063-01 | | nvi | 2026-03-04 |
| SUSE | SUSE-SU-2026:20592-1 | SLE16 | 7zip | 2026-03-05 |
| SUSE | SUSE-SU-2026:0854-1 | SLE12 | ImageMagick | 2026-03-09 |
| SUSE | SUSE-SU-2026:0851-1 | SLE15 | ImageMagick | 2026-03-09 |
| SUSE | SUSE-SU-2026:0853-1 | SLE15 oS15.4 | ImageMagick | 2026-03-09 |
| SUSE | SUSE-SU-2026:0852-1 | SLE15 oS15.6 | ImageMagick | 2026-03-09 |
| SUSE | openSUSE-SU-2026:10278-1 | TW | ImageMagick | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10295-1 | TW | NetworkManager-applet-strongswan | 2026-03-08 |
| SUSE | SUSE-SU-2026:20604-1 | SLE16 | assertj-core | 2026-03-05 |
| SUSE | openSUSE-SU-2026:20298-1 | oS16.0 | assertj-core | 2026-03-05 |
| SUSE | SUSE-SU-2026:20590-1 | SLE16 | autogen | 2026-03-05 |
| SUSE | SUSE-SU-2026:0855-1 | oS15.4 oS15.6 | c3p0 and mchange-commons | 2026-03-10 |
| SUSE | openSUSE-SU-2026:10279-1 | TW | c3p0 | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10296-1 | TW | chromedriver | 2026-03-08 |
| SUSE | openSUSE-SU-2026:20332-1 | oS16.0 | chromium | 2026-03-07 |
| SUSE | openSUSE-SU-2026:0077-1 | osB15 | chromium | 2026-03-08 |
| SUSE | openSUSE-SU-2026:0078-1 | osB15 | chromium | 2026-03-08 |
| SUSE | SUSE-SU-2026:20538-1 | SLE-m6.2 | cockpit-machines, cockpit | 2026-03-05 |
| SUSE | SUSE-SU-2026:20576-1 | SLE16 | cockpit-machines, cockpit | 2026-03-05 |
| SUSE | SUSE-SU-2026:20540-1 | SLE-m6.2 | cockpit-repos | 2026-03-05 |
| SUSE | SUSE-SU-2026:20580-1 | SLE16 | cockpit-repos | 2026-03-05 |
| SUSE | SUSE-SU-2026:20550-1 | SLE-m6.2 | containerized-data-importer | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10297-1 | TW | coredns | 2026-03-08 |
| SUSE | openSUSE-SU-2026:10311-1 | TW | corepack24 | 2026-03-09 |
| SUSE | SUSE-SU-2026:20600-1 | SLE16 | cpp-httplib | 2026-03-05 |
| SUSE | SUSE-SU-2026:20539-1 | SLE-m6.2 | docker | 2026-03-05 |
| SUSE | SUSE-SU-2026:20578-1 | SLE16 | docker | 2026-03-05 |
| SUSE | SUSE-SU-2026:20585-1 | SLE16 | docker-stable | 2026-03-05 |
| SUSE | SUSE-SU-2026:0826-1 | SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 oS15.6 | expat | 2026-03-05 |
| SUSE | SUSE-SU-2026:20642-1 | SLE-m6.2 | expat | 2026-03-09 |
| SUSE | SUSE-SU-2026:20627-1 | SLE16 | expat | 2026-03-06 |
| SUSE | SUSE-SU-2026:0812-1 | SLE12 | firefox | 2026-03-05 |
| SUSE | SUSE-SU-2026:20582-1 | SLE16 | firefox | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10289-1 | TW | freetype2-devel | 2026-03-06 |
| SUSE | openSUSE-SU-2026:0073-1 | osB15 | gitea-tea | 2026-03-08 |
| SUSE | openSUSE-SU-2026:0074-1 | osB15 | gitea-tea | 2026-03-08 |
| SUSE | SUSE-SU-2026:20563-1 | SLE-m6.2 | glibc | 2026-03-05 |
| SUSE | SUSE-SU-2026:0829-1 | SLE15 oS15.6 | gnutls | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10310-1 | TW | go1 | 2026-03-09 |
| SUSE | SUSE-SU-2026:20629-1 | SLE16 | go1.24-openssl | 2026-03-06 |
| SUSE | SUSE-SU-2026:20623-1 | SLE16 | go1.25-openssl | 2026-03-06 |
| SUSE | openSUSE-SU-2026:20301-1 | oS16.0 | go1.25-openssl | 2026-03-05 |
| SUSE | SUSE-SU-2026:20574-1 | SLE16 | golang-github-prometheus-prometheus | 2026-03-05 |
| SUSE | SUSE-SU-2026:0840-1 | MP4.3 SLE15 | grpc | 2026-03-06 |
| SUSE | openSUSE-SU-2026:20329-1 | oS16.0 | gstreamer-rtsp-server, gstreamer-plugins-ugly, | 2026-03-07 |
| SUSE | SUSE-SU-2026:20557-1 | SLE-m6.2 | haproxy | 2026-03-05 |
| SUSE | SUSE-SU-2026:20620-1 | SLE16 | haproxy | 2026-03-05 |
| SUSE | SUSE-SU-2026:20616-1 | SLE16 | haproxy | 2026-03-05 |
| SUSE | openSUSE-SU-2026:20327-1 | oS16.0 | helm | 2026-03-07 |
| SUSE | openSUSE-SU-2026:10280-1 | TW | incus | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10300-1 | TW | jetty-annotations | 2026-03-08 |
| SUSE | SUSE-SU-2026:20615-1 | SLE16 | kernel | 2026-03-05 |
| SUSE | SUSE-SU-2026:20599-1 | SLE16 | kernel | 2026-03-05 |
| SUSE | SUSE-SU-2026:20570-1 | SLE16 SLE-m6.2 | kernel | 2026-03-05 |
| SUSE | SUSE-SU-2026:20564-1 | SLE16 SLE-m6.2 | kernel | 2026-03-05 |
| SUSE | SUSE-SU-2026:20562-1 | SLE16 SLE-m6.2 | kernel | 2026-03-05 |
| SUSE | SUSE-SU-2026:20561-1 | SLE16 SLE-m6.2 | kernel | 2026-03-05 |
| SUSE | SUSE-SU-2026:20555-1 | SLE16 SLE-m6.2 | kernel | 2026-03-05 |
| SUSE | SUSE-SU-2026:20560-1 | SLE16 SLE-m6.2 oS16.0 | kernel | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10302-1 | TW | kubeshark-cli | 2026-03-08 |
| SUSE | SUSE-SU-2026:20571-1 | SLE-m6.2 | kubevirt | 2026-03-05 |
| SUSE | SUSE-SU-2026:20551-1 | SLE-m6.2 | kubevirt | 2026-03-05 |
| SUSE | SUSE-SU-2026:20610-1 | SLE16 | kubevirt | 2026-03-05 |
| SUSE | openSUSE-SU-2026:0072-1 | osB15 | libaec | 2026-03-07 |
| SUSE | openSUSE-SU-2026:10288-1 | TW | libblkid-devel | 2026-03-06 |
| SUSE | SUSE-SU-2026:0847-1 | SLE-m5.2 | libsoup | 2026-03-09 |
| SUSE | SUSE-SU-2026:0796-1 | SLE12 | libsoup | 2026-03-04 |
| SUSE | SUSE-SU-2026:0833-1 | SLE15 oS15.4 | libsoup | 2026-03-06 |
| SUSE | SUSE-SU-2026:0834-1 | SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 | libsoup2 | 2026-03-06 |
| SUSE | SUSE-SU-2026:0811-1 | SLE15 oS15.6 | libsoup2 | 2026-03-05 |
| SUSE | SUSE-SU-2026:20647-1 | SLE-m6.2 | libxml2, libxslt | 2026-03-09 |
| SUSE | SUSE-SU-2026:20631-1 | SLE16 | libxml2, libxslt | 2026-03-06 |
| SUSE | SUSE-SU-2026:0801-1 | SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 oS15.6 | libxslt | 2026-03-04 |
| SUSE | openSUSE-SU-2026:10281-1 | TW | mchange-commons | 2026-03-05 |
| SUSE | SUSE-SU-2026:0814-1 | SLE12 | mozilla-nss | 2026-03-05 |
| SUSE | SUSE-SU-2026:0813-1 | SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 oS15.6 | mozilla-nss | 2026-03-05 |
| SUSE | SUSE-SU-2026:0800-1 | SLE15 | ocaml | 2026-03-04 |
| SUSE | SUSE-SU-2026:0830-1 | SLE15 oS15.6 | ocaml | 2026-03-05 |
| SUSE | SUSE-SU-2026:0824-1 | SLE-m5.4 oS15.4 | openCryptoki | 2026-03-05 |
| SUSE | SUSE-SU-2026:0831-1 | SLE15 oS15.6 | openvpn | 2026-03-05 |
| SUSE | SUSE-SU-2026:0825-1 | SLE15 oS15.6 | php-composer2 | 2026-03-05 |
| SUSE | SUSE-SU-2026:20641-1 | SLE-m6.2 | podman | 2026-03-09 |
| SUSE | SUSE-SU-2026:20626-1 | SLE16 | podman | 2026-03-06 |
| SUSE | SUSE-SU-2026:20587-1 | SLE16 | postgresql14 | 2026-03-05 |
| SUSE | SUSE-SU-2026:20588-1 | SLE16 | postgresql15 | 2026-03-05 |
| SUSE | SUSE-SU-2026:0828-1 | SLE15 oS15.6 | python-Authlib | 2026-03-05 |
| SUSE | SUSE-SU-2026:0821-1 | SLE15 oS15.6 | python-Django | 2026-03-05 |
| SUSE | SUSE-SU-2026:0849-1 | SLE15 oS15.4 oS15.6 | python-Flask | 2026-03-09 |
| SUSE | SUSE-SU-2026:0846-1 | oS15.6 | python-Markdown | 2026-03-09 |
| SUSE | SUSE-SU-2026:0802-1 | SLE12 | python | 2026-03-04 |
| SUSE | SUSE-SU-2026:0859-1 | MP4.3 SLE15 | python-aiohttp | 2026-03-11 |
| SUSE | SUSE-SU-2026:0858-1 | MP4.3 SLE15 oS15.4 oS15.6 | python-aiohttp | 2026-03-10 |
| SUSE | SUSE-SU-2026:20621-1 | SLE16 | python-azure-core | 2026-03-05 |
| SUSE | SUSE-SU-2026:20617-1 | SLE16 | python-azure-core | 2026-03-05 |
| SUSE | openSUSE-SU-2026:20322-1 | oS16.0 | python-joserfc | 2026-03-06 |
| SUSE | SUSE-SU-2026:0860-1 | oS15.6 | python-maturin | 2026-03-11 |
| SUSE | openSUSE-SU-2026:0069-1 | osB15 | python-nltk | 2026-03-05 |
| SUSE | SUSE-SU-2026:0805-1 | MP4.3 SLE15 oS15.4 oS15.6 | python-pip | 2026-03-04 |
| SUSE | openSUSE-SU-2026:20333-1 | oS16.0 | python-pypdf2 | 2026-03-07 |
| SUSE | SUSE-SU-2026:0838-1 | SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 | python-tornado | 2026-03-06 |
| SUSE | SUSE-SU-2026:20591-1 | SLE16 | python-urllib3_1 | 2026-03-05 |
| SUSE | openSUSE-SU-2026:20330-1 | oS16.0 | python-uv | 2026-03-07 |
| SUSE | openSUSE-SU-2026:10292-1 | TW | python311-Django | 2026-03-06 |
| SUSE | openSUSE-SU-2026:10282-1 | TW | python311-Django4 | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10284-1 | TW | python311-PyPDF2 | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10293-1 | TW | python311-joserfc | 2026-03-06 |
| SUSE | openSUSE-SU-2026:10304-1 | TW | python311-nltk | 2026-03-08 |
| SUSE | openSUSE-SU-2026:10285-1 | TW | python311-pillow-heif | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10312-1 | TW | python311-pymongo | 2026-03-10 |
| SUSE | openSUSE-SU-2026:10283-1 | TW | python313-Django6 | 2026-03-05 |
| SUSE | SUSE-SU-2026:20543-1 | SLE-m6.2 | python313 | 2026-03-05 |
| SUSE | SUSE-SU-2026:20581-1 | SLE16 | python313 | 2026-03-05 |
| SUSE | SUSE-SU-2026:0832-1 | SLE15 | qemu | 2026-03-06 |
| SUSE | openSUSE-SU-2026:10313-1 | TW | rclone | 2026-03-10 |
| SUSE | SUSE-SU-2026:20603-1 | SLE16 | rhino | 2026-03-05 |
| SUSE | openSUSE-SU-2026:20323-1 | oS16.0 | roundcubemail | 2026-03-06 |
| SUSE | openSUSE-SU-2026:0070-1 | osB15 | roundcubemail | 2026-03-05 |
| SUSE | openSUSE-SU-2026:0071-1 | osB15 | roundcubemail | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10286-1 | TW | ruby4.0-rubygem-rack | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10287-1 | TW | sdbootutil | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10305-1 | TW | tomcat | 2026-03-09 |
| SUSE | openSUSE-SU-2026:10306-1 | TW | tomcat10 | 2026-03-09 |
| SUSE | openSUSE-SU-2026:10307-1 | TW | tomcat11 | 2026-03-09 |
| SUSE | SUSE-SU-2026:0857-1 | SLE-m5.2 oS15.3 | util-linux | 2026-03-10 |
| SUSE | SUSE-SU-2026:0856-1 | SLE-m5.5 oS15.5 oS15.6 | util-linux | 2026-03-10 |
| SUSE | SUSE-SU-2026:0803-1 | oS15.6 | util-linux | 2026-03-04 |
| SUSE | SUSE-SU-2026:0848-1 | SLE15 | valkey | 2026-03-09 |
| SUSE | SUSE-SU-2026:0819-1 | SLE15 | virtiofsd | 2026-03-05 |
| SUSE | SUSE-SU-2026:0816-1 | SLE15 oS15.6 | virtiofsd | 2026-03-05 |
| SUSE | openSUSE-SU-2026:10308-1 | TW | virtiofsd | 2026-03-09 |
| SUSE | openSUSE-SU-2026:10309-1 | TW | weblate | 2026-03-09 |
| SUSE | SUSE-SU-2026:0806-1 | SLE15 | wicked2nm, suse-migration-services, suse-migration-sle16-activation, SLES16-Migration, SLES16-SAP_Migration | 2026-03-04 |
| SUSE | SUSE-SU-2026:20575-1 | SLE16 | wicked2nm | 2026-03-05 |
| SUSE | SUSE-SU-2026:0817-1 | SLE12 | wireshark | 2026-03-05 |
| SUSE | SUSE-SU-2026:0810-1 | oS15.6 | wireshark | 2026-03-05 |
| Ubuntu | USN-7968-2 | 22.04 24.04 25.10 | apache2 | 2026-03-09 |
| Ubuntu | USN-8075-1 | 16.04 18.04 20.04 22.04 24.04 | gimp | 2026-03-04 |
| Ubuntu | USN-8079-1 | 14.04 | less | 2026-03-06 |
| Ubuntu | USN-8070-2 | 14.04 | linux-aws, linux-lts-xenial | 2026-03-04 |
| Ubuntu | USN-8059-7 | 24.04 | linux-aws-fips | 2026-03-04 |
| Ubuntu | USN-8074-1 | 24.04 | linux-azure | 2026-03-04 |
| Ubuntu | USN-8074-2 | 24.04 | linux-azure-fips | 2026-03-04 |
| Ubuntu | USN-8070-3 | 16.04 | linux-fips | 2026-03-04 |
| Ubuntu | USN-8059-8 | 22.04 24.04 | linux-nvidia, linux-nvidia-6.8, linux-nvidia-lowlatency | 2026-03-10 |
| Ubuntu | USN-8060-7 | 22.04 | linux-nvidia | 2026-03-10 |
| Ubuntu | USN-8071-2 | 14.04 16.04 18.04 20.04 | nss | 2026-03-05 |
| Ubuntu | USN-8071-1 | 22.04 24.04 25.10 | nss | 2026-03-04 |
| Ubuntu | USN-8072-1 | 22.04 24.04 25.10 | postgresql-14, postgresql-16, postgresql-17 | 2026-03-04 |
| Ubuntu | USN-8077-1 | 16.04 18.04 20.04 | python-bleach | 2026-03-05 |
| Ubuntu | USN-8083-1 | 22.04 24.04 25.10 | python-geopandas | 2026-03-11 |
| Ubuntu | USN-8018-2 | 14.04 16.04 18.04 20.04 22.04 24.04 25.10 | python3.4, python3.5, python3.6, python3.7, python3.8, python3.9, python3.10, python3.11, python3.12, python3.13, python3.14 | 2026-03-09 |
| Ubuntu | USN-8073-1 | 22.04 24.04 25.10 | qemu | 2026-03-04 |
| Ubuntu | USN-8076-1 | 16.04 18.04 20.04 22.04 24.04 | qtbase-opensource-src | 2026-03-05 |
| Ubuntu | USN-8080-1 | 16.04 18.04 20.04 | yara | 2026-03-09 |
| Ubuntu | USN-8078-1 | 22.04 | zutty | 2026-03-06 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Joe Brockmeier
