LWN.net Weekly Edition for July 17, 2025
Welcome to the LWN.net Weekly Edition for July 17, 2025
This edition contains the following feature content:
- Following up on the Python JIT: adding just-in-time compilation to the CPython interpreter is not a small task; this PyCon talk provides an update on how that work is going.
- Anubis sends AI scraperbots to a well-deserved fate: separating the bots from the humans in an attempt to save sites from hostile scrapers.
- Linux and Secure Boot certificate expiration: how should distributions react to the upcoming expiration of an important Microsoft signing key?
- SFrame-based stack unwinding for the kernel: integrating a mechanism to allow the kernel to create reliable user-space stack traces.
- Enforcement (or not) for module-specific exported symbols: how far should the kernel go to enforce its rules on the use of exported symbols by loadable modules?
- Fedora SIG changes Python packaging strategy: Fedora considers leaving more of its packaging work to a language-specific repository.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Following up on the Python JIT
Performance of Python programs has been a major focus of development for the language over the last five years or so; the Faster CPython project has been a big part of that effort. One of its subprojects is to add an experimental just-in-time (JIT) compiler to the language; at last year's PyCon US, project member Brandt Bucher gave an introduction to the copy-and-patch JIT compiler. At PyCon US 2025, he followed that up with a talk on "What they don't tell you about building a JIT compiler for CPython" to describe some of the things he wishes he had known when he set out to work on that project. There was something of an elephant in the room, however, in that Microsoft dropped support for the project and laid off most of its Faster CPython team a few days before the talk.
Bucher only alluded to that event in the talk, and elsewhere has made
it clear that he intends to continue working on the JIT compiler
whatever the fallout. When he gave the talk back in May, he said that he
had been working with Python for around eight years, as a core developer
for six, part of the Microsoft CPython performance engineering team for
four, and has been working on the JIT compiler for the last two years.
While the team at Microsoft is often equated with the Faster CPython project, it is really just a part of it; "our team collaborates with lots of people outside of Microsoft".
Faster CPython results
![Brandt Bucher](https://static.lwn.net/images/2025/pycon-bucher-sm.png)
The project has seen some great results over the last few Python releases.
Its work first appeared in 2022 as part of Python 3.11, which averaged 25% faster than 3.10, depending on the workload; "no need to change your code, you just upgrade Python and everything works". In the years since, there have been further improvements: Python 3.12 was 4% faster than 3.11, and 3.13 improved by 7% over 3.12. Python 3.14, which is due in October, will be around 8% faster than its predecessor.
In aggregate, that means Python has gotten nearly 50% faster in less than four years, he said. Around 93% of the benchmarks that the project uses have improved their performance over that time; nearly half (46%) are more than 50% faster, and 20% of the benchmarks are more than 100% faster. Those are not simply micro-benchmarks; they represent real workloads. Pylint has gotten 100% faster, for example.
All of those increases have come without the JIT; they come from all of the other changes that the team has been working on, while "taking a kind of holistic approach to improving Python performance". Those changes have a meaningful impact on performance and were done in such a way that the community can maintain them. "This is what happens when companies fund Python core development", he said, "it's a really special thing". On his slides, that was followed by the crying emoji 😢 accompanied by an uncomfortable laugh.
Moving on, he gave a "duck typing" example that he would refer to throughout the talk. It revolved around a duck simulator that would take an iterator of ducks and "quack" each one, then print the sound. As an additional feature, if a duck has an "echo" attribute that evaluates to true, it would double the sound:
    def simulate_ducks(ducks):
        for duck in ducks:
            sound = duck.quack()
            if duck.echo:
                sound += sound
            print(sound)

That was coupled with two classes that produced different sounds:

    class Duck:
        echo = False

        def quack(self):
            return "Quack!"

    class RubberDuck:
        echo = True

        def __init__(self, loud):
            self.loud = loud

        def quack(self):
            if self.loud:
                return "SQUEAK!"
            return "Squeak!"
He stepped through an example execution of the loop in simulate_ducks(). He showed the bytecode for the stack-based Python virtual machine that was generated by the interpreter and stepped through one iteration of the loop, describing the changes to the stack and to the duck and sound local variables. That process is largely unchanged "since Python was first created".
Specialization
The 3.11 interpreter added specialized bytecode into the mix, where some of the bytecode operations are changed to assume they are using a specific type—chosen based on observing the execution of the code a few times. Python is a dynamic language, so the interpreter always needs to be able to fall back to, say, looking up the proper binary operator for the types. But after running the loop a few times, it can assume that "sound += sound" will be operating on strings so it can switch to a bytecode with a fast path for that explicit operation. "You actually have bytecode that can still handle anything, but has inlined fast paths for the shape of your actual objects and data structures and memory layout."
All of that underlies the JIT compiler, which uses the specialized bytecode interpreter, and can be viewed as being part of the same pipeline, Bucher said. The JIT compiler is not enabled by default in any build of Python, however. As he described in last year's talk, the specialized bytecode instructions get further broken down into micro-ops, which are "smaller units of work within an individual bytecode instruction". The translation to micro-ops is completely automatic because the bytecodes are defined in terms of them, "so this translation step is machine-generated and very very fast", he said.
The micro-ops can be optimized; that is basically the whole point of generating them, he said. Observing the different types and values that are being encountered when executing through the micro-ops will show optimizations that can be applied. Some micro-ops can be replaced with more efficient versions, others can be eliminated because they "are doing work that is entirely redundant and that we can prove we can remove without changing the semantics". He showed a slide full of micro-ops that corresponded to the duck loop and slowly replaced and eliminated something approaching 25% of them, which corresponds to what the 3.14 version of the JIT does.
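The general idea can be sketched in a few lines of Python; this is purely illustrative, with invented micro-op names, and is not how CPython's optimizer is actually structured:

    # Walk a linear trace of (micro-op, operand) pairs, dropping type guards
    # that an earlier guard has already established and replacing a generic
    # operation with a faster version once the operand type is known.
    def optimize(trace):
        known_str = set()          # operands already proven to be strings
        optimized = []
        for op, arg in trace:
            if op == "_GUARD_TYPE_STR":
                if arg in known_str:
                    continue       # redundant guard: already proven
                known_str.add(arg)
            elif op == "_BINARY_OP_ADD" and arg in known_str:
                op = "_BINARY_OP_ADD_STR"   # specialize the operation
            optimized.append((op, arg))
        return optimized

    trace = [("_GUARD_TYPE_STR", "sound"),
             ("_GUARD_TYPE_STR", "sound"),   # redundant
             ("_BINARY_OP_ADD", "sound")]
    print(optimize(trace))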
The JIT will then translate the micro-ops into machine code one-by-one, but it does so using the copy-and-patch mechanism. The machine-code templates for each of the micro-ops are generated at CPython compile time; it is somewhat analogous to the way the micro-ops themselves are generated in a table-driven fashion. Since the templates are not hand-written, fixing bugs in the micro-ops for the rest of the interpreter also fixes them for the JIT; that helps with the maintainability of the JIT, but also helps lower the barrier to entry for working on it, Bucher said.
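The copy-and-patch idea itself is simple enough to sketch; the snippet below is purely illustrative (a single made-up template with one "hole"), not CPython's actual template machinery:

    import struct

    # Each micro-op gets a pre-built machine-code template with "holes" at
    # known offsets; emitting code means copying the template and patching
    # the holes with run-time values. The template here is x86-64
    # "movabs rax, imm64" with the 8-byte immediate left as a hole.
    TEMPLATES = {
        "_LOAD_CONST": (b"\x48\xb8" + b"\x00" * 8, [(2, "constant")]),
    }

    def emit(op, operands):
        template, holes = TEMPLATES[op]
        code = bytearray(template)                  # copy the template
        for offset, name in holes:                  # patch each hole
            code[offset:offset + 8] = struct.pack("<Q", operands[name])
        return bytes(code)

    print(emit("_LOAD_CONST", {"constant": 42}).hex())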
Region selection
With that background out of the way, he moved on to some "interesting parts of working on a JIT compiler" that are often overlooked, starting with region selection. Earlier, he had shown a sequence of micro-ops that needed to be turned into machine code, but he did not describe how that list was generated; "how did we get there in the first place?"
The JIT compiler does not start off with such a sequence; it starts with code like in his duck simulation. There are several questions that need to be answered about that code based on its run-time activity. The first is: "what do we want to compile?" If something is running only a few times, it is not a good candidate for JIT compilation, but something that is running a lot is. Another question is where should it be compiled? A function can be compiled in isolation or it can be inlined into its callers and those can be compiled instead.

When should the code be compiled? There is a balance to be struck between compiling things too early, wasting that effort because the code is not actually running all that much, and too late, which may not actually make the program any faster. The final question is "why?", he said; it only makes sense to compile code if it is clear that compiling will make the code more efficient. "If they are using really dynamic code patterns or doing weird things that we don't actually compile well, then it's probably not worth it."
One approach that can be taken is to compile entire functions, which is known as "method at a time" or "method JIT". It "maps naturally to the way we think about compilers" because it is the way that many ahead-of-time compilers work. So, when the JIT looks at simulate_ducks(), it can just compile the entire function (the for loop) wholesale, but there are some other opportunities for optimization. If it recognizes that most of the time the loop operates on Duck objects, it can inline the quack() function from it:

    for duck in ducks:
        if duck.__class__ is Duck:
            sound = "Quack!"
        else:
            sound = duck.quack()
        ...

If there are lots of RubberDuck objects too, that class's quack() method could be inlined as well. Likewise, the attribute lookup for duck.echo could be inlined for one or both cases, but that all starts to get somewhat complicated, he said; "it's not always super-easy to reason about, especially for something that is running while you are compiling it".
Meanwhile, what if ducks is not a list, but is instead a generator? In simple cases, with a single yield expression, it is not that much different from the list case, but with multiple yield expressions and loops in the generator, it also becomes hard to reason about. That creates a kind of optimization barrier and that kind of code is not uncommon, especially in asynchronous programming contexts.
Another technique, and the one that is currently used in the CPython JIT, is to use a "tracing JIT" instead of a method JIT. The technique takes linear traces of the program's execution, so it can use that information to make optimization decisions. If the first duck is a Duck, the code can be optimized as it was earlier, with a guard based on the class and inlining the sound assignment. Next up is a lookup for duck.echo, but the code in the guarded branch has perfect type information; it already knows that it is processing a Duck, so it knows echo is false, and that if statement can be removed, leaving:
    for duck in ducks:
        if duck.__class__ is Duck:
            sound = "Quack!"
            print(sound)

"This is pretty efficient. If you have just a list of Ducks, you're going to be doing kind of the bare minimum amount of work to actually quack all those ducks."
The code still needs to handle the case where the duck is not a Duck, but it does not need to compile that piece; it can, instead, just send it back to the interpreter if the class guard is false. If the code is also handling RubberDuck objects, though, eventually that else branch will get "hot" because it is being taken frequently.
At that point, the tracing can be turned back on to see what the code is doing. If we assume that it mostly has non-loud RubberDuck objects, the resulting code might look like:
    elif duck.__class__ is RubberDuck:
        if duck.loud:
            ...
        sound = "Squeak!Squeak!"
        print(sound)
    else:
        ...

The two branches that are not specified would simply return to the regular interpreter when they are executed. Since the tracing has perfect type information, it knows that echo is true, so the sound should be doubled, but there is no need to actually use "+=" to get the result. So, now the function has the minimum necessary code to quack either a Duck or a non-loud RubberDuck. If those other branches start getting hot at some point, tracing can once again be used to optimize it further.
One downside of the tracing JIT approach is that it can compile duplicates of the same code, as with "print(sound)". In "very branchy code", Bucher said, "some things near the tail of those traces can be duplicated quite a bit". There are ways to reduce that duplication, but it is a downside to the technique.
Another technique for selecting regions is called "meta tracing", but he did not have time to go into it. He suggested that attendees ask their LLM of choice "about the 'first Futamura projection' and don't misspell it like me, it's not 'Futurama'", he said to some chuckles around the room.
Memory management
JIT compilers "do really weird things with memory
". C programmers
are familiar with readable (or read-only) data, such as a const
array, and data that is both readable and writable is the normal case.
Memory can be dynamically allocated using malloc(), but that kind
of memory cannot be executed; since a JIT compiler needs memory that it can
read, write, and execute, it requires "the big guns
": mmap().
"If you know the right magic incantation, you can whisper to this thing
with all these secret flags and numbers
" to get memory that is
readable, writable, and executable:
    char *data = mmap(NULL, 4096,
                      PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

One caveat is that memory from mmap() comes in page-sized chunks, which is 4KB on most systems but can be larger. If the JIT code is, say, four bytes in length, that can be wasteful, so it needs to be managed carefully. Once you have that memory, he asked, how do you actually execute it? It turns out that "C lets us do crazy things":

    typedef int (*function)(int);
    ((function)data)(42);

That first line creates a type definition named "function", which is a pointer to a function that takes an integer argument and returns an integer. The second line casts the data pointer to that type and then calls the function with an argument of 42 (and ignores the return value). "It's weird, but it works."
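The same trick can be demonstrated from Python using the mmap and ctypes modules; this sketch assumes Linux on x86-64 and a system that permits read-write-execute mappings, and it has nothing to do with how CPython's JIT actually emits code:

    import ctypes
    import mmap

    # x86-64 System V machine code for "return the first integer argument":
    #     mov eax, edi ; ret
    CODE = bytes([0x89, 0xF8, 0xC3])

    buf = mmap.mmap(-1, mmap.PAGESIZE,
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(CODE)

    # Find the mapping's address and cast it to a callable, just as the C
    # example casts the pointer returned by mmap().
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    func = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)(addr)
    print(func(42))    # prints 42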
He noted that the term "executable data" should be setting off alarm bells in people's heads; "if you're a Rust programmer, this is what we call 'unsafe code'", he said to laughter. Being able to write to memory that can be executed is "a scary thing; at best you shoot yourself in the foot, at worst it is a major security vulnerability". For this reason, operating systems often require that memory not be in that state. He said that the memory should be mapped readable and writable, then filled in, and switched to readable and executable using mprotect(); if there is a need to modify the data later, it can be switched back and forth between the two states.
Debugging and profiling
When code is being profiled using one of the Python profilers, code that has been compiled should call all of the same profiling hooks. The easiest way to do that, at least for now, is to not JIT code that has profiler hooks installed. In recent versions of Python, profiling is implemented by using the specializing adaptive interpreter to change certain bytecodes to other, instrumented versions of them, which will call the profiler hooks. If the tracing encounters one of these instrumented bytecodes, it can shut the JIT down for that part of the code, but it can still run in other, non-profiled parts of the code.
A related problem occurs when someone enables profiling for code that has already been JIT-compiled. In that case, Python needs to get out of the JIT code as quickly as possible. That is handled by placing special _CHECK_VALIDITY micro-ops just before "known safe points" where it can jump out of the JIT code and back to the interpreter. That micro-op checks a one-bit flag; if it is set, the execution bails out of the JIT code. That bit gets set when profiling is enabled, but it is also used when code executes that could change the JIT optimizations (e.g. a change of class attributes).
Something that just kind of falls out of that is the ability to support "the weirder features of Python debuggers". The JIT code is created based on what the tracing has seen, but someone running pdb could completely upend that state in various ways (e.g. "duck = Goose()"). The validity bit can be used to avoid problems of that sort as well.
For native profilers and debuggers, such as perf and GDB, there is a need to unwind the stack through JIT frames, and interact with JIT frames, but "the short answer is that it's really really complicated". There are lots of tools of this sort, for various platforms, that all work differently and each has its own APIs for registering debug information in different formats. The project members are aware of the problem, but are trying to determine which tools need to be supported and what level of support they actually need.
Looking ahead
The current Python release is 3.13; the JIT can be built into it by using the --enable-experimental-jit flag. For Python 3.14, which is out in beta form and will be released in October, the Windows and macOS builds have the JIT built-in, but it must be enabled by setting PYTHON_JIT=1 in the environment. He does not recommend enabling it for production code, but the team would love to hear about any results from using it: dramatic improvements or slowdowns, bugs, crashes, and so on. Other platforms, or people creating their own binaries, can enable the JIT with the same flag as for 3.13.
For 3.15, which is in a pre-alpha stage at this point, there are two GitHub issues they are focusing on: "Supporting stack unwinding in the JIT compiler" and "Make the JIT thread-safe". The first he had mentioned earlier with regard to support for native debuggers and profilers. The second is important since the free-threaded build of CPython seems to be working out well and is moving toward becoming the default—see PEP 779 ("Criteria for supported status for free-threaded Python"), which was recently accepted by the steering council. The Faster CPython developers think that making the JIT thread-safe can be done without too much trouble; "it's going to take a little bit of work and there's kind of a long tail of figuring out what optimizations are actually still safe to do in a free-threaded environment". Both of those issues are outside of his domain of expertise, however, so he hoped that others who have those skills would be willing to help out.
In addition, there is a lot of ongoing performance work that is going into the 3.15 branch, of course. He noted, pointedly, that fast progress, especially on larger projects, will depend on the availability of resources. The words on his slide making that point changed to bold, and he gave a significant cough to further emphasize it.
As he wrapped up, he suggested PEP 659 ("Specializing Adaptive Interpreter") and PEP 744 ("JIT Compilation") for further information. For those who would rather watch something, instead of reading about it, he recommended videos of his talks (covered by LWN and linked above) from 2023 on the specializing adaptive interpreter and from 2024 on adding a JIT compiler. The YouTube video of this year's talk is available as well.
[Thanks to the Linux Foundation for its travel sponsorship that allowed me to travel to Pittsburgh for PyCon US.]
Anubis sends AI scraperbots to a well-deserved fate
Few, if any, web sites or web-based services have gone unscathed by the locust-like hordes of AI crawlers looking to consume (and then re-consume) all of the world's content. The Anubis project is designed to provide a first line of defense that blocks mindless bots—while granting real users access to sites without too much hassle. Anubis is a young project, not even a year old. However, its development is moving quickly, and the project seems to be enjoying rapid adoption. The most recent release of Anubis, version 1.20.0, includes a feature that many users have been interested in since the project launched: support for challenging clients without requiring users to have JavaScript turned on.
Block bots, bless browsers
AI scraping bots, at least the ones causing the most headaches, ignore robots.txt, evade detection by lying about their User-Agent header, and come from vast numbers of IP addresses, which poses a serious problem for site owners trying to fend them off. The swarms of scrapers can be devastating for sites, particularly those with dynamically generated content like Git forges.
How do site owners block bots while leaving the door open for the people those sites are supposed to serve? Most methods that might help also subject users to annoying CAPTCHAs, block RSS readers and Git clients, or introduce other friction that makes a site less usable. Blocking bots without too much friction is the problem that Anubis hopes to solve.
![Anubis mascot image](https://static.lwn.net/images/2025/anubis-happy.png)
The MIT-licensed project was announced in January by its author, Xe Iaso. Anubis is written in Go (plus some JavaScript used for testing clients); it is designed to sit between a reverse proxy, such as Caddy or NGINX, and the web-application server.
In a write-up of a lightning talk given at BSDCan 2025 in June, Iaso said that Anubis was initially developed simply to keep scrapers from taking down Iaso's Git server. They hosted the project on GitHub under the organization name Techaro; the name was originally meant for a fake startup, as a satire of the tech industry. Then GNOME started using the project, and it took off from there. Now Techaro is the name that Iaso is actually using for their business.
Many LWN readers are already familiar with the project's mascot, shown to the left, which is displayed briefly after a client browser has successfully passed Anubis's test. In the short time that the project has been available, it has already been pressed into service by a number of free-software projects, including the Linux Kernel Mailing List archive, sourcehut, FFmpeg, and others. A demo is available for those who have not yet encountered Anubis or just want a refresher.
The project takes its name from the Egyptian god of funerary rites; Anubis was reputed to weigh the hearts of the dead to determine whether the deceased was worthy of entering paradise. If the heart was lighter than a feather, the deceased could proceed to a heavenly reward—if not, they would be consumed by Ammit, the devourer of the dead.
How Anubis works
Sadly, we have no crocodile-headed god to dispatch AI scrapers for eternity. The next best thing is to deny them entry, as Anubis does, by putting up a proof-of-work challenge that helps sniff out scrapers in disguise. Iaso was inspired to take this approach by Adam Back's Hashcash system, originally proposed in 1997.
Hashcash was intended to make sending spam more expensive by requiring a "stamp" in each email's headers. The stamp would be generated by causing the client to run a hash-based proof-of-work task, which required some CPU time for every email. Email without a stamp would be routed to /dev/null, thus sparing a user's inbox. The idea was that Hashcash would be little burden for users sending email at normal volume, but too costly to generate large amounts of spam. Hashcash did not take off, to the detriment of all of our inboxes, but that doesn't mean the idea was entirely without merit.
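The mechanism is easy to sketch in Python; this is an illustration of the general hash-based proof-of-work idea, not Anubis's actual challenge code (which runs as JavaScript in the client), and the interpretation of the difficulty number is an assumption of the sketch:

    import hashlib
    from itertools import count

    def solve(challenge: str, difficulty: int) -> int:
        """Find a nonce whose SHA-256 digest starts with `difficulty` zero
        hex digits; the work grows exponentially with the difficulty."""
        target = "0" * difficulty
        for nonce in count():
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce

    def verify(challenge: str, nonce: int, difficulty: int) -> bool:
        # Verification is a single hash, which is what makes the scheme cheap
        # for the server and expensive for high-volume clients.
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        return digest.startswith("0" * difficulty)

    nonce = solve("example-challenge", 4)
    print(nonce, verify("example-challenge", nonce, 4))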
When a request comes to Anubis, it can issue a challenge, evaluate the response, and (if the client passes) issue a signed JSON web token (JWT) cookie (named "techaro.lol-anubis-auth") that will allow the browser access to the site's resources for a period of time.
Anubis decides what action to take by consulting its policy rules. It has three possible actions: allow, challenge, or deny. Allow passes the request to the web service, while deny sends an error message that is designed to look like a successful request to AI scrapers. Challenge, as one might expect, displays the challenge page or validates that a client has passed the challenge before routing the request.
The default policy for Anubis is to challenge "everything that might be a browser", which is usually indicated by the presence of the string Mozilla in the User-Agent header. Because the creators of AI scrapers know that their bots are unwelcome (and apparently have no ethics to speak of), most scraper bots try to pass themselves off as browsers as well. But they generally do not run client-side JavaScript, which means they can be stymied by tools like Anubis.
Administrators have the choice of JSON or YAML for writing custom policy rules. Rules can match the User-Agent string, HTTP request header values, and the request path. It is also possible to filter requests by IP address or range. So, for example, one might allow a specific search engine bot to connect, but only if its IP address matches a specific range.
The difficulty of Anubis's challenge is also configurable. It offers fast and slow proof-of-work algorithms, and allows administrators to set a higher or lower difficulty level for running them. The fast algorithm uses optimized JavaScript that should run quickly; the slow algorithm is designed to waste time and memory. A site might have a policy that lowers the difficulty for a client that has specific session tokens or ratchets up the difficulty for clients that request specific resources. One might also set policy to automatically allow access to some resources, like robots.txt, without any challenge. The difficulty level is expressed as a number; a difficulty of 1 with the fast algorithm should take almost no time on a reasonably fast computer. The default is 4, which is noticeable but only takes a few seconds. A difficulty of 6, on the other hand, can take minutes to complete.
New features
The 1.20.0 release includes RPMs and Debian packages for several architectures, as well as other binaries for Linux and source code. Iaso would like to expand that to providing binary packages for BSD, as well as better testing for BSDs in general. In addition to the packages supplied by the project, Anubis is also packaged by a few Linux distributions, FreeBSD Ports, the Homebrew project, and others. However, many of the packages are not up-to-date with the most recent Anubis stable version. A container image for Anubis is also available for those who would prefer that method of deployment.
The release introduces robots2policy, a command-line tool that converts a site's robots.txt to Anubis policy. I tried it with LWN's robots.txt; it correctly added rules to keep bots away from site search and mailing-list search, but it only included one rule denying access to a specific crawler, because it expects each User-agent line to have its own allow or disallow rule.
Another new feature in 1.20.0 is custom-weight thresholds, a way for administrators to set variable thresholds for client challenges based on how suspicious a client is—much like scoring used by spam filters. Does a client's user-agent have the string "bot" in it? Administrators can assign more weight points and up the difficulty level of the proof-of-work challenge. Does a client present a cookie that indicates that it already has a session with a trusted service? It can have points taken off its weight score. If a score is low enough, a client can be waved through without any challenge at all. If the score is too high, the client can be assigned a particularly difficult challenge that will take longer to complete.
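Conceptually, the scoring works like the hypothetical sketch below; the rule names, weights, and thresholds are invented for illustration and do not reflect Anubis's actual configuration schema:

    def score(request):
        """Accumulate suspicion weight from a few made-up signals."""
        weight = 0
        if "bot" in request.get("user_agent", "").lower():
            weight += 10                      # suspicious user-agent
        if request.get("trusted_session_cookie"):
            weight -= 5                       # known session: less suspicious
        return weight

    def action(weight):
        if weight <= 0:
            return "allow"                    # waved through, no challenge
        if weight < 10:
            return "challenge (default difficulty)"
        return "challenge (high difficulty)"  # make suspect clients work harder

    request = {"user_agent": "ExampleBot/1.0", "trusted_session_cookie": False}
    print(action(score(request)))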
Some administrators have been hesitant to use Anubis because they would rather not block users who turn off JavaScript. Indeed, that was one reason that we opted not to deploy Anubis for LWN's bot problem. With 1.20.0, administrators can use the metarefresh challenge in addition to, or in place of, the JavaScript proof-of-work challenge. As the name suggests, it makes use of the meta refresh HTML element that can instruct a browser to refresh a page or redirect to a new page after a set time period, usually a few seconds. This method is currently off by default, as it is still considered somewhat experimental.
I installed 1.20.0 on a test Debian bookworm system running Apache. The documentation for Apache was missing a line in the sample configuration (the opening stanza for the mod_proxy module), but it was otherwise easy to get up and running. The project also has documentation for other environments that might require special configuration, such as using Anubis with Kubernetes or WordPress. Given the project's speed of development, the documentation seems well-tended overall; the pull request I filed for the missing stanza in the Apache documentation was reviewed almost immediately.
Anime girl and other complaints
Some have complained that the Anubis mascot is "not appropriate for use on customer facing systems" and asked for a flag to disable it. Iaso has said that the ability to change or disable branding is an "enterprise feature", though they are open to making it a feature if Anubis becomes fiscally sustainable. While Anubis is free software, Iaso has asked ("but not demand, these are words on the internet, not word of law") people not to remove the anime girl character from Anubis deployments unless they support Anubis development financially.
Initially, there was some controversy because Anubis was using a mascot that was generated with AI, which did not sit well with some folks. In March, Iaso noted that they were commissioning a new mascot from CELPHASE, an artist based in the European Union, which became the current anime girl mascot.
Even with a professionally designed character, some organizations, such as Duke University, are hesitant to deploy Anubis in part because of its mascot. Duke experienced problems with bot traffic overwhelming several services, such as the Digital Repositories at Duke and its Archives & Manuscripts catalog, and performed a pilot project with Anubis.
The Assessment & User Experience Strategy department of the university library released a report in June, written by Sean Aery, about the pilot project. Aery reported that Anubis was effective—it blocked more than 4 million unwanted HTTP requests per day—while still allowing real users and well-behaved bots. Some real users were blocked too; 12 people reported having a problem with Anubis in one week, but that was usually due to having cookies disabled, according to the report.
However, despite Anubis's efficacy, the first limitation listed by Aery was the user interface for the challenge page:
The images, messages, and styles presented in the default challenge page UI are not ideal -- particularly its anime girl mascot image. Anubis lacks convenient seams to revise any of these elements.
As of this writing, Duke is "in discussions" with Iaso about "a sustainable way forward" and is still using Anubis; in the meantime, it has customized the page interface with artwork it has deemed more appropriate.
Aery does list some real limitations that users should be aware of before adopting the project, however. For example, the report notes that it is very early days for the project and for the practice of fending off AI scrapers. The project is evolving rapidly, so administrators will have to stay on top of frequent updates. And there is no guarantee that Anubis will maintain its edge against scrapers in the long run. Today, Anubis works very well at blocking unwanted visitors, but tomorrow? Many scrapers aren't willing to abide by a gentle "no" now; they are unlikely to give up easily if Anubis and other bot-blockers start denying access to resources their ethically-challenged owners really want. Bot makers will start looking for and finding ways around the blockers. Blockers versus bots will no doubt be an arms race the same way that spam filtering is a never-ending battle against spammers.
Another limitation is Anubis's bus factor; Iaso is responsible for about half the commits to the project. More than 80 people have contributed to the project since its inception, but only one contributor (Jason Cameron) has more than ten commits to the repository.
Iaso is not just carrying most of the development load for Anubis; they are also trying to build a business around the project. The boundaries between Anubis, the open-source project, and Iaso's budding business are fuzzy; combining BDFL-type governance with the potential conflicts of interest coming from a related business has been known to go poorly. That is not to say that it will in this instance, of course.
Anubis already ships with advanced reputation checking that works only with a paid service called Thoth offered by Techaro. Thoth allows Anubis to filter IP addresses by geography (GeoIP) or BGP autonomous system numbers so administrators can block or challenge clients by region or provider (such as Cloudflare). As the project continues to gain traction, it will be interesting to see if Iaso accepts contributions that might conflict with Techaro paid services.
The future
Iaso said at BSDCan 2025 that they were not sure what the end game for Anubis is:
I want to make this into a web application firewall that can potentially survive the AI bubble bursting. Because right now the AI bubble bursting is the biggest threat to the business, as it were.
Since Anubis is, essentially, a one-person show, it raises the question of whether they will be able to stay ahead in the inevitable arms race against scrapers. Iaso did say that they hope to hire another developer, and to provide Anubis as a hosted service at some point. Iaso is also trying out new ways to sort bots from real browsers with various browser fingerprinting techniques, such as John Althouse's JA4 TLS and a novel method of Iaso's own devising called Techaro HTTP Request Fingerprinting Version 1 (THR1).
Anubis only denies scrapers access to content; it does not, as some might wish, actually feed nonsense data to scrapers to poison the data sets that are being created. There are at least two open-source projects that do set out to punish unwanted crawlers by sending the bots data to give them indigestion: Nepenthes and iocaine. Anubis has an open issue to add a target for unverified requests that would allow administrators to redirect clients to Nepenthes or iocaine instead of simply blocking access. Iaso has said that they are willing to do this but think "Anubis itself should not be directly generating the poison".
User "gackillis" has put
together a proof-of-concept project called jackal's
carapace that works together with Anubis. It "very slowly spits
out garbage data
", pointing to a tarpit for scrapers while Anubis
is loading. If the scraper takes the bait, it winds up in the tarpit
instead of being fed real content. However, gackillis warns (in all
caps) that the project is not ready for real-world deployments and
could spike CPU and network usage dramatically.
On July 6, Iaso announced a pre-release of Anubis 1.21.0. The major changes in the upcoming release include the addition of new storage types for Anubis's temporary data, support for localized responses, and allowing access to Common Crawl by default "so scrapers have less incentive to scrape". It also includes a fix for a pesky bug that delivers an invalid response for some browsers after passing Anubis's test. That bug was thought to be fixed in 1.20.0, but it was reopened after reports that users were still seeing the problem.
In a better world, Anubis would never have been necessary—at least not in its current form. It's an additional layer of complexity for site owners to manage and introduces friction for users browsing the web. But Anubis is a lesser evil when compared to having sites knocked offline by overzealous bot traffic or employing more burdensome CAPTCHAs. With luck, Iaso and other contributors will be able to stay one step (or more) ahead of the scrapers until the AI bubble bursts.
Linux and Secure Boot certificate expiration
Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September. After that point, Microsoft will no longer use that key to sign the shim first-stage UEFI bootloader that is used by Linux distributions to boot the kernel with Secure Boot. But the replacement key, which has been available since 2023, may not be installed on many systems; worse yet, it may require the hardware vendor to issue an update for the system firmware, which may or may not happen. It seems that the vast majority of systems will not be lost in the shuffle, but it may require extra work from distributors and users.
Mateus Rodrigues Costa raised the issue on the Fedora devel mailing list on July 8. He had noticed a warning that came with "this month's Windows 11 cumulative update"; it talked about Secure Boot certificates that are scheduled to expire starting in June 2026. Those particular certificates are separate from the one used for shim, which expires much sooner. In any case, the problem of certificate expiration is one that the Linux world will need to tackle.
The situation is rather complicated. Daniel P. Berrangé pointed to a page at the Linux Vendor Firmware Service (LVFS) site that describes it. LVFS is the home of fwupd and other tools that are used to update system firmware from Linux. LVFS and fwupd are the subject of an LWN article from 2020.
There are multiple moving parts to the problem. In order to do a Secure Boot into the Linux kernel, the UEFI boot process requires the first-stage bootloader to be signed with a key in the firmware database that has not expired. Those keys are contained in certificates, which have other information, such as an expiration date and signature(s). The certificate expiration should largely only be a problem when installing a new distribution on a Secure Boot system; the shim that gets installed will have distribution-specific keys and can act as the root of trust for running other programs (e.g. GRUB) using those keys.
Currently, shim is signed with a Microsoft key from 2011 that expires on September 11. Past that point, installation media will no longer boot unless it has an updated shim that is signed with the Microsoft 2023 UEFI key for third parties (which is different from the specific key mentioned in the Windows update). Any installed distribution should have a bootloader signed with its own key that will continue to boot.
But there are lots of systems out there with a firmware database that lacks Microsoft's new key, some have both old and new keys, while there are likely some that only have the new key and cannot Secure Boot Linux installation media at all at this point. Vendors can (and hopefully most will) provide firmware updates that add the new key, and installation media can be created with a shim signed by it, but those updates have to be installed on systems. That's where LVFS and fwupd come in.
LVFS is a repository of vendor-firmware updates of various sorts, which fwupd and other tools can use to install the pieces that are needed into the firmware from Linux. Berrangé noted that older versions of fwupd were unable to solve all of the problems, "but recent releases have been enhanced to handle the updates that Linux users will need to see, which should mitigate the worst of the impact". There may still be a bit of a bumpy ride, however: "Users should be 'aware' of the potential for trouble, but hopefully the worst of the 'worry' part is handled by the OS vendors and maintainers."

Richard Hughes, who maintains LVFS and fwupd, described how well the key exchange key (KEK) and signature database (db) updates have been deploying:
The KEK updates are going out at ~98% success, and db update is ~99% success -- but even 1% multiplied by millions of people is a fair few failures to deploy -- the "failed to write efivarfs" thing. What fixes it for some people is rebooting and clearing the BIOS to factory defaults -- this has the effect of triggering a "de-fragmentation" of the available efivar space so that there's enough contiguous space to deploy the update. The older your BIOS the more likely you are to hit this.
Hughes is referring to a known problem with space for new EFI variables.
For systems where the vendor provides no updates, disabling Secure Boot may be the only option to allow a new install. In a few short months, all existing installation images and media will not be installable with Secure Boot—that may already be true for systems that only have the new key. Secure Boot installation just got that much more complicated.
Beyond that, though, is the possibility of mistakes or problems with the vendor updates. Hughes pointed out that at least one manufacturer has lost access to the private part of its platform key (PK), which is a vendor-specific key burned into the hardware when it is made. That means the platform keys in the hardware need to be changed, which is uncharted waters and "a terrible idea from an attestation point of view". In addition, as Gerd Hoffmann pointed out, the KEK update process is new as well: "a KEK update has never happened before so there are chances that bios vendors messed up things and updating the KEK doesn't work".
The thread has multiple reports on the Secure Boot certificates on various hardware models, as well as reports of updates to the KEK and database. One thing that is not entirely clear is whether the firmware implementations will actually enforce the expiration date on the 2011 key. A working system with a functional trust-chain based on that key might continue to work with a shim signed with that key, even after September. Any shim updates, for, say, security problems, would not be able to be signed with the old key, however—Microsoft will not sign anything using the expired key. That may lead to a "solution" of sorts, as Adam Williamson said:
In theory wouldn't we also have the option to ship an old shim for such cases? If the whole chain is old it should work, right? Of course, we'd then need some heuristic to figure out if we're on the old MS cert and install the old shim...
He said that it may not really make sense and users should just disable Secure Boot. Hoffmann agreed with all of that, but pointed out the problem with shim updates: "Continuing running shim with known security bugs makes it [kinda] pointless to have secure boot turned on".
All in all, it seems like the Linux world is doing the best it can under the circumstances—as is so often the case when it comes to hardware from vendors that mostly care about Windows. Given that the Secure Boot root-of-trust keys (both the platform key and the signing key) are under the control of vendors—Microsoft and the hardware makers—it is always going to be a bit of a struggle to keep up. Since older hardware is something that Linux and its distributions explicitly support, while the other vendors have long since moved on to the latest shiny, some tension was always going to result. One can only hope that the ride is as smooth as it can be.
SFrame-based stack unwinding for the kernel
The kernel's perf events subsystem can produce high-quality profiles, with full function-call chains, of resource usage within the kernel itself. Developers, however, often would like to see profiles of the whole system in one integrated report with, for example, call-stack information that crosses the boundary between the kernel and user space. Support for unwinding user-space call stacks in the perf events subsystem is currently inefficient at best. A long-running effort to provide reliable, user-space call-stack unwinding within the kernel, which will improve that situation considerably, appears to be reaching fruition.

A process's call stack (normally) contains all of the information that is needed to recreate the chain of function calls that led to a given point in its execution. Each call pushes a frame onto the stack; that frame contains, among other things, the return address for the call. The problem is that exactly where that information lives on the stack is not always clear. Functions can (and do) put other information there, so there may be an arbitrary distance between the address in the stack pointer register and the base of the current call frame at any given time. That makes it hard for the kernel (or any program) to reliably work through the call chain on the stack.
One solution to the problem is to use frame pointers — dedicating a CPU register to always point to the base of the current call frame. The frame pointer will be saved to the call stack at a known offset for each function call, so frame pointers will reliably make the structure of the call stack clear. Unfortunately, frame pointers hurt performance; they occupy a CPU register, and must be saved and restored on each function call. As a result, building software (in both kernel and user space) without frame pointers is a common practice; indeed, it is the usual case.
Without frame pointers, the kernel cannot unwind the user-space call stack in the perf events subsystem. The best that can be done is to copy the user-space stack (through the kernel) to the application receiving the performance data, in the hope that the unwinding can be done in user space. Given the resolution at which performance data can be collected, the result is the copying of massive amounts of data, and a corresponding impact on the performance of the system.
Call-stack unwinding is not a new problem, of course. In user space, the solution has often involved debugging data generated in the DWARF format. DWARF data is voluminous and complicated to interpret, though; handling it properly requires implementing a special-purpose virtual machine. Trying to use DWARF in the kernel is a good way to lower one's popularity in that community.
So DWARF is not an answer to this problem. Some years ago, developers working on live patching (which also needs reliable stack unwinding) came up with a new data format called ORC; it is far simpler than DWARF and relatively compact. ORC has been a part of the kernel's build process ever since, but it has never been adapted to user space.
Enter SFrame
At least, ORC hadn't been adapted to user space until the SFrame project (which truly needs an amusing Lord-of-the-Rings-inspired name) was launched. SFrame is an ORC-inspired format that is designed to be as compact as possible. In an executable file that has been built with SFrame data, there is a special ELF section containing two tables.
The first table (the "frame descriptor entries") contains one entry for each function; the table is sorted by virtual address. When the need comes to unwind a call stack, the first step is to take the current program counter and, using a binary search, find the frame descriptor entry that includes that address. Each entry contains, beyond the base address for the function, pointers to one or more "frame row entries". Each of those entries contains between one and 15 triplets with start and size fields describing a portion of the function's code, and an associated frame-base offset. The offset of the program counter from the base address is used to find the relevant triplet which, in turn, gives the offset from the current stack pointer to the base of the call frame. From there, the process can be repeated to work up the call stack, one frame at a time.
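The lookup described above can be modeled in a few lines; the structures below are a simplification for illustration, not the actual on-disk SFrame layout:

    import bisect

    # Frame descriptor entries (FDEs), sorted by function start address; each
    # points at a list of frame row entries (FREs) that map ranges within the
    # function to an offset from the stack pointer to the frame base.
    fdes = [
        (0x1000, [(0x00, 0x10, 8), (0x10, 0x40, 16)]),
        (0x2000, [(0x00, 0x80, 8)]),
    ]

    def frame_offset(pc):
        # Binary-search for the FDE covering pc, then scan its FREs for the
        # triplet whose range contains the program counter.
        i = bisect.bisect_right([start for start, _ in fdes], pc) - 1
        start, fres = fdes[i]
        rel = pc - start
        for fre_start, size, offset in fres:
            if fre_start <= rel < fre_start + size:
                return offset
        return None

    print(frame_offset(0x1024))   # 0x24 into the first function -> 16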
The GNU toolchain has the ability to build executables with SFrame data, though it seems that some of that support is still stabilizing. SFrame capability is evidently coming to LLVM soon as well. Thus far, though, the kernel is unable to make use of that data to generate user-space call traces.
See this article for more information about SFrame. There is a specification for the v2 SFrame format available; this is the format that the GNU toolchain currently supports. Version 3 of the SFrame format is under development, seemingly with the goal of deployment before SFrame is widely used. The new version adds some flexibility for finding the location of the top frame, the elimination of unaligned data fields (which are explicitly used by the current format to reduce its memory requirements), support for signal frames, and s390x architecture support, among other things.
Using SFrame in the kernel
Back in January Josh Poimboeuf posted a patch series adding support for SFrame for user-space call-stack unwinding in the perf events subsystem. He has since moved on to other projects; Steve Rostedt has picked up the work and is trying to push it through to completion. The original series is now broken into three core chunks, each of which addresses a part of the problem.
The SFrame data, being stored in an ELF section, can be mapped into a process's address space along with the rest of the executable. Accessing that data from the kernel, though, is subject to all of the constraints that apply to user-space memory. Notably, this data may be mapped into the address space, but it might not be resident in RAM, so the kernel must be prepared to take page faults when accessing it.
That, in turn, means that the kernel can only access SFrame data when it is running in process context. Data from the system's performance monitoring unit, though, tends to be delivered via interrupts. So the code that handles those interrupts, and which generates the kernel portion of the stack trace, cannot do the user-space unwinding. That work has to be deferred until a safe time, which is usually just before the kernel returns back to user space.
So the first patch series adds the infrastructure for this deferred unwinding. When the need to unwind a user-space stack is detected, a work item is set up and attached using the task_work mechanism; the kernel will then ensure that this work is done at a time when user-space memory access is safe. Support for deferred, in-kernel stack unwinding on systems where frame pointers are in use is also added as part of this series.
The second series teaches the perf events subsystem to use this new infrastructure. The resulting interface for user space is somewhat interesting. A call into the kernel can generate a large number of performance events, each of which may have a stack trace associated with it. But the user-space portion of that trace can only be created just before the return to user space. So the perf events subsystem will report any number of kernel-side traces, followed by a single user-space trace at the end. Since the user-space side of the call chain does not change while this is happening, a single trace is all that is needed. User space must then put the pieces back together to obtain the full trace. The last patch in the series updates the perf tool to perform this merging and output unified call chains.
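The merging step is conceptually simple; this sketch uses a made-up event format purely to illustrate how a tool might stitch the deferred user-space trace onto the kernel-side traces that preceded it:

    def merge(events):
        pending, merged = [], []
        for event in events:
            if event["kind"] == "kernel":
                # Kernel-side call chains arrive first and are held back.
                pending.append(event["callchain"])
            else:
                # The single deferred user-space chain completes all of them.
                for chain in pending:
                    merged.append(chain + event["callchain"])
                pending.clear()
        return merged

    events = [
        {"kind": "kernel", "callchain": ["do_syscall_64", "ksys_read"]},
        {"kind": "kernel", "callchain": ["do_syscall_64", "vfs_read"]},
        {"kind": "user",   "callchain": ["read", "process_file", "main"]},
    ]
    for chain in merge(events):
        print(" <- ".join(chain))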
Finally, the third series adds SFrame support to all of this machinery. When an executable is loaded, any SFrame sections are mapped as well. Within the kernel, a maple tree is used to track the SFrame sections associated with each text range in the executable. The kernel does not, however, track the SFrame sections associated with shared libraries, so the series contains a new prctl() operation so that the C library can provide that information explicitly. That part of the API is likely to change (the relevant patch suggests that a separate system call should be added) before the series is merged.
With the SFrame information at hand, using it to generate stack traces is a relatively straightforward job; the kernel just has to use the SFrame tables to work through the frames on the stack. The result should be fast and reliable, with a minimal impact on the performance of the workload being measured.
The sum total of this work is quite a bit of new infrastructure added to the core kernel, including code that runs in some of the most constrained contexts. So the chances are good that there are still some minor problems to work out. The bulk of the hard work would appear to be done, though. It may take another development cycle or three, but the multi-year project to get SFrame support into the kernel would appear to be reaching its conclusion.
Enforcement (or not) for module-specific exported symbols
Loadable kernel modules require access to kernel data structures and functions to get their job done; the kernel provides this access by way of exported symbols. Almost since this mechanism was created, there have been debates over which symbols should be exported, and how. The 6.16 kernel gained a new export mechanism that limits access to symbols to specific kernel modules. That code is likely to change soon, but the addition of an enforcement mechanism has since been backed out.

Restrictions on exported symbols are driven by two core motivations, the first of which is to ensure that kernel modules are truly modular and do not access core parts of the kernel. The intent is to limit the amount of damage a module can do, and to keep kernel modules from changing fundamental aspects of the kernel's operation. The other motivation is a desire to make life difficult for proprietary kernel modules by explicitly marking exports that are so fundamental to the kernel's design that any code making use of them must be a derived product of the kernel. Those symbols are unavailable to any module that does not declare a GPL-compatible license.
There have been many discussions about the proper exporting of symbols over the years; see the LWN kernel index for the history. The most recent example may be this discussion on whether the in-progress user-space stack-unwinding improvements should be made available to the out-of-tree LTTng module. These discussions do not appear to have impeded the exporting of symbols in general; there are nearly 38,000 exported symbols in the upcoming 6.16 kernel.
That kernel release will also include a new member of the EXPORT_SYMBOL() macro family:
EXPORT_SYMBOL_GPL_FOR_MODULES(symbol, "mod1,mod2");
This macro will cause the given symbol to be exported, but only to the modules named in the second argument, and only if those modules are GPL-licensed. It is intended for use with symbols that kernel developers would strongly prefer not to export at all, but which are needed by one or more in-tree subsystems when those subsystems are built as modules. There are no users of this macro in the 6.16 kernel, but a few are waiting in the wings. The symbol exports needed for the proposed unification of kselftests and KUnit are expressed this way, for example.
The linux-next repository contains a couple of other users that are planned for merging in 6.17; included therein is the exporting of the rigorously undocumented anon_inode_make_secure_inode() exclusively for the KVM subsystem. That particular export came about as the result of a discussion following the posting, by Shivank Garg, of this patch making that function available to KVM to fix a security issue. Vlastimil Babka suggested using the new, module-specific mechanism, a suggestion that Garg quickly adopted.
Christian Brauner, having evidently learned about the new mechanism in this discussion, said that he would happily switch a number of other core filesystem-level exports to the module-specific variety, but there was one problem: some of those exports are not GPL-only, so there would need to be a version of the macro that does not impose the extra license restriction. Christoph Hellwig disagreed with the need for a non-GPL version, though, saying that any export that is limited to specific in-tree modules is, by definition, only exported to GPL-licensed code. He suggested just removing the GPL_ portion of the name of the macro, instead.
Babka pointed out one other potential difficulty, though: while the module-specific exports are intended for in-tree modules only, there is nothing in the kernel that enforces this expectation. So an evil maintainer of a proprietary module could, for example, simply name that module kvm, and anon_inode_make_secure_inode() would become available to it. To address this perceived problem, Babka posted a patch series that added enforcement of the in-tree-only rule, and also changed the name of the macro to EXPORT_SYMBOL_FOR_MODULES().
It is worth noting that this enforcement implementation, were it to be applied, would still have the potential to be circumvented in external modules. The only way that the kernel knows that a specific module is of the in-tree variety is if the module itself tells it so. Specifically, the kernel looks at the .modinfo ELF section in the binary kernel module for a line reading "intree=Y"; if that line is found, the module is deemed to be an in-tree module. If an evil developer is willing to impersonate a privileged module to gain access to a specific symbol, they probably will not be daunted by the prospect of adding a false in-tree declaration to the module as well.
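As a rough illustration of what the kernel looks for, the snippet below scans a module file for the in-tree marker; it simply searches the raw bytes rather than parsing the ELF section table, assumes an uncompressed .ko file, and the example path is hypothetical:

    def claims_intree(path):
        """Return True if the module carries the "intree=Y" .modinfo marker."""
        with open(path, "rb") as f:
            data = f.read()
        # .modinfo entries are NUL-separated "key=value" strings.
        return b"\0intree=Y\0" in data or data.startswith(b"intree=Y\0")

    # Example (hypothetical path; many distributions ship compressed modules):
    # print(claims_intree("/lib/modules/6.16.0/kernel/fs/xfs/xfs.ko"))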
In any case, though, that change ended up being dropped in the second version of the patch series. Masahiro Yamada had raised a concern with the new enforcement mechanism: some developers of in-tree modules do their work on out-of-tree versions until the work is merged. In other words, sometimes, replacing an in-tree module with an out-of-tree version using the same name is a legitimate use case. Rather than break a workflow that some developers may depend on, Babka opted to remove the enforcement mechanism entirely, leaving only the name change.
In this case, as always, the purpose of the removed change was not to create a bulletproof defense against attempts to circumvent the export mechanism. It was, instead, to make the intent of the development community clear, so that anybody engaging in such abuse has no excuses when they are caught. The existing mechanisms in the kernel should be sufficient for that purpose; anybody who is willing to bypass them would probably not have been slowed down much by the addition of another hurdle.
Fedora SIG changes Python packaging strategy
Fedora's NeuroFedora special-interest group (SIG) is considering a change of strategy when it comes to packaging Python modules. The SIG, which consists of three active members, is struggling to keep up with maintaining the hundreds of packages that it has taken on. What's more, it's not clear that the majority of packages are even being consumed by Fedora users; the group is trying to determine the right strategy to meet its goals and shed unnecessary work. If its new packaging strategy is successful, it may point the way to a more sustainable model for Linux distributions to provide value to users without trying to package everything under the sun.
NeuroFedora SIG
The goal of the NeuroFedora SIG is to make it easy to use Fedora as a platform for neuroscience research. The SIG provides more than 200 applications for computational neuroscience, data analysis, and neuro-imaging, as well as a Fedora Comp-Neuro Lab spin with many applications for neuroscience preinstalled.
On June 6, Ankur Sinha started a discussion on the NeuroFedora issue tracker about changing the SIG's packaging strategy. He said that the group was no longer sure that providing native packages was the right way to serve the target audience:
A lot of our packages are python libraries but I'm seeing more and more users just rely on pip (or anaconda etc.) to install their packages instead of using dnf installed packages. I was therefore, wondering if packaging up Python libraries off PyPI was worth really doing.
Sinha said he had discussed the topic at Flock, which was held in early June, with Karolina Surma, a member of Fedora's Python SIG. Surma said that the Python SIG focuses on core Python packages, such as Python itself, pip, and setuptools; Red Hat's Python developers had attempted to automatically convert all of the Python Package Index (PyPI) to RPMs in a Copr repository, but Surma said the resulting packages were not of sufficient quality to recommend. For Python libraries, it was up to packagers to determine if the use case required system packages.
Sinha tried to capture some of the reasons for and against providing native Python packages via the SIG. Since most of the upstream documentation refers to pip for installing modules from PyPI, Sinha thought it was unlikely that most users would turn to DNF for Python libraries. Pip also allows users to install different versions of libraries in virtual environments, which cannot be done with DNF. On the other hand, Sinha said that distribution packaging is important because it allows packagers to push out updates in a timely manner. "When installing from forges directly, there doesn't seem to be a way of users being notified if packages that they're using have new updates."
Benjamin Beasley said that the main reason to package libraries should be to support packaged Python-based tools and applications. He added that he saw value in having packages for command-line tools so that users could use them without having to care about what language the tools were written in or knowing how to use language-specific tools or repositories. Distribution packaging could be reserved for "software that is difficult to install, with things like system-wide configuration files, daemons, and awkward dependencies". If a Python library is not needed by other packages, on the other hand, there is not much justification for packaging it.
Testing and documentation versus packaging
Sinha asked if it even made sense to package applications that could be installed from PyPI and suggested that for some pip-installable packages it might be better for the SIG to provide documentation on installation rather than maintaining a package. For example, a pip package might require Fedora's GTK3 development package (gtk3-devel) to be installed in order to build and install correctly. He wondered if the SIG should package that application or just maintain documentation on how to install it with pip on Fedora.
After a few days, Sinha put out an update with his plan; he would create a list of Python packages of interest to the SIG, write a script to see if they could be successfully installed using pip, and orphan those that were not required to be native packages. He also asked again for input and specifically tagged SIG member Sandro Janke.
Two days later Janke appeared and agreed that it made sense to reduce the number of packages maintained by the SIG, as the current workload was challenging. He said that it would be a good idea to standardize on one tool (e.g. pip or uv) to recommend to users. "Supporting (and understanding) one tool well is better than supporting many tools half-heartedly". He was less convinced that the SIG should take responsibility for testing and documentation of Python modules that were not maintained by the SIG. "When it comes to local installation my first instinct is that the onus is with the user."
After gathering feedback from the SIG, Sinha floated a general guideline of "prioritizing software that could not be installed from an upstream index". He said he would post to Fedora's python-devel mailing list to gather feedback, but it was ultimately up to the NeuroFedora SIG since it would have to do the work.
On June 25, Sinha started that discussion on Fedora's python-devel list. To lighten its packaging load and better address user needs, Sinha said that the SIG had discussed a new strategy. The group would prioritize packaging software that was either difficult to install or completely unavailable on PyPI. If a package was installable with pip, then it would be dropped from Fedora.
Instead, the group would continue to test that software it cared about could be installed using pip; the SIG would provide documentation about software status and additional information required to get it working. NeuroFedora contributors would also report any problems to the respective upstreams and submit fixes when possible.
Michael Gruber said that the proposal was coming at just the right time. It would help to reduce the set of RPM-packaged applications, which would be beneficial when Fedora has to do a mass-rebuild of packages due to a Python change. Gruber also liked the idea of having a method to test if PyPI packages were installable on Fedora, and that might be beneficial to other language ecosystems as well.
Making a list
No one seemed to object to the plan, and Sinha wrote a script to find the leaf packages—those that are not required as dependencies for other packages—among the Python packages maintained by the SIG. He posted a list of more than 110 packages that the group could consider dropping. Beasley went through the list and provided a summary of packages that should be kept for various reasons, including some that were not Python packages themselves but offered Python bindings. That brought the list down to nearly 70 packages, including a handful that had already been orphaned or retired for the upcoming Fedora 43 release.
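The script itself is not reproduced here, but the underlying idea is simple: a package is a leaf if nothing else in the repositories requires it. A minimal sketch of one way such a check could be done, using dnf repoquery from Python, is shown below; it is not the SIG's actual script, and the package names are hypothetical placeholders.

```python
# Minimal sketch (not the SIG's actual script) of one way to find leaf
# packages: ask dnf which packages require each candidate; if nothing
# does, treat it as a leaf. Package names below are hypothetical.
import subprocess

CANDIDATES = ["python3-examplelib", "python3-otherlib"]  # hypothetical list

def is_leaf(package: str) -> bool:
    """Return True if no other package in the repositories requires it."""
    result = subprocess.run(
        ["dnf", "repoquery", "--quiet", "--whatrequires", package],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip() == ""

if __name__ == "__main__":
    leaves = [pkg for pkg in CANDIDATES if is_leaf(pkg)]
    print("Possible leaf packages:", ", ".join(leaves) or "none")
```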
One outstanding question is which versions of Python to test modules against. Each Fedora release has a default version of Python—version 3.13 in Fedora 42—but older and newer versions of Python are available from its repositories as well. For example, users can install Python 3.6, 3.9, 3.10, 3.11, 3.12, or 3.14 alongside the default version if they need an older or newer version for whatever reason. Janke noted that upstream maintainers are often slow to adapt to changes in the latest Python releases. He asked if the SIG should support the "bleeding edge" or the oldest default version in a supported Fedora release. As of this writing, that would be Fedora 41, which also uses Python 3.13.
On July 5, Sinha said that he had implemented a pytest-based checker to verify whether pip-installed packages install cleanly on a Fedora system. He also has an implementation of a searchable package list (sample screenshot here) that will display software that has been tested by the NeuroFedora SIG. The SIG has not deployed the changes yet, or finalized the list of packages to be dropped, but it seems likely this work will land in time for the Fedora 43 release in September.
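The checker's implementation details have not been covered here, but the core idea, installing each package of interest into a clean environment and failing the test if pip reports an error, can be expressed compactly. The following is a minimal sketch along those lines; it is not the SIG's actual tool, and the package names are only illustrative.

```python
# Minimal sketch (not the NeuroFedora SIG's actual checker) of a
# pytest-based test that verifies packages can be pip-installed into a
# fresh virtual environment on the current system.
import subprocess
import tempfile
import venv

import pytest

PACKAGES = ["nibabel", "mne"]  # illustrative packages of interest

@pytest.mark.parametrize("package", PACKAGES)
def test_pip_install(package):
    with tempfile.TemporaryDirectory() as tmpdir:
        # Use an isolated environment so system site-packages do not
        # mask missing dependencies.
        venv.create(tmpdir, with_pip=True)
        result = subprocess.run(
            [f"{tmpdir}/bin/pip", "install", "--no-cache-dir", package],
            capture_output=True, text=True,
        )
        assert result.returncode == 0, result.stderr
```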
At the moment, this strategy is limited to the NeuroFedora SIG, and will only affect a small percentage of Fedora packages. However, this hybrid approach, packaging the software that most benefits from being included in the distribution while testing and documenting packages that can be installed directly from the language repository, is something that might be worth examining on a larger scale and beyond the Python ecosystem.
The number of people interested in maintaining native packages for Fedora (and other distributions) has not kept up with the growth of software available in various language ecosystems. NeuroFedora's strategy might be a good way for Linux distributions to support developers and users while lessening the load on volunteer contributors.
Brief items
Kernel development
Kernel release status
The current development kernel is 6.16-rc6, released on July 13. It includes a fix for a somewhat scary regression that came up over the week; Linus said:
So I was flailing around blaming everybody and their pet hamster, because for a while it looked like a drm issue and then a netlink problem (it superficially coincided with separate issues with both of those subsystems).

But I did eventually figure out how to trigger it reliably and then it bisected nicely, and a couple of days have passed, and I'm feeling much better about the release again. We're back on track, and despite that little scare, I think we're in good shape.
Stable updates: 6.15.6, 6.12.37, 6.6.97, 6.1.144, and 5.15.187 were released on July 10, followed by 6.12.38, 6.6.98, 6.1.145, and 5.15.188 (containing a single fix for AMD CPUs) on July 14.
The 6.15.7, 6.12.39, 6.6.99, 6.1.146, 5.15.189, 5.10.240, and 5.4.296 updates are in the review process; they are due on July 17.
Quotes of the week
When I was but a wee little bairn, my mother would always tell me "never merge briefly tested patches when you're at -rc6".

— Andrew Morton
So this is just a heads-up that I will *NOT* be taking any epoll patches AT ALL unless they are

(a) obvious bug fixes

(b) clearly separated into well-explained series with each individual patch simple and obvious.

[...]

Because long-term, epoll needs to die, or at least be seen as a legacy interface that should be cut down, not something to be improved upon.

— Linus Torvalds
Distributions
Parrot 6.4 released
Parrot is a Debian-based distribution with an emphasis on security improvement and tools; the 6.4 release is now available. "Many tools, like Metasploit, Sliver, Caido and Empire received important updates, the Linux kernel was updated to a more recent version, and the latest LTS version of Firefox was provided with all our privacy oriented patches."
Distributions quote of the week
On a personal note, I found this very meaningful: I was once told, "never meet your heroes." However, in the Unix community, by and large my heroes are wonderfully pleasant, generous, and kind people in real life, all of whom have either indirectly or directly had a profound influence on the course of my career and life.

— Dan Cross
Development
Hyprland 0.50.0 released
Version 0.50.0 of Hyprland, a compositor for Wayland, has been released. Changes include a new render-scheduling option that "can drastically improve FPS on underpowered devices, while coming at no performance or latency cost when the system is doing alright", an option to exclude applications from screen sharing, a new test suite, and more.
Development quotes of the week
Okay, so if you don't have a mental model of a program, then maybe an LLM could improve your productivity. However, we agreed earlier that the main purpose of writing software is to build a mental model. If we outsource our work to the LLM are we still able to effectively build the mental model? I doubt it.

So should you avoid using these tools? Maybe. If you expect to work on a project long term, want to truly understand it, and wish to be empowered to make changes effectively then I think you should just write some code yourself. If on the other hand you are just slopping out slop at the slop factory, then install cursor4 and crack on - yolo.

— John Whiles
It's spelled s-f-r-a-m-e but it's pronounced DURBATULUK.

— jemarch
Miscellaneous
The Software in the Public Interest 2024 annual report
Software in the Public Interest has released its annual report for 2024. It includes reports from the long list of projects housed under the SPI umbrella, but the financial statements are not included at this time.
Page editor: Daroc Alden
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Calls for Presentations
CFP Deadlines: July 17, 2025 to September 15, 2025
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
July 31 | September 26 - September 28 | Qubes OS Summit | Berlin, Germany |
July 31 | August 28 | Yocto Project Developer Day 2025 | Amsterdam, The Netherlands |
August 3 | September 27 - September 28 | Nextcloud Community Conference 2025 | Berlin, Germany |
August 3 | October 3 - October 4 | Texas Linux Festival | Austin, US |
August 4 | December 8 - December 10 | Open Source Summit Japan | Tokyo, Japan |
August 15 | November 18 - November 20 | Open Source Monitoring Conference | Nuremberg, Germany |
August 15 | September 19 - September 21 | Datenspuren 2025 | Dresden, Germany |
August 31 | September 26 - September 28 | GNU Tools Cauldron | Porto, Portugal |
September 1 | October 18 - October 19 | OpenFest 2025 | Sofia, Bulgaria |
September 10 | December 11 - December 13 | Linux Plumbers Conference | Tokyo, Japan |
September 10 | December 13 - December 15 | GNOME Asia Summit 2025 | Tokyo, Japan |
September 11 | November 20 | NLUUG Autumn Conference 2025 | Utrecht, The Netherlands |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: July 17, 2025 to September 15, 2025
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
July 14 - July 20 | DebConf 2025 | Brest, France |
July 16 - July 18 | EuroPython | Prague, Czech Republic |
July 24 - July 29 | GUADEC 2025 | Brescia, Italy |
July 31 - August 3 | FOSSY | Portland OR, US |
August 5 | Open Source Summit India | Hyderabad, India |
August 9 - August 10 | COSCUP 2025 | Taipei City, Taiwan |
August 14 - August 16 | Open Source Community Africa Open Source Festival | Lagos, Nigeria |
August 15 - August 17 | Hackers On Planet Earth | New York, US |
August 16 - August 17 | Free and Open Source Software Conference | Sankt Augustin, Germany |
August 25 - August 27 | Open Source Summit Europe | Amsterdam, Netherlands |
August 28 | Yocto Project Developer Day 2025 | Amsterdam, The Netherlands |
August 28 - August 29 | Linux Security Summit Europe | Amsterdam, Netherlands |
August 29 - August 31 | openSUSE.Asia Summit | Faridabad, India |
August 30 - August 31 | UbuCon Asia 2025 | Kathmandu, Nepal |
September 8 | Workshop on eBPF and kernel extensions (colocated with ACM SIGCOMM) | Coimbra, Portugal |
If your event does not appear here, please tell us about it.
Security updates
Alert summary July 10, 2025 to July 16, 2025
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
AlmaLinux | ALSA-2025:10635 | 10 | gnome-remote-desktop | 2025-07-11 |
AlmaLinux | ALSA-2025:10742 | 8 | gnome-remote-desktop | 2025-07-11 |
AlmaLinux | ALSA-2025:10631 | 9 | gnome-remote-desktop | 2025-07-11 |
AlmaLinux | ALSA-2025:10672 | 8 | go-toolset:rhel8 | 2025-07-11 |
AlmaLinux | ALSA-2025:10677 | 10 | golang | 2025-07-11 |
AlmaLinux | ALSA-2025:10676 | 9 | golang | 2025-07-11 |
AlmaLinux | ALSA-2025:10585 | 9 | jq | 2025-07-11 |
AlmaLinux | ALSA-2025:10371 | 10 | kernel | 2025-07-11 |
AlmaLinux | ALSA-2025:10669 | 8 | kernel | 2025-07-11 |
AlmaLinux | ALSA-2025:10379 | 9 | kernel | 2025-07-11 |
AlmaLinux | ALSA-2025:10670 | 8 | kernel-rt | 2025-07-11 |
AlmaLinux | ALSA-2025:10630 | 10 | libxml2 | 2025-07-11 |
AlmaLinux | ALSA-2025:10698 | 8 | libxml2 | 2025-07-11 |
AlmaLinux | ALSA-2025:10699 | 9 | libxml2 | 2025-07-11 |
AlmaLinux | ALSA-2025:10549 | 10 | podman | 2025-07-11 |
Debian | DLA-4241-1 | LTS | ffmpeg | 2025-07-14 |
Debian | DLA-4240-1 | LTS | redis | 2025-07-12 |
Debian | DLA-4238-1 | LTS | sslh | 2025-07-09 |
Debian | DLA-4239-1 | LTS | thunderbird | 2025-07-11 |
Fedora | FEDORA-2025-282f181e6f | F42 | cef | 2025-07-13 |
Fedora | FEDORA-2025-c05ae72339 | F41 | chromium | 2025-07-10 |
Fedora | FEDORA-2025-87af8315ff | F42 | chromium | 2025-07-10 |
Fedora | FEDORA-2025-0b7e43532e | F41 | git | 2025-07-13 |
Fedora | FEDORA-2025-b5fe483928 | F42 | git | 2025-07-11 |
Fedora | FEDORA-2025-814d6183dd | F41 | gnutls | 2025-07-15 |
Fedora | FEDORA-2025-16a24364ce | F42 | gnutls | 2025-07-13 |
Fedora | FEDORA-2025-785afc6856 | F41 | helix | 2025-07-10 |
Fedora | FEDORA-2025-0cde7282be | F42 | helix | 2025-07-10 |
Fedora | FEDORA-2025-6d7a183951 | F42 | httpd | 2025-07-13 |
Fedora | FEDORA-2025-1c5013e137 | F41 | linux-firmware | 2025-07-15 |
Fedora | FEDORA-2025-6b6824140a | F42 | linux-firmware | 2025-07-12 |
Fedora | FEDORA-2025-b1082e9269 | F42 | luajit | 2025-07-12 |
Fedora | FEDORA-2025-0f35c5dbbb | F41 | mingw-djvulibre | 2025-07-14 |
Fedora | FEDORA-2025-e286662e59 | F42 | mingw-djvulibre | 2025-07-14 |
Fedora | FEDORA-2025-47916db6c7 | F41 | mingw-python-requests | 2025-07-14 |
Fedora | FEDORA-2025-5ea2b69c03 | F42 | mingw-python-requests | 2025-07-14 |
Fedora | FEDORA-2025-2a7a853bc7 | F41 | pam | 2025-07-10 |
Fedora | FEDORA-2025-f142899732 | F41 | perl | 2025-07-13 |
Fedora | FEDORA-2025-30244ebfc7 | F42 | perl | 2025-07-12 |
Fedora | FEDORA-2025-da047483d8 | F41 | php | 2025-07-13 |
Fedora | FEDORA-2025-2c344545bf | F42 | php | 2025-07-13 |
Fedora | FEDORA-2025-d8f9b425fa | F41 | python-requests | 2025-07-13 |
Fedora | FEDORA-2025-87207b946a | F42 | python-requests | 2025-07-12 |
Fedora | FEDORA-2025-a8abfbb35c | F41 | python3.6 | 2025-07-13 |
Fedora | FEDORA-2025-266a1353a1 | F42 | python3.6 | 2025-07-13 |
Fedora | FEDORA-2025-785afc6856 | F41 | rust-blazesym-c | 2025-07-10 |
Fedora | FEDORA-2025-0cde7282be | F42 | rust-blazesym-c | 2025-07-10 |
Fedora | FEDORA-2025-785afc6856 | F41 | rust-clearscreen | 2025-07-10 |
Fedora | FEDORA-2025-0cde7282be | F42 | rust-clearscreen | 2025-07-10 |
Fedora | FEDORA-2025-785afc6856 | F41 | rust-gitui | 2025-07-10 |
Fedora | FEDORA-2025-0cde7282be | F42 | rust-gitui | 2025-07-10 |
Fedora | FEDORA-2025-785afc6856 | F41 | rust-nu-cli | 2025-07-10 |
Fedora | FEDORA-2025-0cde7282be | F42 | rust-nu-cli | 2025-07-10 |
Fedora | FEDORA-2025-785afc6856 | F41 | rust-nu-command | 2025-07-10 |
Fedora | FEDORA-2025-0cde7282be | F42 | rust-nu-command | 2025-07-10 |
Fedora | FEDORA-2025-785afc6856 | F41 | rust-nu-test-support | 2025-07-10 |
Fedora | FEDORA-2025-0cde7282be | F42 | rust-nu-test-support | 2025-07-10 |
Fedora | FEDORA-2025-785afc6856 | F41 | rust-procs | 2025-07-10 |
Fedora | FEDORA-2025-0cde7282be | F42 | rust-procs | 2025-07-10 |
Fedora | FEDORA-2025-785afc6856 | F41 | rust-which | 2025-07-10 |
Fedora | FEDORA-2025-0cde7282be | F42 | rust-which | 2025-07-10 |
Fedora | FEDORA-2025-b712778148 | F41 | salt | 2025-07-14 |
Fedora | FEDORA-2025-c903306aee | F42 | salt | 2025-07-14 |
Fedora | FEDORA-2025-785afc6856 | F41 | selenium-manager | 2025-07-10 |
Fedora | FEDORA-2025-dda04d7a84 | F41 | selenium-manager | 2025-07-13 |
Fedora | FEDORA-2025-0cde7282be | F42 | selenium-manager | 2025-07-10 |
Fedora | FEDORA-2025-89abd49c4a | F42 | selenium-manager | 2025-07-13 |
Fedora | FEDORA-2025-29c6186ffb | F41 | sudo | 2025-07-10 |
Fedora | FEDORA-2025-8e4e6cf21e | F41 | thunderbird | 2025-07-10 |
Fedora | FEDORA-2025-c0d9be4e68 | F42 | thunderbird | 2025-07-11 |
Fedora | FEDORA-2025-785afc6856 | F41 | uv | 2025-07-10 |
Fedora | FEDORA-2025-0cde7282be | F42 | uv | 2025-07-10 |
Mageia | MGASA-2025-0204 | 9 | dpkg | 2025-07-11 |
Mageia | MGASA-2025-0207 | 9 | firefox | 2025-07-11 |
Mageia | MGASA-2025-0206 | 9 | gnupg2 | 2025-07-11 |
Mageia | MGASA-2025-0205 | 9 | golang | 2025-07-11 |
Mageia | MGASA-2025-0208 | 9 | qtimageformats6 | 2025-07-15 |
Oracle | ELSA-2025-10844 | OL10 | cloud-init | 2025-07-16 |
Oracle | ELSA-2025-10551 | OL8 | container-tools:rhel8 | 2025-07-10 |
Oracle | ELSA-2025-11030 | OL8 | emacs | 2025-07-16 |
Oracle | ELSA-2025-10181 | OL7 | firefox | 2025-07-16 |
Oracle | ELSA-2025-11140 | OL9 | glib2 | 2025-07-16 |
Oracle | ELSA-2025-10742 | OL8 | gnome-remote-desktop | 2025-07-14 |
Oracle | ELSA-2025-10631 | OL9 | gnome-remote-desktop | 2025-07-10 |
Oracle | ELSA-2025-10672 | OL8 | go-toolset:rhel8 | 2025-07-16 |
Oracle | ELSA-2025-10677 | OL10 | golang | 2025-07-14 |
Oracle | ELSA-2025-10676 | OL9 | golang | 2025-07-10 |
Oracle | ELSA-2025-9318 | OL8 | javapackages-tools:201801 | 2025-07-10 |
Oracle | ELSA-2025-10618 | OL8 | jq | 2025-07-10 |
Oracle | ELSA-2025-10669 | OL8 | kernel | 2025-07-14 |
Oracle | ELSA-2025-10837 | OL9 | kernel | 2025-07-16 |
Oracle | ELSA-2025-20470 | OL9 | kernel | 2025-07-16 |
Oracle | ELSA-2025-9331 | OL7 | libvpx | 2025-07-10 |
Oracle | ELSA-2025-10630 | OL10 | libxml2 | 2025-07-10 |
Oracle | ELSA-2025-10698 | OL8 | libxml2 | 2025-07-14 |
Oracle | ELSA-2025-10699 | OL9 | libxml2 | 2025-07-10 |
Oracle | ELSA-2025-11035 | OL8 | lz4 | 2025-07-16 |
Oracle | ELSA-2025-9332 | OL7 | mpfr | 2025-07-10 |
Oracle | ELSA-2025-9741 | OL7 | perl-File-Find-Rule | 2025-07-14 |
Oracle | ELSA-2025-9740 | OL7 | perl-File-Find-Rule-Perl | 2025-07-10 |
Oracle | ELSA-2025-11036 | OL8 | python-setuptools | 2025-07-16 |
Oracle | ELSA-2025-11043 | OL8 | python3.11-setuptools | 2025-07-16 |
Oracle | ELSA-2025-11044 | OL8 | python3.12-setuptools | 2025-07-16 |
Oracle | ELSA-2025-11042 | OL8 | socat | 2025-07-16 |
Red Hat | RHSA-2025:11101-01 | EL9.2 | fence-agents | 2025-07-15 |
Red Hat | RHSA-2025:11102-01 | EL9.4 | fence-agents | 2025-07-15 |
Red Hat | RHSA-2025:10855-01 | EL10 | glib2 | 2025-07-15 |
Red Hat | RHSA-2025:11140-01 | EL9 | glib2 | 2025-07-15 |
Red Hat | RHSA-2025:10780-01 | EL9.2 | glib2 | 2025-07-10 |
Red Hat | RHSA-2025:11066-01 | EL10 | glibc | 2025-07-15 |
Red Hat | RHSA-2025:10867-01 | EL9.0 | java-17-openjdk | 2025-07-16 |
Red Hat | RHSA-2025:10830-01 | EL9.0 | kernel | 2025-07-16 |
Red Hat | RHSA-2025:11045-01 | EL9.2 | kernel | 2025-07-16 |
Red Hat | RHSA-2025:10671-01 | EL9.2 | kernel | 2025-07-16 |
Red Hat | RHSA-2025:11245-01 | EL9.4 | kernel | 2025-07-16 |
Red Hat | RHSA-2025:10829-01 | EL9.0 | kernel-rt | 2025-07-16 |
Red Hat | RHSA-2025:10675-01 | EL9.2 | kernel-rt | 2025-07-16 |
Red Hat | RHSA-2025:9328-01 | EL10 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:10796-01 | EL7 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:9878-01 | EL8 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:9320-01 | EL8.2 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:9321-01 | EL8.4 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:9322-01 | EL8.6 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:9323-01 | EL8.8 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:9327-01 | EL9 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:9324-01 | EL9.0 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:9325-01 | EL9.2 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:9326-01 | EL9.4 | libblockdev | 2025-07-10 |
Red Hat | RHSA-2025:11036-01 | EL8 | python-setuptools | 2025-07-15 |
Red Hat | RHSA-2025:11043-01 | EL8 | python3.11-setuptools | 2025-07-15 |
Red Hat | RHSA-2025:11044-01 | EL8 | python3.12-setuptools | 2025-07-15 |
Red Hat | RHSA-2025:10779-01 | EL9.2 | sudo | 2025-07-10 |
Red Hat | RHSA-2025:10707-01 | EL9.4 | sudo | 2025-07-10 |
Slackware | SSA:2025-190-01 | | git | 2025-07-09 |
Slackware | SSA:2025-192-02 | | httpd | 2025-07-11 |
Slackware | SSA:2025-192-01 | | kernel | 2025-07-11 |
Slackware | SSA:2025-196-01 | | libxml2 | 2025-07-15 |
Slackware | SSA:2025-196-02 | | libxml2 | 2025-07-15 |
SUSE | openSUSE-SU-2025:15335-1 | TW | afterburn | 2025-07-12 |
SUSE | SUSE-SU-2025:02283-1 | SLE12 | audiofile | 2025-07-11 |
SUSE | openSUSE-SU-2025:15320-1 | TW | avif-tools | 2025-07-09 |
SUSE | openSUSE-SU-2025:15326-1 | TW | chmlib-devel | 2025-07-10 |
SUSE | openSUSE-SU-2025:15336-1 | TW | cmctl | 2025-07-12 |
SUSE | SUSE-SU-2025:20459-1 | SLE-m6.1 | containerd | 2025-07-09 |
SUSE | openSUSE-SU-2025:15319-1 | TW | djvulibre | 2025-07-09 |
SUSE | SUSE-SU-2025:02289-1 | SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.6 | docker | 2025-07-11 |
SUSE | openSUSE-SU-2025:15325-1 | TW | firefox | 2025-07-10 |
SUSE | openSUSE-SU-2025:15337-1 | TW | git | 2025-07-12 |
SUSE | SUSE-SU-2025:20471-1 | SLE-m6.1 | glib2 | 2025-07-16 |
SUSE | openSUSE-SU-2025:15329-1 | TW | go1 | 2025-07-10 |
SUSE | SUSE-SU-2025:02296-1 | SLE15 SES7.1 oS15.6 | go1.23 | 2025-07-11 |
SUSE | SUSE-SU-2025:02295-1 | SLE15 SES7.1 oS15.6 | go1.24 | 2025-07-11 |
SUSE | SUSE-SU-2025:20465-1 | SLE-m6.0 | gpg2 | 2025-07-09 |
SUSE | SUSE-SU-2025:20458-1 | SLE-m6.1 | gpg2 | 2025-07-09 |
SUSE | SUSE-SU-2025:20472-1 | SLE-m6.1 | gpg2 | 2025-07-16 |
SUSE | SUSE-SU-2025:02259-1 | SLE15 oS15.6 | gpg2 | 2025-07-09 |
SUSE | SUSE-SU-2025:02304-1 | SLE12 | gstreamer-plugins-base | 2025-07-14 |
SUSE | SUSE-SU-2025:02302-1 | SLE15 oS15.6 | gstreamer-plugins-base | 2025-07-14 |
SUSE | SUSE-SU-2025:02303-1 | SLE12 | gstreamer-plugins-good | 2025-07-14 |
SUSE | SUSE-SU-2025:20457-1 | SLE-m6.1 | helm | 2025-07-09 |
SUSE | openSUSE-SU-2025:15338-1 | TW | k9s | 2025-07-12 |
SUSE | SUSE-SU-2025:02308-1 | MP4.2 SLE15 SLE-m5.1 SLE-m5.2 SES7.1 | kernel | 2025-07-14 |
SUSE | SUSE-SU-2025:02262-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 | kernel | 2025-07-10 |
SUSE | SUSE-SU-2025:02312-1 | SLE11 | kernel | 2025-07-15 |
SUSE | SUSE-SU-2025:02307-1 | SLE15 | kernel | 2025-07-14 |
SUSE | SUSE-SU-2025:02320-1 | SLE15 SLE-m5.1 SLE-m5.2 | kernel | 2025-07-15 |
SUSE | SUSE-SU-2025:02322-1 | SLE15 SLE-m5.3 SLE-m5.4 | kernel | 2025-07-15 |
SUSE | SUSE-SU-2025:02264-1 | SLE15 SLE-m5.5 oS15.5 | kernel | 2025-07-10 |
SUSE | SUSE-SU-2025:02321-1 | SLE15 SLE-m5.5 oS15.5 | kernel | 2025-07-15 |
SUSE | openSUSE-SU-2025:15339-1 | TW | liboqs-devel | 2025-07-12 |
SUSE | openSUSE-SU-2025:15323-1 | TW | libpoppler-cpp2 | 2025-07-09 |
SUSE | SUSE-SU-2025:02276-1 | SLE15 oS15.6 | libsoup | 2025-07-10 |
SUSE | SUSE-SU-2025:02277-1 | SLE15 oS15.6 | libsoup2 | 2025-07-10 |
SUSE | SUSE-SU-2025:02278-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 | libssh | 2025-07-10 |
SUSE | SUSE-SU-2025:02281-1 | SLE12 | libssh | 2025-07-10 |
SUSE | SUSE-SU-2025:02279-1 | SLE15 SLE-m5.1 SLE-m5.2 SES7.1 | libssh | 2025-07-10 |
SUSE | openSUSE-SU-2025:15321-1 | TW | libxml2-2 | 2025-07-09 |
SUSE | SUSE-SU-2025:02294-1 | SLE12 | libxml2 | 2025-07-11 |
SUSE | SUSE-SU-2025:02260-1 | SLE15 | libxml2 | 2025-07-09 |
SUSE | SUSE-SU-2025:02275-1 | SLE15 SLE-m5.1 SLE-m5.2 SES7.1 oS15.6 | libxml2 | 2025-07-10 |
SUSE | SUSE-SU-2025:02314-1 | SLE15 SLE-m5.5 oS15.5 oS15.6 | libxml2 | 2025-07-15 |
SUSE | SUSE-SU-2025:20464-1 | SLE-m6.0 | openssl-3 | 2025-07-09 |
SUSE | SUSE-SU-2025:01885-2 | SLE12 | perl-YAML-LibYAML | 2025-07-10 |
SUSE | openSUSE-SU-2025:15340-1 | TW | php8 | 2025-07-12 |
SUSE | SUSE-SU-2025:02317-1 | SLE12 | poppler | 2025-07-15 |
SUSE | SUSE-SU-2025:02318-1 | SLE15 oS15.5 | poppler | 2025-07-15 |
SUSE | SUSE-SU-2025:02324-1 | SLE15 oS15.6 | poppler | 2025-07-16 |
SUSE | SUSE-SU-2025:02309-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 | protobuf | 2025-07-15 |
SUSE | SUSE-SU-2025:02310-1 | SLE15 SLE-m5.5 oS15.5 | protobuf | 2025-07-15 |
SUSE | SUSE-SU-2025:02311-1 | SLE15 oS15.6 | protobuf | 2025-07-15 |
SUSE | SUSE-SU-2025:20463-1 | SLE-m6.1 | python-cryptography | 2025-07-09 |
SUSE | SUSE-SU-2025:20462-1 | SLE-m6.1 | python-setuptools | 2025-07-09 |
SUSE | openSUSE-SU-2025:15324-1 | TW | python311-pycares | 2025-07-09 |
SUSE | SUSE-SU-2025:02297-1 | SLE12 | python36 | 2025-07-11 |
SUSE | SUSE-SU-2025:02329-1 | MP4.3 SLE15 oS15.4 | rmt-server | 2025-07-16 |
SUSE | SUSE-SU-2025:02330-1 | SLE15 | rmt-server | 2025-07-16 |
SUSE | SUSE-SU-2025:02198-2 | SLE15 | runc | 2025-07-16 |
SUSE | SUSE-SU-2025:20468-1 | SLE-m6.1 | stalld | 2025-07-16 |
SUSE | SUSE-SU-2025:02280-1 | SLE15 SES7.1 oS15.6 | tomcat | 2025-07-10 |
SUSE | SUSE-SU-2025:02261-1 | SLE15 oS15.6 | tomcat10 | 2025-07-09 |
SUSE | openSUSE-SU-2025:15341-1 | TW | trivy | 2025-07-12 |
SUSE | SUSE-SU-2025:02282-1 | MP4.3 SLE15 SES7.1 oS15.6 | umoci | 2025-07-11 |
SUSE | SUSE-SU-2025:02271-1 | SLE12 | wireshark | 2025-07-10 |
SUSE | SUSE-SU-2025:02272-1 | SLE15 | wireshark | 2025-07-10 |
SUSE | SUSE-SU-2025:02326-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 | xen | 2025-07-16 |
SUSE | SUSE-SU-2025:02290-1 | SLE12 | xen | 2025-07-11 |
SUSE | SUSE-SU-2025:02315-1 | SLE15 | xen | 2025-07-15 |
SUSE | SUSE-SU-2025:02319-1 | SLE15 SLE-m5.1 SLE-m5.2 SES7.1 oS15.3 | xen | 2025-07-15 |
SUSE | SUSE-SU-2025:02325-1 | SLE15 SLE-m5.5 oS15.5 | xen | 2025-07-16 |
SUSE | SUSE-SU-2025:02316-1 | SLE15 oS15.6 | xen | 2025-07-15 |
SUSE | openSUSE-SU-2025:15342-1 | TW | xen | 2025-07-12 |
Ubuntu | USN-7545-3 | 16.04 18.04 20.04 22.04 24.04 25.04 | apport | 2025-07-14 |
Ubuntu | USN-7631-1 | 22.04 24.04 25.04 | djvulibre | 2025-07-09 |
Ubuntu | USN-7626-2 | 16.04 18.04 20.04 22.04 | git | 2025-07-09 |
Ubuntu | USN-7626-3 | 16.04 18.04 20.04 22.04 | git | 2025-07-10 |
Ubuntu | USN-7634-1 | 24.04 25.04 | glibc | 2025-07-14 |
Ubuntu | USN-7635-1 | 22.04 24.04 25.04 | gnutls28 | 2025-07-14 |
Ubuntu | USN-7637-1 | 24.04 | jpeg-xl | 2025-07-15 |
Ubuntu | USN-7632-1 | 22.04 24.04 | libyaml-libyaml-perl | 2025-07-09 |
Ubuntu | USN-7608-6 | 22.04 | linux-xilinx-zynqmp | 2025-07-11 |
Ubuntu | USN-7633-1 | 22.04 24.04 | nix | 2025-07-14 |
Ubuntu | USN-7629-1 | 22.04 24.04 25.04 | protobuf | 2025-07-09 |
Ubuntu | USN-7630-1 | 16.04 18.04 20.04 22.04 24.04 24.10 25.04 | resteasy, resteasy3.0 | 2025-07-11 |
Ubuntu | USN-7636-1 | 24.04 | roundcube | 2025-07-15 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet