LWN.net Weekly Edition for June 19, 2025
Welcome to the LWN.net Weekly Edition for June 19, 2025
This edition contains the following feature content:
- Enhancing screen-reader functionality in modern GNOME: a GNOME developer explains how visually-impaired people use Linux, and accessibility improvements included in GNOME 48.
- The hierarchical constant bandwidth server scheduler: a proposal to improve scheduling on systems with multiple realtime processes.
- CoMaps emerges as an Organic Maps fork: a look at an emerging free-software mobile-navigation app project.
- A parallel path for GPU restore in CRIU: an effort to eliminate bottlenecks in restoring GPU applications with Checkpoint/Restore in Userspace (CRIU).
- FAIR package management for WordPress: the FAIR.pm project is looking to decentralize package distribution for WordPress.
- Parallelizing filesystem writeback: LSFMM+BPF discussion on using multiple threads for buffered I/O writeback.
- Supporting NFS v4.2 WRITE_SAME: Anna Schumaker led a discussion at LSFMM+BPF about where it would be best to implement a feature for NFS.
- Getting Lustre upstream: a session at LSFMM+BPF about how best to bring Lustre back into the Linux kernel.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Enhancing screen-reader functionality in modern GNOME
Accessibility features and the work that goes into developing those features often tend to be overlooked and are poorly understood by all but the people who actually depend on such features. At Fedora's annual developer conference, Flock, Lukáš Tyrychtr sought to improve understanding and raise awareness about accessibility with his session on accessibility barriers and screen-reader functionality in GNOME. His talk provided rare insight into the world of using and developing open-source software for visually-impaired users—including landing important accessibility improvements in the latest GNOME release.
I did not attend Flock, which was held in Prague from June 5 to June 8. However, I was able to watch the talk shortly after it was given via the recording of the live stream from the event. Slides from the talk have not yet been published.
Understanding accessibility
Much of Tyrychtr's talk was about simply laying the groundwork for the audience to
understand accessibility tools and what visually-impaired users need. He began by
introducing himself as a member of Red Hat's tools and accessibility team. Most
of the team is working on the tools part, but he is working on the accessibility
part. "Of course, one of the reasons is that I'm blind", he said, and he runs
into accessibility problems quite often. That may be an advantage, but it can also be
a disadvantage, as the work would benefit from new perspectives—if anyone
wanted to help, he said, they would be welcome. Before getting to the
current status of GNOME accessibility, though, there was some
background to cover:
First we will learn how the visually impaired use the computer, and why it's so different, why we need keyboard access and features so advanced that the security guys are frightened of us.
A screen reader, he said, is a piece of software that gives a visually-impaired person information about what is on the screen through speech output. It also announces changes to the screen content and allows the user to interact with that content. For example, reviewing text on the screen, telling the user about available controls, and so on.
Linux users have "basically only one screen reader we can use",
which is Orca. The job of a screen reader is to
tell the user a lot of information, and it usually does this by using a speech synthesizer (or
sometimes "speech engine"). It gets some text as an input, and it produces sound as an
output. On Linux, the screen reader does not talk to the speech engine directly, Tyrychtr
said. It uses a user-session daemon called a speech dispatcher so that screen-reader
developers don't have to create a direct interface for each speech-synthesis engine.
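That decoupling can be illustrated with a small sketch. This is not the real Speech Dispatcher API; the class and method names here (SpeechDispatcher, register_engine, say) are purely illustrative of why a dispatcher layer means the screen reader never has to know which engine is behind it:

```python
# Illustrative sketch only: a dispatcher that routes text from clients
# (such as a screen reader) to whichever synthesis engine is configured.
# All names here are hypothetical, not the actual speech-dispatcher API.

class SpeechDispatcher:
    """Routes text to a registered speech-synthesis engine."""

    def __init__(self):
        self.engines = {}
        self.default = None

    def register_engine(self, name, synthesize):
        # 'synthesize' is any callable taking text and returning audio
        # (represented here as a string for simplicity).
        self.engines[name] = synthesize
        if self.default is None:
            self.default = name

    def say(self, text, engine=None):
        # The screen reader only ever calls say(); it needs no
        # engine-specific interface code.
        name = engine or self.default
        return self.engines[name](text)

dispatcher = SpeechDispatcher()
dispatcher.register_engine("espeak-ng", lambda t: f"[espeak-ng audio for: {t}]")
dispatcher.register_engine("piper", lambda t: f"[piper audio for: {t}]")

print(dispatcher.say("Window focused"))                  # default engine
print(dispatcher.say("Window focused", engine="piper"))  # explicit engine
```

Swapping in a new engine only requires another register_engine() call; the screen-reader side is unchanged, which is the point of the daemon.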
He said that the most commonly used engine is a fork of eSpeak called eSpeak NG. Other open-source options include the Festival Speech
Synthesis System, and RHVoice,
which he said was "quite nice if you give it the right data for a
voice". The English voices were good, but it only has a few Czech
voices to choose from. And, of course, there is an AI entrant as well
in the form of Piper: a neural text-to-speech system that runs locally.
Once again, he noted that the English voices for Piper are "quite nice", and
complained that the Czech voices sounded weird. "But the
computations aren't so complicated, and you don't need a huge graphics
card to run this thing", so it is quite usable if the right
voice is available.
Using accessibility tools
Next, he wanted to talk about how visually-impaired people use the
computer. "Very often I hear the same questions again and again, so let's get them
out of the way". Even though there is assistive hardware for blind users or
people with low vision, most use normal computers with normal keyboards for one
simple reason: the cost of special assistive hardware can be "'wow', or at least
crazy, maybe insane as well". He did not go into specifics, but a brief search
shows that keyboards with
built-in speech synthesis might run more than $600, while a portable
Braille display can be nearly $10,000. Some visually-impaired people do use
specialized hardware, he said, but the majority use standard computer hardware.
Touch typing is something that visually-impaired people have to learn, and learn
early, Tyrychtr said: "I started learning the keyboard even before I had my own
computer and that was a good decision because I wasn't lost after I got
one". There was much more than touch typing to learn, too, because it's not very
common for visually impaired people to use a mouse.
So, after we get a computer, we have to learn all the keyboard shortcuts, and there are a lot of them—even without using Emacs, which some of the visually impaired actually do. But, personally, I'm sticking to more mainstream user interfaces, so I can actually fix the issues for more users.
After learning all the shortcuts, "we actually need the applications to be
keyboard accessible". He said that application accessibility was complicated;
instead of trying to squeeze it into his Flock presentation, he mentioned
presentations he had given previously at FOSDEM and DevConf.cz for
those who wanted to learn about the keyboard-accessibility topic.
All the key events
He also wanted to explain why visually-impaired users had such "weird
requirements for hardware system access". The screen reader needs to see all
keyboard events, and do something with them. This is because the screen
reader has its own keyboard shortcuts, perhaps in the hundreds, including a modifier
key that toggles the screen reader on or off. To avoid clashing with system and
application shortcuts, all keyboard input needs to be sent to the screen reader
first. Some of the input is meant to request that the screen reader tell the
user what their location is on the screen, review
notifications, describe
elements on the screen, and more. Those commands should not be sent on to the
compositor or application at all:
The visually-impaired user very often wants to hear what he or she is typing on a keyboard, so you need to see the keyboard events for this as well. You can't actually restrict this to text controls. You want this service to be available everywhere, so it's consistent, and not surprising for the user. So you want to see the shortcut, see the event, but now you want to pass it along so it actually does the usual thing that it does in the application or compositor and so on.
Users also need a shortcut to tell Orca to shut up, he said. "When the speech
synthesis is speaking, it may be something quite long you don't want to hear
completely", so it's handy to have a shortcut to stop it from reading the entire
text. In summary, the screen reader wants to see every keyboard event, every key
press, every key release, even for modifiers. Then it can do one of two things:
consume the event or observe it and pass it along to the compositor or
application.
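The consume-or-observe decision can be sketched in a few lines. This is illustrative pseudologic, not Orca's actual code; the bindings, the "stop speech" key, and the function name are all hypothetical:

```python
# Illustrative sketch (not Orca's implementation) of the two choices a
# screen reader has for each key event: consume it outright (a
# screen-reader command) or observe it and pass it along to the
# compositor/application.

# Hypothetical screen-reader bindings: (modifier, key) pairs.
SCREEN_READER_SHORTCUTS = {("Insert", "h"), ("Insert", "s")}
STOP_SPEECH_KEY = "Control_L"  # hypothetical "shut up" key

def route_key_event(modifier, key, speaking):
    """Return ('consume', action) or ('pass', side_effect) for an event."""
    if (modifier, key) in SCREEN_READER_SHORTCUTS:
        # A screen-reader command: the compositor and application
        # must never see this event.
        return ("consume", f"run screen-reader command {key}")
    if speaking and key == STOP_SPEECH_KEY and modifier is None:
        # Interrupt long speech output, but still deliver the key
        # so the application behaves normally.
        return ("pass", "stop speech")
    # Echo typed keys so the user hears what they type, then pass
    # the event on unchanged.
    return ("pass", f"echo {key}")
```

Note that even the "pass" cases have side effects (stopping speech, echoing keys), which is why the screen reader must observe every event rather than registering a handful of global shortcuts.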
GTK 4 and AT-SPI2
The next part of the story is how the screen reader actually works, or in this
case, worked in the past before GTK 4 was released at the
end of 2020. The Assistive Technology
Service Provider Interface (AT-SPI2) registry daemon provided an interface over
DBus for keyboard reporting to the screen reader. As Matthias Clasen explained in a
blog post about
GTK 4 accessibility changes, this was done "in an awkwardly indirect
way" in GTK 2 and GTK 3 that involved translating from GTK
widgets to accessibility-toolkit interfaces and then from the AT-SPI interfaces into
Python objects:
This multi-step process was inefficient, lossy, and hard to maintain; it required implementing the same functionality over at least three components, and it led to discrepancies between the documented AT-SPI methods and properties, and the ones actually sent over the accessibility bus.
So, GTK 4 featured a complete rewrite of the accessibility subsystem. It now
handles communication with the AT-SPI registry daemon directly, without any
intermediate libraries. That meant, Tyrychtr said, that keyboard input from GTK 4
applications was broken for screen readers. The solution was simple and used direct
communication with the X.org server, but it did not take Wayland into account because
"not many users, or at least not many visually-impaired users, were using
Wayland" at the time. Now, he said, everything was working as intended until
"Wayland actually happened".
GTK 4 applications did not send the legacy keyboard events, he said, which
were the only source of keyboard events remaining in a Wayland session. Users were
unable to do anything screen-reader specific; they could not stop speech synthesis of
long text, open Orca's settings, review window components, or even read a text
box. Meanwhile, the number of GTK 4 applications kept growing, and X.org was
being phased out for GNOME on Fedora and RHEL. "The situation, I'd say, was quite
sad." It was possible that visually-impaired users might switch
away from Linux completely.
Something had to happen
Finally, he said, "we started to do something" about the state of
accessibility on GNOME. The first step was to get people together who worked on the
components that needed to change. Tyrychtr said that Clasen helped facilitate work
with the Mutter display server and AT-SPI
maintainers to decide what should and shouldn't happen.
The first draft of the solution was an accessibility DBus interface
that connected to the compositor's accessibility interface and sent
only the keyboard shortcuts that should be used by the screen
reader. "This was quite nice because now the decision whether to process
the shortcut was handled by the same code that actually got the hardware
events". This improved the speed, he said, but "just sending keyboard
events whether someone wants them or not isn't a good idea".
The next iteration required the screen reader to ask the compositor
for keyboard events, rather than parsing all the events sent to the
accessibility interface. That, however, had problems too. "You
don't actually want an API which basically gives you keylog capability
without any work". Any process running as the user could ask for,
and receive, everything the user typed "without any question asked,
so this definitely wasn't the best thing to have". They needed to
limit access to privileged clients.
The solution was to provide a check that an application owns the right DBus
service name. "There's no cryptography handshakes or so on". Currently, Mutter
uses a hard-coded list of service names, but that could be changed. He said that, in
theory, the compositor could ask the user for confirmation of a service before
providing access to key events, but they "wanted to do anything to move this
design to the finish line, so no interacting things were implemented". That
decision is compositor-dependent, though, so other compositors could operate
differently and implement their own access checks.
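The service-name check can be sketched as follows. This is an illustrative model of the access policy described above, not Mutter's actual code; the class, the method name, and the allowlist entry are all hypothetical:

```python
# Illustrative sketch of the access check described in the article: the
# compositor hands keyboard events only to clients that own an
# allowlisted DBus service name. Names below are hypothetical, not
# Mutter's real API or its real hard-coded list.

PRIVILEGED_NAMES = {"org.a11y.Client"}  # hypothetical allowlist entry

class Compositor:
    def __init__(self, allowlist):
        self.allowlist = set(allowlist)
        self.grabbers = set()

    def request_key_events(self, dbus_name):
        """Grant keyboard monitoring only to privileged a11y clients.

        A real implementation would ask the DBus daemon which connection
        owns 'dbus_name'; owning a well-known name requires winning it
        on the bus, which is what keeps arbitrary processes out.
        """
        if dbus_name not in self.allowlist:
            raise PermissionError(f"{dbus_name} may not monitor the keyboard")
        self.grabbers.add(dbus_name)
        return True

compositor = Compositor(PRIVILEGED_NAMES)
compositor.request_key_events("org.a11y.Client")  # screen reader: allowed
```

As the article notes, this is policy rather than cryptography: the guarantee is only as strong as DBus name ownership, which is why the Q&A raised the race-to-claim-the-name question.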
Now there was a working prototype for keyboard monitoring that worked on Wayland; the next step was to get things upstream. To avoid having to do major rewrites in Orca, Tyrychtr created a backend called the a11y-manager for AT-SPI that was merged and included when version 2.56 was released in March.
Orca still needed a few changes, though. The old interfaces used hardware keyboard codes, he said, which did not work with Wayland. They decided to use XKB key symbols ("keysyms") instead, which he said that every compositor uses somewhere in keyboard-event processing.
The most complicated thing, according to Tyrychtr, was getting
support for a client interface into the compositor. It took some
back-and-forth with the maintainer for Mutter, "but we managed to
get this API into Mutter 48" as well as GNOME 48 with a recent
AT-SPI release, so now everything works as he wants it to work on
Wayland with GNOME. Mutter is not the only compositor, though. He said
that the same interface would be included with the KWin compositor in
KDE 6.4 and hoped that other compositors would include it as
well.
Tyrychtr said that usually he has worked on smaller fixes, so this project was one of the larger projects that he has completed, and he has learned a lot from the experience. The most important thing, he said, was that talking to people was much more important than coding:
Because when you start talking with people and you get them together you can come up with some ideas which everyone can agree on quite easily. Then you have the programming, but you already know what to do and what's the right approach. So you just code it.
Q&A
The first question was whether Tyrychtr knew how different communities with the same goals could work more closely together on accessibility topics. Tyrychtr said that was a great question, but he had no specific answers. There are accessibility rooms, he said, presumably referring to GNOME's Matrix instance, but he had no suggestion that included all developer groups that might be working on accessibility topics.
The next question was: what would stop another process from claiming the DBus name
used for accessibility? Wasn't it just a race against the clock? Tyrychtr said he was
afraid someone would ask that. "At least for the blind users, the screen reader is
the very first thing or within the very first things which start". But, on a
system where that was not the case, making it foolproof would require a portal proxy
and some user verification.
Another attendee said that things regarding accessibility "often start with bad
news of what was broken, and then it was fixed". Was there any coordination in
place to avoid breaking things first, or is there some way for interested users to
spread awareness of pending breakage? Tyrychtr said, "we actually knew what would happen, but the
accessibility stuff wasn't popular hot new stuff in the old times around
2022". He added, "we tried to get to the developers, but we probably didn't say
it so bluntly, so they didn't understand how much of a bad news this would
be". Now, however, he thought that the situation was much better and that
awareness of accessibility issues was much greater, so he hoped something like the
accessibility problems with GTK 4 would not happen again in the future.
The hierarchical constant bandwidth server scheduler
The POSIX realtime model, which is implemented in the Linux kernel, can ensure that a realtime process obtains the CPU time it needs to get its job done. It can be less effective, though, when there are multiple realtime processes competing for the available CPU resources. The hierarchical constant bandwidth server patch series, posted by Yuri Andriaccio with work by Luca Abeni, Alessio Balsini, and Andrea Parri, is a modification to the Linux scheduler intended to make it possible to configure systems with multiple realtime tasks in a deterministic and correct manner.
The core concept behind POSIX realtime is priority — the highest-priority task always runs. If there are multiple processes at the same priority, the result depends on whether they are configured as SCHED_FIFO tasks (in which case the running task gets the CPU until it voluntarily gives it up) or as SCHED_RR (causing the equal-priority tasks to share the CPU in time slices). This model allows a single realtime task to monopolize a CPU indefinitely, perhaps at the expense of other realtime tasks that also need to run.
In an attempt to improve support for systems with multiple realtime tasks,
the realtime group
scheduling feature was added to the 2.6.25 kernel in 2008 by Peter
Zijlstra. It allows a system administrator to put realtime tasks into
control groups, then to limit the amount of CPU time available to each
group. This feature works and is used, but it has never been seen as an
optimal solution. It is easy to misconfigure (the documentation warns that
"fiddling with these settings can result in an unstable system"),
complicates the scheduler in a number of ways, and lacks a solid
theoretical underpinning.
Kernel developers are not always strongly motivated by theoretical proofs, preferring to observe how real-world systems actually behave. But a certain amount of rigor can be appropriate in the realtime realm. If a realtime system is being used to, for example, guide a motor vehicle on a busy highway, it's important that said system quickly obtain its needed CPU time when that time is needed. "It appears to work most of the time" is not deemed sufficient for such applications.
What realtime group scheduling is aiming for is called a "constant bandwidth server" (CBS) scheduler. In such a system, each task is characterized by two parameters: the amount of CPU time it needs (the "worst-case execution time") and how often it needs that time (the "period"). In a properly configured system, the scheduler can ensure that every realtime task gets the CPU time it needs when it needs that time. There is a CBS scheduler in the kernel now, in the form of the deadline scheduler (see this article by the much-missed Daniel Bristot de Oliveira for details), but this scheduler is not hierarchical, and requires realtime tasks to be written specifically for it.
The hierarchical CBS server is an extension of the CBS concept to allow multiple realtime tasks to be organized (and controlled) in a hierarchy — specifically, the control-group hierarchy on Linux systems. It was first described in this 2001 paper by Giuseppe Lipari and Sanjoy Baruah. The actual implementation, of course, is somewhat removed from an academic paper, but it relies on that paper's theoretical underpinnings.
When running under the hierarchical constant bandwidth server, each control group has two knobs to control the resource reservation: cpu.rt_period_us (the period) and cpu.rt_runtime_us (the required CPU time); both knobs are expressed in microseconds. If cpu.rt_period_us is set to 1000000, and cpu.rt_runtime_us is set to 100000, then the tasks placed within that control group will be entitled to 100ms of CPU time every second, or 10% of one CPU overall. Within the group, tasks will compete with each other using the normal priority rules.
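Concretely, configuring the 10% reservation described above might look something like this. This is a hypothetical session based on the knob names in the patch series; the paths and exact interface could well change before any merge:

```shell
# Hypothetical configuration under the proposed interface: reserve 10%
# of one CPU (100ms out of every 1s) for the realtime tasks in a group.
# Knob names follow the article; they are not yet in a mainline kernel.
mkdir /sys/fs/cgroup/rt-app
echo 1000000 > /sys/fs/cgroup/rt-app/cpu.rt_period_us    # period: 1s
echo 100000  > /sys/fs/cgroup/rt-app/cpu.rt_runtime_us   # runtime: 100ms
echo "$APP_PID" > /sys/fs/cgroup/rt-app/cgroup.procs     # move the task in
```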
To implement this guarantee, the hierarchical CBS scheduler reuses a mechanism that was first added for a different purpose: the deadline server. It has long been deemed important to set aside a small amount of CPU time for normal-priority tasks to run, even if a realtime task wants all of the time it can get. That small window of time can be used, for example, to regain control of a system if a realtime task has gone off the rails and will not release the CPU.
Implementing this behavior has proved challenging in the past; for years, a throttling mechanism was used to push realtime tasks out of the CPU for 5% (by default) of the available time. This exclusion would happen, though, even if nothing else needed to run, causing the CPU to go idle when a realtime task was runnable. This behavior irritated developers enough that a solution was eventually found.
That solution was to create a special task called a "deadline server" that would run under the deadline scheduler. Deadline tasks have the highest priority of all under Linux, with the ability to push aside even POSIX realtime tasks. The deadline server is configured to run in such a way that it is entitled to 5% of the available CPU time (50ms out of every second); it uses that time to run tasks in the normal-priority run queues. If there are tasks waiting to run, the deadline server ensures that they get a chance; if, instead, there are no runnable tasks, the deadline server just gives up the CPU, making it available for any realtime tasks that may be competing for it.
Deadline servers, thus, were added to ensure that non-realtime tasks could run, but they can also be pressed into service for realtime tasks. When a control group is configured to run under the hierarchical CBS server, a deadline server is set up to run for the execution time and period configured for that group. When the deadline server runs, it will schedule the tasks within that group in the usual way, ensuring that the group as a whole gets the resources assigned to it, even if there are higher-priority realtime tasks trying to run elsewhere in the system.
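The arithmetic behind all of this is the standard CBS admission test: the utilizations (runtime divided by period) of all reservations on a CPU, including the 5% fair deadline server, must not exceed the CPU's capacity. A minimal sketch, with a hypothetical function name and made-up group parameters:

```python
# Sketch of the CBS schedulability (admission) check implied by the
# article: total utilization must not exceed CPU capacity. This is a
# model of the math, not kernel code.

def admissible(reservations_us, capacity=1.0):
    """reservations_us: list of (runtime_us, period_us) pairs."""
    total = sum(runtime / period for runtime, period in reservations_us)
    return total <= capacity

# The default fair deadline server: 50ms out of every second, i.e. 5%.
fair_server = (50_000, 1_000_000)
# Two hypothetical realtime groups: 10% and 40% of one CPU.
groups = [(100_000, 1_000_000), (200_000, 500_000)]

# 0.05 + 0.10 + 0.40 = 0.55 <= 1.0, so this configuration is feasible.
print(admissible([fair_server] + groups))  # → True
```

A configuration whose utilizations sum past 1.0 cannot be guaranteed, no matter how the periods interleave, which is the kind of statement the theoretical underpinning makes possible.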
The algorithm described in the paper only envisions a two-level control-group hierarchy, with the root group being one of those levels. That should suffice for most real-world realtime applications; excess complexity does not help in the design of a correctly functioning realtime system. Even so, the final patch in the series modifies the implementation to allow deeper control-group hierarchies. Only the leaf groups (those with no child groups) are able to contain realtime tasks, though. Given the use of absolute CPU-time and period values in CBS, multiple layers of control do not make sense the way they do with normal (non-realtime) group scheduling.
The end result is a solution that is, hopefully, more deterministic than realtime group scheduling while having less of an impact on the scheduler's code. One positive indication in the latter regard is that the patch series, after removing the existing realtime group code, adds the new functionality while reducing the size of the scheduler by nearly 400 lines.
This work was discussed at the 2025 Power Management and Scheduling in the Linux Kernel Summit, so there should be a general consensus on the direction that it is taking. The number of comments on the posted series, as of this writing, is small; that will probably change over time as other developers get a chance to look at the code. In any case, a change of this nature is likely to take a while before it is considered ready for the mainline kernel; as always, one should never assume that a deadline applies to the acceptance of realtime patches.
CoMaps emerges as an Organic Maps fork
The open-source mobile app Organic Maps is used by millions of people on both the Android and iOS platforms. In addition to featuring offline maps (generated from OpenStreetMap cartography) and turn-by-turn navigation, it also promises its users greater privacy than proprietary options. However, controversial decisions taken by the project's leaders, feelings of disenfranchisement among contributors, and even accusations of embezzlement have precipitated a divide in the community, leading to a new fork called CoMaps.
The creation of Organic Maps
Organic Maps traces its ancestry back to the navigation software MapsWithMe, created by Yuri Melnichek, Viktor Havaka, and Alexander Borsuk in 2010. Although the app itself was released under a proprietary license, MapsWithMe used open data from OpenStreetMap with its own rendering engine. MapsWithMe was renamed to MAPS.ME in June 2014, shortly before being sold to Russian internet company Mail.ru Group (now known as VK) in December of the same year for a sum equivalent to nearly $10 million.
The original founders continued working on the software as employees of Mail.ru until 2016. During this period, MAPS.ME was released as open-source software for the first time, with its code being made available under the Apache-2.0 license on GitHub.
Four years of steady development followed, but this status quo came to an abrupt end in November 2020 when Mail.ru sold MAPS.ME, considering the app a "non-core" part of its business. By the end of 2020, MAPS.ME's new owner had completely rewritten the software around a different rendering engine, returning again to a proprietary license in the process, albeit still using OpenStreetMap data.
The new version of MAPS.ME was met with widespread disapproval from its users, and OMaps was released merely days later as a community fork of the last open-source version of MAPS.ME. The main driving force behind OMaps was Roman Tsisyk, described as the project's "interim project manager", but Havaka and Borsuk — two of the original MAPS.ME developers — were also active contributors. OMaps became Organic Maps in April 2021.
Disquiet in the Organic Maps community
In November 2023, contributors noticed an unexpected pull request to the Organic Maps GitHub repository by Tsisyk. Users who selected hotels in the app were to be shown a button that would take them to the hotel's listing on KAYAK, a travel search engine owned by Booking.com. As described in the release notes:
After selecting some hotels, you can see an experimental "Details on Kayak" button that opens the Kayak website with photos and reviews about the selected hotel. If you make any booking using this button, Organic Maps receives a small referral bonus to fund the project development.
These links did not contain any data that could be used to identify the user, but they would tell KAYAK that the user arrived from Organic Maps. Community members immediately began voicing concerns; some felt that the change compromised Organic Maps's promises of privacy, some objected to what they felt was advertising (a description vehemently opposed by Tsisyk and Borsuk), and others simply questioned why they had not been consulted when Tsisyk merged his own pull request less than two weeks after creating it. F-Droid marked Organic Maps with an "antifeature" label to warn users of the presence of advertisements.
Tsisyk had told the FLOSS Weekly podcast only six months prior that advertising was not on the table for Organic Maps:
Because if you don't have ads, you don't have tracking, you don't collect personal data, you don't need to send any data to any server... we are not in this business, you know, in the business of showing advertisements. So we don't need to do it.
One year later, Tsisyk took another step that would spark controversy. In December 2024 he revealed that, since 2021, there had been a server component that was kept secret from the wider community. The Organic Maps app, he said, used this when downloading maps to select the fastest mirrors. His rationale for this disclosure was a decision by Borsuk to remove a copy of the MIT license from the component's hidden repository, and then to enable logging of requests to the server:
This subtle, almost unnoticed modification effectively privatized the open-source repository by this individual, preventing any further open-source collaboration. Furthermore, the next change of enabling the logs, clearly violates our commitment to privacy. To my knowledge, this decision was not discussed with any other contributors, including those who had previously contributed to the repository.
Tsisyk said he had reverted Borsuk's changes:
I am making the code from before November 23, 2024, publicly available again under MIT. As one of the authors who contributed to the code while it was under the MIT license, I have the full right to take this action. Proprietary changes after "No MIT yet, sorry" and "Observe server abusers when needed" has been removed or reverted.
Borsuk did not dispute that he made the changes, but retaliated by removing Tsisyk's privileged access to Organic Maps's GitHub organization. According to Tsisyk, Havaka restored his access the next week.
Open letter
These growing tensions came to a head on April 16, when an open letter was sent to Havaka, Tsisyk, and Borsuk. The authors of the letter were not disclosed, but the early signatories include active contributors to Organic Maps. The letter outlined demands for a governance structure that would, among other things, guarantee that Organic Maps cannot become a for-profit venture.
In an addendum, the authors elaborated further, with specific allegations of mismanagement, including the claim that Tsisyk attempted to remove Borsuk and Havaka from the GitHub organization, causing GitHub to freeze the repository. Another accusation is that Borsuk misappropriated donations to fund a personal vacation. To date, no sources for either of these two claims have been provided in the letter or its subsequent follow-up messages.
Financial transparency is a key theme in the letter:
At the same time all other contributors were consistently denied any access to any financial information (even to the totals of money donated/spent).
The legal entity owned by Havaka and Tsisyk, which they established in July 2021 to manage Organic Maps's finances and trademark, is registered in Estonia as an osaühing, a type of limited-liability company. This means that annual financial reports are available from the Estonian e-Business Register. However, the claim is not unfounded; these reports only contain basic company information and a balance sheet, and don't include records of individual transactions. Additionally, the 2024 annual report has not yet been published, so the effects of the KAYAK affiliate links and other actions may not be visible until the end of June, when the 2024 report is due.
CoMaps is born
While waiting for a response from the letter's recipients, some community members started preparing to fork Organic Maps. By the end of April, Organic Maps's source code had been copied to a new repository on Codeberg and donations started to be accepted on OpenCollective. The preliminary name for the fork was CoMaps; it was decided to use this permanently after a public vote that ended on May 20.
Progress in the establishment of this project has been swift; as of this writing, the CoMaps project already has a web site, accounts on almost all major social-media platforms, and a preview version of the app available for Android devices.
However, the development of CoMaps as an independent community has not been
without its own tensions. Prompted by a discussion to choose a logo for
CoMaps, "mray", a designer, suggested the
creation of a formal "design lead" role that would be responsible for
"lead[ing] the effort to find answers to the most pressing branding
questions". However, Oleg Risewell opposed the idea, saying that CoMaps
was to have no defined leaders:
CoMaps operates differently than a typical company, and even differently than other FOSS projects. As contributors there are no formal roles, even a person who will be hired at some point will be a contributor who contributes a lot. People contribute work for everyone, and others who are interested can weigh in. Often the input is very valuable and helps improve the final result significantly. The organization is horizontal, and there is less reliance on judgement of one person and more focus on a collaborative approach.
The discussion did not lead to a compromise, and mray closed the issue in frustration. Regardless of whether CoMaps has defined leaders, it is becoming clear who the movers and shakers are. Of the 488 comments on CoMaps's governance repository at the time of writing, just over half are from the top three commenters: 127 (26%) were written by Risewell, 93 (19%) by Konstantin Pastbin, and 39 (8%) by "Zyphlar". The remaining 47% of comments were written by 67 other people. Looking at the commits in the repository shows Pastbin as the top contributor so far.
The future
It is unlikely that the fork will affect users directly, at least in the near future. As negotiation efforts mentioned in the open letter are seemingly at an impasse, it seems probable that Organic Maps and CoMaps will coexist and be developed independently, with both options available in app stores.
One possible eventuality for the two projects is a soft fork, where the apps are developed in parallel, with different funding sources and development teams but sharing mostly identical source code. This organization could be complicated by a proposal made on May 12 to relicense CoMaps under the GNU AGPLv3 rather than the current Apache-2.0 license, a suggestion already attracting significant discussion on Codeberg and Zulip. In such a case, Organic Maps would be forced to follow suit and adopt the AGPL in order to incorporate improvements contributed first to CoMaps, whereas CoMaps would face no legal problems taking patches from Organic Maps. Pastbin had already advocated for Organic Maps to be relicensed under a copyleft license back in September 2023, but this idea was mostly ignored and made no headway.
Another scenario is that CoMaps and Organic Maps diverge over time until patches from one cannot easily be applied to the other, resulting in a hard fork. This was the eventual situation for Forgejo, which was forked from Gitea under somewhat similar circumstances.
Could a compromise be found, and the CoMaps and Organic Maps efforts fold back into a single project? Such a resolution is not unheard of in the open-source community. In 2016, the router firmware OpenWrt was forked to produce LEDE, prompted by familiar concerns about transparency. But less than two years later, they merged back together again, keeping the OpenWrt name but incorporating governance processes from LEDE.
The history of MAPS.ME is ample evidence that the OpenStreetMap community is not one to let a good code base die. But what if the worst comes to pass and the fork really does split the community and hamper development? Luckily, users are not without alternatives for open-source navigation apps. For instance, OsmAnd has a distinct pedigree from Organic Maps and CoMaps, but shares a similar feature set. Those who contribute to OpenStreetMap may be interested in StreetComplete, which creates location-specific "quests" ranging from identifying road surfaces to recording the collection times of mailboxes, as well as offering a basic editor for adding new map features. Both are available under the GPLv3 license, although OsmAnd contains some proprietary graphics.
There are many immediate decisions still to be made by the CoMaps developers. They can select a conventional non-profit structure, establishing a governing board and formally electing leaders, or they can pursue a more ad-hoc governance model as advocated by Risewell. In either case, the community could still slip into a "benevolent dictator for life" model, be it Risewell, Pastbin, or someone else at the helm. It is not as if those remaining on the Organic Maps side will have an easy ride either; discussions between the shareholders are apparently still stalled, and any aspirations for a commercial funding model will be threatened by the prospect of competing with the resolutely non-profit CoMaps community for market share. The future presents uncharted territory for both projects.
A parallel path for GPU restore in CRIU
The fundamental concept of checkpoint/restore is elegant: capture a process's state and resurrect it later, perhaps elsewhere. Checkpointing meticulously records a process's memory, open files, CPU state, and more into a snapshot. Restoration then reconstructs the process from this state. This established technique faces new challenges with GPU-accelerated applications, where low-latency restoration is crucial for fault tolerance, live migration, and fast startups. Recently, the restore process for AMD GPUs has been redesigned to eliminate substantial bottlenecks.
The challenges of GPU checkpoint/restore
Restoring GPU applications is complex. GPU internal states vary significantly between vendors and are often opaque to general-purpose tools like Checkpoint/Restore in Userspace (CRIU). That project is widely used for checkpointing and restoration; it bridges the differences between vendors with a flexible plugin mechanism. These plugins, which are dynamically loadable shared libraries, often vendor-supplied, provide specialized logic for GPU states. CRIU invokes plugin functions at specific junctures, known as hooks. Both AMD (for its AMDGPU driver stack) and NVIDIA have developed and contributed CRIU plugins.
The AMDGPU plugin, for example, uses the DUMP_EXT_FILE hook during checkpointing to save the state of the AMD driver and GPU video-RAM (VRAM) content. Symmetrically, the RESTORE_EXT_FILE hook (prior to the work discussed here) handled the preparation of the driver state and VRAM repopulation during restore.
Understanding the original GPU restore bottleneck
The traditional CRIU restore process for a GPU application, particularly with the AMDGPU plugin, involves several sequential steps that can lead to significant delays. Initially, the main CRIU process forks a child, destined to become the target application process. This child, called the restore process, then undertakes most restoration tasks.
This restore process first reestablishes essential states, including file descriptors, GPU-driver state (often via ioctl()), and, critically, GPU-memory content, typically using system DMA for VRAM transfers. Following this, CRIU tackles the host memory. This intricate step involves unmapping all existing memory regions within the restore process and then mapping the new memory segments from the snapshot. To execute this without disrupting itself, the restore process jumps to a "restorer blob", a small, self-contained piece of code in a safe memory region. Finally, with the core memory layout in place, the main CRIU process restores any remaining state dependent on these new mappings.
The performance bottleneck lies in the sequential execution of the most time-consuming operations: restoring GPU content and then restoring host memory. Both are handled by the single-threaded restore process. While GPU content is being restored, all other logic in the child process is blocked. Simply offloading GPU restore to a background thread within the restore process is problematic, however. When CRIU needs to restore host memory, it unmaps all old mappings, including libraries (like libdrm or libc) potentially still used by the GPU-restoration thread, leading to conflicts. This limits any parallelization benefits significantly.
Enabling parallel GPU restore
To decouple GPU-content restoration from the host-memory setup, a recently merged CRIU patch by Yanning Yang, Dong Du (the authors of this article), Yubin Xia, and Haibo Chen from Shanghai Jiao Tong University introduces a new POST_FORKING plugin hook, following the GPU Checkpoint/Restore made On-demand and Parallel (gCROP) idea.
This POST_FORKING hook is invoked by the main CRIU process. It's called after the restore process is forked but before the main CRIU process waits for the restore child to complete its tasks. This timing is key.
The new hook allows GPU-content restore to be offloaded from the child restore process to the main CRIU process. This frees the restore child to proceed with other tasks, notably host-memory restoration, in parallel with its parent handling the GPU-VRAM repopulation.
A pivotal challenge is granting the main CRIU process access to GPU memory regions originally managed by the target process. AMD GPUs, via the AMDGPU driver, manage memory using "buffer objects". These buffer objects, representing contiguous memory areas in VRAM, can be exported as dma-buf file descriptors. Dma-buf is a kernel framework for sharing buffers between device drivers and processes.
Once a buffer object is exported as a dma-buf, its file descriptor can be passed to another process via a Unix domain socket. The receiving process imports the file descriptor to access the memory. This mechanism underpins the new parallel restore strategy: the restore process transfers dma-buf file descriptors, along with restore commands, to the main CRIU process.
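The descriptor-passing step can be sketched with an ordinary file descriptor standing in for an exported dma-buf (a minimal sketch only; the real plugin uses its own message protocol, but the underlying mechanism is the same SCM_RIGHTS ancillary message over a Unix domain socket):

```python
import os
import socket
import tempfile

# Connected Unix-socket pair; Python 3.9+ wraps the SCM_RIGHTS
# ancillary message in socket.send_fds()/recv_fds().
parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Stand-in for an exported dma-buf: any file descriptor is passed the
# same way. "restore-bo-42" plays the role of a restore command.
fd, path = tempfile.mkstemp()
os.write(fd, b"vram contents")

# "Restore process" side: send the descriptor plus a restore command.
socket.send_fds(child_sock, [b"restore-bo-42"], [fd])

# "Main CRIU process" side: receive the command and import the
# descriptor; the kernel installs a duplicate in this process's table.
msg, fds, _, _ = socket.recv_fds(parent_sock, 1024, 1)
received_fd = fds[0]
os.lseek(received_fd, 0, os.SEEK_SET)
data = os.read(received_fd, 64)

print(msg.decode(), data.decode())
```

Once imported, the receiving process can operate on the memory behind the descriptor directly, which is what lets the main CRIU process repopulate VRAM on the restore child's behalf.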
Adapting the AMDGPU plugin for parallel restore
Implementing this delegation required changes to key AMDGPU plugin hook functions. The amdgpu_plugin_restore_file() function, called by the restore process, previously handled all driver state and GPU content transfer. Now, its role is to identify necessary buffer objects and send these file descriptors with restoration metadata to the main CRIU process. It no longer performs the data transfers itself.
The new amdgpu_plugin_post_forking() function, tied to the new hook, executes in the main CRIU process. It typically launches a background thread that receives the file descriptors and commands from the restore process. The socket used by the background thread is opened during AMDGPU plugin initialization, prior to the restore process forking. This guarantees the socket is already available when the restore process sends file descriptors. The background thread then imports the buffer object and performs the actual GPU-content restoration using system DMA. This occurs concurrently with the child process restoring host memory.
Finally, amdgpu_plugin_resume_devices_late(), also called by the main CRIU process but much later, acts as a synchronization point. By this stage, the restore child has completed its memory setup. The main CRIU process uses this hook to ensure its background GPU-content restoration is complete, possibly by notifying and waiting for the worker thread(s).
The enhanced GPU-restore procedure now allows for significant concurrency. After the main CRIU process forks the restore child, it prepares, via amdgpu_plugin_post_forking(), to handle GPU restoration, often by starting a worker thread. The restore child, through amdgpu_plugin_restore_file(), sends dma-buf file descriptors with commands to this worker thread. Then, critical operations proceed in parallel. On our evaluation platform, parallel restore achieves a 34.3% improvement in the restore time of our test application when the data is in the page cache, and a 7.6% improvement when restoring from disk. The test scripts and implementation details are available here.
Looking ahead
The journey to optimal GPU checkpoint/restore, however, is ongoing. Continued collaboration between the CRIU community and GPU vendors is vital for refining these mechanisms across diverse architectures. The quest for faster, more robust checkpoint/restore for complex, accelerated applications remains an active and important area of development.
[Acknowledgments: The parallel restore patch is significantly improved with comments and suggestions from the CRIU community: Andrei Vagin, Radostin Stoyanov, and David Yat Sin.]
FAIR package management for WordPress
The last year has been a rocky one for the WordPress community. Matt Mullenweg—WordPress co-founder and CEO of WordPress hosting company Automattic—started a messy public spat with WP Engine in September and has proceeded to use his control of the project's WordPress.org infrastructure as a weapon against the company, with the community caught in the crossfire. It is not surprising, then, that on June 6 a group of WordPress community participants announced the Federated and Independent Repositories Package Manager (FAIR.pm) project. It is designed to be a decentralized alternative to WordPress.org with a goal of building "public digital infrastructure that is both resilient and fair".
Background
Mullenweg used his keynote at last year's WordCamp US to complain that Automattic competitor WP Engine was not contributing enough to the WordPress ecosystem. In addition, he said that it violated WordPress trademarks by removing some of the content-management system's (CMS) revision features.
As a result, WP Engine traded cease-and-desist letters with Automattic, and then filed a 62-page complaint against the company and Mullenweg personally. Since then, Mullenweg has been using his control of the WordPress.org infrastructure as a weapon against WP Engine, as well as members of the WordPress community that have questioned his actions. He blocked WP Engine's access to WordPress.org, terminated a number of people's WordPress.org accounts, seized control of WP Engine's plugins hosted on WordPress.org, and more.
In October, a project called AspirePress was launched to respond to the problem that "every WordPress instance in the world has a single point of failure" in the form of WordPress.org. That project sought to provide a community-run mirror for WordPress plugins and updates, and also helped lay the groundwork for FAIR.pm.
WP Engine was granted a preliminary injunction, on December 10; the order forced Mullenweg to unblock WP Engine's access to WordPress.org, reactivate accounts, and restore its ownership of the Advanced Custom Fields (ACF) plugin on the site that had been hijacked by an Automattic-controlled fork of the plugin.
Shortly after the injunction, an anonymous group of WordPress contributors published an open letter asking Mullenweg to propose "community-minded solutions" to objections about the project's governance, lack of transparency, and conflicts of interest. That did not happen. But Mullenweg did deactivate the WordPress.org accounts of Joost de Valk, Karim Marucchi, Sé Reed, Heather Burns, and Morten Rand-Hendriksen, after De Valk and Marucchi separately published their visions for WordPress without a benevolent dictator for life (BDFL). Mullenweg said that he was doing so to "give this project the push it needs to get off the ground". De Valk later wrote that Mullenweg's post helped serve as a catalyst for FAIR.pm:
Several groups had been working on related problems, each bringing different perspectives, needs, and capabilities. Rather than announce a new umbrella, we focused on connecting the dots between these parallel efforts. We're not naming everyone in that alignment publicly yet, but it's safe to say: this is a group of groups, and that's its strength.
What followed were weeks of collaboration. We mapped out which parts of the ecosystem needed reinforcement. Some things were urgent: plugin update services, the plugin and theme directories, static assets like emojis and avatars, and the dashboard feeds. We began with mirrors and drop-in replacements, but the plan was always broader than that.
In January, Automattic also announced it was reallocating its resources away from WordPress development in response to the lawsuit. Since then, things have been relatively quiet, but the entire saga has undermined confidence in WordPress as the go-to, stable, and safe open-source CMS that individuals, businesses, and hosting companies could trust for long-term web projects.
WordPress.org is at the center of the project's development and distribution; it is hard-wired into the CMS as the source of plugins, themes, and updates for anyone running the CMS on their own infrastructure—or those providing hosting to others. One of the things driven home by the conflict between Automattic and WP Engine is how much control Mullenweg has as BDFL over the larger community via WordPress.org, and that he is willing to use it in a decidedly non-benevolent fashion. It is only surprising that the larger community has waited this long to collaborate on an alternative.
The project
The Linux Foundation's press release about FAIR.pm diplomatically avoids mentioning any of the messy backstory; it simply touts the advantages of a vendor-neutral package-management system for WordPress, and introduces the co-chairs of the project's technical steering committee (TSC). The co-chairs are Carrie Dils, Mika Epstein, and Ryan McCue. Dils is known for her WordPress advocacy work, Epstein is the former manager of the WordPress.org plugin repository, and McCue has been a contributor for more than 20 years, including being the developer behind the WordPress REST API. The full list of steering committee members includes more than 30 people from the larger WordPress community who have served as the initial organizers of the project.
The TSC, according to the technical charter, is responsible for all technical oversight of the FAIR project. The TSC has voting rights in the project, as well as the ability to modify or approve modifications to the project's source code, documents, and other artifacts. Contributors to the project may become organizers by a majority vote of the TSC.
What is not yet clear is who's footing the bill for FAIR.pm. The Linux Foundation's press release does not include any information about project sponsors. It merely says that announcements about governance and funding are forthcoming, and invites companies or organizations interested in participating to get in touch. This is somewhat unusual. Similar announcements, such as those for Valkey and OpenTofu, have included at least some of the companies backing the new efforts. It is unclear whether this indicates that the FAIR.pm announcement was assembled hastily, that companies funding it want to avoid the spotlight for now, or both.
So far, the project has developed many of the pieces it will need to provide distributed package management for WordPress. This includes a protocol for its decentralized distribution system, a plugin to enable WordPress to use FAIR repositories, a Mini FAIR plugin for small-scale hosting and distribution of WordPress packages, as well as the server software used for the main FAIR.pm server to provide repository information and distribute packages to WordPress systems. Note that, in FAIR.pm parlance, themes, plugins, language packs, WordPress updates, and even WordPress "core" are all called packages. The protocol documentation is licensed under the Creative Commons Attribution 4.0 license, and all of the software is available under the GPLv2 (the same license used for WordPress itself).
How it works
Traditionally, WordPress extensions are identified by the directory name of the plugin, called a "slug". For example, when the ACF plugin is installed, its code is located in the wp-content/plugins/advanced-custom-fields directory; the slug for ACF is "advanced-custom-fields". The slug is the unique identifier used to search WordPress.org for plugins to install, to check for updates of installed plugins, and to report statistics.
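In code terms, the slug is nothing more than the base name of the plugin's installation directory (a trivial sketch):

```python
from pathlib import PurePosixPath

# Path of a plugin's main file inside a WordPress installation; the
# slug is simply the name of the directory containing the plugin.
plugin_file = PurePosixPath("wp-content/plugins/advanced-custom-fields/acf.php")
slug = plugin_file.parent.name
print(slug)  # advanced-custom-fields
```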
WordPress users can install plugins that are not listed on WordPress.org—but any such plugins are essentially second-class citizens in that ecosystem. Firstly, they are not discoverable via the WordPress administrative interface, or on the WordPress.org directory. Secondly, if a plugin developer wishes to implement updates that do not require WordPress.org, they have to use a third-party tool for updating or implement their own upgrade handling method in the extension—or require users to upload new versions manually.
These are not obstacles if a developer has created a custom extension for their personal site, but they serve as friction against adoption for plugins that are meant for widespread use. Since many plugins and themes are developed with commercial interests in mind, this is a significant hurdle for anyone who wishes to—as business folks like to say—monetize WordPress extensions.
Perhaps even more worrisome for the commercial providers is the fact that WordPress installations, by default, phone home to WordPress.org with information about installed plugins and other data. The FAIR documentation notes that only a small amount of that data is provided to plugin publishers. The full data is only available to WordPress.org, "and there are no guarantees about conflicts-of-interest in the usage of this data".
The FAIR project, in contrast, says it will operate an open analytics system that will collect data to be made available (in anonymized, aggregated form) for everyone:
No single actor has special access to this data, and the analytics system is operated by the FAIR Web Foundation for the benefit of all. Strict rules govern the use of and access to this data.
This data can be used by plugin maintainers to check data about their own plugins, by discovery servers to find which plugins exist, and for any other purpose such as research.
To work with the FAIR protocol each plugin needs to have a W3C standard decentralized identifier (DID), instead of a slug, that is used to fetch a document that includes information about the plugin repository, license, authors, type of package, security contact information, cryptographic signing keys, and more. This does mean that themes and plugins will have to be modified to work with the FAIR plugin and infrastructure, but the documentation for doing so appears unfinished.
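A package document of the kind described might look roughly like the following. This is purely an illustrative sketch: the field names here are assumptions based on the categories the documentation lists, not the actual FAIR protocol schema.

```python
import json

# Hypothetical metadata document resolved from a package's DID; every
# field name below is illustrative, not the real FAIR schema.
doc_json = """
{
  "id": "did:example:fairplugin1234",
  "type": "wp-plugin",
  "repository": "https://git.example.org/example/plugin",
  "license": "GPL-2.0-or-later",
  "authors": ["Example Developer"],
  "security-contact": "mailto:security@example.org",
  "signing-keys": ["z6MkExamplePublicKey"]
}
"""

doc = json.loads(doc_json)
# The DID, not a slug, is the stable identifier a client would use to
# locate and verify the package.
print(doc["id"], doc["type"], doc["license"])
```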
Another problem that has been made clear over the last year is one of moderation; currently, the WordPress.org server administrators choose which packages are published, which might be marked "insecure", and more. As demonstrated with the ACF plugin, the WordPress.org administrators can also take over a plugin completely. That might be a good thing, if a widely used plugin has been abandoned or has a security vulnerability that needs immediate response. But that power can be, and has been, abused.
The documentation for the FAIR protocol indicates that the FAIR.pm server will offer a moderation system to "protect the ecosystem", but others can implement additional moderation systems on top of the protocol:
Anyone can offer their own moderation service which users can opt in to. For example, users might choose to only install packages which have been manually approved by a trustworthy source, or they may pay a security company to help them block less-secure packages. Hosts might also offer a moderation service to their customers which blocks incompatible plugins.
Users can choose their moderation services, including disabling the built-in, FAIR-provided moderation service.
The FAIR plugin—which is not yet considered stable—is designed to federate a number of features that currently depend on WordPress.org: this includes CMS updates, several WordPress APIs, server health checks, as well as the event and news feeds displayed in the administrative dashboard.
The plugin will allow users to search discovery servers for new plugins. Currently the plugin is set up to use the AspirePress package mirror that launched last year, but it is expected that other FAIR providers will come online—and users can change the configuration to use whatever providers they would like. At least, they will be able to once more providers come online. Version 0.2.0 of the plugin was released on June 6.
What's next
The project page mentions a FAIR distribution of WordPress that would allow hosting providers and users to provision the CMS with FAIR.pm preinstalled from the start. To date, there is no repository for that work. Ironically, Automattic's slowdown in WordPress development should make it easier for the FAIR.pm project to maintain a fork of WordPress in the long run.
Judging by the current state of the plugin and documentation, there is a fair amount of work ahead before the WordPress ecosystem can declare independence from WordPress.org. The project has three working groups at launch to do that work: the temporary technical independence working group responsible for initial bootstrapping of the protocol and plugins, a permanent group responsible for long-term development and operation of the FAIR package manager, as well as a community working group to support contributors.
It would appear that Mullenweg got his wish; De Valk and others went ahead with an independent WordPress project, and it seems to have quite a few people from around the community lending a hand. It will be interesting to see how it evolves and whether the larger community leaves WordPress.org behind.
Parallelizing filesystem writeback
Writeback for filesystems is the process of flushing the "dirty" (written) data in the page cache to storage. At the 2025 Linux Storage, Filesystem, Memory Management, and BPF Summit (LSFMM+BPF), Anuj Gupta led a combined storage and filesystem session on some work that has been done to parallelize the writeback process. Some of the performance problems that have been seen with the existing single-threaded writeback came up in a session at last year's summit, where the idea of doing writeback in parallel was discussed.
Gupta began by noting that Kundan Kumar, who posted the topic proposal, was supposed to be leading the session, but was unable to attend. Kumar and Gupta have both been working on a prototype for parallelizing writeback; the session was meant to gather feedback on it.
Currently, writeback for buffered I/O is single-threaded, though applications are issuing multithreaded writes, which can lead to contention. The backing storage device is represented in the kernel by a BDI (struct backing_dev_info), and each BDI has a single writeback thread that processes the struct bdi_writeback embedded in it. Each bdi_writeback has a single list of inodes that need to be written back and a single delayed_work instance, which makes the process single-threaded.
Based on ideas from Dave Chinner and Jan Kara in the topic-proposal thread, he and Kumar added a struct bdi_writeback_ctx to contain context information for a bdi_writeback, Gupta said. An array of the writeback contexts was added to the BDI, which allows for multiple threads doing writeback for a BDI. That can be seen in a patch series posted by Kumar at the end of May. They see two levels of parallelism that could be added, but focused on the high-level parallelism in the prototype, leaving low-level parallelism for later work.
The high-level parallelism splits up the work based on the filesystem geometry, Gupta said. For example, XFS has allocation groups that can be used to partition the inodes needing writeback into separate contexts of the BDI. The low-level parallelism could be added by having multiple delayed_work entries in the bdi_writeback; different types of filesystem operations, such as block allocation versus pure overwrites for XFS, could be handled by separate workqueues at that level.
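The high-level scheme—partition dirty inodes into per-context lists, each drained by its own worker—can be modeled in miniature. This is a toy sketch, not kernel code; the modulo shard function stands in for whatever filesystem geometry (such as an XFS allocation-group number) would really drive the partitioning:

```python
import queue
import threading

NR_CONTEXTS = 4  # analogous to one writeback context per shard

# One work list per writeback context, as in the prototype's array of
# contexts attached to the BDI.
contexts = [queue.Queue() for _ in range(NR_CONTEXTS)]
written = []
written_lock = threading.Lock()

def shard_for(inode):
    # Stand-in for filesystem geometry, e.g. the allocation group
    # an inode belongs to.
    return inode % NR_CONTEXTS

def writeback_worker(ctx):
    while True:
        inode = ctx.get()
        if inode is None:            # sentinel: no more dirty inodes
            return
        with written_lock:
            written.append(inode)    # "flush" the inode

# Dirty inodes are filed onto their shard's list...
for inode in range(100):
    contexts[shard_for(inode)].put(inode)
for ctx in contexts:
    ctx.put(None)

# ...and one thread per context drains its list concurrently.
threads = [threading.Thread(target=writeback_worker, args=(c,))
           for c in contexts]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(written))  # 100
```

The point of the model is that no inode appears on more than one list, so the workers never contend for the same work item; only the shared result list needs a lock here, just as the real design aims to confine cross-thread synchronization.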
Christoph Hellwig was not sure that allocation groups were the right choice for partitioning (sharding) the inodes for writeback. Gupta said that testing to see what kind of workloads benefit when using the allocation groups will be important. Parallel writeback will be an option; users can disable it if it is not beneficial for their workloads.
Amir Goldstein asked how severe the current bottleneck with single-threaded writeback is. Gupta said that he did not have any numbers on that. Chris Mason said that in his testing it largely depends on whether large folios are available; the easiest way to see performance problems with single-threaded writeback is to turn off large folios for XFS. In some simple testing, he could get around 800MB per second on XFS with large folios disabled before the kernel writeback thread was saturated. With large folios enabled, that number goes to around 2.4GB per second.
Hellwig was curious where the cycles were being spent in the no-large-folios case. Mason said that he did not remember the details, though pages were being moved around a lot to different lists. Matthew Wilcox suggested that the XArray API might be part of the problem, since there is no API to clear the dirty bits on a range of folios. If the kernel is clearing those bits one-by-one, that might be a bottleneck; it could be fixed, but he has not found the time to do so.
Jeff Layton said that the network filesystems would need some way to limit how much parallelization was happening for writeback; there may be underlying network conditions that will necessitate limiting parallel writeback. Gupta acknowledged that need, and Hellwig said that local filesystems may also need a way to do that at times. One thing that Hellwig would like to see change is the Btrfs-internal mechanism for having multiple threads doing writeback with compression and checksumming; having a common solution with multiple writeback threads "would remove a lot of crap". He is concerned that other filesystems will pick up that code, so having a common solution would head that off; Mason said he was not opposed to that idea, though there may be large files that need their writeback handled specially.
Over the remote link, Jan Kara said that what he was hearing was that most of the interest was in the low-level parallelism, which would parallelize the writeback of individual inodes. He cautioned that there are various problems with doing that, including potential problems with data-integrity measurements, since writes are done in a different order.
Hellwig said that the "global-ish linked lists" that are used to track the inodes for writeback are a data structure that is known to cause scalability problems. Adding more threads to work on those lists will make things worse; he wondered if using an XArray instead would be better. Using allocation groups to shard the inode lists (or XArrays) makes sense for XFS—other filesystems may have their own obvious choice for sharding—but the management of the threads can be separate from the management of the lists. Other criteria could be used to determine the thread to handle a specific inode.
Mason agreed and wanted to suggest something similar. For the first version of multithreaded writeback, he said that keeping a single inode list would make sense. Rather than work out how to shard the inodes to different lists, simply "focus on being able to farm out multiple inodes at a time under writeback"; that will help determine where the lock contention is. Gupta was concerned about the lock contention for the inode list, but Kara did not think it would be that significant compared to all of the other work needed for writeback.
Once the basics of the feature are working, Goldstein said, it may make sense to provide a means for filesystems to help choose inodes for the different threads. Filesystems may have information that can assist the writeback machinery to achieve better parallelism.
Mason got in the last word of the session by relaying another bottleneck that he had found in his testing but had forgotten earlier: freeing clean pages. Once dirty pages have been written, those clean pages are freed by the kswapd kernel thread, which results in another bottleneck. He suggested that there may need to be some parallelism added there as well.
Supporting NFS v4.2 WRITE_SAME
At the 2025 Linux Storage, Filesystem, Memory Management, and BPF Summit (LSFMM+BPF), Anna Schumaker led a discussion about implementing the NFS v4.2 WRITE_SAME command in both the NFS client and server. WRITE_SAME is meant to write large amounts of identical data (e.g. zeroes) to the server without actually needing to transfer all of it over the wire. In her topic proposal, Schumaker wondered whether other filesystems needed the functionality, so that it should be implemented at the virtual filesystem (VFS) layer, or whether it should simply be handled as an NFS-specific ioctl().
The NFS WRITE_SAME operation was partly inspired by the SCSI WRITE SAME command, she began; it is "intended for databases to be able to initialize a bulk of records all at once". It offloads much of the work to the server side. So far, Schumaker has been implementing WRITE_SAME with an ioctl() using a structure that looks similar to the application data block structure defined in the NFS v4.2 RFC for use by WRITE_SAME.
On the server side, it would make sense to have a function that gets called to process the WRITE_SAME command, but it would be nice if that same function was available to clients; they could use it as a fallback when the server does not implement WRITE_SAME. Other filesystems could potentially also use the functionality, either with the SCSI WRITE SAME or for some other filesystem-specific use case.
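In the zeroing-only case discussed below, such a shared fallback amounts to writing zeroes over the requested range in bounded chunks. A minimal sketch of the idea, with ordinary file I/O standing in for the filesystem-level helper (function name and chunking are illustrative, not from the patches):

```python
import os
import tempfile

def zero_range_fallback(fd, offset, length, chunk=1 << 20):
    """Write zeroes over [offset, offset+length) in bounded chunks;
    a stand-in for what a client could do when the server offers no
    server-side zeroing offload."""
    os.lseek(fd, offset, os.SEEK_SET)
    zeros = b"\0" * chunk
    remaining = length
    while remaining > 0:
        written = os.write(fd, zeros[:min(chunk, remaining)])
        remaining -= written

# Fill a scratch file with non-zero data, then zero a middle range.
fd, path = tempfile.mkstemp()
os.write(fd, b"A" * 4096)
zero_range_fallback(fd, 1024, 2048)

os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 4096)
```

The appeal of a shared kernel helper is exactly that both sides can use it: the server to implement the operation, and the client to fall back transparently when the server does not.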
The application data block allows for WRITE_SAME commands that write various patterns to the storage, but Christoph Hellwig suggested that all of that complexity should be avoided. He was responsible for writing the WRITE_SAME definition for NFS and for killing off the Linux block-layer support for the SCSI WRITE SAME patterns; "don't do it", he said with a laugh. WRITE_SAME for zeroing is "perfectly fine", since SCSI supports that, but "exposing all the detailed, crazy patterns" is "not sane". Getting the semantics right for all of the different cases is extremely difficult. Schumaker said that sounded reasonable.
There is already an API available for clients to use, Amir Goldstein said: fallocate() with the FALLOC_FL_ZERO_RANGE flag. Schumaker said that NFS did not have support for that flag, but Goldstein said that support could be added as the way to provide access to WRITE_SAME. Hellwig said that there was a patch set that he had not yet looked at closely to add an FALLOC_FL_WRITE_ZEROES flag that would force the zeroes to be written; it might be a better API for WRITE_SAME. That series is now on v5 and seems to be progressing toward inclusion.
Matthew Wilcox wondered whether only being able to write zeroes would make the WRITE_SAME feature less than entirely useful; he remembered "a certain amount of pushback because databases need a specific pattern". There was a fair amount of joking about which of the two Oracle databases (the other being MySQL) he meant; Wilcox works for Oracle, as does Schumaker, who seemed to indicate that she had talked to the MySQL group. In the end, someone seemed to sum up that only supporting zeroing is reasonable: "zeroes are good".
Chuck Lever, who also works for Oracle, said that he had spoken to the Oracle database group. That database does not use the Linux NFS client, so the group did not care about support for WRITE_SAME in the client. The group's concern was mostly about support for WRITE_SAME in proprietary NFS servers, he said. Wilcox asked: "and Linux NFS servers?" Lever said that Oracle databases do not deploy on systems that use those.
Getting Lustre upstream
The Lustre filesystem has a long history, some of which intersects with Linux. It was added to the staging tree in 2013, but was bounced out of staging in 2018, due to a lack of progress and a development model that was incompatible with the kernel's. Lustre may be working its way back into the kernel, though. In a filesystem-track session at the 2025 Linux Storage, Filesystem, Memory Management, and BPF Summit (LSFMM+BPF), Timothy Day and James Simmons led a discussion on how to get Lustre into the mainline.
Day began with an overview of Lustre, which is a "high-performance parallel filesystem". It is typically used by systems with lots of GPUs that need to be constantly fed with data (e.g. AI workloads) and for checkpointing high-performance-computing (HPC) workloads. A file is split up into multiple chunks that are stored on different servers. Both the client and server implementations run in the kernel, similar to NFS. For the past ten or more years, the wire and disk formats have been "pretty stable" with "very little change"; Lustre has good interoperability between different versions, unlike in the distant past, when both server and client needed to be on the same version.
The upstreaming project has been going on for a long time at this point, he said. A fork of the client was added to the staging tree and resided there for around five years before "it got ejected, essentially due to insufficient progress". It was a "bad fit" for the kernel, since most developers worked on the out-of-tree version, rather than what was in staging.
But "the dream of actually getting upstream still continued". There have been more than 1,000 patches aimed at getting the code ready for the kernel since it got ejected; around 600 of those were from Neil Brown and 200 came from Simmons. Roughly one-third of the patches that have gone into the out-of-tree repository since the staging removal have been related to the upstream goal, Simmons said.
Day said that the biggest question is how the project can move from its out-of-tree development model to one that is based around the upstream kernel repository. The current state is "a giant filesystem, #ifdef-ed to hell and back to get it working with a bunch of kernel versions". The next stage, which is currently being worked on and is slated to complete in the next year or so, is to split the compatibility code out of the core filesystem code; the goal is to eventually have two separate trees for those pieces. The core filesystem tree would go into the kernel tree, while the compatibility code, which is meant to support customers on older kernels, would continue to live in a Lustre repository.
Another area that needs attention is changes to the development process to better mesh with kernel development. The Lustre project does not use mailing lists; it uses a Gerrit instance instead. "We have got to figure out how to adapt." Simmons said that there are some developers who are totally Gerrit-oriented and some who could live with a mailing list; "we have to figure out how to please both audiences".
Amir Goldstein said that the only real requirement is that the project post the patches once to the mailing list before merging; there is no obligation to do patch review on the list. Simmons said that he and Brown have maintained a Git tree since Lustre was removed from staging; it is kept in sync and updated to newer kernels. All of the patches are posted to the lustre-devel mailing list and to a Patchwork instance, so all of the history is open and available for comments or criticism, he said.
Josef Bacik asked about what went wrong when Lustre was in the staging tree. From his perspective, Lustre has been around for a long time and there are no indications that it might be abandoned, so why did it not make the jump into the mainline fs/ directory? Simmons said that the project normally has a few different features being worked on at any given time, but Greg Kroah-Hartman, who runs the staging tree, did not want any patches that were not cleanups. So the staging version fell further and further behind the out-of-tree code. Bacik said that made sense to him; "that's like setting you up to fail".
Christian Brauner said that he would like to come up with a "more streamlined model" for merging new filesystems, where the filesystem community makes a collective decision on whether the merge should happen. The community has recently been "badly burned by 'anybody can just send a filesystem for inclusion' and then it's upstream and then we have to deal with all of the fallout". As a VFS maintainer, he does not want to be the one making the decision, but merging a new filesystem "puts a burden on the whole community", so it should be a joint decision.
Bacik reiterated that no one was concerned that Lustre developers were going to disappear, but that there are other concerns. It is important that Lustre is using folios everywhere, for example, and is using "all of the modern things"; that sounds a little silly coming from him, since Btrfs is still only halfway there, he said. Simmons said that the Lustre developers completely agree; there is someone working on the folio conversion currently. At the summit, he and Day have been talking with David Howells about using his netfs library.
Jeff Layton asked if the plan was to merge both the client and the server. Simmons said that most people are just asking for the client, which is a slower-moving code base. The client is "a couple-hundred patches per month, the server is three to four times the volume of patches", which makes it harder to keep up with the kernel. Layton said: "baby steps are good", though Day noted that it is harder to test Lustre without having server code in the kernel.
The reason he has been pushing for just merging the client, Ted Ts'o said, is because the server is "pretty incestuous with ext4". The server requires a bunch of ext4 symbols and there is "a need to figure out how to deal with that". That impacts ext4 development, but other changes, such as the plan to rewrite the jbd2 journaling layer to not require buffer heads, may also be complicated by the inclusion of the Lustre server, he said.
Simmons asked about posting patches to the linux-fsdevel mailing list before Lustre is upstream so that the kernel developers can start to get familiar with the code. Bacik said that made sense, but that he would not really dig into the guts of Lustre; he is more interested in the interfaces being used and whether there will be maintenance problems for the kernel filesystem community in the Lustre code. Goldstein suggested setting up a Lustre-specific mailing list, but Simmons noted that lustre-devel already exists and is being archived; Brauner suggested getting it added to lore.kernel.org.
The intent is that when Linus Torvalds receives a pull request for a new filesystem, he can see that the code has been publicly posted prior to that, Goldstein said. It will also help if the Git development tree has a mirror on git.kernel.org, Ts'o said. Bacik said that he thought it was probably a lost cause to try to preserve all of the existing Git history as part of the merge, though it is up to Torvalds; instead, he suggested creating a git.kernel.org archive tree that people can consult for the history prior to the version that gets merged.
Given that Lustre targets high performance, Ts'o said, it will be important to support large folios. Simmons said that someone was working on that, and that it is important to the project; the plan is to get folio support, then to add large folios. Matthew Wilcox said that was fine, as long as the page-oriented APIs were getting converted. Many of those APIs are slowly going away, so the Lustre developers will want to ensure the filesystem is converted ahead of those removals.
Brief items
Kernel development
Kernel release status
The current development kernel is 6.16-rc2, released on June 15. Linus said:
Pretty quiet week, with a pretty small rc2 as a result. That's not uncommon, and things tend to pick up at rc3, but this is admittedly even smaller than usual.
Distributions
Rocky Linux 10.0 released
Version 10.0 of the Rocky Linux distribution has been released. As with the AlmaLinux 10.0 release, Rocky Linux 10.0 is based on Red Hat Enterprise Linux (RHEL) 10. See the release notes for details.
Development
Git 2.50.0 released
Version 2.50.0 of the Git source-code management system has been released with a long list of new user features, performance improvements, and bug fixes. See the announcement and this GitHub blog post for details.
KDE Plasma 6.4 released
The KDE Project has announced the Plasma 6.4 release. New features include more flexible tiling features, improvements to the Spectacle screen capture utility, a number of accessibility enhancements, and much more. See the changelog for a complete list of new features, enhancements, and bug fixes.
Changes to Kubernetes Slack (Kubernetes Contributors blog)
The Kubernetes project has announced that it will be losing its "special status" with the Slack communication platform and will be downgraded to the free tier in a matter of days:
On Friday, June 20, we will be subject to the feature limitations of free Slack. The primary ones which will affect us will be only retaining 90 days of history, and having to disable several apps and workflows which we are currently using. The Slack Admin team will do their best to manage these limitations.
The project has a FAQ covering the change, its impacts, and more. The CNCF projects staff has proposed a move to the Discord service as the best option to handle the more than 200,000 users and thousands of posts per day from the Kubernetes community. The Kubernetes Steering Committee will be making its decision "in the next few weeks".
Summaries from the 2025 Python Language Summit
The Python Software Foundation blog is carrying a set of detailed summaries from the 2025 Python Language Summit:
The Python Language Summit 2025 occurred on May 14th in Pittsburgh, Pennsylvania. Core developers and special guests from around the world gathered in one room for an entire day of presentations and discussions about the future of the Python programming language.
Topics covered include making breaking changes less painful, free-threaded Python, interaction with Rust, and challenges faced by the Steering Council.
Radicle Desktop released
The Radicle peer-to-peer code collaboration project has released Radicle Desktop: a graphical interface designed to simplify more complex parts of using Radicle such as issue management and patch reviews.
Radicle Desktop is not trying to replace your terminal, IDE, or code editor - you already have your preferred tools for code browsing. It won't replace our existing app.radicle.xyz and search.radicle.xyz for finding and exploring projects. It also doesn't run a node for you. Instead, it communicates with your existing Radicle node, supporting your current workflow and encourages gradual adoption.
LWN covered Radicle in March 2024.
Development quote of the week
And what I just don't understand about this whole discussion: We're talking about people who want to be frozen in time for 5 years straight during this "maintenance support" window by the vendor (whom they are paying), with only access to security fixes. But somehow they do want to run the latest Postgres Major release, even though the one that they had running still receives bug fixes and security fixes. I just don't understand who these people are. Why do they care about having no changes to their system to avoid breakage as much as possible, except for their piece of primary database software, of which they're happily running the bleeding edge.— Jelte Fennema-Nio
Page editor: Daroc Alden
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Calls for Presentations
CFP Deadlines: June 19, 2025 to August 18, 2025
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| June 20 | August 29-31 | openSUSE.Asia Summit | Faridabad, India |
| June 22 | December 10-11 | Open Source Experience 2025 | Paris, France |
| June 30 | November 7-8 | South Tyrol Free Software Conference | Bolzano, Italy |
| June 30 | October 21-24 | PostgreSQL Conference Europe | Riga, Latvia |
| July 11 | September 29-October 1 | X.Org Developers Conference | Vienna, Austria |
| July 13 | October 28-29 | Cephalocon | Vancouver, Canada |
| July 15 | November 15-16 | Capitole du Libre 2025 | Toulouse, France |
| July 31 | September 26-28 | Qubes OS Summit | Berlin, Germany |
| July 31 | August 28 | Yocto Project Developer Day 2025 | Amsterdam, The Netherlands |
| August 3 | September 27-28 | Nextcloud Community Conference 2025 | Berlin, Germany |
| August 3 | October 3-4 | Texas Linux Festival | Austin, US |
| August 4 | December 8-10 | Open Source Summit Japan | Tokyo, Japan |
| August 15 | November 18-20 | Open Source Monitoring Conference | Nuremberg, Germany |
| August 15 | September 19-21 | Datenspuren 2025 | Dresden, Germany |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: June 19, 2025 to August 18, 2025
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| June 23-25 | Open Source Summit North America | Denver, CO, US |
| June 26-27 | Linux Security Summit North America | Denver, CO, US |
| June 26-28 | Linux Audio Conference | Lyon, France |
| June 26-28 | openSUSE Conference | Nuremberg, Germany |
| July 1-3 | Pass the SALT Conference | Lille, France |
| July 13-19 | DebCamp 26 | Santa Fe, Argentina |
| July 13-18 | DebConf 26 | Santa Fe, Argentina |
| July 14-20 | DebConf 2025 | Brest, France |
| July 16-18 | EuroPython | Prague, Czech Republic |
| July 24-29 | GUADEC 2025 | Brescia, Italy |
| July 31-August 3 | FOSSY | Portland, OR, US |
| August 5 | Open Source Summit India | Hyderabad, India |
| August 9-10 | COSCUP 2025 | Taipei City, Taiwan |
| August 14-16 | Open Source Community Africa Open Source Festival | Lagos, Nigeria |
| August 15-17 | Hackers On Planet Earth | New York, US |
| August 16-17 | Free and Open Source Software Conference | Sankt Augustin, Germany |
If your event does not appear here, please tell us about it.
Security updates
Alert summary June 12, 2025 to June 18, 2025
| Dist. | ID | Release | Package | Date |
|---|---|---|---|---|
| AlmaLinux | ALSA-2025:8814 | 10 | .NET 8.0 | 2025-06-13 |
| AlmaLinux | ALSA-2025:7599 | 10 | .NET 8.0 | 2025-06-17 |
| AlmaLinux | ALSA-2025:8814 | 10 | .NET 8.0 | 2025-06-17 |
| AlmaLinux | ALSA-2025:8812 | 8 | .NET 8.0 | 2025-06-12 |
| AlmaLinux | ALSA-2025:8813 | 9 | .NET 8.0 | 2025-06-13 |
| AlmaLinux | ALSA-2025:8816 | 10 | .NET 9.0 | 2025-06-13 |
| AlmaLinux | ALSA-2025:7601 | 10 | .NET 9.0 | 2025-06-17 |
| AlmaLinux | ALSA-2025:8817 | 9 | .NET 9.0 | 2025-06-12 |
| AlmaLinux | ALSA-2025:9148 | 10 | buildah | 2025-06-17 |
| AlmaLinux | ALSA-2025:9147 | 9 | buildah | 2025-06-17 |
| AlmaLinux | ALSA-2025:9143 | 9 | containernetworking-plugins | 2025-06-17 |
| AlmaLinux | ALSA-2025:8125 | 10 | firefox | 2025-06-17 |
| AlmaLinux | ALSA-2025:8655 | 9 | glibc | 2025-06-12 |
| AlmaLinux | ALSA-2025:8184 | 10 | gstreamer1-plugins-bad-free | 2025-06-17 |
| AlmaLinux | ALSA-2025:8743 | 8 | kernel | 2025-06-11 |
| AlmaLinux | ALSA-2025:8643 | 9 | kernel | 2025-06-12 |
| AlmaLinux | ALSA-2025:8128 | 10 | libsoup3 | 2025-06-17 |
| AlmaLinux | ALSA-2025:8844 | 8 | mod_security | 2025-06-12 |
| AlmaLinux | ALSA-2025:8837 | 9 | mod_security | 2025-06-12 |
| AlmaLinux | ALSA-2025:9146 | 10 | podman | 2025-06-17 |
| AlmaLinux | ALSA-2025:9144 | 9 | podman | 2025-06-17 |
| AlmaLinux | ALSA-2025:9149 | 10 | skopeo | 2025-06-17 |
| AlmaLinux | ALSA-2025:9145 | 9 | skopeo | 2025-06-17 |
| AlmaLinux | ALSA-2025:7517 | 10 | sqlite | 2025-06-17 |
| AlmaLinux | ALSA-2025:8196 | 10 | thunderbird | 2025-06-17 |
| AlmaLinux | ALSA-2025:8608 | 10 | thunderbird | 2025-06-17 |
| AlmaLinux | ALSA-2025:8047 | 10 | unbound | 2025-06-17 |
| AlmaLinux | ALSA-2025:7509 | 10 | valkey | 2025-06-17 |
| AlmaLinux | ALSA-2025:8550 | 10 | varnish | 2025-06-17 |
| AlmaLinux | ALSA-2025:7524 | 10 | xz | 2025-06-17 |
| Arch Linux | ASA-202506-2 | curl | 2025-06-13 | |
| Arch Linux | ASA-202505-15 | ghostscript | 2025-06-13 | |
| Arch Linux | ASA-202506-4 | go | 2025-06-13 | |
| Arch Linux | ASA-202506-5 | konsole | 2025-06-13 | |
| Arch Linux | ASA-202506-6 | python-django | 2025-06-13 | |
| Arch Linux | ASA-202506-1 | roundcubemail | 2025-06-13 | |
| Arch Linux | ASA-202506-3 | samba | 2025-06-13 | |
| Debian | DSA-5942-1 | stable | chromium | 2025-06-12 |
| Debian | DLA-4219-1 | LTS | gst-plugins-bad1.0 | 2025-06-17 |
| Debian | DSA-5941-1 | stable | gst-plugins-bad1.0 | 2025-06-11 |
| Debian | DLA-4220-1 | LTS | konsole | 2025-06-17 |
| Debian | DLA-4221-1 | LTS | libblockdev | 2025-06-17 |
| Debian | DSA-5943-1 | stable | libblockdev | 2025-06-17 |
| Debian | DLA-4214-1 | LTS | node-tar-fs | 2025-06-11 |
| Debian | DLA-4215-1 | LTS | ublock-origin | 2025-06-12 |
| Debian | DLA-4218-1 | LTS | webkit2gtk | 2025-06-16 |
| Fedora | FEDORA-2025-5566a46596 | F41 | aerc | 2025-06-14 |
| Fedora | FEDORA-2025-8efa183a30 | F42 | aerc | 2025-06-14 |
| Fedora | FEDORA-2025-aa9ea529fb | F41 | chromium | 2025-06-15 |
| Fedora | FEDORA-2025-41bc291ca0 | F42 | chromium | 2025-06-13 |
| Fedora | FEDORA-2025-e375586840 | F41 | fido-device-onboard | 2025-06-17 |
| Fedora | FEDORA-2025-164ac0d01f | F41 | gh | 2025-06-13 |
| Fedora | FEDORA-2025-82ac5e4065 | F42 | gh | 2025-06-13 |
| Fedora | FEDORA-2025-333708f4ce | F41 | golang-x-perf | 2025-06-15 |
| Fedora | FEDORA-2025-ee0831e677 | F42 | golang-x-perf | 2025-06-15 |
| Fedora | FEDORA-2025-c53905e83d | F41 | libkrun | 2025-06-14 |
| Fedora | FEDORA-2025-4fc3431dab | F42 | libkrun | 2025-06-14 |
| Fedora | FEDORA-2025-49ae47f4ef | F41 | mingw-icu | 2025-06-13 |
| Fedora | FEDORA-2025-879f3d7695 | F42 | mingw-icu | 2025-06-13 |
| Fedora | FEDORA-2025-bf4581952d | F41 | nginx-mod-modsecurity | 2025-06-13 |
| Fedora | FEDORA-2025-0b71a7299d | F42 | nginx-mod-modsecurity | 2025-06-13 |
| Fedora | FEDORA-2025-d4849e6cf3 | F41 | python-django4.2 | 2025-06-17 |
| Fedora | FEDORA-2025-76b69d1931 | F41 | python3.10 | 2025-06-13 |
| Fedora | FEDORA-2025-f41fafb942 | F42 | python3.10 | 2025-06-13 |
| Fedora | FEDORA-2025-56b4c0f4c4 | F41 | python3.11 | 2025-06-14 |
| Fedora | FEDORA-2025-81adcd3389 | F42 | python3.11 | 2025-06-14 |
| Fedora | FEDORA-2025-3436f3d2b4 | F41 | python3.12 | 2025-06-14 |
| Fedora | FEDORA-2025-41dc96c19a | F42 | python3.12 | 2025-06-14 |
| Fedora | FEDORA-2025-cebde6a6e3 | F41 | python3.9 | 2025-06-13 |
| Fedora | FEDORA-2025-6efe030226 | F42 | python3.9 | 2025-06-13 |
| Fedora | FEDORA-2025-26640e9e35 | F41 | rust-git-interactive-rebase-tool | 2025-06-17 |
| Fedora | FEDORA-2025-c53905e83d | F41 | rust-kbs-types | 2025-06-14 |
| Fedora | FEDORA-2025-4fc3431dab | F42 | rust-kbs-types | 2025-06-14 |
| Fedora | FEDORA-2025-c53905e83d | F41 | rust-sev | 2025-06-14 |
| Fedora | FEDORA-2025-4fc3431dab | F42 | rust-sev | 2025-06-14 |
| Fedora | FEDORA-2025-c53905e83d | F41 | rust-sevctl | 2025-06-14 |
| Fedora | FEDORA-2025-4fc3431dab | F42 | rust-sevctl | 2025-06-14 |
| Fedora | FEDORA-2025-883496c803 | F41 | thunderbird | 2025-06-17 |
| Fedora | FEDORA-2025-1ac9269cc4 | F42 | thunderbird | 2025-06-13 |
| Fedora | FEDORA-2025-a89cb837a1 | F41 | valkey | 2025-06-13 |
| Fedora | FEDORA-2025-129268f8e4 | F42 | valkey | 2025-06-15 |
| Fedora | FEDORA-2025-8043d4cd71 | F41 | wireshark | 2025-06-15 |
| Fedora | FEDORA-2025-b979c16d88 | F42 | wireshark | 2025-06-15 |
| Fedora | FEDORA-2025-ad2565414f | F41 | yarnpkg | 2025-06-13 |
| Fedora | FEDORA-2025-732290e75c | F42 | yarnpkg | 2025-06-13 |
| Gentoo | 202506-01 | Emacs | 2025-06-12 | |
| Gentoo | 202506-10 | File-Find-Rule | 2025-06-12 | |
| Gentoo | 202506-02 | GStreamer, GStreamer Plugins | 2025-06-12 | |
| Gentoo | 202506-05 | GTK+ 3 | 2025-06-12 | |
| Gentoo | 202506-13 | Konsole | 2025-06-15 | |
| Gentoo | 202506-03 | LibreOffice | 2025-06-12 | |
| Gentoo | 202506-08 | Node.js | 2025-06-12 | |
| Gentoo | 202506-09 | OpenImageIO | 2025-06-12 | |
| Gentoo | 202506-07 | Python, PyPy | 2025-06-12 | |
| Gentoo | 202506-06 | Qt | 2025-06-12 | |
| Gentoo | 202506-04 | X.Org X server, XWayland | 2025-06-12 | |
| Gentoo | 202506-11 | YAML-LibYAML | 2025-06-12 | |
| Gentoo | 202506-12 | sysstat | 2025-06-15 | |
| Mageia | MGASA-2025-0186 | 9 | mariadb | 2025-06-11 |
| Mageia | MGASA-2025-0185 | 9 | roundcubemail | 2025-06-11 |
| Oracle | ELSA-2025-8812 | OL8 | .NET 8.0 | 2025-06-13 |
| Oracle | ELSA-2025-8813 | OL9 | .NET 8.0 | 2025-06-13 |
| Oracle | ELSA-2025-8815 | OL8 | .NET 9.0 | 2025-06-13 |
| Oracle | ELSA-2025-8817 | OL9 | .NET 9.0 | 2025-06-16 |
| Oracle | ELSA-2025-9147 | OL9 | buildah | 2025-06-17 |
| Oracle | ELSA-2025-9143 | OL9 | containernetworking-plugins | 2025-06-17 |
| Oracle | ELSA-2025-9162 | OL9 | gimp | 2025-06-17 |
| Oracle | ELSA-2025-9060 | OL8 | git-lfs | 2025-06-17 |
| Oracle | ELSA-2025-9106 | OL9 | git-lfs | 2025-06-17 |
| Oracle | ELSA-2025-8686 | OL8 | glibc | 2025-06-13 |
| Oracle | ELSA-2025-8918 | OL8 | grafana-pcp | 2025-06-13 |
| Oracle | ELSA-2025-8916 | OL9 | grafana-pcp | 2025-06-13 |
| Oracle | ELSA-2025-9150 | OL9 | gvisor-tap-vsock | 2025-06-17 |
| Oracle | ELSA-2025-20372 | OL7 | kernel | 2025-06-13 |
| Oracle | ELSA-2025-8743 | OL8 | kernel | 2025-06-13 |
| Oracle | ELSA-2025-20372 | OL8 | kernel | 2025-06-13 |
| Oracle | ELSA-2025-20372 | OL8 | kernel | 2025-06-13 |
| Oracle | ELSA-2025-8643 | OL9 | kernel | 2025-06-13 |
| Oracle | ELSA-2025-20368 | OL9 | kernel | 2025-06-13 |
| Oracle | ELSA-2025-9080 | OL9 | kernel | 2025-06-17 |
| Oracle | ELSA-2025-9119 | OL8 | libvpx | 2025-06-17 |
| Oracle | ELSA-2025-9118 | OL9 | libvpx | 2025-06-17 |
| Oracle | ELSA-2025-8958 | OL8 | libxml2 | 2025-06-13 |
| Oracle | ELSA-2025-8844 | OL8 | mod_security | 2025-06-13 |
| Oracle | ELSA-2025-8837 | OL9 | mod_security | 2025-06-13 |
| Oracle | ELSA-2025-8514 | OL8 | nodejs:20 | 2025-06-13 |
| Oracle | ELSA-2025-9144 | OL9 | podman | 2025-06-17 |
| Oracle | ELSA-2025-9145 | OL9 | skopeo | 2025-06-17 |
| Oracle | ELSA-2025-8756 | OL8 | thunderbird | 2025-06-13 |
| Red Hat | RHSA-2025:9166-01 | EL10 | apache-commons-beanutils | 2025-06-18 |
| Red Hat | RHSA-2025:9114-01 | EL9 | apache-commons-beanutils | 2025-06-18 |
| Red Hat | RHSA-2025:7160-01 | EL9 | bootc | 2025-06-16 |
| Red Hat | RHSA-2025:8974-01 | EL8.8 | go-toolset:rhel8 | 2025-06-12 |
| Red Hat | RHSA-2025:8737-01 | EL9.2 | golang | 2025-06-12 |
| Red Hat | RHSA-2025:8689-01 | EL9.4 | golang | 2025-06-12 |
| Red Hat | RHSA-2025:8682-01 | EL9 | grafana | 2025-06-12 |
| Red Hat | RHSA-2025:8915-01 | EL10 | grafana-pcp | 2025-06-12 |
| Red Hat | RHSA-2025:8918-01 | EL8 | grafana-pcp | 2025-06-12 |
| Red Hat | RHSA-2025:8983-01 | EL8.8 | grafana-pcp | 2025-06-12 |
| Red Hat | RHSA-2025:8916-01 | EL9 | grafana-pcp | 2025-06-12 |
| Red Hat | RHSA-2025:8984-01 | EL9.0 | grafana-pcp | 2025-06-12 |
| Red Hat | RHSA-2025:8982-01 | EL9.2 | grafana-pcp | 2025-06-12 |
| Red Hat | RHSA-2025:8975-01 | EL9.4 | grafana-pcp | 2025-06-12 |
| Red Hat | RHSA-2025:6990-01 | EL9 | grub2 | 2025-06-16 |
| Red Hat | RHSA-2025:8981-01 | EL8.2 | gstreamer1-plugins-bad-free | 2025-06-12 |
| Red Hat | RHSA-2025:8980-01 | EL8.4 | gstreamer1-plugins-bad-free | 2025-06-12 |
| Red Hat | RHSA-2025:8976-01 | EL8.8 | gstreamer1-plugins-bad-free | 2025-06-12 |
| Red Hat | RHSA-2025:8978-01 | EL9.0 | gstreamer1-plugins-bad-free | 2025-06-12 |
| Red Hat | RHSA-2025:8977-01 | EL9.2 | gstreamer1-plugins-bad-free | 2025-06-12 |
| Red Hat | RHSA-2025:8979-01 | EL9.4 | gstreamer1-plugins-bad-free | 2025-06-12 |
| Red Hat | RHSA-2025:7313-01 | EL9 | keylime-agent-rust | 2025-06-16 |
| Red Hat | RHSA-2025:9179-01 | EL7 | libsoup | 2025-06-17 |
| Red Hat | RHSA-2025:8480-01 | EL8.2 | libsoup | 2025-06-17 |
| Red Hat | RHSA-2025:8482-01 | EL8.6 | libsoup | 2025-06-17 |
| Red Hat | RHSA-2025:8481-01 | EL9.0 | libsoup | 2025-06-17 |
| Red Hat | RHSA-2025:8958-01 | EL8 | libxml2 | 2025-06-11 |
| Red Hat | RHSA-2025:9016-01 | EL9.4 | libxslt | 2025-06-12 |
| Red Hat | RHSA-2025:8844-01 | EL8 | mod_security | 2025-06-11 |
| Red Hat | RHSA-2025:8837-01 | EL9 | mod_security | 2025-06-11 |
| Red Hat | RHSA-2025:8922-01 | EL9.0 | mod_security | 2025-06-11 |
| Red Hat | RHSA-2025:8937-01 | EL9.2 | mod_security | 2025-06-11 |
| Red Hat | RHSA-2025:8917-01 | EL9.4 | mod_security | 2025-06-11 |
| Red Hat | RHSA-2025:8902-01 | EL9.4 | nodejs:20 | 2025-06-11 |
| Red Hat | RHSA-2025:8890-01 | EL8.6 | perl-FCGI:0.78 | 2025-06-11 |
| Red Hat | RHSA-2025:7317-01 | EL9 | python3.12-cryptography | 2025-06-16 |
| Red Hat | RHSA-2025:7147-01 | EL9 | rpm-ostree | 2025-06-16 |
| Red Hat | RHSA-2025:7241-01 | EL9 | rust-bootupd | 2025-06-16 |
| Red Hat | RHSA-2025:8756-01 | EL8 | thunderbird | 2025-06-18 |
| Red Hat | RHSA-2025:8784-01 | EL8.8 | thunderbird | 2025-06-18 |
| Red Hat | RHSA-2025:7163-01 | EL9 | xorg-x11-server | 2025-06-16 |
| Red Hat | RHSA-2025:7165-01 | EL9 | xorg-x11-server-Xwayland | 2025-06-16 |
| Slackware | SSA:2025-167-01 | libxml2 | 2025-06-16 | |
| Slackware | SSA:2025-162-01 | mozilla | 2025-06-11 | |
| Slackware | SSA:2025-168-01 | xorg | 2025-06-17 | |
| SUSE | SUSE-SU-2025:01989-1 | MP4.3 SLE15 SLE-m5.0 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.3 oS15.4 oS15.5 oS15.6 | Multi-Linux Manager Client Tools | 2025-06-18 |
| SUSE | SUSE-SU-2025:01987-1 | SLE12 | Multi-Linux Manager Client Tools | 2025-06-18 |
| SUSE | SUSE-SU-2025:01985-1 | MGR4.3 MGR4.3.15.2 oS15.4 | Multi-Linux Manager | 2025-06-18 |
| SUSE | SUSE-SU-2025:01994-1 | SUSE Manager Server | 2025-06-18 | |
| SUSE | SUSE-SU-2025:01962-1 | MP4.3 SLE15 SES7.1 | apache2-mod_auth_openidc | 2025-06-16 |
| SUSE | SUSE-SU-2025:01953-1 | SLE15 oS15.6 | apache2-mod_auth_openidc | 2025-06-13 |
| SUSE | SUSE-SU-2025:01559-1 | SLE15 | audiofile | 2025-06-12 |
| SUSE | SUSE-SU-2025:20377-1 | SLE-m6.0 | docker | 2025-06-12 |
| SUSE | SUSE-SU-2025:20393-1 | SLE-m6.1 | docker | 2025-06-13 |
| SUSE | SUSE-SU-2025:20385-1 | SLE-m6.0 | docker-compose | 2025-06-12 |
| SUSE | SUSE-SU-2025:02002-1 | SLE12 | gdm | 2025-06-18 |
| SUSE | SUSE-SU-2025:02005-1 | SLE15 | gdm | 2025-06-18 |
| SUSE | SUSE-SU-2025:02004-1 | SLE15 oS15.4 | gdm | 2025-06-18 |
| SUSE | SUSE-SU-2025:02003-1 | SLE15 oS15.6 | gdm | 2025-06-18 |
| SUSE | SUSE-SU-2025:01992-1 | MP4.3 SLE15 oS15.3 oS15.4 oS15.5 oS15.6 | golang-github-prometheus-alertmanager | 2025-06-18 |
| SUSE | SUSE-SU-2025:01988-1 | MP4.3 SLE15 SLE-m5.0 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 oS15.6 | golang-github-prometheus-node_exporter | 2025-06-18 |
| SUSE | SUSE-SU-2025:01990-1 | MP4.3 SLE15 oS15.6 | golang-github-prometheus-prometheus | 2025-06-18 |
| SUSE | SUSE-SU-2025:02006-1 | SLE15 oS15.6 | govulncheck-vulndb | 2025-06-18 |
| SUSE | SUSE-SU-2025:01991-1 | SLE15 oS15.6 | grafana | 2025-06-18 |
| SUSE | SUSE-SU-2025:01961-1 | SLE-m5.1 | grub2 | 2025-06-16 |
| SUSE | SUSE-SU-2025:01596-2 | SLE15 | helm | 2025-06-12 |
| SUSE | SUSE-SU-2025:20380-1 | SLE-m6.0 | iputils | 2025-06-12 |
| SUSE | SUSE-SU-2025:01487-2 | MP4.3 SLE15 SES7.1 oS15.6 | java-11-openjdk | 2025-06-16 |
| SUSE | SUSE-SU-2025:01954-1 | SLE15 oS15.6 | java-1_8_0-openj9 | 2025-06-13 |
| SUSE | SUSE-SU-2025:01982-1 | MP4.2 SLE15 SLE-m5.1 SLE-m5.2 SES7.1 oS15.3 | kernel | 2025-06-17 |
| SUSE | SUSE-SU-2025:20413-1 | SLE-m6.0 | kernel | 2025-06-17 |
| SUSE | SUSE-SU-2025:20408-1 | SLE-m6.0 | kernel | 2025-06-17 |
| SUSE | SUSE-SU-2025:01983-1 | SLE12 | kernel | 2025-06-17 |
| SUSE | SUSE-SU-2025:01919-1 | SLE15 | kernel | 2025-06-12 |
| SUSE | SUSE-SU-2025:01951-1 | SLE15 | kernel | 2025-06-13 |
| SUSE | SUSE-SU-2025:01967-1 | SLE15 | kernel | 2025-06-16 |
| SUSE | SUSE-SU-2025:01972-1 | SLE15 | kernel | 2025-06-17 |
| SUSE | SUSE-SU-2025:01918-1 | SLE15 SLE-m5.3 SLE-m5.4 | kernel | 2025-06-12 |
| SUSE | SUSE-SU-2025:01966-1 | SLE15 SLE-m5.5 oS15.5 | kernel | 2025-06-16 |
| SUSE | SUSE-SU-2025:01964-1 | SLE15 oS15.6 | kernel | 2025-06-16 |
| SUSE | SUSE-SU-2025:01965-1 | SLE15 oS15.6 | kernel | 2025-06-16 |
| SUSE | SUSE-SU-2025:02000-1 | SLE15 oS15.6 | kernel | 2025-06-18 |
| SUSE | SUSE-SU-2025:01945-1 | SLE15 oS15.6 | kubernetes-old | 2025-06-13 |
| SUSE | SUSE-SU-2025:01940-1 | oS15.5 oS15.6 | kubernetes1.23 | 2025-06-13 |
| SUSE | SUSE-SU-2025:01941-1 | oS15.5 oS15.6 | kubernetes1.24 | 2025-06-13 |
| SUSE | SUSE-SU-2025:20394-1 | SLE-m6.1 | less | 2025-06-13 |
| SUSE | SUSE-SU-2025:01939-1 | SLE15 SES7.1 | libcryptopp | 2025-06-13 |
| SUSE | SUSE-SU-2025:20375-1 | SLE-m6.0 | libsoup | 2025-06-12 |
| SUSE | SUSE-SU-2025:20379-1 | SLE-m6.0 | open-vm-tools | 2025-06-12 |
| SUSE | SUSE-SU-2025:20406-1 | SLE-m6.0 | openssl-3 | 2025-06-17 |
| SUSE | SUSE-SU-2025:02001-1 | SLE12 | pam | 2025-06-18 |
| SUSE | SUSE-SU-2025:01748-2 | SLE15 | postgresql15 | 2025-06-12 |
| SUSE | SUSE-SU-2025:01952-1 | SLE15 oS15.6 | python-Django | 2025-06-13 |
| SUSE | SUSE-SU-2025:20407-1 | SLE-m6.0 | python-cryptography | 2025-06-17 |
| SUSE | SUSE-SU-2025:01999-1 | SLE12 | python-requests | 2025-06-18 |
| SUSE | SUSE-SU-2025:01998-1 | SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.3 | python-requests | 2025-06-18 |
| SUSE | SUSE-SU-2025:20412-1 | SLE-m6.0 | python-setuptools | 2025-06-17 |
| SUSE | SUSE-SU-2025:01997-1 | SLE12 | python3-requests | 2025-06-18 |
| SUSE | SUSE-SU-2025:01466-1 | SLE15 | rabbitmq-server | 2025-06-11 |
| SUSE | SUSE-SU-2025:01548-1 | SLE15 | rabbitmq-server313 | 2025-06-11 |
| SUSE | SUSE-SU-2025:20403-1 | | screen | 2025-06-13 |
| SUSE | SUSE-SU-2025:20395-1 | SLE-m6.1 | sqlite3 | 2025-06-13 |
| SUSE | SUSE-SU-2025:20405-1 | SLE-m6.0 | systemd | 2025-06-17 |
| SUSE | SUSE-SU-2025:01946-1 | SLE15 oS15.6 | thunderbird | 2025-06-13 |
| SUSE | SUSE-SU-2025:20410-1 | SLE-m6.0 | ucode-intel | 2025-06-17 |
| SUSE | SUSE-SU-2025:01942-1 | SLE15 | valkey | 2025-06-13 |
| SUSE | SUSE-SU-2025:01921-1 | SLE12 | wget | 2025-06-12 |
| SUSE | SUSE-SU-2025:01968-1 | SLE15 oS15.6 | wireshark | 2025-06-16 |
| SUSE | SUSE-SU-2025:01978-1 | MP4.3 SLE15 oS15.4 | xorg-x11-server | 2025-06-17 |
| SUSE | SUSE-SU-2025:01981-1 | SLE15 | xorg-x11-server | 2025-06-17 |
| SUSE | SUSE-SU-2025:01977-1 | SLE15 SES7.1 | xorg-x11-server | 2025-06-17 |
| SUSE | SUSE-SU-2025:01979-1 | SLE15 oS15.5 | xorg-x11-server | 2025-06-17 |
| SUSE | SUSE-SU-2025:01980-1 | SLE15 oS15.6 | xorg-x11-server | 2025-06-17 |
| SUSE | SUSE-SU-2025:01975-1 | SLE15 | xwayland | 2025-06-17 |
| SUSE | SUSE-SU-2025:01974-1 | SLE15 oS15.6 | xwayland | 2025-06-17 |
| SUSE | SUSE-SU-2025:01904-1 | oS15.4 | yelp | 2025-06-11 |
| Ubuntu | USN-7571-1 | 14.04 | c3p0 | 2025-06-16 |
| Ubuntu | USN-7536-2 | 20.04 22.04 24.04 24.10 25.04 | cifs-utils | 2025-06-16 |
| Ubuntu | USN-7569-1 | 16.04 20.04 22.04 | dojo | 2025-06-16 |
| Ubuntu | USN-7576-1 | 22.04 24.04 24.10 25.04 | dwarfutils | 2025-06-18 |
| Ubuntu | USN-7565-1 | 16.04 18.04 | libsoup2.4 | 2025-06-11 |
| Ubuntu | USN-7550-7 | 22.04 | linux-nvidia-tegra-igx | 2025-06-13 |
| Ubuntu | USN-7567-1 | 14.04 16.04 18.04 20.04 22.04 24.04 24.10 25.04 | modsecurity-apache | 2025-06-16 |
| Ubuntu | USN-7575-1 | 22.04 | mujs | 2025-06-18 |
| Ubuntu | USN-7572-1 | 22.04 24.04 24.10 25.04 | node-katex | 2025-06-18 |
| Ubuntu | USN-7555-3 | 20.04 | python-django | 2025-06-17 |
| Ubuntu | USN-7555-2 | 22.04 24.04 24.10 25.04 | python-django | 2025-06-16 |
| Ubuntu | USN-7570-1 | 18.04 20.04 22.04 24.04 24.10 25.04 | python3.13, python3.12, python3.11, python3.10, python3.9, python3.8, python3.7, python3.6 | 2025-06-16 |
| Ubuntu | USN-7568-1 | 14.04 16.04 18.04 20.04 22.04 24.04 24.10 25.04 | requests | 2025-06-16 |
| Ubuntu | USN-7566-1 | 22.04 24.04 24.10 25.04 | webkit2gtk | 2025-06-11 |
| Ubuntu | USN-7573-2 | 16.04 18.04 20.04 | xorg-server, xorg-server-hwe-16.04, xorg-server-hwe-18.04 | 2025-06-18 |
| Ubuntu | USN-7573-1 | 22.04 24.04 24.10 25.04 | xorg-server, xwayland | 2025-06-17 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Filesystems and block layer
Memory management
Networking
Virtualization and containers
Miscellaneous
Page editor: Joe Brockmeier
