LWN.net Weekly Edition for July 18, 2024
Welcome to the LWN.net Weekly Edition for July 18, 2024
This edition contains the following feature content:
- SUSE asks openSUSE to consider name change: concerns over branding and project governance drive an extensive discussion in the openSUSE community.
- Nix alternatives and spinoffs: there are a number of choices available for people wanting to work with a Nix-like system.
- A look at Linux Mint 22: the latest version of this popular Ubuntu derivative distribution.
- A hash table by any other name: the proposed "Rosebush" kernel data structure.
- Development statistics for the 6.10 kernel: where the contributions to 6.10 came from.
- Two more LSFMM+BPF 2024 sessions:
- Hierarchical storage management, fanotify, FUSE, and more: a discussion on implementing HSM using fanotify or FUSE and some problems encountered, especially with regard to executing from files that are not local.
- Changing the filesystem-maintenance model: a discussion on ways to prevent filesystem bugs that should be caught earlier from reaching the mainline, by changing how filesystem testing is done.
- Reports from OSPM 2024, part 1: the first set of sessions held at the 2024 Power Management and Scheduling in the Linux Kernel (OSPM) Summit.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
SUSE asks openSUSE to consider name change
SUSE has, in a somewhat clumsy fashion, asked openSUSE to consider rebranding to clear up confusion over the relationship between SUSE the company and openSUSE as a community project. That, in turn, has opened conversations about revising openSUSE governance and more. So far, there is no concrete proposal to consider, no timeline, and not even a process for the community and company to follow to make any decisions.
The openSUSE name came about after Novell acquired SUSE, and
then renamed SUSE Linux to openSUSE in 2006, at the same time it
introduced SUSE Linux Enterprise products. The openSUSE project
followed with a board appointed
by SUSE in 2007. Since the inception of the openSUSE board, its
chair has been appointed by the corporate entity behind SUSE, which
has changed several times along the way, with the other seats elected by members of the
openSUSE project. The project has always been "sponsored by" the
corporate entity without an independent organization of its own to hold
trademarks, manage infrastructure, or to accept donations and
sponsorships.
On July 7, openSUSE board member Shawn Dunn started a discussion on the project mailing list with a request from SUSE to consider rebranding the project. This was a follow-up to a talk given at the openSUSE Conference 2024 making the case that it was time to pursue a new name for the project.
The openSUSE project has had a few occasions to ponder its relationship with the SUSE mothership of late. The openSUSE board has discussed an independent foundation and governance in the past, but that has not yielded results. Project member Patrick Fitzgerald founded a separate Geeko Foundation as a UK not-for-profit to handle money for openSUSE-specific activities. In 2023, the project held a contest to select new logos, which caused a ruckus in the community. The project also faced uncertainty about the future of openSUSE Leap due to SUSE's plans for a new "Adaptable Linux Platform" (ALP). An announcement in January indicated that Leap will go forward, but the episode underscores the unequal relationship between SUSE and openSUSE.
A request
The talk, co-presented by Robert Sirchia and Richard Brown, was called "We're all grown up: openSUSE is not SUSE". Both are SUSE employees, but Sirchia said he was officially speaking for SUSE while Brown was taking the community perspective and not officially representing SUSE. Each made the case that openSUSE should be rebranded.
SUSE, said Sirchia, is committed to openSUSE but concerned about the SUSE brand being diluted, because "the combined breadth of SUSE and openSUSE offerings are hard to define and defend". The blurred line between SUSE and openSUSE has become "a constant conversation" that causes distractions and detracts from the work of the company and community. Even though SUSE controls the openSUSE trademark, Sirchia made it clear that it was a request to have a conversation about rebranding—not a demand—with no specific timeline to reach a decision. But, he said, it was a conversation that should be done openly, "not something done in a backroom", and all ideas were welcome.
Brown said that, "purely from the community side", he sees similar problems with the openSUSE brand causing "distraction and detraction". One of the areas of confusion he called out was with openSUSE Leap, which is based on SUSE Linux Enterprise. Since Leap shares packages with SUSE's enterprise products, SUSE may not accept contributions that it does not want carried over to its commercial products, which can be confusing and discouraging for openSUSE community members.
Brown suggested that individual projects should stop using the openSUSE name entirely, and to have projects go forward under their current names (e.g. Tumbleweed, Leap, MicroOS) and "maybe shove Linux after some of them". The larger openSUSE project, he said, should create an umbrella similar to the Cloud Native Computing Foundation (CNCF) with a uniform trademark policy, code of conduct, and shared resources and infrastructure, but with each project keeping its own identity under the umbrella. To that end he said that he was already planning to drop "openSUSE" from the openSUSE Aeon project that LWN covered in June. Neither Brown nor Sirchia put forward suggestions for what a new name for the umbrella organization should be.
Reactions
The request to consider a name change brought up some strong
reactions on the mailing list. Axel Braun accused SUSE
of "putting a gun on the project
", and Lukáš Krejza said that it
looks like SUSE wants to take advantage of the community's work
without having to protect the community's trademarks.
Krejza also asked what other changes were coming, and if this
meant that SUSE's policy of doing
development in openSUSE Factory first was going to change.
Jeff Mahoney, who leads SUSE's Linux Systems organization, which
has more than 200 developers who contribute to openSUSE, said
that the problem is not primarily the difficulty in protecting trademarks, but
the "constant confusion
" about where the lines are drawn between openSUSE and
SUSE. Facebook page and forum administrator Jim Henderson also
complained that many users don't know the difference between SUSE and
openSUSE and that it takes multiple explanations to get the point across.
Mahoney said that the "Factory First" policy was sometimes
violated, but that violations were swiftly fixed when they happened. Further,
he said that there has been no talk of changing that policy and he wouldn't
support such a change if the topic came up.
Douglas DeMaio sought
to make a case for rebranding the project
from a marketing point of view, and noted that sharing a brand meant
that neither SUSE nor openSUSE fully controlled its own brand—to
the detriment of both. He pointed out that if one entity had bad
publicity, it also spilled over to the other. "For example, if
openSUSE faces a security issue, customers might associate this problem
with SUSE as well, even if SUSE is not affected.
" In addition, the
brands might compete with one another or lose customers to
"competitors with clearer and more distinct branding
". In another
message, he argued that openSUSE Leap had probably been harmful to SUSE's
growth and asked the community to think of SUSE as "a big brother here
trying to guide a sibling to a new, more successful place
".
"No confidence"
As the conversation continued, it drifted beyond branding into the
related topics of openSUSE's governance and the Geeko
Foundation. During his talk, Brown had alluded to the board "reinterpreting its
mandate and doing stuff that it shouldn't". The project's guiding
principles state that the board is empowered to "provide guidance
and support existing governance structures, but shouldn't direct or
control development
", but during discussion on the mailing list Brown claimed the board had exceeded
this authority. He said
that the board had "intervened and forcefully changed who was
allowed to present
" about the name change at the openSUSE
Conference on behalf of SUSE. He also complained that the board had
improperly intervened in forcing a continuation of openSUSE's MicroOS
Desktop KDE after the edition had been removed due to a lack of
maintainers. "I currently hold no confidence in the current
openSUSE Board and think it's absolutely essential the openSUSE
project establishes a new governance model.
"
DeMaio took
the blame for a miscommunication about the talk, and asked to
"just move forward
". Attila Pinter said
that the project had seen "quite a few communication errors or lack
of communication
" from the board and asserted that the matter
deserved more attention than acknowledgment of fault. "Was there
any specific reason why the Board decided who can and cannot give the
presentation on the rebranding?
" DeMaio brushed
off Pinter's question in his reply. When Pinter followed
up, DeMaio did not respond at all.
Gertjan Lettink weighed
in on July 12 with an "open letter" to the board and project,
saying that the project has to rebrand, but that there was "no
good way the Board could drive the resulting changes coming
up
". He said that the board had failed to show "any real
proactive leadership in some situations where it should have
",
which was why he resigned
from the openSUSE board in April.
Clarification
Pinter expressed
support for the rebranding, but said that if that effort moves
forward "some aspects of the Geeko Foundation need to be
clarified
". Simon Lees agreed
with Pinter, but said that the discussion needed to go beyond the
foundation to the general structure of the project. Mahoney wrote
that the three topics are intertwined and "if handled properly, present a
unique opportunity for the project to re-establish itself
".
Mahoney observed that openSUSE had a small community compared to
similar projects like Fedora, Arch Linux, and Ubuntu. He said that
there were a few reasons for that: the perception that
contributing to openSUSE was "like working for SUSE for free
",
an unclear governance structure, and lack of a clear mission for
openSUSE:
I don't have a strong opinion on the renaming, but it could be an opportunity to revitalize the project under a new name, new governance, and new independence. That, in turn, could make the project more attractive to new contributors and users who aren't SUSE employees. With projects that used to be wildly popular on the decline, it could be the right opportunity to make a splash.
Dunn replied
that he was looking into governance of other projects and what
openSUSE's could look like. "Because at the moment, there's just
not anything else on the table, and we're all just kind of flailing
our arms.
"
Henne Vogelsang wrote
that Mahoney's list of reasons was incomplete and misleading. "The reason we are
only able to sustain a small (shrinking) community is that we, as a
community, do not invest too much into growing our community. Full
stop.
" The project, he said, should be focusing on improving its
infrastructure, going out and recruiting new contributors, and working
harder to make existing contributors happy. "My suggestion to all
members of the openSUSE Community: Stop talking about grand ideas, get
to work.
"
Dunn said
that he couldn't disagree more. If Vogelsang's ideas for
attracting new contributors were viable, Dunn said, "it would have
worked by now
." Tony Walker replied
that he had moved to openSUSE after years as a Debian contributor and
the looser structure of the openSUSE project gave him pause. The trend
towards more formalization of communication, structure, and governance
in the larger ecosystem has been a benefit, he said. "A lot of
contributors and users find the stability of that structure very
comforting.
"
Sponsors would also find a more formal structure comforting—specifically, one that separates openSUSE's money from SUSE's—according to several people participating in the conversation. Gertjan Lettink wrote that several offers of sponsorship for openSUSE were lost because sponsors did not want to send money to SUSE. Sarah Julia Kriesch related a similar tale.
Foundation
The Geeko Foundation, according to the "Geeko Foundation Update" talk from the openSUSE Conference 2024, has raised €25,000 in the past 12 months. Currently the money is used to fund travel sponsorships.
Even though the foundation is providing a way for individuals and
sponsors to donate money without having it pass through SUSE, Martin
Schröder pointed
out that it's not the openSUSE foundation. The
organization is not accountable to openSUSE's board or directly to the
openSUSE community. Pinter asked
whether the board had an agreement with the foundation, or if there
was a document "that clearly defines the roles and boundaries of
the foundation
". He also noted that Fitzgerald holds 75% of the
voting power for the foundation, and asked if there was a reason for
this.
Fitzgerald replied
that he was not aware that he held majority voting power. There are
three trustees, he said, "and it was my assumption that control
would be divided amongst them equally. I'll get back to all of you on
this.
" As for an agreement with the board, he said that there was
nothing formal but perhaps there should be. He also pointed to the
organization's founding
document and other
filings. Lees said
that, speaking unofficially as an openSUSE board member, he sees the foundation as a
separate entity that helps fill, on a case-by-case basis, needs that the
project cannot handle otherwise. "I think it makes sense for it to
continue to exist in that role until these discussions around name
governance and structure are settled.
"
Or else?
The discussion was started under the premise that SUSE was making a
request, not a demand, of the openSUSE community. However, after a bit
more than a week of discussion, it was strongly suggested there was a
right and a wrong decision—and the wrong decision could have
unpleasant consequences. On July 15, Brown wrote
that if the project failed to act on SUSE's request to stop using the
SUSE brand "we will be choosing to decrease the good will between
SUSE and openSUSE
". In that event, he said that it was unlikely
that SUSE would escalate matters to force the name change: "What I would
imagine is an outcome that's actually far worse -
Apathy and a tendency to put priorities elsewhere.
"
SUSE, Brown said, has other open-source projects that it sponsors around its products. If openSUSE shows that it is not aligned with SUSE's interests, he said that he expected SUSE would refocus its efforts on projects that are aligned:
SUSE doesn't want history to record that it was the big mean corporation that forced its community to do something. But just because things are being said nicely doesn't mean they should be ignored.
[...] Ultimately, I believe that if openSUSE continues to travel in a direction that hinders the SUSE brand, or ignores the need to address [its] governance issues, we need to be prepared for history recording that openSUSE drove itself to obsolescence by failing to listen to the needs of one of its largest stakeholders.
After Brown's email, several participants, including Pinter, Lees, and Fitzgerald, signaled agreement with the rebranding and with the need for governance changes.
Assuming the discussion concludes, or simply tapers off, with support for rebranding, the next steps are still unclear. Details such as the timing, the method for choosing a new name, and whether SUSE might support a separate legal entity for the project remain unaddressed, along with many other related questions. One hopes, for the sake of the project's users and contributors, that these questions are addressed sooner rather than later.
Nix alternatives and spinoffs
Since the disagreements that led to Eelco Dolstra stepping down from the NixOS Foundation board, there have been a number of projects forked from or inspired by Nix that have stepped up to compete with it. Two months on, some of these projects are now well-established enough to look at what they have to offer and how they compare to each other. Overall, users have a number of good options to choose from, whether they're seeking a compatible replacement for Nix (the configuration language and package manager) or NixOS (the Linux distribution), or something that takes the same ideas in a different direction.
To establish a baseline for the comparison, the Nix project is still operating, with business proceeding as usual. NixOS 24.05 was released on schedule at the end of May, and 24.11 will follow later this year. Despite that, the project has seen more contributors leave — and continuing issues with upheaval in the community. While the project seems unlikely to go away, it has definitely suffered a blow.
Auxolotl
Auxolotl (Aux) is a "proactive friendly fork
of the Nix ecosystem
" (according to the description on its
wiki). The fork was started by Jake Hamilton, but has already
seen many other contributors come on board. Since the code bases have not had much
time to diverge, Auxolotl is still compatible with Nix, and a computer
running one can seamlessly install packages from the other.
While the documentation advises that "Auxolotl may make technical improvements to
Nix and NixOS in the future
", the main focus of the project is not on
changing the fundamental design of NixOS, but on continuing development with
a radically different community structure.
In an
introductory post, Hamilton describes Auxolotl's approach to governance:
Authority is given from a central Steering Committee that establishes the direction of the project and delegates that authority to responsible parties. The structures in the Aux governance model are:
- Committees: Bodies managing logistics for the project unrelated to technical implementation
- Special Interest Groups: Teams dedicated to specific parts of the Aux ecosystem
- Working Groups: Teams formed for Special Interest Groups to collaborate with one another
Since Auxolotl is still so new, special interest groups (SIGs) are currently being set up just by asking volunteers to get involved. In the future, as the project grows, committees (including the Steering Committee) and SIGs will be filled using elections. Currently, the Steering Committee is composed of Hamilton, Isabel Roses, and Skyler Grey as a bootstrap group to get the project off the ground. The Steering Committee is supposed to leave technical decisions up to the SIGs. Of course, the NixOS Foundation board was also supposed to leave technical decisions up to NixOS's various contributors. As with any open-source project, managing the community can be just as hard or harder than working on the code, and it remains to be seen how well Auxolotl will adapt.
Still, Auxolotl has gained a lot of interest for such a young project, and has already made good progress on establishing infrastructure and promoting community discussion. The project provides a template for anyone wishing to install a new Auxolotl system, or migrate their existing NixOS system. Auxolotl is also compatible with another Nix spinoff: Lix.
Lix
Lix is a fork of Nix (the package manager), rather than NixOS (the operating system), with the goal of having a healthier community and a focus on backward compatibility. As a fork, Lix remains compatible with existing package definitions and configurations. Lix can even be used transparently as the Nix implementation for a NixOS or Auxolotl system. Recent versions of Nix have included some backward-incompatible changes that nixpkgs, NixOS's repository of package definitions, has not yet caught up with. So, in order to maintain compatibility with the package definitions that users are actually using, Lix is forked from an earlier version of the Nix code base.
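Because the language itself is shared, a minimal package definition in the Nix language evaluates the same way under either implementation. The following is an illustrative sketch, not taken from any real package; the file names and build commands are invented:

```nix
# Hypothetical example package; attribute values are illustrative.
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  pname = "hello-example";
  version = "1.0";
  src = ./.;                          # build from the current directory
  buildPhase = "cc -o hello hello.c";
  installPhase = ''
    mkdir -p $out/bin
    cp hello $out/bin/
  '';
}
```

An expression like this can be built with either `nix-build` or Lix's drop-in equivalent, which is what makes Lix usable underneath an existing NixOS configuration.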
While Lix is getting started, the project is being led by a core team that votes on technical topics as they arise. The project's final governance structure is not decided yet, but is likely to end up being more distributed and community-led than Nix, with the contributors soundly rejecting the idea of having a benevolent dictator for life.
Lix has incorporated some changes that have been lagging in the upstream
implementation. The project has already switched to
using Meson as a build system (instead of
plain Make) — a change that Dolstra had been blocking for some time.
Other improvements listed in the first
release announcement include performance improvements, improved error
messages, improvements to the interactive user interface, and bug fixes.
According to Lix's
web site, "Lix
intends to be an evolving language – a robust language versioning system will
allow the language to grow and evolve without sacrificing
backwards-compatibility or correctness.
"
That isn't the only large-scale change that Lix is planning to make, however.
The project also intends to start phasing in the use of Rust — not as a
from-scratch rewrite, but as an incremental addition to the existing C++
code base. Whether this means that Lix will be able to share any code with Tvix —
the existing from-scratch rewrite of Nix — remains to be seen, but the project's
FAQ explicitly welcomes collaboration
with other projects: "We welcome anyone who wants to develop for both Lix and
another implementation – including CppNix and Tvix, and our open-source
implementation absolutely allows any developer to integrate our code into any
license-compatible project.
"
Tvix
Tvix is a project, started in 2021 by The Virus Lounge (TVL), to rewrite Nix in Rust. TVL is a loose association of people who work together to develop open-source software. Any from-scratch rewrite is a huge undertaking, so it's not surprising that Tvix is not yet compatible with Nix. But the project has been making progress. The most recent development update explains how the project is using nixpkgs as a giant compatibility test to identify and prioritize places that do not yet match the behavior of Nix.
Tvix is largely able to evaluate the core language, and is now working on all the other components that go into a package manager. Recently, it added a server that can share Nix packages over the same interface as the original Nix implementation. So although it is not yet usable as a drop-in replacement, it seems likely that the additional interest from and collaboration with other implementations could make it a viable alternative sooner rather than later. Plus, using Rust does have some ancillary benefits, such as allowing Tvix to compile to WebAssembly.
Tangram and Brioche
There are also some projects that seek to build on the ideas that Nix popularized, without being direct successors. Tangram and Brioche are both hybrid build systems/package managers that use TypeScript as a configuration language. Although based on separate code bases, Brioche is being developed by Kyle Lacy, who used to work for Tangram Inc., the company behind Tangram — so the projects share some architectural similarities.
Like Nix, these package managers use a Turing-complete configuration language, sandboxing, and content-addressed storage to specify dependencies exactly and share them between computers. On top of that, they add some interesting features. For example, Brioche has built-in support for producing packed executables that bundle all their dependencies in the application itself. However, the main appeal of both projects is definitely the idea of using a general-purpose language to specify a build.
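To make the content-addressed idea concrete, here is a small, self-contained TypeScript sketch. The `BuildSpec` shape and `storeKey()` helper are invented for illustration; they are not the actual Tangram or Brioche API:

```typescript
import { createHash } from "node:crypto";

// Hypothetical build specification; the field names are illustrative,
// not taken from Tangram or Brioche.
interface BuildSpec {
  name: string;
  sources: string[];
  dependencies: string[]; // store keys of builds this one depends on
  command: string;
}

// Content addressing: the store key is derived from every input, so
// builds with identical inputs map to one shared store entry, and any
// change to an input yields a different key.
function storeKey(spec: BuildSpec): string {
  // Serialize with sorted keys so the hash is deterministic.
  const canonical = JSON.stringify(spec, Object.keys(spec).sort());
  const digest = createHash("sha256").update(canonical).digest("hex");
  return `${digest.slice(0, 16)}-${spec.name}`;
}
```

Because the key changes whenever any input changes, cached build results can be shared safely between machines: matching keys imply matching inputs.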
Guix
Of course, Tangram and Brioche are not the only projects to take inspiration from Nix. They aren't even the projects that most closely embrace the idea of coupling Nix's model to a more general-purpose language. Guix is a package manager of the same kind that uses Scheme as its configuration language.
Guix has not been much affected by the drama in the Nix community, so the comparison that LWN published earlier this year remains valid. Guix was a smaller project than Nix, but with contributors leaving Nix for Auxolotl, Lix, and other alternatives, the comparison might be more favorable now. In any case, Guix remains a stable operating system with an approach to package management similar to NixOS's.
Conclusion
Despite the number of maintainers choosing to move on to other projects, Nix and NixOS are still receiving plenty of attention. With companies like Determinate Systems, Anduril, and many others backing it, the project seems likely to remain around for some time. But users who seek an alternative have several options.
For users who want a full-fledged operating system with a relatively long history of stability, Guix remains a good option — although the project's stance of only packaging and supporting open-source software could be troublesome for people who still depend on some proprietary software. Users who want to keep their existing NixOS configurations and make a lateral move into a community with many of the same contributors but a different governance structure may find Auxolotl suits their needs.
Users who want to evaluate Nix code (as part of an Auxolotl deployment or otherwise) but don't want to continue relying on the original Nix implementation can switch to using Lix, and expect a backward-compatible experience. Users with niche needs (such as WebAssembly support), or who feel strongly about memory-safety, could become involved in the development of Tvix.
And users who want the benefits of Nix as a build system, including precise dependency management and better caching, but who don't have a vested interest in the Nix ecosystem might try Brioche, Tangram, or one of the other systems that has embraced ideas from Nix.
A look at Linux Mint 22
Linux Mint has released a beta of its next long-term-support (LTS) release, Linux Mint 22 (code-named "Wilma"), based on Ubuntu 24.04. Aside from the standard software updates that come with any major upgrade, some of Wilma's biggest selling points are what it doesn't have: namely, snap packages and GNOME applications that have broken theming on non-GNOME desktops like Mint's Cinnamon desktop.
Over the years, Ubuntu has doubled down on the snap format for shipping software. Mint decided to buck that trend, disabling the Snap Store starting with Linux Mint 20, and has continued to work around snap-formatted software since. Likewise, Mint has continued to use GTK to develop the Cinnamon desktop and its own applications, but as more GNOME-specific conventions are woven into the GTK libraries, Mint has to do more work to ensure that default applications carried over from GNOME (such as the GNOME Calculator, or Calendar) blend in with the desktop.
Installation
Even though Mint is based on Ubuntu, it takes very little from its parent distribution in terms of system management software and utilities. This is apparent from the get-go when one sits down to install the distribution, which has its own installer rather than reusing Ubuntu's subiquity installer.
Linux Mint installation is short and simple. The first screen of the install wizard suggests that the user read the release notes, before proceeding with the installation. During the rest of the install, users are only asked to make a few decisions such as their language preference, keyboard layout, whether to install multimedia codecs, and whether to erase the disk to install Mint or manage partitioning manually. Unfortunately, unlike many other Linux distributions at this point, Mint does not make setting up full disk encryption part of the default install. Users have to dive into "Advanced features" and select LVM partitioning to have the option of setting up disk encryption. Users are offered the option of encrypting their home directory during user setup.
After installation, Mint's Welcome application is displayed and offers a series of configuration options and tweaks that users might want to perform on a new system. This ranges from choosing a different widget theme and color scheme to configuring system settings, adjusting the system firewall, browsing available software, or setting up system snapshots.
Snapshots
Timeshift is a Mint-developed system snapshot and restore utility that is billed as being similar to the Time Machine tool in macOS. It has two methods of taking snapshots: rsync and Btrfs. The default is rsync, because Mint does not create Btrfs volumes when installing. (Btrfs is an option in advanced partitioning, but creating a Btrfs partitioning scheme is an exercise left to the user.) Snapshots are saved under /timeshift on the root partition (/) by default, but Timeshift can be configured to save to an external disk as well. Each snapshot is a full system snapshot. It is meant to be a system-recovery tool, so home directories are excluded from snapshots in the default configuration, but users can easily adjust this if needed. One missing feature is the ability to sync snapshots to remote filesystems, which seems an odd omission given that Timeshift uses rsync to take the snapshots.
In rsync mode snapshots are taken using hard links to deduplicate files. That means that the first snapshot can take quite a bit of space, but the subsequent backups consume far less space. Users can restore individual files or full system snapshots with Timeshift. Individual files can be restored by browsing the filesystem tree of a snapshot using the file manager. If a user selects a full system snapshot, Timeshift does a dry-run calculation and shows all of the files that will be created, restored, or deleted. Selecting Next gives the user a warning about data modification, and then selecting Next again runs the restore and reboots the system. In testing, it seemed to work fine, and did not trash the system or eat any data.
Installing software
Mint offers a pretty good starter set of software for a desktop system. It includes typical options like LibreOffice, a document scanner, Firefox, Thunderbird, Transmission for users who want to use BitTorrent, as well as Rhythmbox and Celluloid for playing music and video files, respectively. If those applications aren't enough, Mint has its own software "store", imaginatively named Software Manager, for users to search for and install applications in Debian package format and Flatpak-packaged applications from Flathub. In Mint 22, the application received some performance enhancements, and a new security policy for Flatpaks. Specifically, only Flatpaks that have been verified by Flathub are displayed. Users can override this, but Software Manager will continue to display a warning if users view an unverified application.
As mentioned, one of the ways that Mint diverges from Ubuntu is that it disables support for snap packages by default, and the project maintains native Debian packages for some software that Canonical ships only in snap format. This includes Firefox and Thunderbird, but some software is still only available for Mint in snap format. For instance, a search for LXD only turns up the "wrapper" that installs LXD as a snap. Incus, the fork of LXD, is of course available as a native Debian package.
Since Mint is largely a retooled Ubuntu 24.04, it provides the same Linux kernel, system libraries, and tools that come with that version of Ubuntu. Specifically, it includes Linux 6.8.0, systemd v255.4, GNU C Library (glibc) 2.39, Binutils 2.42, and GCC 13.2.0. According to the new features page for Mint 22, it will follow Ubuntu's hardware enablement (HWE) kernel series. This means that it follows Ubuntu's rolling-release kernels that track newer upstream kernel releases instead of staying on 6.8.x for its entire lifecycle.
Twice removed
The bulk of software available for Linux Mint comes from Ubuntu's repositories. This means that Mint is not exercising any quality control or oversight for many of the packages, and it has no direct ability to fix a bug when it is found and reported. For example, a user reported errors during installation of the Bluefish editor for developers. Rick Calixte, a member of the Mint development team, replied that the errors would not impact running the program, and they were an artifact of Ubuntu switching to Python 3.12 for the Ubuntu 24.04 release, while Debian had still been on 3.11. His guidance was to report the bug upstream. Mint 22 also inherits Ubuntu's restrictions on the use of user namespaces, which is causing a number of applications to crash, according to this issue on GitHub.
Mint depends on Debian and Ubuntu when it comes to responding to security updates, unless a vulnerability happens to impact one of its own applications, such as this vulnerability that affected MintUpload. Mint does not seem to have a security team or point of contact to report security vulnerabilities to, or any information about its security policies in the FAQ or documentation. Since Linux Mint is popular primarily as a personal desktop, security may not be a top priority for many of its users; but Mint also advertises an OEM install and claims that the distribution is suitable for companies in its FAQ. One would expect security communications and contacts to be table stakes for OEM and corporate use cases.
Users are notified of updates, including security updates, via Mint's Update Manager application. The application displays a shield icon in the Mint taskbar, and if updates are available it will include a notification on the icon. Users can control how often Mint polls for new updates (the default is every two hours), and whether to show notifications for all updates or just security and kernel updates. It also offers a degree of fine-grained control that is usually absent from graphical update tools: users can enable automatic updates, block specific packages, and more.
Cinnamon, XApps, and Libadwaita
Mint's desktop, Cinnamon, is a fork of GNOME that started in 2011 as a reaction to GNOME 3. It has many forked applications and utilities as well, but it has continued to use GTK. It calls its forked applications XApps; the goal is to "produce generic applications for traditional GTK desktop environments" with modern toolkits but standard title bars, menus, etc. In April, the project mentioned that it was downgrading several applications, including GNOME Calculator, System Monitor, and GNOME Calendar, to avoid the GTK4/Libadwaita updates that caused GNOME applications to adopt styles that look out of place on Mint and other desktops.
GNOME's design language (its look and feel) is implemented through Libadwaita, a library that provides GNOME application developers with widgets, objects, and other user-interface (UI) tooling for their applications. GNOME tries to target not only desktops and workstations, but mobile devices such as phones and tablets as well. This means, as mentioned in LWN's coverage of GNOME 46, that applications may not conform to the expectations of desktop users.
Right now, the effort to de-GNOME-ify the default applications is a mixed success. The older versions of applications that Mint has swapped in, like GNOME Calculator, File Roller, and System Monitor, still carry some standard GNOME UI conventions, such as the hamburger button and the lack of traditional menus (File, Edit, View, and so on at the top of the window), that stand out next to non-GNOME applications. But they have not been upgraded to Libadwaita 1.5, which breaks theming for non-GNOME desktops and imposes conventions such as "adaptive dialogs": dialogs that are attached to the application's main window. That means that a window's child dialog can't be moved outside the parent window, which effectively renders the dialogs invisible when alt-tabbing through applications and blocks the main window while the dialog is open.
Longer term, the project has said that it will try to rally other GTK-based projects, like MATE and Xfce, around XApps, but at the moment the site for XApps is still under construction.
Cinnamon itself hasn't received a lot of major updates in Mint 22. The new features page indicates that it has seen some performance improvements and minor enhancements like better keybinding support for Cinnamon extensions (called "Spices"). That is not terribly surprising or disappointing: Cinnamon is a mature desktop that is not in need of major enhancements and its users probably prefer that it does not change a great deal between releases.
The final release of Linux Mint 22 is expected in late July and it will be supported through 2029. The beta period for Linux Mint is typically two weeks, but in the most recent "Monthly News" blog, project lead Clement Lefebvre indicated that users were reporting major bugs that needed to be addressed before declaring the release stable. Overall, Mint is a well-polished Linux desktop distribution. It's potentially a good choice for users who want a desktop system with a long lifecycle, a classic GNOME 2-ish look-and-feel, and compatibility with Ubuntu, without the emphasis on the snap packaging format.
A hash table by any other name
On June 25, Matthew Wilcox posted a second version of a patch set introducing a new data structure called rosebush, which "is a resizing, scalable, cache-aware, RCU optimised hash table." The kernel already has generic hash tables, though, including rhashtable. Wilcox believes that the design of rhashtable is not the best choice for performance, and has written rosebush as an alternative for use in the directory-entry cache (dcache) — the filesystem cache used to speed up file-name lookup.
Rosebush is intended to present roughly the same API as rhashtable, with the main difference for users being the performance of the two hash tables — something which is critically important to the dcache, since it is referenced during almost all filesystem operations. All hash tables have the same basic structure, but the details can be quite important for performance. The key of the value being looked up (a file name, for example) is hashed, and the hash is used to select a bucket from a top-level array of buckets. Multiple keys can have the same hash, however, so the hash table must either store multiple keys per bucket (the approach taken by rosebush and rhashtable) or use a more complicated bucket-selection algorithm (such as linear probing or cuckoo hashing). Rosebush uses arrays as fixed-size buckets, whereas rhashtable instead uses linked lists. While there are other differences between the two data structures, it is the avoidance of linked lists that Wilcox thinks will make the biggest difference.
To look up a key in a rosebush, the key is hashed. In the normal case, the rosebush has a two-level structure; the hash is first used to select a bucket from an array of buckets, and then looked up within that bucket using a linear scan. The array has a number of elements that is a power of two, so the last N bits of the hash can be used to efficiently select an entry. In order to adjust to the number of items stored in the rosebush, the top-level array is resizeable. This does mean that the occasional insertion into the table will need to allocate a new top-level array and rehash the keys into new buckets, but the average insertion only has to do a constant amount of work. If few enough items have been added to the rosebush (the exact number depends on the architecture), the rosebush will just skip the top-level array and fall back to using a single bucket. This saves some memory for rosebushes that are not actually being used. Buckets are stored with the hashes first, followed by the associated values. This ensures fewer cache lines need to be fetched to look up an element in a bucket. Finally, the value associated with the key is returned.
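A toy model may make the lookup path clearer. This is a sketch based on the description above, not the kernel code; the constants, names, and hash function are invented, and the resizing and single-bucket fallback are omitted.

```python
# Sketch of a rosebush-style two-level lookup: a power-of-two array of
# fixed-size buckets, indexed by the low bits of the hash. Each bucket keeps
# the hashes in one compact array and the values in a parallel array, so the
# scan reads hashes only and touches a value just on a hash match.
NBUCKETS = 8          # must be a power of two
BUCKET_SIZE = 4       # fixed-size buckets

buckets = [{"hashes": [], "values": []} for _ in range(NBUCKETS)]

def h(key):
    return hash(key) & 0xffffffff   # stand-in for the kernel's hash function

def insert(key, value):
    b = buckets[h(key) & (NBUCKETS - 1)]   # low bits pick the bucket
    assert len(b["hashes"]) < BUCKET_SIZE  # a real rosebush would resize here
    b["hashes"].append(h(key))
    b["values"].append((key, value))

def lookup(key):
    b = buckets[h(key) & (NBUCKETS - 1)]
    target = h(key)
    # Linear scan of the hash array; values are only checked on a hash match.
    for i, hv in enumerate(b["hashes"]):
        if hv == target and b["values"][i][0] == key:
            return b["values"][i][1]
    return None

insert("/etc/passwd", "dentry A")
insert("/etc/hosts", "dentry B")
assert lookup("/etc/passwd") == "dentry A"
assert lookup("/no/such/file") is None
```

Because the hashes sit together ahead of the values, the scan covers few cache lines, and only the matching entry's value is ever dereferenced.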
In order to efficiently support concurrent access, rosebush uses read-copy-update (RCU) to protect each bucket. There is a whole-hash-table spinlock that is taken during resizing operations, but the vast majority of accesses should only need to use RCU on the specific bucket being read from or updated. Since reading under RCU protection is approximately free, this means that the hash table has excellent concurrent read performance — a necessity for the dcache.
The lists rhashtable uses are intrusive, singly-linked lists; the kernel makes extensive use of such embedded list structures. Intrusive pointers do have downsides, however. For one thing, a structure with an embedded list entry cannot be stored in multiple lists, and the embedded entry consumes memory even when the structure is not stored in any list. The fact that rosebush does not use intrusive pointers allows an item to be stored in multiple rosebushes, or in the same rosebush under different keys; that benefit is a side effect of the performance-motivated decision to avoid linked lists.
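The single-list limitation is easy to see in miniature. In this hedged sketch (illustrative names, not kernel code), pushing a node onto a second intrusive list clobbers the one embedded link and corrupts the first list, while external containers have no such problem:

```python
# An intrusive list stores the link inside the object itself, so an object
# can be on only one such list at a time.
class Node:
    def __init__(self, name):
        self.name = name
        self.next = None   # the single embedded link

def push(head, node):
    node.next = head
    return node            # node becomes the new head

a, b = Node("a"), Node("b")
list1 = push(push(None, a), b)   # list1: b -> a
list2 = push(None, b)            # put b on a second list...
# ...but that overwrote b.next, so list1 lost its tail:
assert list1.next is None        # "a" is no longer reachable from list1

# Non-intrusive storage keeps the links outside the object, so one object
# can sit in any number of containers, as rosebush's bucket arrays allow.
ext1, ext2 = [a, b], [b]
assert ext1[1] is ext2[0]
```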
The main problem with linked lists in this context is pointer-chasing: linked lists require the CPU to access several potentially unrelated parts of memory in order to iterate through the bucket. Rosebush avoids this by using an array to hold bucket entries, reducing the number of cache misses incurred by iterating over a bucket. Wilcox explained the motivation in his cover letter:
Where I expect rosebush to shine is on dependent cache misses. I've assumed an average chain length of 10 for rhashtable in the above memory calculations. That means on average a lookup would take five cache misses that can't be speculated. Rosebush does a linear walk of 4-byte hashes looking for matches, so the CPU can usefully speculate the entire array of hash values (indeed, we tell it exactly how many we're going to look at) and then take a single cache miss fetching the correct pointer. Add that to the cache miss to fetch the bucket and that's just two cache misses rather than five.
In response to the first version of the patch set (from February), Herbert Xu disagreed, pointing out that an rhashtable is resized whenever the table reaches 75% capacity, so the average bucket contains only one item. Therefore uses of rhashtable will incur many fewer cache misses than Wilcox had predicted.
David Laight thought that was considering the wrong metric, however. Since the kernel never has any reason to look up an empty bucket, it is more interesting to consider the average length of non-empty buckets. Xu acknowledged the point, but didn't think it changed the ultimate analysis, since the average chain length should still be nowhere near 10 items.
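Laight's distinction can be checked with a quick simulation (my illustration, not part of the discussion): at a 75% load factor, the mean over all buckets is 0.75 entries, but the mean over the buckets a lookup can actually land in is closer to 1.4, which is higher yet nowhere near ten.

```python
# Throw 0.75 * nbuckets items into random buckets and compare the average
# occupancy of all buckets with that of non-empty buckets only.
import random

random.seed(1)
nbuckets = 1 << 16
items = int(nbuckets * 0.75)
counts = [0] * nbuckets
for _ in range(items):
    counts[random.randrange(nbuckets)] += 1

nonempty = [c for c in counts if c]
avg_all = sum(counts) / nbuckets
avg_nonempty = sum(nonempty) / len(nonempty)
assert abs(avg_all - 0.75) < 0.01
# Expected value is 0.75 / (1 - e**-0.75), about 1.42.
assert 1.3 < avg_nonempty < 1.5
```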
Reviewers were, as ever, skeptical in the face of assertions about performance without corresponding measurements. Peng Zhang asked "how much performance improvement is expected if it is applied to dcache?" Wilcox didn't reply to Zhang's question. The other reviewers did seem fairly happy with the second version of the patch set, mainly suggesting small changes to the documentation and test suite.
Development statistics for the 6.10 kernel
The 6.10 kernel was released on July 14 after a nine-week development cycle. This time around, 13,312 non-merge changesets were pulled into the mainline repository — the lowest changeset count since 5.17 in early 2022. Longstanding tradition says that it is time for LWN to gather some statistics on where the new code for 6.10 came from and how it got to the mainline; read on for the details.

As a reminder, much of the information that follows (and much that doesn't) can be had at any time from the LWN kernel source database, which is available to LWN subscribers.
A total of 1,918 developers contributed to 6.10; that is lower than recent releases but essentially equal to the 1,921 contributors to 6.5 in August of last year. Of those developers, 242 made their first kernel contribution in 6.10. The most active contributors to this release were:
Most active 6.10 developers
By changesets:
  Krzysztof Kozlowski     357   2.7%
  Kent Overstreet         270   2.0%
  Andy Shevchenko         226   1.7%
  Uwe Kleine-König        176   1.3%
  Darrick J. Wong         160   1.2%
  Jani Nikula             133   1.0%
  Matthew Wilcox          125   0.9%
  Ville Syrjälä           122   0.9%
  Eric Dumazet            120   0.9%
  Ian Rogers              102   0.8%
  Hans de Goede           101   0.8%
  Dmitry Baryshkov        101   0.8%
  Christoph Hellwig        94   0.7%
  Takashi Iwai             94   0.7%
  Geert Uytterhoeven       90   0.7%
  Arnd Bergmann            87   0.7%
  Damien Le Moal           87   0.7%
  Wolfram Sang             82   0.6%
  Namhyung Kim             78   0.6%
  Pierre-Louis Bossart     77   0.6%
By changed lines:
  Dmitry Baryshkov      64636  10.0%
  Darrick J. Wong       23781   3.7%
  Philipp Hortmann      18687   2.9%
  Bingbu Cao            14333   2.2%
  Boris Brezillon       14090   2.2%
  Wedson Almeida Filho  10335   1.6%
  Kent Overstreet        9144   1.4%
  David Howells          8347   1.3%
  Bitterblue Smith       6117   0.9%
  Hans de Goede          5821   0.9%
  Namhyung Kim           5792   0.9%
  Ian Rogers             5592   0.9%
  Arnd Bergmann          5492   0.8%
  Benjamin Tissoires     4924   0.8%
  Michal Wajdeczko       4804   0.7%
  Tushar Vyavahare       4647   0.7%
  Shahab Vahedi          4629   0.7%
  Fiona Klute            4485   0.7%
  Akhil R                4316   0.7%
  Jordan Rife            4135   0.6%
Krzysztof Kozlowski was the leading contributor of commits to the 6.10 kernel, producing over five changes for every day of this release cycle; this work was focused on refactoring throughout the driver subsystems and devicetree changes. Kent Overstreet's work was mostly focused on fixes for the bcachefs filesystem, but he also implemented much of the memory-allocation profiling subsystem. Andy Shevchenko contributed cleanups throughout the driver tree, Uwe Kleine-König continued the driver-refactoring work that has led to 1,883 changes since the 6.4 release, and Darrick Wong did a lot of work on the XFS filesystem, mostly focused on the in-progress online-repair functionality.
In the lines-changed column, Dmitry Baryshkov removed a number of header files that can be automatically generated at build time. Philipp Hortmann removed a couple of unloved drivers from the staging tree. Bingbu Cao added a number of Intel media drivers, and Boris Brezillon contributed the Panthor driver for Arm Mali GPUs.
The top testers and reviewers this time around were:
Test and review credits in 6.10
Tested-by:
  Daniel Wheeler             141  11.7%
  Kees Cook                   46   3.8%
  Pucha Himasekhar Reddy      43   3.6%
  Hans Holmberg               28   2.3%
  Dennis Maisenbacher         28   2.3%
  Jarkko Sakkinen             25   2.1%
  Mark Pearson                24   2.0%
  Arnaldo Carvalho de Melo    22   1.8%
  Philipp Hortmann            22   1.8%
  Atish Patra                 17   1.4%
  Björn Töpel                 16   1.3%
  Sujai Buvaneswaran          15   1.2%
  Pierre-Louis Bossart        14   1.2%
  Ryan Roberts                14   1.2%
Reviewed-by:
  Christoph Hellwig            303   3.4%
  Simon Horman                 237   2.7%
  Bard Liao                    165   1.9%
  Andy Shevchenko              149   1.7%
  Krzysztof Kozlowski          123   1.4%
  Konrad Dybcio                118   1.3%
  David Sterba                 116   1.3%
  AngeloGioacchino Del Regno   105   1.2%
  Dmitry Baryshkov             104   1.2%
  Jani Nikula                   93   1.1%
  Kees Cook                     90   1.0%
  Rob Herring                   86   1.0%
  Hans de Goede                 85   1.0%
  Andrew Lunn                   85   1.0%
These lists do not change much from one release to the next; the developers who are able to find the time for this sort of work tend to be in it for the long haul.
A total of 203 companies (that we know of) supported work on the 6.10 kernel; the most active of those were:
Most active 6.10 employers
By changesets:
  Intel                  2031  15.3%
                          980   7.4%
  Red Hat                 922   6.9%
  (Unknown)               891   6.7%
  Linaro                  838   6.3%
  (None)                  690   5.2%
  AMD                     610   4.6%
  Oracle                  443   3.3%
  Meta                    424   3.2%
  SUSE                    327   2.5%
  IBM                     321   2.4%
  Huawei Technologies     301   2.3%
  Qualcomm                247   1.9%
  Renesas Electronics     246   1.8%
  Pengutronix             221   1.7%
  (Consultant)            214   1.6%
  NVIDIA                  205   1.5%
  Arm                     160   1.2%
  NXP Semiconductors      135   1.0%
  Collabora               131   1.0%
By lines changed:
  Intel                 88245  13.6%
  Linaro                86382  13.3%
  (Unknown)             71831  11.1%
  Red Hat               46754   7.2%
                        37411   5.8%
  Oracle                28846   4.4%
  AMD                   25793   4.0%
  Collabora             21917   3.4%
  (None)                19151   2.9%
  Meta                  16586   2.6%
  Microsoft             14554   2.2%
  NVIDIA                10990   1.7%
  IBM                   10350   1.6%
  ST Microelectronics    8179   1.3%
  Bootlin                7746   1.2%
  Qualcomm               7636   1.2%
  Realtek                7509   1.2%
  SUSE                   7268   1.1%
  Arm                    7019   1.1%
  NXP Semiconductors     6910   1.1%
For the most part, this list looks as it usually does. It is worth noting, though, that Intel contributed more than twice as many changesets to 6.10 as any other company — over 15% of the total.
Tree flatness
One of the suggested topics for the upcoming Maintainers Summit (to be held on September 17) was whether the merge tree is too flat. In other words, are there too many maintainers sending pull requests directly to Linus Torvalds rather than going through intermediate-level maintainers? Using the treeplot utility from gitdm, one can see what this tree looks like. An unreadable version appears to the right; clicking on it (preferably on a system with a large monitor) will shed more light.
One conclusion that can be drawn is that, while the tree does indeed seem flat, with a lot of trees feeding directly into the mainline (note that 63 such trees have been collapsed into one line in the diagram), there is also a fair amount of work going through more than one repository. There would appear to be a slow trend toward adding levels to the hierarchy — but it is indeed slow.
Another way to look at this is to create a histogram showing how many levels of repository each commit went through:
Trees transited by each commit:
  Trees   Commits
    0          26
    1       8,242
    2       4,817
    3         227
The 26 commits that passed through zero trees before landing in the mainline were those committed directly by Torvalds. Nearly two-thirds of all commits went through a single tree before going upstream, but a substantial number also went through at least one other tree first. Whether that indicates an overly flat tree is hard to say.
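Those proportions can be confirmed directly from the table above:

```python
# Commits by the number of trees transited, from the histogram.
commits = {0: 26, 1: 8242, 2: 4817, 3: 227}
total = sum(commits.values())
assert total == 13312                 # matches the 6.10 changeset count
share_one_tree = commits[1] / total
assert 0.60 < share_one_tree < 0.67   # "nearly two-thirds" through one tree
assert commits[2] + commits[3] > 5000 # a substantial number through two or more
```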
One other thing worth noting: the trees shown in red in the plot are those that did not include a signed tag authenticating their creators. Torvalds has become increasingly insistent over the last few years that pull requests should point to signed tags, with the result that nearly every pull request he acts on now includes a signature. Most subsystem maintainers clearly do the same, but there is still the occasional unsigned pull that finds its way upstream.
In any case, it seems evident from the above that, even if the process could always benefit from improvements, it is working reasonably well. The kernel community continues to absorb changes at a high rate and put out releases on a predictable schedule. As of this writing, there are just over 12,000 changesets poised to flow into the mainline from linux-next, so the 6.11 kernel (which will likely be released on September 15, putting the next merge window in the middle of several important conferences) will be busy as well.
Hierarchical storage management, fanotify, FUSE, and more
Amir Goldstein led a filesystem-track session at the 2024 Linux Storage, Filesystem, Memory Management, and BPF Summit on his project to build a hierarchical storage management (HSM) system using fanotify. The idea is to monitor file access in order to determine when to retrieve content from non-local storage (e.g. the cloud). The session was a follow-up to last year's introduction to the project, which covered some of the problems he had encountered; this year, he was updating attendees on its status and progress, along with some other problem areas that he wanted to discuss.
Background
He began with a quick overview of the use case for HSM and how his project fits in. A file that is stored in the cloud, say, can appear to be local to the system. When the file is accessed, an fanotify listener reacts to that event to make the file available locally. Once the data requested has been filled into the file, the listener can return to the program that accessed the file.
Last year, he described a problem with a deadlock that can happen under some circumstances when the listener daemon starts writing data to the file after it has been notified that the file has been accessed. To fix that, he took some of the advice that had been offered and moved the fsnotify hooks outside of the VFS locks; those patches were merged for Linux 6.8. He has had those patches available for a while, and several developers had incorporated them into their testing, including the kernel-test robot; he was thankful for those efforts, in part because the robot found some performance regressions due to new hooks that were added. He was able to restructure the fsnotify code and get that into 6.9, which reduced the overhead, both for his use case and for other users.
He has been working on some "pre-content events" for fanotify for a while; several developers have reviewed and tested that code, Goldstein said. The current code lives in his "fan_pre_content" branch at GitHub. The new FAN_PRE_ACCESS and FAN_PRE_MODIFY events have some new FAN_EVENT_INFO_TYPE_RANGE information attached to them. It describes the range of the file that is affected. In his proof-of-concept (PoC) implementation that is also on GitHub, those events allow opening a large file that is located elsewhere on the net, such as a Linux tarball, and reading just the first part of it.
His branch has also added the ability for fanotify to return errors other than EPERM. Currently, the listener can only return success or EPERM, but with his changes other errors, such as EBUSY and ENOSPC, can be returned.
Open questions
[Amir Goldstein]
He uses real filesystems, such as Btrfs or XFS, as the local storage for his HSM; when the files are not populated, they are sparse files locally and thus consume no storage space. If the HSM daemon has not been started, however, he does not want users to be able to access those files. One idea to prevent that is to have a "moderated mount" using an HSM mount option that would only allow access when the daemon is running; accesses at other times would result in EPERM. He wondered if attendees had thoughts or opinions on that.
As part of his work on HSM, he looked at what other operating systems have done; Windows has long had HSM support that is based on NTFS "reparse points", which are persistent marks on files that refer to the driver needed to access them. Perhaps Linux could do something similar by adding some kind of persistent mark to files instead of using the moderated-mount idea.
Josef Bacik said that he did not like the mount-option idea because options cannot be applied differently to separate subvolumes; containers can have access to different Btrfs subvolumes, which would mean they all need to be treated the same way. Goldstein suggested that containers could use different bind mounts for the subvolumes, with different options applied, which Bacik agreed would work.
But Bacik prefers the persistent-mark idea, in part because of some problems he has encountered in executing files that get filled in via the fanotify technique. When using execve() to execute a remote file, there is no opportunity for the HSM to write the data into the file; by the time the writes can be done, execve() has mapped the memory such that an ETXTBUSY error is returned for any writes. Both Bacik and Goldstein noted that Btrfs has an ioctl() command that can be used to avoid the problem, which is where Bacik sees the persistent-mark feature as a way around the problem. But that is a filesystem-specific solution and Goldstein would like to be able to solve the problem in a more general way.
Goldstein said that he had two different versions of the PoC code in his tree (some of which is described in his HSM wiki); one uses evictable marks on the inode to mark files that have been fully populated, while the other implements a form of persistent marks using extended attributes (xattrs). There is already precedent for fanotify to use protected xattrs in the "security.fanotify" namespace, so he added a "mark" xattr there.
Jan Kara was not in favor of the persistent marks as xattrs, however. He was concerned about who would own the mark; if there are multiple HSM implementations, how would the kernel decide which to use for a given file? For Windows, the reparse point includes an identifier to specify what storage driver needs to be used for the file, Goldstein said. That could work, Kara said, though it is a bit ugly for VFS code to have to root around in xattrs.
execve()
The conversation turned back to the execve() problem and Bacik said that he had another solution for that, which he suspected that many would hate even more. His employer, Meta, wants to be able to execute binaries that may not be present locally, so adding a persistent mark that would tell the system to invoke a usermode helper to populate the file would make that possible; it would not use fanotify at all, he said. With a chuckle, Bacik said that he was ignoring the screams from Christian Brauner in the background. Kara suggested a new system call, which Bacik thought might be "a more palatable solution".
Brauner asked for more clarification of the problem being solved, which revolves around the restrictions on writable memory being mapped for execution. Something needs to write the contents of the file in order to populate it, but the memory to be used by execve() is in a private mapping, Aleksa Sarai said, so it cannot be populated later. A usermode helper could route around that, Bacik said.
Brauner said that his main concern is that the usermode helper feature is not namespace-aware, which has security and other implications. Basing the API around that has a lot of impact on extensibility and maintainability in the future, which also concerns him. Bacik agreed, but said that the usermode helper is initiated from the kernel side so he thought it might be more acceptable rather than allowing the fanotify listener to do so from user space. Brauner suggested that looking at the usability and security angles should help determine which is the better approach.
But the problem is deeper, Goldstein said. The file is marked as being "deny write" once it is opened by execve(), so nothing else can successfully even open the file. Getting around that, while still going through the VFS layer, would break some fundamental pieces of that layer, he said. Writing to the file by way of a Btrfs ioctl() command could work, as would a new system call that gets around the VFS, but populating the file using write() will not.
FUSE
Moving on, Goldstein noted that FUSE can be used instead of fanotify and it is much more flexible, but it requires that all of the I/O flow through the FUSE daemon. That does not perform as well as people want, which is why he has been working with others on FUSE passthrough. So FUSE could be another path toward an HSM solution.
Bacik said that he was not sure which path he would be taking, but he would be investigating both. "I have money on both horses", Goldstein said to laughter. Ted Ts'o noted that Dan Rosenberg has been working on the FUSE BPF filesystem, which is aimed at Android being able to do incremental loading of executables; "so there's actually some other work happening in this space". Goldstein said that FUSE passthrough had just been merged for Linux 6.9 and was partly based on the Android patches. The plan is for the BPF pieces to come in once the FUSE passthrough feature has stabilized.
Bacik said that FUSE makes sense for Android, but does not for Meta, at least for some of its uses. If the FUSE daemon crashes, suddenly applications start crashing in production; the fanotify solution is more self-contained. Goldstein said that for "mostly passthrough filesystems", the fanotify solution is probably better—"if we can get it to work", he said with a chuckle as time expired.
The YouTube video of the session and Goldstein's slides are available.
Changing the filesystem-maintenance model
Maintenance of the kernel is a difficult, often thankless, task; how it is being handled, the role of maintainers, burnout, and so on are recurring topics at kernel-related conferences. At the 2024 Linux Storage, Filesystem, Memory Management, and BPF Summit, Josef Bacik and Christian Brauner led a session to discuss possible changes to the way filesystems are maintained, though Bacik took the lead role (and the podium). There are a number of interrelated topics, including merging new filesystems, removing old ones, making and testing changes throughout the filesystem tree, and more.
New model?
The current model for filesystem maintenance—"that we all know; we love it"—is that each filesystem has its own tree and pull requests are sent to Linus Torvalds directly, Bacik began. If changes to the virtual filesystem (VFS) layer are needed, those are sent to Brauner or Al Viro depending on what part of the VFS they touch. This works well for established filesystems, but it can run into problems when there are changes that cross filesystem boundaries, such as folios, the new mount API, namespace things, etc. That kind of work is happening more frequently, and gets discussed at LSFMM+BPF, but many of the plans made do not come to fruition, perhaps in part because "nobody feels empowered to make those decisions and push that work through".
[Josef Bacik]
There is also "a little bit of vagueness about who owns what"; in particular, merging new filesystems can go smoothly or can be a "painful contentious thing" where the decision is left to Torvalds. That is not necessarily a bad thing, but it does create uncertainty for submitters of new filesystems; it can also lead to the concerns of experienced developers being waved away when Torvalds tires of the arguments and simply merges the filesystem. It is a strength of the Linux community that Torvalds can step in and just merge things, but it means that new filesystem developers need to be persistent and to try to sway Torvalds to ignore the other voices in order to get their code merged, Bacik said.
For most of a year now, he has been proposing that the filesystem community in Linux move toward a model like those of some other kernel subsystems, notably networking and graphics. In both cases, they have multiple, disparate pieces that all funnel through the same maintainer and tree. That can clearly be seen in the eye chart that accompanies our 6.10 development statistics article. Bacik thinks that changing in that way would help remove some of the pain points when making changes across the filesystems.
When tree-wide changes are made, currently the developer needs to go figure out where the right trees are for each filesystem, which can be painful. In fact, he said, Btrfs is probably one of the worst offenders in that regard, because its tree is on GitHub; its policies and procedures are different from the others as well. Recently, Christoph Hellwig has been doing some work in Btrfs and told Bacik that those differences are irritating enough that it makes him want to avoid working on Btrfs.
Overall, Bacik feels that, because of those differences, the filesystem community is more difficult to deal with than it should be, especially given how much code is shared between the filesystems. So he was proposing the creation of a single tree that all of the filesystems push into; pull requests would no longer be sent to Linus by the individual filesystem maintainers, but would come from the maintainer of that tree. That is mostly a mechanical change, but he would like to see some cultural changes as well.
Viro noted that there are lots of filesystems in the kernel tree that do not use VFS; they are specialized filesystems that live in the security and arch trees, among other places. He wondered if Bacik really wanted to have filesystems for s390 and ppc (PowerPC), for example, live in this new tree. Bacik said that those were not his focus; he is after the filesystems that need changes like folio support, converting to the new mount API, switching to iomap, and so on.
But Viro pointed out that the new mount API affects all filesystems, including, say, the USB gadget filesystems. Bacik said that was a good point, which invalidated some of what he was suggesting. But his goal is to make it easier for people who are doing cross-filesystem work to "get their patches merged without having to understand how each individual filesystem community works".
Viro suggested having a single topic branch that was shared for, say, folio work. That works for VFS topic branches, Bacik said, because all of the filesystems are going to pay attention to those. Every time there needs to be a branch for non-VFS changes, though, it requires a negotiation of whose tree that will all flow through, he said, which he would like to try to avoid.
An fs-next?
Dave Chinner gave a few-day-old example of this kind of problem. Some changes to iomap that caused an XFS regression had gone into Brauner's tree. It was only discovered because of testing that was being done on linux-next, which is where XFS and the iomap changes met. Earlier, iomap changes were flowing through the XFS tree, so this problem would have been caught sooner. Since the merge window was happening while the conference was going on, that bug had probably already been merged into the mainline, Chinner said.
So the changes that were recently made to try to "centralize some of the infrastructure up into VFS trees is actually backfiring on us", Chinner said. That is because the next step of being able to share those changes down to the individual filesystems for testing is not being done. What he would like to see is an fs-next tree that is a combination of all of the filesystem trees that are planned for the next release; it would become the common test base that everyone uses for regression testing, but would not be pushed as a whole into the mainline.
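Mechanically, what Chinner described amounts to a throwaway integration branch that is rebuilt from the per-filesystem trees and used only as a common test base. A rough sketch with Git (all branch and file names below are invented for illustration):

```shell
# Toy illustration of an fs-next integration branch: per-filesystem
# topic branches are merged into one throwaway branch that exists only
# as a common test base; it is never itself sent upstream.
dir=$(mktemp -d) && cd "$dir"
git init -q -b trunk repo && cd repo
git config user.email fs@example.com && git config user.name fstest
echo core > vfs.c && git add vfs.c && git commit -qm "common base"

for fs in xfs ext4 btrfs; do
    git checkout -qb "$fs-for-next" trunk
    echo "$fs" > "$fs.c" && git add "$fs.c" && git commit -qm "$fs: for-next work"
done

# Rebuild fs-next from scratch each cycle; test against this, but send
# pull requests from the individual for-next branches as before.
git checkout -qb fs-next trunk
git merge -q -m "fs-next integration" xfs-for-next ext4-for-next btrfs-for-next
git ls-files
```

Because fs-next is recreated rather than merged upstream, a bad merge or a buggy branch can simply be dropped from the next rebuild.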
Someone in the audience asked why linux-next could not be used. Chinner said that it is not reliable enough for use as a test base. Fs-next would have better reliability, Chinner said in answer to the obvious followup, because it would not have "device failures and all sorts of driver problems". Historically, linux-next has not been reliable enough to use for regression testing or day-to-day development, he said.
Shifting gears a little, Brauner said that he does not think that all of the filesystems need to flow through a single tree that gets passed to Torvalds for merging. That is infeasible, in his mind, both technically and socially. A lot of the people in the room probably enjoy the fact that they get to send pull requests to Torvalds, he said. A better solution would be to have an fs-next tree.
Ted Ts'o said "linux-next is not terrible", but that, in practice, he waits until -rc6 or -rc7 before he does a test merge of the ext4-development branch into the mainline -rc tree for integration testing. The major problems that occur have generally been worked out by -rc6 or -rc7; "somebody else has taken the arrows first". Sometimes that falls by the wayside, though, for example when those -rc releases happen just before a conference like LSFMM+BPF, which is a downside to waiting that long for integration testing, he said.
No matter whether it is done using linux-next or fs-next, Ts'o said, the important thing is for filesystem communities to be testing more than just their own branch. It is a big change, though, that needs to be centralized in order to work, Brauner said. He has his set of tests that he runs on changes to the VFS infrastructure that he picks up, but "I have zero idea what the XFS test matrix is and it is probably a bit larger". There is no centralized testing, so "we don't actually know if what we are doing is not breaking anyone".
Chinner said that each filesystem is in its own silo, but that there is a need to test the common code. Neal Gompa said that each filesystem had its own mechanisms for testing, reporting, coverage calculation, etc., but there is no reason those all could not run on a system where people could push code to give them a reasonable level of confidence that they did not break things.
Common testing infrastructure
Matthew Wilcox thought that much of this problem could be solved by talking to linux-next maintainer Stephen Rothwell. It would probably not be difficult for him to merge all of the filesystem trees into a separate branch. But even before a common branch gets created, there needs to be common testing infrastructure, Kent Overstreet said. It would only require a couple of racks of hardware to set up; he has been hoping not to have to do that himself, but may have to. (As of June 14, he had started work on a shared filesystem test cluster.) When branches were pushed to it, all of the tests for all of the different filesystems would run and the system would provide a "simple dashboard" to report results. The current email-based system does not work, he said, since he cannot determine which tests are being run.
Bacik said that he has an automated system for testing Btrfs; he could add, say, tests for XFS, bcachefs, and others to that. But there is a lot of work that the Btrfs developers do to maintain the set of tests that should be excluded (and the reasons why) from a run of fstests. That is done to ensure that the test results are usable. Someone would need to do that work for the other filesystems.
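The curated-exclusion work Bacik described maps onto a mechanism fstests already has: the check script can read a list of tests to skip from a file. A hypothetical exclude file might look like the following (test names and reasons are invented; whether comments are honored depends on the fstests version in use):

```
# exclude.btrfs -- tests skipped for btrfs runs, with reasons
generic/475
btrfs/012
```

The file would then be passed to a run as something like "./check -E exclude.btrfs" (the -E option reads an exclude list from a file in current fstests; consult the script's usage text), keeping the curated exclusions, and the reasons for them, out of each developer's command line.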
Chinner said that there is no need for Btrfs developers to test XFS or other filesystems. The important thing is for filesystem developers to test their filesystem "with the same tree" to shake out integration problems with the cross-filesystem work that is happening. So, all of the latest code for each filesystem would be in the same tree for testing by the various communities using their own tests and infrastructure. That means that the tree would be integrated, rather than trying to integrate all of the different testing infrastructure that each filesystem has built up.
There is a need to be able to "go somewhere and click on something" to see that a change "worked for everything that is relevant", Brauner said. Gompa agreed with Chinner that the existing infrastructure could be used, but the testing could be done against a single tree, such as fs-next, that filesystem developers use as the basis for (and eventual destination of) their work. The automated-testing systems that each filesystem uses would also be based around the integration tree; perhaps a dashboard that collects the results from each individual filesystem's infrastructure could be developed. Eventually, moving everyone to the same testing infrastructure, with a single integrated dashboard and reporting mechanism, could be worked on. The idea would be to take "baby steps" toward a world where doing cross-filesystem development (or even just following it, as he does) is "a hell of a lot easier".
Instead of testing a for-next branch in, say, the XFS tree, the test target should be the fs-next tree, Chinner said; nothing else changes in the development workflow or in the actual tests that are run. Ts'o said that there may still be tests of various other branches as part of the filesystem development, but that the fs-next tree would be tested daily, say, by each of the filesystems.
David Howells asked if there could be a common base with the iomap changes, folio-support changes, and the like for use by the filesystems. Viro said that could be created, but it would require agreement on exactly which trees are included. Howells wondered about how pull requests would be done, since they cannot be based on fs-next. There would still need to be development trees that are maintained so that pull requests can be made to Torvalds, Chinner said, but that code will have been tested with the integration tree to head off problems before they reach Torvalds.
There was some scattered conversation, largely not using the microphones, about exactly which patches should go where, and when, to try to avoid problems with bugs reaching the mainline that should have been caught earlier. Brauner said that io_uring is often a place where changes that he makes affect a completely different tree; he just creates a stable branch with those changes that Jens Axboe pulls into his tree for testing. Some of that will still need to happen, but the problem with the iomap changes that was mentioned by Chinner was an example of where fs-next would have helped.
Bacik closed the session by noting that he would still like to figure out how to make the lives of cross-filesystem developers easier so that they did not have to try to figure out how to integrate with a bunch of different filesystem-development communities. But that would have to wait since the session was out of time.
A YouTube video of the session is available.
Reports from OSPM 2024, part 1
The sixth edition of the Power Management and Scheduling in the Linux Kernel (OSPM) Summit took place on May 30-31 2024, and was graciously hosted by the Institut de Recherche en Informatique de Toulouse (IRIT) in Toulouse, France. This is the first of a series of articles describing the discussions held at OSPM 2024; topics covered include latency hints, energy-aware scheduling, ChromeOS, and user-space schedulers.
IRIT is one of the largest French joint research units (unité mixte de recherche), with more than 700 collaborators and the stated mission of placing "mankind and its environment at the heart of computer science"; it is a fitting partner for a gathering of kernel developers looking to optimize energy consumption in Linux. The list of sponsors for the Summit doesn't end at IRIT: it also includes Arm, Linaro, and the Scuola Superiore Sant'Anna in Pisa as the host for the Summit web site.
The following contains summaries of the sessions from the first half day of the event. A recording of the entire summit is available as a playlist on the IRIT YouTube channel.
How to set some latency hints to CFS tasks
Vincent Guittot started out the summit with an attempt at characterizing the various latency constraints that tasks may have, as a way of directing a renewed effort toward a "latency-nice" interface. Since the algorithm for the scheduler has changed from completely fair scheduling (CFS) to EEVDF, the prior discussions don't apply anymore. He sees two main groups of tasks: those that are sensitive to scheduling latency in one way or another, and those that aren't, but which benefit from a longer time slice and little to no preemption.
Guittot raised the issue that, with EEVDF, a task's virtual runtime doesn't account for performance variations caused by DVFS (CPU-frequency scaling). When executing on a CPU running at a lower frequency, the task makes less progress, yet it consumes the same amount of absolute time as another that's running at a higher clock. He thinks that the virtual runtime can be easily normalized and made frequency-invariant, but he's also concerned about how this may affect the limit on lag (a measure of under-service experienced by a task) offered by the original EEVDF algorithm.
With the latest EEVDF patches, user-space applications can set their own time slice using the sched_setattr() system call. This puts the burden of measuring their own run time and requesting a suitable time-slice value onto applications. When considering an interface that allows a tighter collaboration between kernel and applications to set an appropriate slice value, Daniel Bristot de Oliveira observed that there isn't, as of now, a means for user space to notify the scheduler of the end of a work cycle, which would be beneficial for those tasks with a somewhat regular and recurrent nature. The sched_yield() system call comes close, but Peter Zijlstra would prefer adding a new system call for that specific purpose.
Enrico Bini asked if it would be advantageous to group tasks in cores according to their time granularity (time-slice length). The rationale is that, if the scheduler mixes tasks with long and short slices, it seems inevitable that either the long-running tasks will often be preempted, or the short ones will experience high latency. Zijlstra agreed it would be beneficial to incorporate this logic into the load balancer.
Dietmar Eggemann stressed that, whatever the latency API will turn out to be, it is necessary to identify an agreed-upon set of benchmarks to evaluate progress along the way. He mentioned that, in Android, many of the ideas proposed during this session have been tested, and some turned out to make no difference.
For the recording of Guittot's session on latency hints, see this video.
Fixing, improving, and extending energy-aware scheduling
During the second session of the morning, Guittot identified a set of quirks in the energy-aware scheduling (EAS) engine that he's determined to correct. He presented data from an Arm "three-gear" big.LITTLE CPU. The device has four "little" (slow but energy-efficient) cores, three "mid" cores, and one "big" (fast but energy-expensive) core. A number of concerns came to light:
- A small task runs on a big core for no reason. Consider a small, periodic task that needs to run quickly with more CPU time than a little core could provide. A mid core would be perfect, but the big core is too much. It is possible to use utilization clamping and set a minimum amount of CPU power for this task to keep it off the little cores. If a larger task is occupying one of the mid cores, though, EAS would place the smaller one on the big core, even if there's a free mid. Guittot's proposed solution is to apply the cost of the predicted operating performance point (OPP) only for the runnable time of tasks, as reported by PELT historical data.
- Spare CPU capacity vs. actual OPP cost. At task wakeup, EAS picks a candidate core in each performance domain (little, mid, and big), according to each core's load compared to its capacity; Guittot argues that looking directly at the cost for the predicted OPP would yield better placements. Energy-model maintainer Lukasz Luba isn't fully convinced, as there are diminishing returns when using more sophisticated placement algorithms.
- More room than spare capacity would suggest. When cores report having no spare capacity, EAS isn't effective and the resulting placements can be highly imbalanced. In Guittot's example, clamping the maximum utilization exacerbates this condition, as cores look smaller than they are. Guittot pointed out that when a task has a utilization that is larger than the capacity of a CPU, it can be serviced nonetheless; it only needs to run a little longer.
- Over-utilization scrambles all placements. Imagine some kernel thread waking up and running for just a handful of microseconds. If that causes a core to cross the 80% utilization tipping point, EAS will be disengaged globally and the ordinary load balancer will take over, invalidating all previous efforts to optimize the system's power consumption. Guittot said that, for those cores and domains that aren't overutilized, it is better to continue using an energy-efficient task placement algorithm.
- uclamp_max prevents EAS from running often enough. Smartphones are becoming increasingly powerful, but all that horsepower isn't meant to be exercised constantly. The practice is thus to clamp the maximum utilization of tasks, but as Guittot reports, this results in less-frequent wakeups, and thus fewer opportunities for EAS to optimize for power. Guittot intends to introduce the notion of misfit tasks with respect to power needs, and when such cases are detected, use load balancing to pull these tasks onto a more appropriate core.
For the recording of Guittot's session on EAS, see this video.
ChromeOS and EEVDF
The most important application on a ChromeOS laptop is, of course, the Chrome web browser; Youssef Esmat described the progress he and his team have made in integrating the EEVDF scheduler with these systems, especially with the browser. Chrome's threads can be classified by their tolerance to latency into three groups: user-interface-related threads, which capture user input and need the highest priority; background threads, such as those representing inactive tabs, which are not critical to the user experience; and everything else, sitting somewhere between these two extremes.
Esmat stressed that Chrome's perceived performance depends heavily on satisfying the latency and throughput needs of its UI threads. These have to be serviced promptly, and at the same time are vulnerable to preemption as they are long-running. The EEVDF algorithm is based on the assumption that tasks, in general, don't behave like that. Instead, it assumes that, if a task needs to be scheduled quickly, it also completes quickly and, conversely, if it runs for a prolonged time, it can tolerate higher scheduling latencies.
For this analysis, he compared the EEVDF scheduler with CFS, using web-application workloads such as Google Meet and Google Docs, and measuring input latency and percentage of dropped frames. Stock EEVDF performed worse than CFS; Esmat discovered that quadrupling the base EEVDF time-slice length, which defaults to 1.5ms for a two-core machine, restored parity with CFS. The real gain came from removing the eligibility mechanism altogether from EEVDF and scheduling only based on the deadline value: this change would make the new scheduler outperform CFS by some 30% in the metrics of interest. The reason for this improvement is that, with eligibility, background threads would become eligible and preempt UI threads before they could finish their time slice; as discussed, the quality of Chrome's user experience depends on UI threads running until completion.
Being the clear winner in the initial comparison, the version of EEVDF with a large base time slice and no eligibility was selected for field testing, and deployed to users for real-world comparison with CFS. The new scheduler outperformed CFS in page-load speed (time to first content paint, time to largest content paint) and in almost all aspects of UI input latency: key pressed, mouse pressed, and touch pressed. There is one regression, though, in latency for gesture scrolling, and Esmat's team is working on a reproducer so that the issue can be studied and solved.
For the recording of Esmat's session on EEVDF benchmarking, see this video.
Writing a Linux scheduler in Rust that runs in user space
Andrea Righi introduced scx_rustland, a framework to write CPU schedulers for Linux that run as user-space programs. Initially a project aimed at teaching operating systems concepts to undergraduate students, it led Righi to appreciate the convenience of the change/build/run workflow to modify the kernel's behavior without rebooting. This effort was able to prove that user-space scheduling is not only possible, but can even reach respectable performance, such as video gaming at 60 frames per second while compiling the Linux kernel at the same time. A lively discussion followed, centered on the merits and disadvantages of Righi's and similar frameworks, and on whether or not there's a place in Linux for something like sched_ext, the kernel patch upon which Righi's work is based.
Juri Lelli recalled starting out in scheduler development; after trying both a research framework for scheduler plugins and coding his algorithms directly in the kernel, he concluded the learning curve was the same. Righi replied that creating schedulers in user space, in Rust, is of great appeal for the new generation. Bristot objected that a plugin framework only relieves developers from some implementation boilerplate, which is definitely not the hard part of scheduler development. Esmat argued that, for undergraduate students, there's a lot of value in quickly applying their changes to a running Linux system; it could get them excited about kernel development. Himadri Chhaya-Shailesh believes sched_ext will be a great teaching aid at universities, since one can test scheduler code without recompiling the kernel. In her experience as a lecturer, kernel compilation takes a large part of the lab sessions.
Thomas Gleixner maintains that the core machinery for scheduling in Linux isn't modular enough to expose hooks such as those added by sched_ext, at least not without introducing large technical debt. John Stultz asked what sched_ext is offering in terms of correctness testing for user-provided schedulers; Righi mentioned the sched_ext watchdog, which prevents scheduling starvation at run time. I agreed that a generic test suite for custom schedulers would be helpful to distribution vendors; it would facilitate the interaction during support requests, allowing users to show that their third-party scheduler meets basic quality standards. Christian Loehle commented on benchmark results achieved with custom schedulers: getting, say, higher frames per second than stock Linux in gaming, without any mention of power consumption, for example, is an unfair comparison. Righi replied that custom schedulers can afford to be workload-specific; reporting only the metric the scheduler set out to optimize seems acceptable in this context.
For the recording of Righi's session on user-space scheduling, see this video.
[Thanks to Daniel Lezcano and Juri Lelli for proofreading this article.]
Brief items
Security
Security quotes of the week
Just putting "privacy" in the name of a feature doesn't make it less creepy. Considering today's branding trends it might even go the other way. "Your privacy is important to us" is the new "your call is important to us." If you dig into the literature behind PPA [Privacy-preserving attribution], you will find some mathematical claims about how it prevents tracking of individuals. This is interesting math if you like that kind of thing. But in practice the real-world privacy risks are generally based on group discrimination, so it's not really accurate to call a system "privacy-preserving" just because it limits individual tracking. Even if the math is neato.— Don Marti
Kernel development
Kernel release status
The 6.10 kernel is out; it was released on July 14. Linus said:
So the final week was perhaps not quite as quiet as the preceding ones, which I don't love - but it also wasn't noisy enough to warrant an extra rc.
Changes in 6.10 include the removal of support for some ancient Alpha CPUs, shadow-stack support for the x32 sub-architecture, Rust-language support on RISC-V systems, support for some Windows NT synchronization primitives (though it is marked "broken" in 6.10), the mseal() system call, fsverity support in the FUSE filesystem subsystem, ioctl() support in the Landlock security module, the memory-allocation profiling subsystem, and more.
See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 6.10 page for more details.
Stable updates: 6.9.9, 6.6.39, and 6.1.98 were released on July 11. 6.6.40 and 6.1.99 followed on July 15 with a single fix for a USB regression.
The 6.9.10, 6.6.41, 6.1.100, 5.15.163, 5.10.222, 5.4.280, and 4.19.318 updates are in the review process; they are due on July 18.
An empirical study of Rust for Linux
The research value of this USENIX paper by Hongyu Li et al. is not entirely clear, but it does show that the Rust-for-Linux project is gaining wider attention.
Despite more novice developers being attracted by Rust to the kernel community, we have found their commits are mainly for constructing Rust-relevant toolchains as well as Rust crates alone; they do not, however, take part in kernel code development. By contrast, 5 out of 6 investigated drivers (as seen in Table 5) are mainly contributed by authors from the Linux community. This implies a disconnection between the young and the seasoned developers, and that the bar of kernel programming is not lowered by Rust language.
As a bonus, it includes a ChatGPT analysis of LWN and Hacker News comments.
Silva: How to use the new counted_by attribute in C (and Linux)
Gustavo A. R. Silva describes the path to safer flexible arrays in the kernel, thanks to the counted_by attribute supported by Clang 18 and GCC 15.
There are a number of requirements to properly use the counted_by attribute. One crucial requirement is that the counter must be initialized before the first reference to the flexible-array member. Another requirement is that the array must always contain at least as many elements as indicated by the counter.
See also: this article from 2023.
Distributions
Redox to implement POSIX signals in user space
Redox has received a grant to work on implementing POSIX-compatible signals. The draft design calls for them to be implemented nearly completely in user space.
So far, the signals project has been going according to plan, and hopefully, POSIX support for signals will be mostly complete by the end of summer, with in-kernel improvements to process management. After that, work on the userspace process manager will begin, possibly including new kernel performance and/or functionality improvements to facilitate this.
Distribution quote of the week
If we have to tell our users and sysadmins to do "X" on Debian server systems (using ifupdown or potentially sd-networkd), while doing "Y" on Debian desktop systems (using NetworkManager), while doing "Z" on Debian cloud systems (using Netplan), while doing something totally different on RaspberryPi (or alike) boards that run a Debian server setup, but using WiFi as their primary network interface, that's just a really bad user experience.
Using Debian should NOT feel like using different distros. And we really need a common way to do network configuration. With Netplan we can tell people to just use the "dhcp4: true" setting (for example), which will work on all Debian systems and is automatically translated to the corresponding backend for server/desktop/cloud/embedded usecases.
All while giving sysadmins the [flexibility] to fully utilize the underlying network daemon directly, if they feel like writing native configuration for it (or don't like Netplan).
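For reference, the setting quoted above lives in a small YAML file under /etc/netplan/; a minimal sketch (file and interface names are assumptions) looks like this:

```yaml
# /etc/netplan/01-dhcp.yaml -- minimal DHCP configuration (sketch)
network:
  version: 2
  ethernets:
    eth0:          # interface name will vary by system
      dhcp4: true
```

Netplan renders this into configuration for whichever backend (sd-networkd, NetworkManager, and so on) is in use on the system.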
Development
Blender 4.2 LTS released
Version 4.2 LTS of the Blender open-source 3D creation suite has been released. Major improvements include a rewrite of the EEVEE render engine, faster rendering, and much more. See the showcase reel for examples of work created by the Blender community with this release. See the text release notes for even more about 4.2 LTS, which will be maintained until July 2026.
digiKam 8.4.0 released
Version 8.4.0 of the digiKam photo editing and management application has been released. This release includes an update of the LibRaw RAW decoder which brings support for many new cameras, a new version of the LensFun toolkit, a feature for automatic translation of image tags, GMIC-Qt 3.4.0, and many bug fixes. See the announcement for full details.
GNOME Foundation Announces Transition of Executive Director
The GNOME Foundation has announced that executive director Holly Million is stepping down at the end of July, and will be replaced by Richard Littauer as interim executive director:
On behalf of the whole GNOME community, the Board of Directors would like to give our utmost thanks to Holly for her achievements during the past 10 months, including drafting a bold five-year strategic plan for the Foundation, securing two important fiscal sponsorship agreements with GIMP and Black Python Devs, writing our first funding proposal that will now enable the Foundation to apply for more grants, vastly improving our financial operations, and implementing a break-even budget to preserve our financial reserves.
The Foundation's Interim Executive Director, Richard Littauer, brings years of open source leadership as part of his work as an organizer of SustainOSS and CURIOSS, as a sustainability coordinator at the Open Source Initiative, and as a community development manager at Open Source Collective, and through open source contributions to many projects, such as Node.js and IPFS. The Board appointed Richard in June and is confident in his ability to guide the Foundation during this transitional period.
Million says she is leaving to pursue a PhD in psychology. The board plans to announce its search plan for a permanent executive director after GUADEC, which takes place July 19 through 24.
Development quote of the week
Valkey represents – whatever the project's ultimate fate might be – the first real, major pushback from a market standpoint against the prevailing relicensing trend.
To be clear, no one should get carried away and expect Valkeys to begin popping up everywhere. It's important to note that there are many variables that impact the friction involved in forking a project and the viability of sustaining it long term. Some projects are easier to fork than others, unquestionably, and Redis – if only because it was a project with many external contributors – was lower friction than some.
Not every project that is re-licensed can or will be forked. But investors, boards and leadership that are pushing for re-licensing as a core strategy will, moving forward, have to seriously consider the possibility of a fork as a potentially more meaningful cost. Where would-be re-licensors previously expected no major project consequences from their actions, the prospect of a Valkey-like response is a new consideration.
Page editor: Daroc Alden
Announcements
Newsletters
Distributions and system administration
Development
Miscellaneous
Calls for Presentations
CFP Deadlines: July 18, 2024 to September 16, 2024
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
August 5 | September 20-22 | Qubes OS Summit | Berlin, Germany |
August 8 | September 19-20 | Git Merge | Berlin, Germany |
August 15 | November 19-21 | Open Source Monitoring Conference 2024 | Nuremberg, Germany |
August 15 | October 10-12 | LibreOffice Conference | Esch-sur-Alzette, Luxembourg |
August 18 | December 4-5 | Cephalocon | Geneva, Switzerland |
August 18 | October 3-4 | PyConZA 2024 | Cape Town, South Africa |
August 19 | October 9-11 | X.Org Developer's Conference 2024 | Montréal, Canada |
August 30 | October 9-10 | oneAPI Devsummit Hosted by UXL Foundation | Everywhere |
September 2 | October 7-8 | GStreamer Conference 2024 | Montréal, Canada |
September 6 | September 7-8 | Kangrejos 2024 - The Rust for Linux Workshop | Copenhagen, Denmark |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: July 18, 2024 to September 16, 2024
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
July 15-19 | Netdev 0x18 | Silicon Valley, CA, US |
July 19-24 | GNOME Users and Developer Conference | Denver, CO, US |
July 28-August 4 | DebConf24 | Busan, South Korea |
August 1-4 | Free and Open Source Software Yearly conference | Portland, OR, US |
August 7-10 | Wikimania 2024 | Katowice, Poland |
August 7-11 | Fedora Flock 2024 | Rochester, New York, US |
August 14-16 | Devconf.US 2024 | Boston, US |
August 17-18 | FrOSCon 19 | Sankt Augustin, Germany |
August 31-September 2 | UbuCon Asia 2024 | Jaipur, India |
September 7-8 | Kangrejos 2024 - The Rust for Linux Workshop | Copenhagen, Denmark |
September 7-12 | Akademy 2024 | Würzburg, Germany |
September 10-13 | RustConf 2024 | Montreal, Canada |
September 14-16 | GNU Tools Cauldron | Prague, Czech Republic |
If your event does not appear here, please tell us about it.
Security updates
Alert summary July 11, 2024 to July 17, 2024
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
AlmaLinux | ALSA-2024:4438 | 8 | dotnet6.0 | 2024-07-11 |
AlmaLinux | ALSA-2024:4439 | 9 | dotnet6.0 | 2024-07-10 |
AlmaLinux | ALSA-2024:4451 | 8 | dotnet8.0 | 2024-07-11 |
AlmaLinux | ALSA-2024:4450 | 9 | dotnet8.0 | 2024-07-11 |
AlmaLinux | ALSA-2024:4422 | 9 | fence-agents | 2024-07-10 |
AlmaLinux | ALSA-2024:4420 | 8 | virt:rhel and virt-devel:rhel | 2024-07-11 |
Debian | DSA-5729-1 | stable | apache2 | 2024-07-11 |
Debian | DSA-5728-1 | stable | exim4 | 2024-07-10 |
Debian | DSA-5727-1 | stable | firefox-esr | 2024-07-10 |
Debian | DSA-5730-1 | stable | kernel | 2024-07-15 |
Debian | DSA-5731-1 | stable | kernel | 2024-07-16 |
Fedora | FEDORA-2024-7c36291390 | F39 | cups | 2024-07-13 |
Fedora | FEDORA-2024-56fb9c0762 | F40 | dotnet8.0 | 2024-07-11 |
Fedora | FEDORA-2024-9484b6915b | F39 | erlang-jose | 2024-07-16 |
Fedora | FEDORA-2024-a8d7972ef6 | F40 | erlang-jose | 2024-07-16 |
Fedora | FEDORA-2024-fc815ee65f | F39 | firefox | 2024-07-11 |
Fedora | FEDORA-2024-5b06c85574 | F39 | golang | 2024-07-17 |
Fedora | FEDORA-2024-df2c70dba9 | F39 | krb5 | 2024-07-17 |
Fedora | FEDORA-2024-1f68985052 | F40 | krb5 | 2024-07-13 |
Fedora | FEDORA-2024-599bb2cb73 | F40 | mingw-python-certifi | 2024-07-16 |
Fedora | FEDORA-2024-fefc75bce4 | F39 | mingw-python3 | 2024-07-12 |
Fedora | FEDORA-2024-1ecab28e50 | F40 | mingw-python3 | 2024-07-12 |
Fedora | FEDORA-2024-d9c7181a19 | F40 | onnx | 2024-07-11 |
Fedora | FEDORA-2024-9820d9491f | F39 | pgadmin4 | 2024-07-13 |
Fedora | FEDORA-2024-e0b0ad79b2 | F39 | python-urllib3 | 2024-07-12 |
Fedora | FEDORA-2024-7bba7e65d3 | F39 | python3.6 | 2024-07-13 |
Fedora | FEDORA-2024-732aedeb4d | F40 | python3.6 | 2024-07-13 |
Fedora | FEDORA-2024-9bf3ff4133 | F40 | qt6-qtbase | 2024-07-11 |
Fedora | FEDORA-2024-8ca9261bdd | F39 | squid | 2024-07-11 |
Fedora | FEDORA-2024-110b39017e | F40 | squid | 2024-07-11 |
Fedora | FEDORA-2024-89d685e856 | F39 | wordpress | 2024-07-11 |
Fedora | FEDORA-2024-6a4ffde369 | F40 | wordpress | 2024-07-11 |
Fedora | FEDORA-2024-eef12396fc | F40 | yarnpkg | 2024-07-13 |
Fedora | FEDORA-2024-72fb215fcd | F39 | yt-dlp | 2024-07-16 |
Mageia | MGASA-2024-0269 | 9 | firefox, nss | 2024-07-16 |
Mageia | MGASA-2024-0264 | 9 | freeradius | 2024-07-14 |
Mageia | MGASA-2024-0261 | 9 | golang | 2024-07-11 |
Mageia | MGASA-2024-0263 | 9 | kernel, kmod-xtables-addons, kmod-virtualbox, and dwarves | 2024-07-13 |
Mageia | MGASA-2024-0266 | 9 | kernel-linus | 2024-07-14 |
Mageia | MGASA-2024-0268 | 9 | libreoffice | 2024-07-15 |
Mageia | MGASA-2024-0259 | 9 | netatalk | 2024-07-10 |
Mageia | MGASA-2024-0262 | 9 | php | 2024-07-11 |
Mageia | MGASA-2024-0260 | 9 | poppler | 2024-07-10 |
Mageia | MGASA-2024-0270 | 9 | sendmail | 2024-07-16 |
Mageia | MGASA-2024-0265 | 9 | squid | 2024-07-14 |
Mageia | MGASA-2024-0267 | 9 | tomcat | 2024-07-15 |
Oracle | ELSA-2024-4438 | OL8 | dotnet6.0 | 2024-07-11 |
Oracle | ELSA-2024-4439 | OL9 | dotnet6.0 | 2024-07-11 |
Oracle | ELSA-2024-4451 | OL8 | dotnet8.0 | 2024-07-11 |
Oracle | ELSA-2024-4450 | OL9 | dotnet8.0 | 2024-07-11 |
Oracle | ELSA-2024-4422 | OL9 | fence-agents | 2024-07-11 |
Oracle | ELSA-2024-4457 | OL9 | openssh | 2024-07-11 |
Oracle | ELSA-2024-4367 | OL8 | pki-core | 2024-07-11 |
Oracle | ELSA-2024-4351 | OL8 | virt:ol and virt-devel:rhel | 2024-07-11 |
Red Hat | RHSA-2024:4580-01 | EL8.8 | cups | 2024-07-16 |
Red Hat | RHSA-2024:4508-01 | EL7 | firefox | 2024-07-15 |
Red Hat | RHSA-2024:4517-01 | EL8 | firefox | 2024-07-15 |
Red Hat | RHSA-2024:4586-01 | EL8.2 | firefox | 2024-07-17 |
Red Hat | RHSA-2024:4500-01 | EL9 | firefox | 2024-07-15 |
Red Hat | RHSA-2024:4501-01 | EL9.0 | firefox | 2024-07-15 |
Red Hat | RHSA-2024:4549-01 | EL7 | ghostscript | 2024-07-15 |
Red Hat | RHSA-2024:4537-01 | EL8.2 | ghostscript | 2024-07-15 |
Red Hat | RHSA-2024:4544-01 | EL8.4 | ghostscript | 2024-07-15 |
Red Hat | RHSA-2024:4462-01 | EL8.6 | ghostscript | 2024-07-10 |
Red Hat | RHSA-2024:4527-01 | EL8.8 | ghostscript | 2024-07-15 |
Red Hat | RHSA-2024:4541-01 | EL9.2 | ghostscript | 2024-07-15 |
Red Hat | RHSA-2024:4579-01 | EL8.8 | git | 2024-07-16 |
Red Hat | RHSA-2024:4546-01 | EL8.6 | git-lfs | 2024-07-15 |
Red Hat | RHSA-2024:4545-01 | EL8.8 | git-lfs | 2024-07-15 |
Red Hat | RHSA-2024:4543-01 | EL9.0 | git-lfs | 2024-07-15 |
Red Hat | RHSA-2024:4504-01 | EL9.2 | httpd | 2024-07-11 |
Red Hat | RHSA-2024:4573-01 | EL8 | java-21-openjdk | 2024-07-17 |
Red Hat | RHSA-2024:4577-01 | EL8.2 | kernel | 2024-07-16 |
Red Hat | RHSA-2024:4547-01 | EL8.6 | kernel | 2024-07-15 |
Red Hat | RHSA-2024:4583-01 | EL9 | kernel | 2024-07-17 |
Red Hat | RHSA-2024:4533-01 | EL9.2 | kernel | 2024-07-15 |
Red Hat | RHSA-2024:4548-01 | EL9.2 | kernel | 2024-07-15 |
Red Hat | RHSA-2024:4554-01 | EL9.2 | kernel-rt | 2024-07-16 |
Red Hat | RHSA-2024:4528-01 | EL9.0 | less | 2024-07-15 |
Red Hat | RHSA-2024:4529-01 | EL9.2 | less | 2024-07-15 |
Red Hat | RHSA-2024:4575-01 | EL8.2 | linux-firmware | 2024-07-16 |
Red Hat | RHSA-2024:4576-01 | EL8.2 | nghttp2 | 2024-07-16 |
Red Hat | RHSA-2024:4559-01 | EL9.2 | nodejs | 2024-07-16 |
Red Hat | RHSA-2024:4457-01 | EL9 | openssh | 2024-07-10 |
Red Hat | RHSA-2024:4581-01 | EL9.0 | podman | 2024-07-16 |
Red Hat | RHSA-2024:4456-01 | EL8.4 | python3 | 2024-07-10 |
Red Hat | RHSA-2024:4499-01 | EL8 | ruby | 2024-07-11 |
Red Hat | RHSA-2024:4542-01 | EL9.2 | ruby | 2024-07-15 |
Red Hat | RHSA-2024:4502-01 | EL9 | skopeo | 2024-07-15 |
Slackware | SSA:2024-192-01 | | mozilla | 2024-07-10 |
SUSE | openSUSE-SU-2024:0201-1 | osB15 | Botan | 2024-07-16 |
SUSE | SUSE-SU-2024:2405-1 | SLE15 oS15.6 | apache2 | 2024-07-11 |
SUSE | SUSE-SU-2024:1014-2 | SLE-m5.5 | avahi | 2024-07-12 |
SUSE | SUSE-SU-2024:1136-2 | SLE-m5.5 | c-ares | 2024-07-12 |
SUSE | SUSE-SU-2024:1704-2 | SLE-m5.5 | cairo | 2024-07-12 |
SUSE | SUSE-SU-2024:2478-1 | SLE-m5.1 | cockpit | 2024-07-15 |
SUSE | SUSE-SU-2024:2476-1 | SLE-m5.2 | cockpit | 2024-07-15 |
SUSE | SUSE-SU-2024:2477-1 | SLE-m5.3 | cockpit | 2024-07-15 |
SUSE | SUSE-SU-2024:2494-1 | SLE-m5.4 | cockpit | 2024-07-16 |
SUSE | SUSE-SU-2024:2003-2 | SLE-m5.5 | cups | 2024-07-12 |
SUSE | SUSE-SU-2024:2467-1 | SLE-m5.5 | fdo-client | 2024-07-12 |
SUSE | SUSE-SU-2024:2399-1 | SLE15 SES7.1 oS15.5 oS15.6 | firefox | 2024-07-11 |
SUSE | SUSE-SU-2024:2077-2 | SLE-m5.5 | gdk-pixbuf | 2024-07-12 |
SUSE | SUSE-SU-2024:1807-2 | SLE-m5.5 | git | 2024-07-12 |
SUSE | SUSE-SU-2024:2495-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 | kernel | 2024-07-16 |
SUSE | SUSE-SU-2024:2493-1 | SLE12 | kernel | 2024-07-16 |
SUSE | SUSE-SU-2024:2384-1 | SLE15 SLE-m5.1 SLE-m5.2 | kernel | 2024-07-10 |
SUSE | SUSE-SU-2024:2385-1 | SLE15 SLE-m5.3 SLE-m5.4 | kernel | 2024-07-10 |
SUSE | SUSE-SU-2024:2394-1 | SLE15 SLE-m5.5 oS15.5 | kernel | 2024-07-10 |
SUSE | SUSE-SU-2024:2171-2 | SLE-m5.5 | libarchive | 2024-07-12 |
SUSE | SUSE-SU-2024:2541-1 | SLE12 | libndp | 2024-07-17 |
SUSE | SUSE-SU-2024:2409-1 | MP4.3 SLE15 SLE-m5.5 oS15.4 oS15.5 oS15.6 | libvpx | 2024-07-11 |
SUSE | SUSE-SU-2024:2408-1 | SLE15 SES7.1 | libvpx | 2024-07-11 |
SUSE | SUSE-SU-2024:2496-1 | SLE12 | nodejs18 | 2024-07-16 |
SUSE | SUSE-SU-2024:2542-1 | SLE15 oS15.4 oS15.5 | nodejs18 | 2024-07-17 |
SUSE | SUSE-SU-2024:2543-1 | SLE15 oS15.5 | nodejs20 | 2024-07-17 |
SUSE | SUSE-SU-2024:2401-1 | SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.5 oS15.6 | oniguruma | 2024-07-11 |
SUSE | SUSE-SU-2024:2393-1 | SLE15 oS15.6 | openssh | 2024-07-10 |
SUSE | SUSE-SU-2024:0738-2 | SLE-m5.5 | openvswitch3 | 2024-07-12 |
SUSE | SUSE-SU-2024:2475-1 | SLE12 | p7zip | 2024-07-15 |
SUSE | SUSE-SU-2024:2050-2 | SLE15 SLE-m5.1 SLE-m5.2 SES7.1 | podman | 2024-07-15 |
SUSE | SUSE-SU-2024:1376-2 | SLE-m5.5 | polkit | 2024-07-12 |
SUSE | SUSE-SU-2024:1863-2 | SLE-m5.5 | python-Jinja2 | 2024-07-12 |
SUSE | SUSE-SU-2024:2481-1 | oS15.4 oS15.6 | python-black | 2024-07-15 |
SUSE | SUSE-SU-2024:2462-1 | SLE-m5.5 | python-urllib3 | 2024-07-12 |
SUSE | SUSE-SU-2024:2400-1 | MP4.3 SLE15 oS15.4 oS15.5 oS15.6 | python-zipp | 2024-07-11 |
SUSE | SUSE-SU-2024:2397-1 | SLE15 oS15.6 | python-zipp | 2024-07-11 |
SUSE | SUSE-SU-2024:2479-1 | MP4.3 SLE15 SLE-m5.2 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 oS15.3 oS15.5 oS15.6 osM5.3 osM5.4 | python3 | 2024-07-15 |
SUSE | SUSE-SU-2024:2414-1 | MP4.3 SLE15 oS15.4 oS15.5 | python310 | 2024-07-12 |
SUSE | SUSE-SU-2024:1987-2 | SLE-m5.5 | skopeo | 2024-07-12 |
SUSE | SUSE-SU-2024:2463-1 | SLE-m5.5 | squashfs | 2024-07-12 |
SUSE | SUSE-SU-2024:2415-1 | SLE15 SLE-m5.5 oS15.5 oS15.6 | thunderbird | 2024-07-12 |
SUSE | SUSE-SU-2024:2028-2 | SLE-m5.5 | tiff | 2024-07-12 |
SUSE | SUSE-SU-2024:2539-1 | SLE12 | tomcat | 2024-07-17 |
SUSE | SUSE-SU-2024:2485-1 | SLE15 SES7.1 oS15.5 oS15.6 | tomcat | 2024-07-15 |
SUSE | SUSE-SU-2024:2413-1 | SLE15 oS15.5 oS15.6 | tomcat10 | 2024-07-11 |
SUSE | SUSE-SU-2024:2468-1 | SLE-m5.5 | traceroute | 2024-07-12 |
SUSE | SUSE-SU-2024:2174-2 | SLE-m5.5 | wget | 2024-07-12 |
SUSE | SUSE-SU-2024:2534-1 | SLE12 | xen | 2024-07-16 |
SUSE | SUSE-SU-2024:2535-1 | SLE15 | xen | 2024-07-16 |
SUSE | SUSE-SU-2024:2533-1 | SLE15 SLE-m5.1 SLE-m5.2 SES7.1 oS15.3 | xen | 2024-07-16 |
SUSE | SUSE-SU-2024:2531-1 | SLE15 oS15.6 | xen | 2024-07-16 |
Ubuntu | USN-6885-2 | 20.04 22.04 24.04 | apache2 | 2024-07-11 |
Ubuntu | USN-6894-1 | 16.04 | apport | 2024-07-11 |
Ubuntu | USN-6897-1 | 20.04 22.04 24.04 | ghostscript | 2024-07-15 |
Ubuntu | USN-6899-1 | 20.04 22.04 24.04 | gtk+2.0, gtk+3.0 | 2024-07-16 |
Ubuntu | USN-6898-1 | 20.04 22.04 | linux, linux-azure, linux-azure-5.15, linux-gcp, linux-gke, linux-gkeop, linux-gkeop-5.15, linux-ibm, linux-intel-iotg, linux-intel-iotg-5.15, linux-kvm, linux-nvidia, linux-oracle | 2024-07-15 |
Ubuntu | USN-6896-1 | 18.04 20.04 | linux, linux-azure, linux-azure-5.4, linux-bluefield, linux-gcp, linux-gcp-5.4, linux-gkeop, linux-ibm, linux-ibm-5.4, linux-kvm | 2024-07-12 |
Ubuntu | USN-6893-1 | 24.04 | linux, linux-azure, linux-gcp, linux-ibm, linux-intel, linux-lowlatency, linux-oem-6.8, linux-raspi | 2024-07-11 |
Ubuntu | USN-6895-1 | 22.04 23.10 | linux, linux-gcp, linux-nvidia-6.5, linux-raspi | 2024-07-12 |
Ubuntu | USN-6868-2 | 18.04 | linux-aws-5.4 | 2024-07-10 |
Ubuntu | USN-6866-3 | 14.04 | linux-azure | 2024-07-10 |
Ubuntu | USN-6895-2 | 22.04 | linux-azure-6.5, linux-gcp-6.5 | 2024-07-16 |
Ubuntu | USN-6893-2 | 24.04 | linux-gke, linux-nvidia | 2024-07-16 |
Ubuntu | USN-6864-3 | 24.04 | linux-gke | 2024-07-11 |
Ubuntu | USN-6896-2 | 18.04 | linux-hwe-5.4, linux-oracle-5.4 | 2024-07-16 |
Ubuntu | USN-6892-1 | 20.04 | linux-ibm-5.15 | 2024-07-10 |
Ubuntu | USN-6888-2 | 18.04 | python-django | 2024-07-11 |
Ubuntu | USN-6891-1 | 14.04 16.04 18.04 20.04 22.04 23.10 | python3.5, python3.6, python3.7, python3.8, python3.9, python3.10, python3.11, python3.12 | 2024-07-11 |
Kernel patches of interest
Kernel releases
Architecture-specific
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Joe Brockmeier