Leading items
Welcome to the LWN.net Weekly Edition for October 2, 2025
This edition contains the following feature content:
- Fedora floats AI-assisted contributions policy: a proposed policy on AI-based contributions for Fedora is meeting some resistance.
- Linting Rust code in the kernel: two talks from Kangrejos about lint tools for kernel code.
- Jumping into openSUSE Leap 16: a look at the first major Leap release in seven years.
- The phaseout of the mmap() file operation: an effort to switch away from the mmap() entry in the file_operations structure in favor of a new mmap_prepare() function.
- Development statistics for 6.17: the traditional look at where the code for the new kernel came from—with some bug-statistic reporting on top.
- Managing encrypted filesystems with dirlock: a new tool for managing encrypted filesystems on devices running SteamOS—and beyond.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Fedora floats AI-assisted contributions policy
The Fedora Council began a process to create a policy on AI-assisted contributions in 2024, starting with a survey to ask the community its opinions about AI and using AI technologies in Fedora. On September 25, Jason Brooks published a draft policy for discussion; so far, in keeping with the spirit of compromise, it has something to make everyone unhappy. For some it is too AI-friendly, while others have complained that it holds Fedora back from experimenting with AI tooling.
Fedora's AI survey
Aoife Moloney asked for suggestions in May 2024, via Fedora's discussion forum, on survey questions to learn "what our community would like and perhaps even need from AI capabilities in Fedora". Many of Fedora's contributor conversations take place on the Fedora devel mailing list, but Moloney did not solicit input for the survey questions there.
Tulio Magno Quites Machado Filho suggested asking whether the community should accept contributions generated by AI, and if AI-generated responses to mailing lists should be prohibited. Josh Boyer had ideas for the survey, including how Fedora defines AI and whether contributions to the project should be used as data by Fedora to create models. Justin Wheeler wanted to understand "the feelings that someone might have when we talk about 'Open Source' and 'AI/ML' at the same time". People likely have strong opinions about both, he said, but what about when the topics are combined?
Overall, there were only a handful of suggested questions. Matthew Miller, who was the Fedora Project Leader (FPL) at the time, pointed out that some of the questions proposed by commenters were good questions but not good survey questions.
In July, Moloney announced on the forum and via Fedora's devel-announce list that the survey had been published. Unfortunately, it is no longer available online, and the questions were not included in the announcements.
The way the survey's questions and answers were structured turned out to be a bit contentious; some felt that the survey was biased toward AI/ML inclusion in Fedora. Lyude Paul wanted a way to say Fedora should not, for example, include AI in the operating system itself in any capacity. That was not possible with the survey as written:
I'd like to make sure I'm getting across to surveyors that tools like [Copilot] are things that should actively be kept away from the community due to the enormous PR risk they carry. Otherwise it makes it feel like the only option that's being given is one that feels like it's saying "well, I don't think we should keep these things out of Fedora - I just feel they're less important."
Moloney acknowledged that the questions were meant to "keep the tone of this survey positive about AI" because it is easy to find negatives for the use of AI, "and we didn't want to take that route":
We wanted to approach the questions and uncertainty around AI and its potential uses in Fedora from a positive view and useful application, but I will reassure you that just because we are asking you to rank your preference of AI in certain areas of the project, does not mean we will be introducing AI into all of these areas.
We are striving to understand peoples preference only and any AI introductions into Fedora will always be done in the Fedora way - an open conversation about intent, community feedback, and transparent decision-making and/or planning that may follow after.
DJ Delorie complained on the devel list that there was no way to mark all options as a poor fit for Fedora. Moloney repeated the sentiment about positive tone in reply. Tom Hughes responded "if you're only interested in positive responses then we can say the survey design is a success - just a shame that the results will be meaningless". Several other Fedora community members chimed in with complaints about the survey, which was pulled on July 3, and then relaunched on July 10, after some revisions.
It does not appear that the full survey results were ever published online. Miller summarized the results during his State of Fedora talk at Flock 2024, but the responses were compressed into a less-than-useful form. The survey asked separate questions about whether AI was useful for specific tasks, such as testing or coding, and whether respondents would like to see AI used for those specific tasks in Fedora. So, for example, a respondent could say "yes" to using AI for testing, but answer "no" (or "uncertain") when asked whether they would like to see AI used for contributions.
Instead of breaking out the responses by task, all of the responses have been lumped into an overall result broken out into two groups, user or contributor. If a respondent answered yes to some questions but uncertain to others, they were counted as "Yes + Uncertain". That presentation does not seem to correspond with how the questions were posed to those taking the survey.
The only conclusion that can be inferred from these graphs is that a majority of respondents chose "uncertain" in a lot of cases, and that there is less uncertainty among users than contributors. Miller's slides are available online. The survey results were discussed in more detail in a Fedora Council meeting on September 11, 2024; the video of the meeting is available on YouTube.
Analysis
The council had asked Greg Sutcliffe, who has a background as a data scientist, to analyze and share the results from the survey. He began the discussion by saying that the survey cannot be interpreted as "this is what Fedora thinks", because it failed to deal with sampling bias. He also noted other flaws in the survey, such as giving respondents the option of choosing "uncertain" as an answer; Sutcliffe said it was not clear whether uncertain meant the respondent did not know enough about AI to answer the question, or whether it meant the respondent knew enough to answer, but was ambivalent in some way. He also said that it was "interesting that we don't ask about the thing we actually care finding out about": how the respondents feel about AI in general. Without understanding that, it is challenging to place other answers in context.
One thing that was clear is that the vast majority of respondents were Fedora users, rather than contributors. The survey asked those who responded to identify their role with Fedora, with options such as developer, packager, support, QA, and user. Out of about 3,000 responses, more than 1,750 chose "user" as their role.
Sutcliffe noted that "'no' is pretty much the largest category in every case" where respondents were asked about use of AI for specific tasks in Fedora; across the board, the survey seemed to indicate a strong bent toward rejecting the use of AI overall. However, respondents were less negative about using AI depending on the context. For example, respondents were overwhelmingly negative about having AI included in the operating system and being used for Fedora development or moderation; the responses regarding the use of AI technologies in testing or Fedora infrastructure were more balanced.
"On the negative side"
Miller asked him to sum up the results, with the caveat that the survey did not support general conclusions about what Fedora thinks. Sutcliffe replied that "the sentiment seems to be more on the negative side [...] the vast majority don't think this is a good idea, or at least they don't see a place" for AI.
Given that, it seems odd that when Brooks announced the draft policy that the council had put forward, he summarized the survey results as giving "a clear message" that Fedora's community sees "the potential for AI to help us build a better platform" with valid concerns about privacy, ethics, and quality. Sutcliffe's analysis seemed to indicate that the survey delivered a muddled message, but one that could best be summed up as "no thanks" to AI if one were to reach a conclusion at all. Neal Gompa said that he did not understand how the conclusions for the policy were made, "because it doesn't seem like it jives with the community sentiment from most contributors".
Some might notice that quite a bit of time has passed between the survey and the council's draft policy. This may be because there is not a consensus within the council on what Fedora should be doing or allowing. The AI guidelines were a topic in the September 10 council meeting. (Meeting log.) David Cantrell said that he was unsure that Fedora could have a real policy right now, "because the field is so young". Miro Hrončok pointed to the Gentoo AI policy, which expressly forbids contributing AI-generated content to Gentoo. He did not want to replicate that policy as-is, but "they certainly do have a point", he said. Cantrell said Gentoo's policy is what he would want for Fedora's policy right now, with an understanding it could be changed later. LWN covered Gentoo's policy in April 2024.
Cantrell added that his concerns about copyright, ownership, and creator rights with AI had not been addressed:
So far my various conversations have been met with shrugs and responses like 'eh, we'll figure it out', which to me is not good enough. [...]
Any code now that I've written and put on the internet is now in the belly of every LLM and can be spit back out at someone sans the copyright and licensing block. Does no one care about open source licensing anymore? Did I not get the memo?
FPL Jef Spaleta said that he did not understand how splitting out Fedora's "thin layer of contributions" does anything to address Cantrell's copyright concerns. "I'm not saying it's moot. I'm saying you're drawing the line in the sand at the wrong place."
Moloney eventually reminded the rest of the council that time was running short in the meeting, and suggested that the group come to an agreement on what to do next. "We have a ticket, we have a WIP and we have a lot of opinions, all the ingredients to make...something."
Proposed policy
There are several sections to the initial draft policy from September 25; it addresses AI-assisted contributions, the use of AI for Fedora project management, the use of AI-powered features in Fedora, and how Fedora project data could be used for training AI models.
The first draft encouraged the use of AI assistants in contributing to Fedora, but stressed that "the contributor is always the author and is fully accountable for their contributions". Contributors are asked, but not required, to disclose when AI tools have "significantly assisted" creation of a work. Usage of AI tools for translation to "overcome language barriers" is welcome.
It would put guardrails around using AI tools for reviewing contributions; the draft says that AI tools may assist in providing feedback, but "AI should not make the final determination on whether a contribution is accepted or not". The use of AI for Fedora project management, such as deciding code-of-conduct matters or reviewing conference talks, is expressly forbidden. However, the use of automated note-taking and spam filtering is allowed.
Many vendors, and even some open-source projects, are rushing to push AI-related features into operating systems and software whether users want them or not. Perhaps the most famous (or infamous) of these is Microsoft's Copilot, which is deeply woven into the Windows 11 operating system and notoriously difficult to turn off. Fedora is unlikely to go down that path anytime soon; any AI-powered features in Fedora, the policy says, must be opt-in—especially those that send data to a remote service.
The draft policy section on use of Fedora project data prohibits "aggressive scraping" and suggests contacting Fedora's Infrastructure team for "efficient data access". It does not, however, address the use of project data by Fedora itself; that is, there is no indication about how the project's data could be used by the project in creating any models or when using AI tools on behalf of the project. It also does not explain what criteria might be used to grant "efficient" access to Fedora's data.
Feedback
The draft received quite a bit of feedback in short order. John Handyman said that the language about opt-in features was not strong enough. He suggested that the policy should require any AI features not only to be opt-in, but also to be optional components that the user must consciously install. He also wanted the policy to prefer models running locally on the user's machine, rather than those that send data to a service.
Hrončok distanced himself a bit from the proposal in the forum discussion after it was published; he said that he agreed to go forward with public discussion of the proposal, but made it known, "I did not write this proposal, nor did I sign it as 'proposed by the Council'". Moloney clarified that Brooks had compiled the policy and council members had been asked to review the policy and provide any feedback they might have. If there is feedback that requires significant changes ("which there has been"), then it should be withdrawn, revised, and re-proposed.
Red Hat vice president of core platforms, Mike McGrath, challenged the policy's prohibition on using AI to make decisions when reviewing contributions. "A ban of this kind seems to step away from Fedora's 'first' policies without even having done the experiment to see what it would look like today." He wanted to see Fedora "get in front of RHEL again with a more aggressive approach to AI". He would hate, he said, for Fedora to be a "less attractive innovation engine than even CentOS Stream".
Fedora Engineering Steering Committee (FESCo) member Fabio Valentini was not persuaded that inclusion and use of AI technologies constitute innovation. Red Hat and IBM could experiment with AI all they want, but "as far as I can tell, it's pretty clear that most Fedora contributors don't want this to happen in Fedora itself".
McGrath said that has been part of the issue; he had not been a top contributor to Fedora for a long time, but he could not understand why Fedora and its governing boards "have become so well known for what they don't want. I don't have a clue what they do want." The policy, he said, was a compromise between Fedora's mission and nay-sayers:
I just think it sends a weak message at a time where Fedora could be leading and shaping the future and saying "AI, our doors are open, let's invent the future again".
Spaleta replied that much of the difference in establishing policy between Fedora and RHEL comes down to trust. Contribution access to RHEL and CentOS is hierarchical, he said, and trust is established differently there than within Fedora. Even those packagers who work on both may have different levels of trust within Red Hat and Fedora. There is not a workable framework, currently, for non-deterministic systems to establish the same level of trust as a human within Fedora. The policy may not be as open to experimenting with AI as McGrath might like, but he would rather find a way to move forward with a more prohibitive policy than to "grind on a policy discussion for another year without any progress, stuck on the things we don't have enough experience with to feel comfortable".
Graham White, an IBM employee who is the technical lead for the Granite AI agents, objected to a part of the policy that referenced AI slop:
I've been working in the industry and building AI models for a shade over 20 years and never come across "AI slop". This seems derogatory to me and an unnecessary addition to the policy.
Clearly, White has not been reviewing LWN's article submissions queue or paying much attention to open-source maintainers who have been wading through the stuff.
Redraft
After a few days of feedback, Brooks said that it was clear that the policy was trying to do too much by being a statement about AI as well as a policy for AI usage. He said that he had made key changes to the policy, including removing "biased-sounding rules" about AI slop, using RFC 2119 standard language such as SHOULD and MUST NOT rather than weaker terms, and removing some murky language about honoring Fedora's licenses. He also disclosed that he used AI tools extensively in the revision process.
The shorter policy draft says that contributors should disclose the use of AI assistance; non-disclosure "should be exceptional and justifiable". AI must not be used to make final determinations about contributions or community standing, though the policy does not prohibit automated tooling for pre-screening tasks like checking packages for common errors. User-facing features, per the policy, must be opt-in and require explicit user consent before being activated. The shorter policy retains the prohibition on disruptive or aggressive scraping, and leaves it to the Fedora Infrastructure team to grant data access.
Daniel P. Berrangé questioned whether parts of the policy were necessary at all, or if they were specific to AI tools. In particular, he noted that questions around sending user data to remote services had come up well before AI was involved. He suggested that Fedora should have a general policy on how it evaluates and approves tools that process user data, "which should be considered for any tool whether using AI or not". An AI policy, he said, should not be a stand-in for scenarios where Fedora has been missing a satisfactory policy until now.
Prohibition is impossible
The revised draft, and the policy's generally favorable stance toward generative-AI tools, will not please people who are against the use of generative AI altogether. However, it does have the virtue of being more practical to enforce than a flat-out prohibition on generative-AI tools.
Spaleta said that a strict prohibition against AI tools would not stop people from using AI tools; it would "only serve to keep people from talking about how they use it". Prohibition is essentially unenforceable and "only reads as a protest statement". He also said that there is "an entire ecosystem of open source LM/LLM work that this community appears to be completely disconnected from"; he did not want to "purity test" their work, but to engage with them about ethical issues and try to address them.
Next
There appears to be a growing tension between what Red Hat and IBM would like to see from Fedora and what its users and community contributors want from the project. Red Hat and IBM have already come down in favor of AI as part of their product strategies; the only real questions are what to develop and offer to customers or partners.
The Fedora community, on the other hand, has quite a few people who feel strongly against AI technologies for various ethical, practical, and social reasons. The results, so far, of turning people loose with generative AI tools on unsuspecting open-source projects have not been universally positive. People join communities to collaborate with other people, not to sift through the output of large language models. It is possible that Red Hat will persuade Fedora to formally endorse a policy of accepting AI-assisted content, but it may be at the expense of users and contributors.
The discussion continues. Fedora's change policy requires a minimum two-week period for discussion of such policies before the council can vote. A policy change needs to be passed with the "full consensus" model, meaning that it must have at least three of the eight current council members voting in favor of the policy and no votes against. At the moment, the council could vote on a policy as soon as October 9. What the final policy looks like, if one is accepted at all, and how it is received by the larger Fedora community remains to be seen.
Linting Rust code in the kernel
Klint is a Rust compiler extension developed by Gary Guo to run some kernel-specific lint rules, which may also be useful for embedded system development. He spoke about his recent work on the project at Kangrejos 2025. The next day, Alejandra González led a discussion about Rust's normal linter, Clippy. The two tools offer complementary approaches to analyzing Rust kernel code, although both need some additional direction and support from kernel developers to reach their full potential.
Klint
Klint was started in 2021 to find places where Rust code in the kernel was ignoring allocation errors. That is no longer necessary — the Rust for Linux project ended up rewriting the allocation interfaces to make ignoring allocation failures more difficult — but klint still has a few other useful checks. Mainly, klint is used to check that code does not sleep while holding a spinlock.
The last two years have mostly been maintenance, and not much new development, Guo said. But he did still have some updates to share. The simplest change is adding a shorthand for common klint annotations:
#[klint::preempt_count(expect = 1..)]
// can now be written
#[klint::atomic_context_only]
Other improvements are tied to the way that klint works. Clippy mainly focuses on "local" lint rules: suggestions that usually only involve a handful of lines of code, often contained in a single function. In contrast, klint focuses on kernel-specific properties of the entire program. To manage that, klint hooks into the Rust compiler's unstable internal APIs to access the mid-level intermediate representation (MIR).
This means that some of the improvements to klint are really just improvements to the Rust compiler's MIR optimizations. For example, klint now understands that the following function is actually permitted, where it couldn't before:
#[klint::atomic_context]
fn foo(x: Option<SleepOnDrop>) -> Option<SleepOnDrop> {
    if let Some(v) = x {
        Some(v)
    } else {
        None
    }
}
The SleepOnDrop type is a hypothetical example type that must sleep in order to release resources when it is freed, which means that it cannot be freed in an atomic context. Option is Rust's built-in way to indicate something that may be null, so a value of type Option<SleepOnDrop> is either None (in which case it does not actually contain a value of type SleepOnDrop), or it is Some(v). The if let syntax matches x against the pattern Some(v), binding the name v to the SleepOnDrop value inside x whenever one is present. The else clause applies when x is None. Put together, the function is just an elaborate way to write the identity function: it returns whatever argument is passed to it.
The tricky part comes from Rust's semantics around automatically cleaning up variables when they go out of scope. If a value is pattern-matched (as in the first branch), the responsibility for freeing the underlying memory passes on to whatever part of the program takes ownership of the pieces. In this case, the Option's tag (Some or None) gets implicitly dropped (which does nothing, because the tag is just a number), and the value gets handed to the code inside the if-let block, which re-wraps it in an Option and returns it to the caller. At no point is the SleepOnDrop value dropped, so there is no sleep. In the else branch, on the other hand, x hasn't been pattern-matched, and it isn't returned to the caller or stored anywhere, so Rust requires it to be dropped. This calls the drop() method for Option<SleepOnDrop> ... but because x is None, there is no actual SleepOnDrop value to be dropped, so it still doesn't sleep.
To a human programmer, it is obvious that no sleep occurs, and that the whole function ought to be optimized into "return x;". The Rust compiler does actually optimize this whole thing into "return x;", but it does so at a later phase of the optimization pipeline than klint runs. Previously, the compiler did not provide enough information to klint to see that this optimization would definitely occur, so as far as klint was concerned, the call to drop() in the else branch might have dropped a value of type SleepOnDrop, and therefore slept in an atomic context. Recently, the compiler started providing enough information for klint to make the correct inference here, although there are some closely related examples that still pose problems.
Guo presented another example of some code with moderately complex control flow where a human programmer could easily see that a lock was unlocked on every code path, but klint's automatic analysis wasn't able to make that same determination.
Despite these limitations, klint does find real bugs. Unfortunately, it also sometimes finds a "not a bug", Guo said. In particular, it highlighted a place in Android's Rust Binder drivers where the code might have slept in an atomic context if a reference count hit zero, except that the code path was unreachable because the function in question was only called while other references to the resource were held. So, the code wasn't technically capable of misbehaving, but it was still something that could be cleaned up a bit, Guo explained.
Another new klint feature is better errors for the kernel's build_assert!() macro. Rust code in the kernel can use a few different kinds of asserts. static_assert!() is similar to C11's static assertions, and triggers an error at compile time. assert!() does the same thing at run time. But occasionally there are conditions that can only be checked after the code has already been monomorphized for various reasons. Rather than make those run-time checks, kernel developers can use build_assert!() to make those asserts fail during linking if they are violated.
The way build_assert!() works is by conditionally referencing an undefined symbol. If the compiler's optimizer can't prove that the condition is false, the reference to the undefined symbol will remain in the generated object code, and the linker will complain when producing the final binary. Since this relies on the optimizer, it can sometimes produce false positives; but it may still be better than a run-time check in a hot loop. The error message produced by the linker is somewhat cryptic; klint now supports using DWARF debug information to provide an actual stack trace when a call to build_assert!() fails.
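The same trick is long established in C; as a loose illustration of the mechanism described above (using hypothetical names, not the kernel's actual build_assert!() implementation), a build-time assertion can be written by calling a function that is declared but never defined:

/*
 * Rough C sketch of the linker-based assertion technique; the names here
 * are invented for illustration and are not from the kernel.
 */
extern void __hypothetical_build_assert_failed(void);  /* never defined anywhere */

#define HYPOTHETICAL_BUILD_ASSERT(cond) \
    do { \
        if (!(cond)) \
            __hypothetical_build_assert_failed(); \
    } while (0)

If the optimizer can prove the condition true, the call (and the undefined reference with it) disappears; otherwise the reference survives into the object file and the link fails, which is why the result depends on how well the optimizer does its job.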
Andreas Hindborg asked how much time this new feature adds to the build process. Guo explained that it was quite minimal, especially because it was only triggered when the build failed. Tyler Mandry asked whether Guo was concerned with the brittle nature of build_assert!(). Guo didn't really see a way around it; he said that if it did start breaking consistently, the project would likely file a bug against LLVM. Despite that, he did say to prefer static_assert!() to build_assert!() where possible.
Benno Lossin asked about the possibility of integrating klint into the build system, to provide the better error messages for all Rust for Linux programmers. "I definitely want to make that happen," Guo agreed. There are some problems with doing so, though: klint depends heavily on the internal details of the Rust compiler, which means that it really needs to be developed out of tree to keep up.
Overall, people were generally approving of the improvements to klint, small as they may be. Unsurprisingly, the kinds of people who become involved in Rust for Linux are excited about the possibilities of having the computer validate the correctness of their kernel code.
Clippy
Clippy is the more typical way to find potential problems with Rust code. While originally designed for user-space Rust code, many of its lint rules apply to kernel code as well. González is a member of the Rust project's Clippy team, currently focusing on improving the performance of the program. She opened her session with a quick update on current work, before asking a number of questions about what the attending kernel developers wanted to see from Clippy in the future.
Clippy can be configured with a clippy.toml configuration file; that feature has been unstable for several years, to permit experimentation with the format of different options, but in practice the configuration file format hasn't changed much in that time. The project is working on stabilizing the format so that the Rust for Linux project can make use of it without worry.
[Alejandra González]
That's part of a general push to prioritize work that benefits Rust for Linux, González said. Rust's contributors care deeply about its success, and if Rust for Linux succeeds then it shows that Rust can be used in existing complex, low-level C programs — so, transitively, the people working on Clippy want Rust for Linux to succeed too.
González's work on making Clippy faster doesn't relate directly to that, but the amount of Rust code in the kernel is growing. Miguel Ojeda, one of the conference organizers, had previously shared a graph showing an exponential increase in the amount of Rust code in the kernel with a doubling period of approximately 18 months. If that keeps up, Clippy performance will be important, so González is trying to stay ahead of the problem. Currently, Clippy is about 40%-60% faster than this time last year, she said. Her first question for the attendees was whether Clippy performance was currently a problem for them.
Alice Ryhl said that she develops with Clippy enabled all of the time, and she hasn't had any problems with it so far. Performance is important to her since she runs Clippy constantly, but probably not worth prioritizing over other things. Ojeda agreed, saying that Linus Torvalds cares a lot about not having false positives in kernel linting tools; since Ojeda enforces Clippy-clean builds in stable kernels, focusing on ensuring that Clippy remains completely reliable is more important.
He also asked whether González might want him to turn on Clippy-warnings-as-errors in the Rust continuous-integration infrastructure (which already tests kernel builds) to catch behavior changes. González agreed that would probably be a good idea.
Mandry asked what Clippy lint rules were enabled in the kernel. The answer is that the kernel mostly uses Clippy's defaults, with a tweak to recognize the kernel's dbg!() macro, but it additionally enables rules related to safety and documentation comments, among others. [Thanks to Ojeda for the correction.] That is a reasonable choice because the default Clippy lint rules are usually the most sensible and generally do not have false positives. González asked whether the kernel could benefit from extending some particular category of lint rules. Daniel Almeida asked for a check that could warn when code referred to a type in an unnecessarily verbose way. For example, if a function is already imported elsewhere in the module, he would want to be warned about places that refer to it by its full import path. That's a kind of comment that he often runs into in reviews of his own code.
González agreed that Clippy could do that. Ojeda and Lossin briefly discussed whether that request was already covered by one of the feature requests that the Rust for Linux project had filed with the Clippy team; the eventual conclusion was that some big requests should be broken up into smaller pieces so that the volunteers who contribute to Clippy find them more approachable. With that change, Almeida's request may soon be granted.
González asked whether there were features in other linters for other languages that people might like to see integrated into Clippy. "Oof. I could come up with things, yeah," Ojeda replied, before demurring due to lack of time. González asked anyone with an idea to send her an email. Her next question was about whether anyone was writing tooling that depended on the format of Clippy's command-line interface output; the general answer was no — and that if the Clippy developers wanted to improve an error message, they should definitely do that rather than trying to keep the format exactly the same. Similarly, the assembled developers were not interested in reducing Clippy's verbosity. Since the build is usually Clippy-clean, it's better to have a long, explanatory error than to make the developer go hunting down auxiliary information.
González finished up by once again asking people to reach out to her, and reaffirming her personal commitment to supporting Rust for Linux's use of Clippy.
Jumping into openSUSE Leap 16
The openSUSE project is nearing the release of Leap 16, its first major release since openSUSE Leap 15 in May 2018. This release brings some changes to the core of the distribution aside from the usual software upgrades; YaST has been retired, SELinux has replaced AppArmor as the default mandatory access control (MAC) system, and more. If all goes according to plan, Leap 16 final should be released in early October, with planned support through 2031.
A lot has happened behind the scenes at SUSE since the last major Leap release: the company was sold by Micro Focus to EQT Partners in 2018, acquired Kubernetes management company Rancher Labs in 2020, went public in 2021, and then was taken private again in 2023. Through all that, the folks making SUSE have tried to continue business as usual to keep developing all of the offerings from SUSE and openSUSE.
Leap, tumble, or roll slowly
The openSUSE project offers various editions that have different update cycles, which may be confusing to folks who are thinking of trying the distribution for the first time. OpenSUSE Leap is a traditional Linux distribution based on the source from SUSE Linux Enterprise (SLE), so it conveys most of the stability and predictability that users might want from an "enterprise" version without the costs—or support. It also has had the pace of an enterprise version; major releases generally come every three to four years, and Leap has once-yearly minor releases.
The release cycle for Leap may also be a bit confusing; a minor release may contain breaking changes and major upgrades that one might not expect in a long-term-support release. For example, there may be major updates to desktop environments between minor releases; 15.4 and 15.5 included GNOME 41, while 15.6 jumped ahead to GNOME 45. Leap 15.5 dropped Python 2 support entirely, whereas other long-term-support distributions have continued to keep up Python 2 for the lifetime of the release. However, the project has avoided doing other major breaking changes, such as dropping YaST, as part of minor releases. If there's a document that spells out the expected lifecycle of various components, I haven't located it.
The project recommends Leap for conservative users who prioritize a working system over having the latest software. More adventurous, or impatient, users may want to opt for one of the openSUSE rolling releases such as Tumbleweed; it pushes out updates continuously from the rolling development repository Factory after the packages have passed automated testing. Slowroll is also a rolling-release distribution that aims to provide more stability than Tumbleweed by sending out monthly updates without the longer update cycle of Leap. The project also has image-based releases for the desktop; Aeon features the GNOME desktop, and Kalpa offers KDE. LWN covered Aeon in June 2024.
Installing Leap
Leap is available for x86_64, Arm 64-bit (aarch64), PowerPC (ppc64le), and s390x systems. The project offers two image types: an offline installer image with a large package set and a network installer image that requires a network connection to fetch software outside of the base set of packages.
The new Agama installer is easy to use and should be approachable enough for both new and experienced users. The installer allows users to move back and forth between steps easily; one can, for example, move from the software selection step back to choosing a hostname, or configuring networking. It is even possible to start over and select a different version of openSUSE entirely. The Agama installer images for openSUSE Leap 16 also give users the option of Leap Micro 6.2, a version of Leap designed to run workloads in containers or virtual machines.
Agama has reasonable defaults for automatic disk partitioning, but custom partitioning is harder than it should be. For example, it is not immediately obvious how to resize partitions, or switch from Btrfs to another type of filesystem, such as XFS. Those options are present, but difficult to find; they are located in the drop-down "Details" menu, instead of being available through the "Change" menu where one would expect to find them.
During installation, users can pick from a subset of available software, such as the preferred desktop environment or software to set up a KVM virtualization host, mail server, and so forth. The desktop options are GNOME 48, KDE Plasma 6.4.2, and Xfce 4.20; the installer offers to set up Xfce with a still-experimental Wayland session rather than an X11 session. The packages for an X11 Xfce session can be added after installation, but not before. That seems like an odd choice since the Wayland session support for Xfce is known to be buggy and incomplete. Indeed, it was a bit of a hassle to install the X11 support in a virtual machine while using Xfce's Wayland session; there were quite a few glitches in drawing the windows, and input was so laggy as to be almost unusable. The Xfce X11 session was fine.
The big three desktops are not the only desktop environments, window managers, or Wayland compositors for Leap 16; Cinnamon, LXQt, MATE, Sway, and others are available too. But it is not possible to select any of those options until after installation. I was pleased to see that Leap 16 had the niri Wayland compositor packaged, but less so after I noticed it was a relatively old version from September 2024.
AppArmor has been the default security module for openSUSE for many years, but SUSE's Cathy Hu opened a discussion about switching to SELinux last year. The response was generally positive, though a few people expressed the hope that the project would continue to offer AppArmor as an option. The good news is that AppArmor is still available, but the bad news is that it is not present as an option in the installer. Users can choose not to install SELinux, but AppArmor has to be set up after the install.
Overall, the switch to Agama seems to be a success, though there are areas (such as partitioning) that need a bit more work if they are to accommodate users who need more complex setups.
Leap 16, at least as of this writing, includes a mostly up-to-date selection of software. For example, it offers GNU Emacs 30.1, GCC 15.1.1, the GNU C Library (glibc) 2.40, Perl 5.42.0, Python 3.13.5, RPM 4.20.1, Ruby 3.4.3, and Vim 9.1, which are all relatively recent releases if not the latest from upstream. The kernel is reported as 6.12.0, but it includes quite a few backports from later kernels and assorted patches from SUSE as well. The source, including patches, is available on GitHub.
Software management
The first order of business after installing a new distribution is to run updates and install the software I use day to day. With YaST gone, SUSE now offers Myrlyn as its graphical software package and repository manager. It started as a SUSE Hack Week project named YQPkg in November 2024; it is built with Qt6 and uses libzypp as its backend. While the GNOME Software and KDE Discover graphical-software-management utilities are slick and user-friendly, they fall down when it comes to installing software outside of the desktop applications, such as development tools or system applications.
Myrlyn provides a lot of the functionality users can get from managing software directly with zypper or rpm, without requiring them to learn the command-line incantations for doing so. For example, viewing a package's dependencies or the files it contains is just a matter of point-and-click in Myrlyn. Many folks will no doubt find it more convenient to use a GUI tool to search available packages and skim the list for desired software than needing to use "zypper search" in the terminal.
I found myself wishing that Fedora had a similar tool. It's not overly difficult, for example, to work with Fedora's package groups using "dnf group" commands, but Myrlyn provides a much better interface for doing the same thing (though openSUSE uses the term "patterns" rather than "groups").
Myrlyn has made quite a bit of progress in the short time since its creation; I've found it to be stable and quite usable. It does not offer the same "app store" experience as Discover or Software, but it is a much better tool for managing all of the software available for Leap—with the exception of Flatpaks.
In early discussions around the development of Leap 16, there were concerns that important software like Firefox and Thunderbird would only be available as Flatpaks. That has proven not to be the case; Firefox is installed as a regular package from the openSUSE repositories. Thunderbird is not installed by default, but it is available as a regular package; it can also be installed from Flathub, for those who wish, after taking the extra steps to install Flatpak support and enable the Flathub repository. Users can manage Flatpaks using the flatpak command-line utility or through the desktop software-management applications.
Much of YaST has been removed for Leap 16. It is still possible to install YaST and use its terminal user interface, but the graphical support for it is gone. The Cockpit web-based-administration tool is the heir apparent for YaST. It may not be a one-to-one replacement for YaST, but it should help to fill the gap. Like YaST, Cockpit is modular; it has core functionality like user management, a web console for command-line access, and so forth, plus add-on applications for using NetworkManager, managing storage, working with virtual machines, and more.
Leap 16 has Cockpit version 340, which was released in June, but Cockpit development moves pretty quickly; the project just released version 347 on September 17. The openSUSE project has packaged the official and SUSE-developed applications for Cockpit, except for cockpit-ostree and Cockpit Files. Since Leap does not use OSTree, it is not a problem that openSUSE hasn't packaged that application, but not including the Files application for managing files on a remote host is a bit of a miss. It was first released about a week after Cockpit 340, so it's a little surprising that it's missing.
I spent a roughly equal amount of time using openSUSE's GNOME and KDE desktops; both desktops seem to hew pretty closely to the project defaults, and there are no big surprises for GNOME or KDE fans coming to openSUSE from another Linux distribution. Without the openSUSE colors and branding, which I found pleasant, it would be difficult to tell at first glance if one were using Debian, Fedora, or openSUSE.
Leap 16 sets up Snapper to manage snapshots of the root (/) filesystem. Snapshots of the root filesystem are taken automatically when using Zypper (or Myrlyn) to install, upgrade, or remove software. This can be handy if one needs to revert a package upgrade or removal that has broken networking or another crucial component. What Leap does not have, with the removal of YaST, is a GUI for managing snapshots or rolling back to previous snapshots.
Out of the box, Snapper is not set up to take snapshots of Btrfs subvolumes like /home, /opt, or /srv. Users can add /home or other subvolumes manually, however. There is a good tutorial for working with Snapper, but I wonder how many users might not even know that Leap has the capability under the hood since it's not particularly discoverable.
Support
Traditionally, the cadence for openSUSE Leap was to release a minor release every year with 18 months of support thereafter; this was to give users a short overlap between the last version and the new version. So, when 15.4 came out, users could expect 15.5 in a year, and had six months before they would need to upgrade from 15.3 to continue getting updates.
The plan for Leap 16 is to continue the annual cadence for minor releases but to give two years of support for each point release. Users who start with Leap 16.0 on day one will have two years before they need to upgrade—though they can, of course, move to the new release sooner. Given the way that some components, such as desktops, are upgraded it may be more accurate to think of each point release of Leap 16 as having its own two-year lifecycle, rather than thinking of Leap 16 as having a six-year lifecycle, even though the project says it is supported through 2031.
With the pending release of Leap 16, there are no more minor releases planned for the Leap 15 series. The end of life for 15.6 is scheduled for April 30, 2026. Assuming that the schedule on openSUSE's roadmap holds, that gives users on 15.6 about six months before updates end. The project has a system upgrade guide and migration tool to help people move from 15 to 16. Overall, if one is in the market for a general-purpose Linux distribution with long-term support, Leap 16 seems like a good option.
The phaseout of the mmap() file operation
The file_operations structure in the kernel is a set of function pointers implementing, as the name would suggest, operations on files. A subsystem that manages objects which can be represented by a file descriptor will provide a file_operations structure providing implementations of the various operations that a user of the file descriptor may want to carry out. The mmap() method, in particular, is invoked when user space calls the mmap() system call to map the object behind a file descriptor into its address space. That method, though, is currently on its way out in a multi-release process that started in 6.17.
The file_operations structure was introduced in the 0.95 release in March 1992; at that point it supported the basic read() and write() operations and not much else. Support for mmap() first appeared in 0.98.2 later that year, though it took a while before it actually worked as expected. The interface has evolved a bit over time, of course; in current kernels, its prototype is:
int (*mmap) (struct file *, struct vm_area_struct *);
The vm_area_struct structure (usually referred to as a VMA) describes a range of a process's address space; in this case, it provides mmap() with information about the offset within the file that is to be mapped, how much is to be mapped, the intended page protections, and the address range where the mapping will be. The driver implementing mmap() is expected to do whatever setup is necessary to make the right thing happen when user space accesses memory within that range. There are hundreds of mmap() implementations within the kernel, some of which are quite complex.
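For illustration, a minimal mmap() implementation for a hypothetical driver that exposes a single physically contiguous buffer might look like the following sketch (the mydrv_* names and fields are invented for the example):

/* Hypothetical example; mydrv_data, buf_phys, and buf_size are made up. */
static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
    struct mydrv_data *data = file->private_data;
    unsigned long size = vma->vm_end - vma->vm_start;

    if (size > data->buf_size)
        return -EINVAL;

    /* The driver sees the VMA directly and could modify it at will. */
    return remap_pfn_range(vma, vma->vm_start,
                           data->buf_phys >> PAGE_SHIFT,
                           size, vma->vm_page_prot);
}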
As described in this 6.17 commit by Lorenzo Stoakes, though, there are some significant problems with this API. The mmap() method is invoked after the memory-management layer has done much of its setup for the new mapping. If the operation fails at the driver layer, all of that setup must be unwound, which can be a complicated task. The real problem, though, is that mmap() gives the driver direct access to the VMA, which is one of the core memory-management data structures. The driver can make changes to the VMA, and many do with gusto. Those changes can force the memory-management layer to redo some of its setup; worse, they can introduce bugs or create other types of unpleasant surprises.
Over the years, a number of important memory-management structures have been globally exposed in this way; more recently, developers have been working to make more of those structures private to the memory-management code. One step in that direction is to retire the mmap() method in favor of a new API that more clearly constrains what code outside of the memory-management layer can do.
Replacing mmap()
This work began with the introduction of the new mmap_prepare() callback in 6.17:
int (*mmap_prepare)(struct vm_area_desc *);
That method receives a pointer to the new vm_area_desc structure:
struct vm_area_desc {
    /* Immutable state. */
    struct mm_struct *mm;
    unsigned long start;
    unsigned long end;

    /* Mutable fields. Populated with initial state. */
    pgoff_t pgoff;
    struct file *file;
    vm_flags_t vm_flags;
    pgprot_t page_prot;

    /* Write-only fields. */
    const struct vm_operations_struct *vm_ops;
    void *private_data;
};
This new method is intended to eventually replace mmap(); a driver cannot provide both mmap_prepare() and mmap() in the same file_operations structure. mmap_prepare() is called much earlier in the mapping process, before the VMA itself is set up. If it returns a failure status, there is a lot less work to clean up within the memory-management code. The vm_area_desc structure is intended to provide the driver with only the information it needs to set up the mapping, and to allow it to specify specific VMA changes to be made once the VMA itself is set up.
Thus, for example, the driver can modify pgoff (the offset within the file where the mapping starts) if needed to meet alignment or other constraints. Various flags and the page protections can be changed, and the driver can provide a vm_operations_struct pointer with callbacks to handle page faults, protection changes, and other operations on the mapping. If the mapping succeeds, the memory-management layer will copy information from this structure into the VMA while keeping a grip on the overall contents of that VMA.
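A conversion of the hypothetical driver shown earlier might look something like this sketch; the vm_area_desc fields are those shown above, while the mydrv_* names remain invented:

/* Hypothetical conversion sketch; mydrv_data and mydrv_vm_ops are made up. */
static int mydrv_mmap_prepare(struct vm_area_desc *desc)
{
    struct mydrv_data *data = desc->file->private_data;

    /* Fail early, before the VMA has been created. */
    if (desc->end - desc->start > data->buf_size)
        return -EINVAL;

    /* Request changes; the MM layer applies them once the VMA exists. */
    desc->vm_flags |= VM_DONTEXPAND;
    desc->vm_ops = &mydrv_vm_ops;       /* fault handler populates pages later */
    desc->private_data = data;
    return 0;
}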
The next step
That was the state of the API as merged for the 6.17 release; it was enough to support the conversion of a number of drivers over from mmap() and begin the long process of deprecating that interface. As noted above, though, some drivers do complex things in their mmap() implementations, and this API is not sufficient for their needs. Thus, Stoakes has been working on an expansion of mmap_prepare() for a wider range of use cases.
The new capabilities are based around yet another new structure, which is added to struct vm_area_desc (as a field named action):
struct mmap_action {
    union {
        /* Remap range. */
        struct {
            unsigned long start;
            unsigned long start_pfn;
            unsigned long size;
            pgprot_t pgprot;
        } remap;
    };
    enum mmap_action_type type;

    int (*success_hook)(const struct vm_area_struct *vma);
    int (*error_hook)(int err);
};
This structure tells the memory-management core what the driver would like to see happen after the VMA has been set up and is valid. The actions defined in this patch set are MMAP_NOTHING (do nothing), MMAP_REMAP_PFN, which causes the address space covered by the VMA to be mapped to a range of page-frame numbers beginning at start_pfn, and MMAP_IO_REMAP_PFN, which performs a similar remapping into device-hosted memory. The driver could perform this remapping itself, one page at a time, in its fault() vm_operations_struct method, but it is much more efficient to just do the whole range at once.
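Under the proposed extension, the hypothetical driver above could instead ask the memory-management core to perform the whole-range remapping on its behalf; this sketch uses the field and constant names from the patch set, which are still subject to change:

/* Sketch only; the mmap_action API is still under review and may change. */
static int mydrv_mmap_prepare(struct vm_area_desc *desc)
{
    struct mydrv_data *data = desc->file->private_data;

    desc->action.type = MMAP_REMAP_PFN;
    desc->action.remap.start = desc->start;
    desc->action.remap.start_pfn = data->buf_phys >> PAGE_SHIFT;
    desc->action.remap.size = desc->end - desc->start;
    desc->action.remap.pgprot = desc->page_prot;
    return 0;
}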
There are also two callbacks in that structure. The success_hook() callback will be called upon the successful completion of the requested action. That callback is passed a pointer to the VMA, but it is a pointer to a const structure, so the callback should not be able to make any changes there. This callback is used in the /dev/zero driver to perform a "very unique and rather concerning" (according to Stoakes) change that driver makes to the mapping. The error_hook() is called if things go wrong; it can provide a different error code to be returned as a way of filtering errors that should not make it back to user space.
This series is in its fourth revision as of this writing; it still seems to be going through a relatively high rate of change in response to review comments. Whether it will settle in time for the 6.18 merge window is unclear at this point, so the work to remove the mmap() callback may have to wait another cycle before proceeding. Even after that, though, there will still be those hundreds of mmap() implementations to convert, so this task will not be complete for some time yet.
Development statistics for 6.17
The 6.17 development cycle ended on September 28 with the release of the 6.17 kernel. This cycle brought in 13,089 non-merge changesets, a slowdown from its predecessor but still within the normal bounds for recent kernels. The time has come for a look at where those changes came from, with a bit of a side trip into bug statistics.
Work on 6.17 was contributed by 2,038 developers, of whom 298 made their first kernel contribution during this cycle. The most active contributors this time around were:
Most active 6.17 developers
By changesets:
    Bartosz Golaszewski     207   1.6%
    Sean Christopherson     168   1.3%
    Takashi Iwai            162   1.2%
    Al Viro                 141   1.1%
    Krzysztof Kozlowski     138   1.1%
    Jakub Kicinski          135   1.0%
    Eric Biggers            127   1.0%
    Ian Rogers              121   0.9%
    Rob Herring             104   0.8%
    David Lechner            92   0.7%
    Filipe Manana            90   0.7%
    Anusha Srivatsa          90   0.7%
    Jani Nikula              88   0.7%
    SeongJae Park            87   0.7%
    Masahiro Yamada          86   0.7%
    Matthew Wilcox           84   0.6%
    Alex Deucher             83   0.6%
    Dmitry Baryshkov         83   0.6%
    Lad Prabhakar            82   0.6%
    Binbin Zhou              79   0.6%
By changed lines:
    Dennis Dalessandro    48357   7.8%
    Takashi Iwai          20562   3.3%
    Bingbu Cao            16171   2.6%
    Luca Weiss            12815   2.1%
    Eric Biggers          12775   2.1%
    Rob Clark              8666   1.4%
    Rob Herring            8095   1.3%
    Ian Rogers             7008   1.1%
    Liu Ying               6803   1.1%
    Jakub Kicinski         6489   1.0%
    Jani Nikula            5739   0.9%
    Ivan Vecera            5261   0.8%
    Lorenzo Pieralisi      5158   0.8%
    Svyatoslav Ryhel       5115   0.8%
    Frank Li               5074   0.8%
    Dmitry Baryshkov       4724   0.8%
    Sean Christopherson    4554   0.7%
    Andrea della Porta     4531   0.7%
    Cathy Xu               4378   0.7%
    Thomas Zimmermann      4240   0.7%
The top contributor of changesets this time around was Bartosz Golaszewski, who carried out some extensive refactoring in the GPIO and pin-control driver subsystems. Sean Christopherson, as always, was busy throughout the KVM subsystem. Takashi Iwai, beyond maintaining the sound subsystem, eliminated large numbers of strcpy() calls there. Al Viro made extensive changes in the virtual filesystem layer (and beyond), and Krzysztof Kozlowski worked mostly with devicetree bindings and system-on-chip drivers.
Dennis Dalessandro only contributed two commits to 6.17, but the one removing the old "qib" Infiniband driver put him at the top of the "lines changed" column. Takashi Iwai reorganized some codec drivers. Bingbu Cao added the IPU7 input system driver to the staging tree, Luca Weiss added a number of drivers for the Milos system-on-chip, and Eric Biggers added a set of tests for cryptographic functions.
In this cycle, 8.1% of the commits carried Tested-by tags, while 53.6% had Reviewed-by tags. The top testers and reviewers for 6.17 were:
Test and review credits in 6.17
Tested-by:
    Daniel Wheeler          96   7.8%
    Randy Dunlap            46   3.8%
    Rinitha S               45   3.7%
    Antonino Maniscalco     42   3.4%
    Tomi Valkeinen          34   2.8%
    Neil Armstrong          29   2.4%
    Vikash Garodia          26   2.1%
    K Prateek Nayak         26   2.1%
    Sairaj Kodilkar         23   1.9%
    Hiago De Franco         21   1.7%
Reviewed-by:
    Simon Horman           237   2.5%
    Dmitry Baryshkov       180   1.9%
    Geert Uytterhoeven     154   1.6%
    Neil Armstrong         147   1.6%
    Andy Shevchenko        142   1.5%
    Krzysztof Kozlowski    140   1.5%
    David Sterba           122   1.3%
    Ilpo Järvinen          122   1.3%
    Laurent Pinchart       117   1.3%
    Andrew Lunn            107   1.1%
Daniel Wheeler remains unapproachable as the top tester of commits going into the kernel. On the review side, Simon Horman managed to review nearly four patches for each day of this development cycle; 31 developers reviewed at least one patch per day during this time.
Employer information
There were 209 employers identified as having supported work on the 6.17 kernel; the most active of those were:
Most active 6.17 employers
By changesets:
    Intel                  1313   10.0%
    (Unknown)              1118    8.5%
                           1102    8.4%
    Red Hat                 953    7.3%
    Linaro                  679    5.2%
    SUSE                    612    4.7%
    Meta                    517    3.9%
    AMD                     515    3.9%
    Qualcomm                420    3.2%
    NVIDIA                  416    3.2%
    (None)                  408    3.1%
    Renesas Electronics     332    2.5%
    Oracle                  293    2.2%
    Huawei Technologies     273    2.1%
    Arm                     265    2.0%
    NXP Semiconductors      225    1.7%
    Linutronix              192    1.5%
    IBM                     176    1.3%
    (Consultant)            172    1.3%
    BayLibre                151    1.2%
By lines changed:
  Intel                 69817  11.3%
  Cornelis Networks     48357   7.8%
                        47540   7.7%
  (Unknown)             45034   7.3%
  SUSE                  36005   5.8%
  Red Hat               30886   5.0%
  Qualcomm              29443   4.8%
  Meta                  23930   3.9%
  NVIDIA                22106   3.6%
  Linaro                18498   3.0%
  AMD                   16085   2.6%
  NXP Semiconductors    15617   2.5%
  (None)                13361   2.2%
  Renesas Electronics   13205   2.1%
  Fairphone             12815   2.1%
  Arm                   12577   2.0%
  Huawei Technologies   10675   1.7%
  IBM                   10577   1.7%
  Analog Devices         9147   1.5%
  Microsoft              6865   1.1%
These results change little from one release to the next. Cornelis Networks is an unusual name to see here, though; sharp-eyed readers will notice that the number of lines changed is exactly equal to that of Dennis Dalessandro, who was mentioned above.
Bugs introduced and fixed
When kernel developers fix a bug, they normally include a Fixes tag that identifies the commit in which the bug was introduced. Among other things, that allows for various types of interesting analysis, including of bug lifetimes. The 6.17 kernel, for example, fixes 245 bugs that were introduced in 6.16, but also two that have been present since the beginning of the Git era in 2005 (subscribers can consult this KSDB page for lots of details on where the bugs fixed in 6.17 came from).
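For readers who want to play with this kind of analysis themselves, a minimal sketch follows; it assumes a kernel Git tree with the v6.16 and v6.17 tags available, and it simply maps each Fixes tag in the 6.17 commits to the first release tag containing the commit being fixed. It is an illustration of the idea only, not the tooling that actually produced the numbers here.

```python
#!/usr/bin/env python3
# Minimal sketch: for every Fixes: tag in the 6.17 changes, find the first
# release tag that contains the commit being fixed.  Run inside a kernel
# Git tree; this is not LWN's actual tooling, just an illustration.
import re
import subprocess
from collections import Counter

def git(*args):
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout

# Commit messages for everything new in 6.17, NUL-separated.
log = git("log", "--no-merges", "--format=%B%x00", "v6.16..v6.17")
fixes_re = re.compile(r"^Fixes:\s+([0-9a-f]{8,40})\b", re.M | re.I)

introduced_in = Counter()
for message in log.split("\0"):
    for sha in fixes_re.findall(message):
        try:
            # First release tag that contains the buggy commit.
            tag = git("describe", "--contains", "--match", "v*", sha).strip()
            introduced_in[tag.split("~")[0].split("^")[0]] += 1
        except subprocess.CalledProcessError:
            pass        # unknown or mistyped commit ID in the tag

for release, count in introduced_in.most_common(10):
    print(f"{release:15} {count}")
```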
One specific question that one might attempt to answer with this data is: are kernel developers fixing more bugs than they introduce (thus reducing total bugs) or not? We asked that question nearly three years ago, with a result that looked like this:
At that time, the plot appeared to show that, as of the 5.0 release, the number of bugs fixed exceeded the number introduced, but that result was never expected to match reality. As has often been shown, it takes a long time to find all of the bugs introduced in a release; in 2022, there had not been enough time to find all of the bugs that showed up in 5.0 (released in 2019).
Running the same analysis now produces this plot:
As one might expect, the 5.0 kernel is now shown to have introduced more bugs than it fixed; the apparent crossover point has moved to 5.8. With enough handwaving, one can come up with all kinds of conclusions from this shift. For example, there were 16 kernel releases made between those two plots, but the crossover point only moved by eight, suggesting that it is taking longer to find the requisite number of bugs in any given kernel release, despite the observable fact that we are fixing more bugs in each release over time. If that reasoning holds, there may eventually come a point where we can say with confidence that a given release has fixed more bugs than it introduced.
Another question one can ask is: which commits have required the most fixes over time — which were the buggiest commits ever made to the kernel? As of 6.17, the answer to that question is:
  Commit         Fixes  Description
  1da177e4c3f4     657  Linux-2.6.12-rc2
  dd08ebf6c352     159  drm/xe: Introduce a new DRM driver for Intel GPUs
  8700e3e7c485      79  Soft RoCE driver
  e126ba97dba9      75  mlx5: Add driver for Mellanox Connect-IB adapters
  9d71dd0c7009      58  can: add support of SAE J1939 protocol
  46a3df9f9718      54  net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support
  604326b41a6f      51  bpf, sockmap: convert to generic sk_msg interface
  d889913205cf      48  wifi: ath12k: driver for Qualcomm Wi-Fi 7 devices
  98686cd21624      46  wifi: mt76: mt7996: add driver for MediaTek Wi-Fi 7 (802.11be) devices
  d5c65159f289      46  ath11k: driver for Qualcomm IEEE 802.11ax devices
  e7096c131e51      46  net: WireGuard secure network tunnel
  1738cd3ed342      44  net: ena: Add a driver for Amazon Elastic Network Adapters (ENA)
  76ad4f0ee747      41  net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC
  e1eaea46bb40      39  tty: n_gsm line discipline
  1e51764a3c2a      38  UBIFS: add new flash file system
  1ac5a4047975      37  RDMA/bnxt_re: Add bnxt_re RoCE driver
  1c1008c793fa      35  net: bcmgenet: add main driver file
  7733f6c32e36      34  usb: cdns3: Add Cadence USB3 DRD Driver
  25fdd5933e4c      33  drm/msm: Add SDM845 DPU support
  54a611b60590      33  Maple Tree: add new data structure
  3d82904559f4      33  usb: cdnsp: cdns3 Add main part of Cadence USBSSP DRD Driver
  b48c24c2d710      32  RDMA/irdma: Implement device supported verb APIs
  c09440f7dcb3      31  macsec: introduce IEEE 802.1AE driver
  c0c050c58d84      31  bnxt_en: New Broadcom ethernet driver.
  7724105686e7      29  IB/hfi1: add driver files
  4c8ff7095bef      29  f2fs: support data compression
  6a98d71daea1      28  RDMA/rtrs: client: main functionality
  96518518cc41      27  netfilter: add nftables
  3f518509dedc      26  ethernet: Add new driver for Marvell Armada 375 network unit
  c948b5da6bbe      26  wifi: mt76: mt7925: add Mediatek Wi-Fi7 driver for mt7925 chips
  e2f34481b24d      26  cifsd: add server-side procedures for SMB3
  3c4d7559159b      26  tls: kernel TLS support
  a49d25364dfb      26  staging/atomisp: Add support for the Intel IPU v2
  d2ead1f360e8      25  net/mlx5e: Add kTLS TX HW offload support
  726b85487067      25  qla2xxx: Add framework for async fabric discovery
  119f5173628a      25  drm/mediatek: Add DRM Driver for Mediatek SoC MT8173.
  152d1faf1e2f      25  arm64: dts: qcom: add SC8280XP platform
  c156633f1353      25  Renesas Ethernet AVB driver proper
  44e694958b95      24  drm/xe/display: Implement display support
  a05829a7222e      24  cfg80211: avoid holding the RTNL when calling the driver
Commit 1da177e4c3f4 is the original commit that started the Git era, so any bugs seemingly introduced there could have come anytime in the first 14 years of the kernel's development history. As Andrew Morton recently observed: "we really blew it that time!". Perhaps most notable is the second-place commit, adding the xe graphics driver, which only landed in the kernel for the 6.8 release and has quickly accumulated Fixes tags. Beyond that, it remains true that many of the most-fixed commits in the kernel history come from the networking subsystem, for reasons that are far from clear.
As of this writing, there are just over 11,300 non-merge commits waiting in linux-next, which has not been updated for a few days. Those commits will soon spill into the mainline for the 6.18 release, starting the whole process over yet again. As always, LWN will be there keeping an eye on that release as it comes into shape; stay tuned.
Managing encrypted filesystems with dirlock
As with a mobile phone, a portable gaming device like the Steam Deck can contain lots of personal information that the owner would like to keep secret—especially given that such devices can do far more than gaming. Alberto Garcia worked with his colleagues at Igalia and people at Valve, the company behind the Steam gaming platform, to come up with a new tool to manage encrypted filesystems for SteamOS, which is a Linux distribution optimized for gaming. Garcia gave a talk about that tool, dirlock, at Open Source Summit Europe, which was held in Amsterdam in late August. In the talk, he looked at the design process for the encrypted-files feature, the alternatives considered, and why they made the choices they did.
Over a long career at Igalia, he has worked on many different projects, including GNOME, the Maemo and MeeGo mobile-Linux platforms, and more recently on QEMU. He is also a Debian developer; "I've been using Debian basically all of my life, but I'm also contributing to the project and I've been an active developer for many years". At the moment, he is working on SteamOS.
[Alberto Garcia]
He was quick to point out that dirlock is not a new encryption system; it is only meant to manage filesystems that are encrypted using existing tools. Steam Decks and similar devices are easy to misplace—or steal. Since the hard drive is not encrypted, whoever ends up with the device can read its contents. That may not sound all that problematic for a gaming handheld, but the devices are much more than that; they may have credentials for things other than just Steam accounts, for one thing. In addition, the devices have a desktop mode where various programs can be installed, including web browsers that may store even more personal information. Users have been requesting disk encryption for a long time, Garcia said.
From his slides, he showed the disk layout of the device. It is based around an A/B arrangement for the operating system partitions, which consists of two sets of read-only root partitions, boot partitions, and /var partitions. None of those are particularly sensitive; most of the data on them is downloaded to the device from the internet. The bulk of the disk is taken up with the /home partition, which is where all of the user's data is stored. That includes the games, but also configuration and other data that the user may want to keep private.
Currently, users do have an encryption option, but it is somewhat limited. SteamOS ships with the KDE Plasma desktop, so the Plasma Vault tool can be used to create encrypted directories. It is not a general-purpose solution, however, for encrypting everything in the user's home directory.
Goals
The goals of the project were focused on the needs of SteamOS, but "the idea is to make them general enough so they can be used in any Linux system or in other systems". The most important goal is that if the device is lost or stolen, the personal files on it should be unreadable; there are other scenarios, such as the so-called evil maid attack, that are important to guard against, but the main goal is to protect the personal data, he said. For that, the user's home directory should be encrypted, but it would be nice to be able to encrypt other directories too. The devices have removable media that can be used to store games and other data, so encrypting those would be useful, for example.
While SteamOS is currently single-user, support for multiple users with independent encryption keys is another goal for the tool. Access to the encrypted files must be authenticated somehow, with a PIN, password, or something else. But, since handheld gaming devices do not have a physical keyboard, the expectation is that users will have short, weak passwords or PINs. Having support for a hardware-backed mechanism of some sort may help mitigate that weakness.
These devices are already out there in the hands of users, so "it would be nice to have a way to enable encryption without having to reinstall the whole operating system from scratch". From a security point of view, doing it that way is not ideal, but the goal is to avoid requiring users to wipe their devices; the hope is to have a simple "encrypt data" button or command. Beyond that, the tool needs a D-Bus API. The underlying encryption should also have reasonable performance, "so the user can use the machine normally without noticing any regression in the performance".
There are three available encryption technologies that were considered. The first, stacked filesystem encryption, stores the data as regular files in the filesystem with encrypted contents and names. It is implemented in user space, which hurts performance; the Filesystem in Userspace (FUSE) mechanism is used to mount an encrypted filesystem that gives access to the data. Two examples of this type of encryption are gocryptfs and EncFS; the Plasma Vault tool uses the technique as well.
Another technology, block-device encryption, encrypts each individual block of a block device, such as a disk partition or a loopback-mounted file; it does not care what the contents of the block device are; normally that is a filesystem, but it does not have to be. The technique "offers the best confidentiality because what's inside is completely hidden"; attackers have no way to know how much data is stored there, just that it is less than the size of the device. In Linux, the most popular implementation is LUKS, which stores the encryption keys in a header on the block device.
The third option is native filesystem encryption, where files are encrypted by the kernel at the filesystem level. That allows filesystems to contain a mix of encrypted and unencrypted directories. The file names and contents are encrypted, but the metadata (e.g. sizes, permissions) of files is not protected. The kernel provides the fscrypt API to access the feature, but it must be implemented by individual filesystems; at the moment, ext4 and f2fs have support, and he believes it is in progress for Btrfs. All of the encryption keys for fscrypt must be managed by user space.
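As a rough illustration of what the kernel exposes, the sketch below checks whether a directory carries an fscrypt encryption policy by reading its inode flags; it assumes an x86-64 system (the ioctl number is architecture-specific) and a filesystem with encryption support. Managing the keys themselves, which is dirlock's job, is a separate problem from what this shows.

```python
#!/usr/bin/env python3
# Rough sketch: ask the kernel whether a path is fscrypt-encrypted by
# reading its inode flags.  The ioctl value below is the x86-64 encoding of
# FS_IOC_GETFLAGS; FS_ENCRYPT_FL comes from <linux/fs.h>.
import array
import fcntl
import os
import sys

FS_IOC_GETFLAGS = 0x80086601   # _IOR('f', 1, long) on x86-64
FS_ENCRYPT_FL   = 0x00000800   # inode has an fscrypt encryption policy

def is_encrypted(path):
    fd = os.open(path, os.O_RDONLY)
    try:
        flags = array.array("l", [0])
        fcntl.ioctl(fd, FS_IOC_GETFLAGS, flags, True)   # fills in the flags
        return bool(flags[0] & FS_ENCRYPT_FL)
    finally:
        os.close(fd)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        state = "encrypted" if is_encrypted(path) else "not encrypted"
        print(f"{path}: {state}")
```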
LUKS versus fscrypt
For SteamOS, the decision came down to either LUKS or fscrypt. LUKS has better confidentiality and works with hardware-backed mechanisms like the TPM and FIDO tokens, but it has some downsides as well. Normally, the LUKS partition needs to be unlocked early in the boot process, which may limit the input methods that can be used for authentication. There is no fine-grained control over what is encrypted and there is no way to encrypt an existing installation; it is meant to be used for a new filesystem on a block device.
"On the other hand, fscrypt makes it very easy to encrypt an existing
installation, because you can start from an existing filesystem and start
encrypting directories there.
" It also makes it easy to encrypt
other directories, for separate user accounts, for example, with different
keys. It integrates easily with PAM, which opens up lots of possibilities
for authentication mechanisms, and fscrypt directories can be unlocked
after booting, and even remotely via ssh. On the con side, the lack of
protection for the file metadata allows attackers to know or guess some
things about the files and directory structure; in addition, fscrypt does
not stop attackers from deleting files.
The team chose fscrypt as the better option for SteamOS. It is "more practical", with good confidentiality guarantees. It is flexible and "very easy to enable in existing system". Fscrypt offers good performance as well; in his tests, it performed a little better than LUKS, Garcia said.
But fscrypt is just a kernel API; SteamOS will need to handle the encryption keys. Two existing tools, the fscrypt command-line tool and systemd-homed, which are incompatible with each other, were considered. fscrypt, which is related to, but different from, the kernel API, is "the reference tool to manage encrypted directories"; it was developed in Go by the people working on the kernel API. It is simple to use and supports PAM, but it only allows passwords or raw binary keys and has no support for hardware-backed mechanisms. It also lacks a D-Bus API.
Systemd-homed is not really an encryption tool, or one for managing encrypted filesystems directly; it is for managing user accounts—and only those tied to humans, not system accounts. The goal is to separate the configuration of the accounts from the rest of the system in order to make it easier to move the accounts to other systems, he said. It has multiple storage backends, two of which are encrypted; one uses a LUKS loopback-mounted file and the other uses the deprecated v1 fscrypt API. Systemd-homed supports D-Bus, PAM, and FIDO tokens, but there is no TPM support. There are other downsides as well: it only handles encrypting the home directory, while the SteamOS developers want to be able to encrypt more than just that; it has its own user database, separate from /etc/passwd; and it uses ID-mapped mounts, which can conflict with other tools, such as Podman. Overall, systemd-homed was a strong contender, Garcia said, but the team decided to go in a different direction.
dirlock
Dirlock just "does encryption, authentication, and nothing else; it
doesn't touch anything else, it tries to be as least invasive as
possible
". It is "heavily inspired
" by fscrypt and
Garcia tried not to diverge from the choices made by the tool. Dirlock is
still under development, but it is usable at this point. PAM and FIDO
support are working, as is basic TPM support. Since users are expected to
have low-entropy PINs, the anti-hammering feature of the TPM is used to
protect against brute-force attacks. There is also a D-Bus API, but it is
in the prototype stage and not yet ready for widespread use.
Dirlock is open-source software, available under the three-clause BSD license. It was written from scratch in Rust, with the needs of SteamOS in mind, but it should work on any Linux system. It will be available in the upcoming SteamOS 3.8 release as an experimental feature; some users are testing it on pre-release versions of SteamOS, so the developers are already getting feedback on it.
A directory encrypted with fscrypt has an "encryption policy" associated with it; the policy is the master encryption key and several configuration parameters, including the encryption algorithm used. The master key is loaded into the kernel to unlock the directory, so that the files can be seen and accessed normally, and is removed to lock the directory. It is up to user space, dirlock in this case, to manage the master key and to keep it safe.
The master key is not used directly by dirlock, he said; instead, it is wrapped with intermediate keys called "protectors". There are protectors using passwords, FIDO2 keys, and others. That scheme has the advantage that "if the protector is compromised, because the password is lost or something, it can be deleted without exposing the master key and without having to re-encrypt other data". The design for key-handling in dirlock was taken from fscrypt, but the idea of using intermediate keys to protect the master key is much older and is also used in LUKS and BitLocker.
For dirlock, there may not just be a single master key because there may be more than one encrypted directory, so those keys can be protected in various ways. For example, two users can each have their encrypted home directory with protectors using their own password. In addition, a single FIDO2 protector can be used for both users' master keys, so it can decrypt either of the home directories. The users can change their passwords without affecting the ability of the FIDO2 protector to provide access to the directories.
Another scenario might be two users who share a third directory. Each user's password protector can unlock their personal home directory and the shared directory. Each user only needs to know their password for access. Either can change their password at will, without affecting the other user's access.
So far, several protectors have been implemented. The password protector uses the password and cryptographic salt as inputs to a key-derivation function, which generates an encryption key that can decrypt the protected (i.e. master) key. The FIDO2 protector gets the encryption key from the token, which uses a credential and salt internal to the token, possibly mixed with a PIN provided by the user, to generate it. For the TPM protector, the key is obtained from the TPM based on a PIN provided by the user. There are other authentication possibilities using the TPM and its platform-configuration registers (PCRs), but those have not been implemented for dirlock, at least yet.
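A minimal sketch of the password-protector idea, using PBKDF2 and AES-GCM purely as stand-in primitives (not necessarily what dirlock actually uses), might look like this:

```python
#!/usr/bin/env python3
# Conceptual sketch of a password "protector": the master key is never
# derived from the password, it is merely wrapped by a key that is.  The
# primitives and parameters here are illustrative only, not dirlock's.
# Requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_protector_key(password: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return kdf.derive(password)

def wrap_master_key(master_key: bytes, password: bytes):
    salt, nonce = os.urandom(16), os.urandom(12)
    protector = AESGCM(derive_protector_key(password, salt))
    return salt, nonce, protector.encrypt(nonce, master_key, None)

def unwrap_master_key(salt, nonce, wrapped, password: bytes) -> bytes:
    protector = AESGCM(derive_protector_key(password, salt))
    return protector.decrypt(nonce, wrapped, None)   # fails on wrong password

master = os.urandom(64)      # stand-in master key for the example
salt, nonce, wrapped = wrap_master_key(master, b"weak PIN 1234")
assert unwrap_master_key(salt, nonce, wrapped, b"weak PIN 1234") == master
```

The point of the indirection is visible at the end: adding a FIDO2 or TPM protector, or changing a password, only means wrapping the same master key again; the data encrypted under that key is untouched.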
There is a pam_dirlock.so module for PAM integration. Not every user needs to be converted, since the PAM module checks to see whether the home directory is encrypted. If it is, then authentication is handled by dirlock; otherwise, it returns PAM_USER_UNKNOWN so that the next PAM module can handle the authentication. He showed a sample PAM configuration that would implement that sort of behavior.
He did a demo of dirlock on a virtual machine (VM) running Debian. He set up two protectors, one for the software TPM in the VM and another for a real YubiKey FIDO2 token that was passed through to the VM. When the user logged in, they were prompted to press the YubiKey button, which would unlock the directory. Removing the YubiKey device from the VM caused it to fall back to the TPM-based key, which required a PIN to be entered. He showed logging in—and failing to log in—using those mechanisms and also noted that the TPM only allowed a certain number of attempts before disallowing further entry of PINs, which is part of its anti-hammering protection.
Something that struck me about the presentation was the total lack of fanfare surrounding the programming-language choice. It was not all that long ago when choosing Rust might have been given a rather higher profile in a talk of this nature, but it seems we are past that point now. Rust is just another attribute of a project—as it should be.
Those interested can view a YouTube video of the talk.
[I would like to thank the Linux Foundation, LWN's travel sponsor, for supporting my trip to Amsterdam for Open Source Summit Europe.]
Page editor: Jake Edge