
Leading items

Welcome to the LWN.net Weekly Edition for October 16, 2025

This edition contains the following feature content:

  • The FSF considers large language models: Krzysztof Siewicz on free-software licensing and LLM-generated code at the GNU Tools Cauldron.
  • Debian Technical Committee overrides systemd change: restoring a world-writable /run/lock for the programs that depend on it.
  • Gccrs after libcore: a status update on the GCC-based Rust compiler and its next steps.
  • Enhancing FineIBT: extending the kernel's forward-edge control-flow-integrity protection to cover speculative execution.
  • The end of the 6.18 merge window: the rest of the changes merged for the next kernel release.
  • A new API for interrupt-aware spinlocks: giving Rust drivers a safe way to take locks with interrupts disabled.
  • Last-minute /boot boost for Fedora 43: growing the default /boot partition to make room for larger initramfs images.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

The FSF considers large language models

By Jonathan Corbet
October 14, 2025

Cauldron
The Free Software Foundation's Licensing and Compliance Lab concerns itself with many aspects of software licensing, Krzysztof Siewicz said at the beginning of his 2025 GNU Tools Cauldron session. These include supporting projects that are facing licensing challenges, collecting copyright assignments, and addressing GPL violations. In this session, though, there was really only one topic that the audience wanted to know about: the interaction between free-software licensing and large language models (LLMs).

[Krzysztof Siewicz]

Anybody hoping to exit the session with clear answers about the status of LLM-created code was bound to be disappointed; the FSF, too, is trying to figure out what this landscape looks like. The organization is currently running a survey of free-software projects with the intent of gathering information about what position those projects are taking with regard to LLM-authored code. From that information (and more), the FSF eventually hopes to come up with guidance of its own.

Nick Clifton asked whether the FSF is working on a new version of the GNU General Public License — a GPLv4 — that takes LLM-generated code into account. No license changes are under consideration now, Siewicz answered; instead, the FSF is considering adjustments to the Free Software Definition first.

Siewicz continued that LLM-generated code is problematic from a free-software point of view because, among other reasons, the models themselves are usually non-free, as is the software used to train them. Clifton asked why the training code mattered; Siewicz said that at this point he was just highlighting the concern that some feel. There are people who want to avoid proprietary software even when it is being run by others.

Siewicz went on to say that one of the key questions is whether code that is created by an LLM is copyrightable and, if not, if there is some way to make it copyrightable. It was never said explicitly, but the driving issue seems to be whether this software can be credibly put under a copyleft license. Equally important is whether such code infringes on the rights of others. With regard to copyrightability, the question is still open; there are some cases working their way through the courts now. Regardless, though, he said that it seems possible to ensure that LLM output can be copyrighted by applying some human effort to enhance the resulting code. The use of a "creative prompt" might also make the code copyrightable.

Many years ago, he said, photographs were not generally seen as being copyrightable. That changed over time as people figured out what could be done with that technology and the creativity it enabled. Photography may be a good analogy for LLMs, he suggested.

There is also, of course, the question of copyright infringements in code produced by LLMs, usually in the form of training data leaking into the model's output. Prompting an LLM for output "in the style of" some producer may be more likely to cause that to happen. Clifton suggested that LLM-generated code should be submitted with the prompt used to create it so that the potential for copyright infringement can be evaluated by others.

Siewicz said that he does not know of any model that says explicitly whether it incorporates licensed data. As some have suggested, it could be possible to train a model exclusively on permissively licensed material so that its output would have to be distributable, but even permissive licenses require the preservation of copyright notices, which LLMs do not do. A related concern is that some LLMs come with terms of service that assert copyright over the model's output; incorporating such code into a free-software project could expose that project to copyright claims.

Siewicz concluded his talk with a few suggested precautions for any project that accepts LLM-generated code, assuming that the project accepts it at all. These suggestions mostly took the form of collecting metadata about the code. Submissions should disclose which LLM was used to create them, including version information and any available information on the data that the model was trained on. The prompt used to create the code should also be provided. The LLM-generated code should be clearly marked. If there are any use restrictions on the model output, those need to be documented as well. All of this information should be recorded and saved when the code is accepted.

A member of the audience pointed out that the line between LLMs and assistive (accessibility) technology can be blurry, and that any outright ban of the former can end up blocking developers needing assistive technology, which nobody wants to do.

There were some questions about how to distinguish LLM-generated code from human-authored code, given that some contributors may not be up-front about their model use. Clifton said that there must always be humans in the loop; they, in the end, are responsible for the code they submit. Jeff Law added that the developer's certificate of origin, under which code is submitted to many projects, includes a statement that the contributor has the right to submit the code in question. Determining whether that right is something the contributor truly holds is not a new concern; developers could be, for example, submitting code that is owned by their employer.

A real concern, Siewicz said, is whether contributors are sufficiently educated to know where the risks actually are.

Mark Wielaard said that developers are normally able to cite any inspirations for the code they write; an LLM is clearly inspired by other code, but is unable to make any such citations. So there is no way to really know where LLM-generated code came from. A developer would have to publish their entire session with the LLM to even begin to fill that in.

The session came to an end with, perhaps, participants feeling that they had a better understanding of where some of the concerns are, but nobody walked out convinced that they knew the answers.

A video of this session is available on YouTube.

[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting my travel to this event.]

Comments (26 posted)

Debian Technical Committee overrides systemd change

By Joe Brockmeier
October 13, 2025

Debian packagers have a great deal of latitude when it comes to the configuration of the software they package; they may opt, for example, to disable default features in software that they feel are a security hazard. However, packagers are expected to ensure that their packages comply with Debian Policy, regardless of the upstream's preferences. If a packager fails to comply with the policy, the Debian Technical Committee (TC) can step in to override them, which it has done in the case of a recent systemd change that broke several programs that depend on a world-writable /run/lock directory.

The Filesystem Hierarchy Standard (FHS) specifies that the /var/lock directory should be used to store lock files for devices and other resources shared by multiple applications. On Debian, /var/lock is a symbolic link to /run/lock. The /run directory is a tmpfs filesystem set up early in boot specifically for run-time files; the /run/lock directory within it is created by systemd-tmpfiles during system startup.

Debian Policy still cites the FHS, even though the FHS has gone unmaintained for more than a decade. The specification was not so much finished as abandoned after FHS 3.0 was released—though there is a slow-moving effort to revive and revise the standard as FHS 4.0, it has not yet produced any results. Meanwhile, in the absence of a current standard, systemd has spun off its file-hierarchy documentation to the Linux Userspace API (UAPI) Group as a specification. LWN covered that development in August, related to Fedora's search for an FHS successor.

Locking up /run/lock

The /run/lock directory was deprecated in systemd v258; rather than dropping the directory entirely, though, the project changed the default to make /run/lock writable only by root, instead of world-writable as it had been in the past. The plan is to get rid of /run/lock entirely in the v259 release, though users (or distributions) can still retain the legacy behavior by adding a configuration file in /etc/tmpfiles.d to override systemd's defaults and create the directory with the desired permissions.
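
For illustration, such a drop-in might look like the following (the file name is arbitrary, and the 1777 mode shown matches the traditional world-writable, sticky-bit behavior; the permissions a distribution actually chooses could differ):

    # /etc/tmpfiles.d/run-lock.conf (hypothetical)
    # Type  Path       Mode  User  Group  Age
    d       /run/lock  1777  root  root   -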

The Debian project just released a new stable version, Debian 13 ("trixie"), in August, and work has begun on Debian 14 ("forky"). The current stable version of Debian shipped with systemd v257, so users on stable will not be affected by these changes. But v258 has entered Debian unstable where the change to /run/lock broke other software, such as the Unix-to-Unix Copy program (UUCP) and the cu utility. Use of the directory is not limited to vintage utilities; Zbigniew Jędrzejewski-Szmek objected to removing /var/lock in v259 as it would break alsa-utils and create additional work for distributions:

Doing this would this way just creat a foottrap for distributions: if they notice the change, they'll just create a local override, so we get a more complicated system in total with zero benefit to anyone. If they miss it, things will be broken for a while until users report it. And then they'll add the override.

On August 13, Marco d'Itri—who is listed as a maintainer of the systemd package—filed a bug against the uucp package reporting that systemd v258-rc1-1 had broken uucico, along with filing a bug against the systemd package, which cited the FHS entry for /var/lock. He said that a compromise might be to make the directory writable by the dialout group rather than world-writable. He also mentioned that there was a previous effort in 2014 to modernize software that uses UUCP-style locks to use flock() instead, but it stalled out.
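
The flock()-based approach that the 2014 effort aimed for is straightforward; a minimal user-space sketch (not taken from any of the affected packages) might look like this:

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Take an advisory lock on the device node itself instead of creating a
     * UUCP-style lock file under /var/lock. */
    int open_serial_locked(const char *path)
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0)
            return -1;
        /* LOCK_NB makes the call fail, rather than block, if the device is busy. */
        if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
            close(fd);
            return -1;
        }
        return fd;  /* the lock is released automatically when fd is closed */
    }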

"Dead and severely outdated"

Rather than temporarily reverting the behavior, systemd maintainer Luca Boccassi argued that a world-writable directory in /run is a security risk. Any process could write as much as it wanted to /run, which could effectively DoS the system by exhausting space or inodes; filling up /run would then cause critical services, such as udev, to stop working. The FHS, he said, is "dead and severely outdated".

The issue had already been discussed by the systemd project; Lennart Poettering had responded that he did not see the point of /var/lock "in the modern world", but distributions were free to do as they see fit:

Consider this more a passing of the baton from upstream systemd to downstreams: if your distro wants this kind of legacy interface, then just add this via a distro-specific tmpfiles drop-in. But there's no point really in forcing anyone who has a more forward-looking view of the world to still carry that dir.

Poettering argued that distributions could make their own choices, though it made him shudder to think of allowing unprivileged programs to fill up a directory. Boccassi echoed that sentiment in his response to the Debian systemd bug. Any package could ship a configuration for tmpfiles.d to create the directory and assume responsibility for it as well:

I certainly won't try to stop anyone wishing to do it, but also I do not wish for these old workarounds to ship in this package either.

There's ~2 years time until Forky ships, and that should be plenty of time to either add this workaround elsewhere, or fix remaining programs to use BSD locks, or both, so I'm not going to hold back the new version from testing for this niche case, sorry.

Boccassi closed the bug with the "WONTFIX" tag.

Debian Policy

On September 1, d'Itri responded that upstream systemd's opinion was not relevant in this case. Debian policy requires the directory for lock files of serial devices, though he had also opened a bug to revisit that, since the practice of using /var/lock for serial-device locks dates back to the 1980s. However, /var/lock is provided by systemd, so he reasoned that it is a systemd bug unless another package was identified to take ownership of creating the directory. "But you cannot just decide that the policy violation does not exist."

Thorsten Alteholz opened a bug with Debian's Technical Committee on September 15. He asked for advice on how to proceed since the systemd bug had been marked WONTFIX.

So what do you recommend how to go on from here? Change Debian policy (as asked in #1111839), revert the change in systemd, find a Debian wide solution or let every package maintainer implement their own solution?

Bdale Garbee weighed in on the bug as well. He said that he uses cu "almost constantly for interacting with embedded serial consoles on devices a USB connection away from my laptop". He was frustrated with the change imposed by systemd "with no warning", and looked forward to the TC's response.

On September 24, Matthew Vernon responded to the bug, with a CC to the systemd maintainer alias. He said that it seemed that Debian Policy required FHS compliance, "at least until we come up with a transition plan", and asked if systemd would please revert the change. There was no response from any of the systemd maintainers.

Override

Vernon updated the bug on September 29, and said that the TC had discussed the situation at its last meeting. The conclusion was that systemd should comply with policy, and thus FHS. He included a draft ballot text that the TC would vote on, which noted that the committee was "sympathetic" to the argument that flock() would be a better solution, but that "an important part of the role of a Debian Developer is ensuring that software in Debian complies with Debian Policy". A final ballot for the committee to consider was posted on October 2.

The ballot contained three options; all three required that the systemd package provide /var/lock "with relaxed enough permissions that existing Debian software that uses /var/lock for system-wide locks of serial devices (and similar purposes) works again". The committee would exercise its power under the Debian Constitution to override the systemd maintainers. The options were:

1) This change to systemd must persist until a satisfactory migration of impacted software has occurred and Policy updated accordingly.

2) This change to systemd must persist until Policy has been updated to allow otherwise.

3) This change to systemd must persist until the TC allows otherwise, which the TC expects to do once a suitable transition plan has been agreed.

Vernon replied on October 6 to say that a decision had been reached and that option one had won.

Unlocking

Debian's continued use of UUCP-style locking does seem to be more than a little bit dated. The FHS 3.0 is clearly reaching the end of its useful life, if not actually expired.

As a comparison, Fedora's uucp package has a patch to use lockdev instead. The upcoming Fedora 43 release includes systemd v258, and /run/lock is not world-writable. It seems like Debian could borrow Fedora's approach for the uucp package, though that would not solve the problem for any other Debian packages affected by the /run/lock change. There is also third-party software to consider, of course.

To date, Boccassi has not responded to the conversation; I emailed d'Itri to ask why he did not make the change himself, since he was aware of the bug. He replied on October 13 and said that "a maintainer cannot force a decision on another maintainer." Since he and Boccassi disagreed about the change it was left to the TC to decide. He said that he planned to prepare a merge request to implement the TC's decision, "as I have already agreed with Luca", within the week.

Comments (31 posted)

Gccrs after libcore

By Jonathan Corbet
October 9, 2025

Cauldron
Despite its increasing popularity, the Rust programming language is still supported by a single compiler, the LLVM-based rustc. At the 2025 GNU Tools Cauldron, Pierre-Emmanuel Patry said that a lot of people are waiting for a GCC-based Rust compiler before jumping into the language. Patry, who is working on just that compiler (known as "gccrs"), provided an update on the status of that project and what is coming next.

There are a few reasons to want a GCC-based Rust compiler, he began. Many developers are working with GCC now, prefer it, and do not want to have to change to a different toolchain. Having multiple compilers helps to build confidence in the future of a language in general. There is also the list of architectures that GCC supports, but which LLVM does not.

There is an intermediate option in the form of rustc_codegen_gcc, a project that grafts the GCC code-generation layer onto rustc. But, Patry said, few people seem willing to use rustc_codegen_gcc; he was unsure why. And, in any case, that project does not fully address the problem of architecture support in the Rust standard library, which must be ported separately to each new architecture. So developers are still waiting for gccrs, which is progressing.

Recent work

A longstanding problem with gccrs has been name resolution; that is a complicated task in Rust, and gccrs was not able to handle all of the possible cases. Now, though, name resolution has been rewritten from scratch and imaginatively named "name resolution 2.0". This work is complete and enabled by default in gccrs, though there are still some details to work out.

[Pierre-Emmanuel Patry] The project's development methodology has changed somewhat in the last year, he said. The project's approach is, essentially, "find a random problem, fix it, repeat". Recently, that approach has been focused on being able to build libcore, the Rust core library that is needed by all programs. That focus has been good, helping the project to identify the highest-priority issues, but it has also been slowing down progress recently. Fixing the building of libcore is essentially a serial process at this point; one problem must be dealt with before the next can be identified. So now, to increase parallelism, the gccrs developers are starting to spread their focus again.

The plan for gccrs has long been to implement support for version 1.49 of the Rust language, which was released in 2020. Among other reasons, that version was chosen because it was the last before the addition of const generics to the language. But it turns out that libcore needs a lot of newer, not-yet-stabilized Rust features; the 1.49 version of libcore uses, among other things, const generics. The Rust developers often put pre-stabilization features into the compiler and use them internally, and that has happened here. For this reason, gccrs has ended up including a number of newer language features, even though 1.49 remains the target.

Work has also progressed on impl trait (also covered here), which enables the specification of generic types that implement a given trait. "Argument position impl trait" allows the type of a function parameter to be inferred at the call site; the actual type of a parameter is not visible to the called function, only the expected trait. More recently, work has been done on "return position impl trait", where functions can return an abstract type implementing a given trait.

There were a couple of Google Summer of Code students working on the project this year. The first worked on rest patterns — patterns in match statements that match the remainder of a structure. Progress was made, but some patterns remain unimplemented. The second student worked on moving the lint mechanism away from GCC's GIMPLE internal representation, which was unable to support generics, to the gccrs high-level representation, which is able to carry much more information about the source program.

One year ago, he said, the project was concerned about performance; it took the compiler 30 seconds to compile about 1,000 lines of macro-heavy code. Performance has been a relatively low priority for this project, which is still focused on just getting something working, but there was still some time to look for easy problems to fix. It turns out that the type-checking code was doing a bit more copying of data than was strictly necessary; working through five lines of code could result in over 65,000 extra copies. This problem has been addressed, and the compiler is much faster now.

Next steps

One of the next things to be worked on, Patry said, is the Drop trait, which functions as an object destructor. This feature is coming along, but is missing the necessary glue to recursively drop nested structures. There is also a need to implement the code that ensures that all references to an object are truly gone before invoking the trait. Fn traits, used by closures, are partially implemented in gccrs now. Handling of dyn traits, which support dynamic dispatching, is almost complete as well.

People often ask "when will gccrs be ready?", he said. It will be ready in one sense — building libcore, though perhaps not compiling it entirely correctly — by early 2026. Once that is done, the focus will shift to being able to build Rust for Linux; that will involve adding a number of newer features, the availability of which will be hidden behind a command-line option. Patry stressed that the project will not, initially, try to implement every feature added between 1.49 and the 1.78 version required by the kernel; only features actually used by the kernel will go in.

After Rust for Linux, the plan is to build the entire 1.49 standard library. There still is no plan to increase the target version; the focus is still on getting things working. Most of the features needed by the standard library are also needed by libcore, so this phase should go relatively quickly.

Last year, the project added a dependency on rustc to compile the Rust code used by the compiler itself. Many people dislike this dependency and would like to see it go away. Once the standard-library support is complete, though, the compiler should be able to bootstrap itself, allowing the eventual removal of the rustc dependency.

The step after that is to move toward version 1.80 (or later) of the language. There aren't that many language changes after 1.49 that should pose problems, he said. But support for features like trait upcasting and use<...> will take some work.

For some time, the gccrs project has had the goal of pushing its work upstream into the GCC trunk every two weeks. Instead, code has been pushed upstream only three times (so far) in 2025. The process has improved and things will get better, he said, but he can't guarantee a two-week cycle anytime soon. Equally problematic has been keeping up with upstream GCC, which was a painful and manual process. That work has mostly been automated and is happening regularly now.

Patch review is a problem, though. Upstream GCC developers are generally unable to review Rust-specific patches, especially when they are sent in batches of 200 or so a few times a year. These batches need to be split into smaller, more frequent posts, but this work, he said, is not something the project wants to force its contributors to do. Gccrs, it seems, has a number of "very young" contributors who are not really able to work well with an email-based workflow; without care, those contributors will be lost.

The GCC project in general has been experimenting with Forgejo, using a bot to email patches upstream in response to pull requests. Gccrs has been playing with a similar scheme out of its GitHub repository; contributors can work entirely within GitHub and not have to interact with the mailing lists to submit patches. I asked what happens when people on the mailing lists have comments about the posted patches; at that point, it seems, the contributor will still have to deal with email.

The gccrs developers think that the Forgejo experiment is a good one that will widen the access to the project for contributors. It brings together a lot of resources onto a single page and matches what developers are using more widely; it is also free software, unlike GitHub, which is "proprietary and awful". But, he said, that experiment is still exactly that: an experiment that could be removed at any time. Moving the gccrs project from GitHub would likely lose some contributors, and would leave behind a lot of metadata that lives within the GitHub system. It would also complicate the task of maintaining cross-references with the Rust repositories. Beyond that, he said, GitHub functions as a sort of social network; leaving it would reduce the gccrs project's visibility.

Thus, he said, gccrs is not considering making a switch in the short or medium term, even though gccrs developers are enthusiastic about the experiment.

Patry concluded there. I asked how many developers are working on the project now; he answered that there are two engineers working on it full time, and roughly ten in a sort of core team who contribute regularly.

All told, gccrs seems to be making better progress than might have been expected a year or two ago. It was a bit discouraging, though, to see this talk placed at the end of the conference and sparsely attended; one gets the sense that interest from the wider GCC community in Rust is not all that high. Given the role that proper Rust support could play in keeping GCC relevant in the coming years, one might expect to see that community show a bit more interest.

[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting my travel to this event.]

Comments (20 posted)

Enhancing FineIBT

By Jake Edge
October 10, 2025

LSS EU

At the Linux Security Summit Europe (LSS EU), Scott Constable and Sebastian Österlund gave a talk on an enhancement to a control-flow integrity (CFI) protection that was added to the kernel several years ago. The "FineIBT: Fine-grain Control-flow Enforcement with Indirect Branch Tracking" mechanism was merged for Linux 6.2 in early 2023 to harden the kernel against CFI attacks of various sorts, but needed some fixes and enhancements more recently. The talk looked at the CFI vulnerability problem, FineIBT, and an enhanced version that, it is hoped, will unify the disparate hardware and software mitigations addressing both regular and speculative CFI vulnerabilities.

Constable began with introductions. He is a defensive-security researcher for Intel Labs, while Österlund is an offensive-security researcher on the Intel STORM team. Constable offered thanks to various people for their help on the feature, including Peter Zijlstra, who developed the patches and was present for the talk. Beyond that, Constable thanked the "tremendous" Linux kernel community, which "really helped to refine this enhancement throughout a very lively discussion", resulting in a patch set that was "a lot better".

In addition to the lengthy corporate disclaimer boilerplate shown in the slides, he noted that the speakers had a disclaimer of their own. "We will shamelessly and unapologetically be using the Intel assembly syntax throughout this presentation", he said, to a few groans from the audience; Linux uses AT&T syntax, which is the other primary x86 syntax.

Problem

The problem being addressed is known as call-oriented programming: "an adversary doing something to redirect an indirect function call from the correct, or intended, target" to a malicious target, Constable said. It is an old problem that has been revisited many times; in the talk, they would be describing a new approach to solving it. The problem comes in two flavors; at the architectural level, bugs like buffer overflows and use-after-free flaws allow pointer corruption for malicious redirection. At the microarchitectural level, the Spectre family of vulnerabilities shows ways to exploit things like indirect-branch misprediction for malicious purposes.

[Scott Constable]

The solution on the architectural side has been present since Linux 6.2, at least for those using the Clang compiler on x86_64: FineIBT. An extension to the instruction-set architecture for x86_64, called control-flow enforcement technology (CET), includes indirect branch tracking (IBT) support that is coupled with kernel changes to "enforce fine-grained forward-edge CFI". But the picture on the microarchitectural side is far less clear-cut; "there's no unified solution". Instead, there are a variety of "knobs and bits" scattered in different model-specific registers (MSRs) "by different hardware vendors who define them a little bit differently from one another". It is all platform- and vendor-specific, which makes it "difficult to navigate if you are not deeply familiar with the literature of side-channel vulnerabilities or speculative-execution vulnerabilities".

A unified solution, for both sides of the forward-edge CFI problem, was the focus of the talk, Constable said; the hope is that at some point in the future, there will be no need to rely on all of the different platform-specific mitigations. He handed off to Österlund to give more background on the vulnerabilities that are being addressed. Österlund said he would begin with "some light morning material" about Spectre.

The Spectre family of vulnerabilities was first disclosed in early 2018; once that happened, variants kept popping up. One of those was branch-target injection (BTI), where an attacker can "inject a target for an indirect call, which will be executed speculatively". An attacker trains the branch predictor to target a "gadget" that does what is needed. Mitigations were created for that vulnerability, but researchers came up with others, such as the branch-history injection (BHI) vulnerability.

There are a variety of mitigations that have been used to avoid these problems. They are a mix of software (e.g. retpolines), microcode updates (e.g. the indirect-branch-predictor barrier), and hardware mitigations (e.g. IBT), which need to cooperate. There are MSR bits to disable some behavior, as well. All of these mitigations have different performance tradeoffs; "once we enable all of these, you might end up with quite a significant performance overhead".

[Sebastian Österlund]

He went through an example of how an attack might work. First, the attacker runs code in user space that trains the branch predictor to favor a pre-chosen gadget location in the kernel for the indirect function pointer; the gadget will leave some kind of microarchitectural trace, typically in the CPU caches, that can be used to exfiltrate some data. The attacker causes the kernel to call the indirect function via some system call; the CPU then speculatively executes the gadget. The attacker then retrieves the information via timing access to the data or some other mechanism. In a more detailed example, he showed how the value in a particular element of an array holding a secret could be determined by arranging that the gadget evicts an entry from the cache based on the value; when measuring the access time, a slow access indicates the evicted value and its offset determines what the secret value was. Rinse and repeat.

There have been "a bunch of mitigations" for Spectre BTI, including enhanced indirect branch restricted speculation (eIBRS), which "kind of makes sure that you can't train these branch-target buffers from user space". But researchers came up with a BHI variant that circumvented eIBRS. Attackers could still influence the branch-history buffer from user space, which impacted the indirect function chosen from the branch-target buffer.

The IBT mechanism requires that all indirect branches target sites that contain an ENDBR64 instruction. If the target does not have that instruction, a fault is generated; "the nice thing is that it limits both architectural and speculative targets" to require the ENDBR64. That eliminates "a large part of the attack surface", but is not a full CFI solution because legitimate targets can be used as gadgets. It has been something of a cat-and-mouse game, Österlund said, with mitigations being circumvented by new vulnerabilities, which are mitigated, and so on. On a slide, he listed a few different papers that he recommended for more information before handing back to Constable.

FineIBT

The last time he scanned the kernel source, Constable found "tens of thousands of unique indirect call targets", he said. With that many targets, IBT is not sufficient to mitigate the problem; there are simply too many gadgets in the kernel that start with the required ENDBR64 instruction.

So, FineIBT was developed on top of IBT to cut down on the number of gadgets available. At each site where an indirect function is called, the compiler will calculate a 32-bit hash of the function pointer's type. It puts that hash into a register, which is checked at the target function's entry to ensure that a pointer of the right type was used for the call. Both the regular IBT ENDBR64 check and the hash match are combined for much stronger forward-edge CFI protection.

While Linux has tens of thousands of indirect call targets, it only has around 11,000 unique function types (including function arity, argument types, and return type) for those targets. A question that often gets asked, he said, is about hash collisions. Since it is a 32-bit hash, a collision between two types is unlikely.

From the slides, the FineIBT version of the start of an indirect function looks like the following:

    __cfi_\func:
    ENDBR64                     # endbr instruction
    SUB       R10D, 0x12345678  # check hash
    JZ        \func
    UD2                         # error (undefined opcode)
    NOP3
    \func:                      # real function

The hash check has a conditional jump (JZ) instruction, however, so mitigating the microarchitectural vulnerability is less effective. The jump instruction can be predicted, thus it can be mispredicted; if both the indirect call and the conditional jump are mispredicted, speculative execution could proceed into a function with the wrong type. "That could allow a gadget to be executed and that could allow a potential data exposure", he said.

The solution, called FineIBT+BHI because it was initially developed as "a novel mitigation to address branch-history injection", uses an instruction that is not subject to prediction. That instruction poisons the registers that are live at the target function, which contain the function arguments of interest to the attacker. So the start of an indirect function looks like the following:

    __cfi_\func:
    ENDBR64                     # endbr instruction
    SUB       R10D, 0x12345678  # check hash
    JZ        .poison
    UD2                         # error (undefined opcode)
    .poison:
    CMOVNZ    RDI, R10          # poison first argument
    \func:                      # real function

The idea is that if the CPU speculates to the .poison label, the first argument to the function (in register RDI) will be overwritten with the (non-zero) result of the hash-check subtraction.

Using the result of the subtraction has some useful properties, he said. For one thing, the hash value (and thus the subtraction) only involves the lower half of the R10 register, which means the CPU will clear the upper half. When the hash value is not correct, the result in the register is a value with all zeroes in the upper 32 bits—making it a user-space address in Linux. Because of supervisor-mode access protection (SMAP), accessing a user-space address from the kernel will result in a trap. In addition, the hash values at the call site and in the function are immediate values in the kernel binary, so the result of the subtraction is a constant; the attacker-controlled arguments will thus be poisoned with a constant that cannot be used as a pointer.

Challenges

There were some challenges to making this work, of course. In order to poison all of the live arguments to the function, the code needs to know how many arguments the indirectly called function actually takes. The feature already uses the kCFI infrastructure, which emits the hash values that FineIBT uses into a 16-byte stub placed before each indirect-call target. FineIBT extracts the hash value and replaces the stub with its own code (as seen above).

The kCFI stub stores the hash value as part of a MOV instruction that is not executed. Constable authored a small change to LLVM for kCFI that would encode the arity of the function into the destination register of the MOV instruction. That way, the kernel knows which registers need to be poisoned (because they are presumed to be under attacker control) when speculative execution takes place.

As it turns out, even just poisoning a single register (as was shown in the code above) produces code that is larger than the 16 bytes available to be run-time patched into the kCFI stub; as shown, it requires 17 bytes for that code. Using some trickery avoids that problem:

    __cfi_\func:
    ENDBR64                     # endbr instruction
    SUB       R10D, 0x12345678  # check hash
    JNZ       -7
    .poison:
    CMOVNZ    RDI, R10          # poison first argument
    \func:                      # real function

Getting rid of the undefined opcode to cause a fault saves two bytes, but requires a different jump. A jump if non-zero (JNZ) instruction is used instead; the -7 argument causes the jump to go to a byte inside the SUB opcode (0xea). That value corresponds to a far-jump opcode that is not present in 64-bit processors, thus it will fault.

That code only requires 15 bytes, but it also only handles poisoning a single register. For indirect calls that have more than one argument, the CALL opcode can be used to call an arity-specific subroutine with the right CMOVNZ operations; that results in a 16-byte stub. In general, the penalty for FineIBT+BHI is far less than that of a branch misprediction; Constable calculated that the penalty is roughly three cycles versus 15-20 for a misprediction.

Österlund then reported on some benchmarking that has been done. Intel engineers chose three different microarchitectures to measure, representing both server and desktop processors. They ran "a bunch of benchmarks", including the Phoronix test suite OS benchmark and UnixBench; in his opinion, the Phoronix benchmark is the most representative because it uses "a bunch of real-world workloads". The difference in overhead between FineIBT and FineIBT+BHI "is negligible". Overall, most benchmarks show less than 1% performance loss, and some microarchitectures show an improvement over FineIBT alone. Constable noted that of the three shown, two "showed negative overhead" compared to FineIBT.

[Peter Zijlstra]

FineIBT+BHI is a work in progress, Constable said, but the goal is to make it "the forward-edge CFI solution for the Linux kernel on Intel processors". Something that they are working on currently is that FineIBT, thus FineIBT+BHI, depend on kCFI, which in turn depends on Clang. The patches for kCFI have not landed in GCC, though Zijlstra pointed out that a patch set had been posted the week before the talk. There are also some Intel processors that require hardware-based mitigations to address backward-edge (return-based) CFI; those mitigations cannot be disabled, yet, but the intent is to get to a point where they can be, Constable said.

Another question that is often asked is about "type confusion": whether two indirect-call targets with the same type, and thus the same hash, can have different behavior, Constable said. Can something malicious happen when redirecting one to the other? An example that is unfortunately not uncommon in Linux is a function with one or two arguments, where one of them is a void pointer; the pointer may be cast to wildly different types in the target functions. Theoretically, "you could concoct a malicious gadget", but research has been done to determine if there are "such gadgets that exist in the Linux kernel"; two researchers from Tel Aviv University looked at the question (among others) and did not find any exploitable gadgets of that sort. Intel also did some internal research using a different methodology that also did not find any evidence that type confusion can result in exploitable vulnerabilities.
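
As a hypothetical illustration (the functions and structures below are invented for this example, not taken from the kernel), two targets sharing the prototype int (*)(void *) get the same type hash even though they reinterpret the pointer completely differently:

    struct net_buf   { char *data; int len; };
    struct timer_cfg { long interval; };

    /* Both functions have type int (*)(void *), so kCFI/FineIBT assigns them
     * the same 32-bit hash; redirecting a call from one to the other passes
     * the hash check even though the argument is then treated as a different
     * structure. */
    int process_buf(void *p)
    {
        struct net_buf *buf = p;
        return buf->len;
    }

    int arm_timer(void *p)
    {
        struct timer_cfg *cfg = p;
        return cfg->interval > 0;
    }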

An audience member asked about using salt or some other mechanism to ensure that hash collisions due to type confusion did not occur. Zijlstra noted that there are some ideas floating around to ameliorate that potential problem. For example, filesystem function pointers and scheduler function pointers might have the same type, but are unrelated. They could be given different hash values as part of the kernel build. There is also a request for being able to explicitly annotate functions in a way that would change their hash to avoid type confusion.

Another attendee asked about using the far-jump opcode versus UD2; do they have the same effect? Zijlstra said that they do, but that the Intel architects were not particularly happy that the opcode is being used in that manner; they want to reserve it for future expansion. A single-byte UDB instruction (opcode 0xd6) has been approved for upcoming processors, which meant that Zijlstra needed to rejigger the whole FineIBT+BHI instruction sequence, but he was able to do so.

The last question was about adding some kind of instruction in the future that would immediately cause speculation to cease when the FineIBT checks fail. Constable said that the current solution is fairly flexible and can be adapted if some of the assumptions that are being made need to change. "Once you take all of the assumptions we have today and bake them into an instruction and then an assumption changes, it's tough." The more flexible approach is not particularly expensive, so they are generally happy to continue with it.

A YouTube video of the talk is available for those interested.

[I would like to thank the Linux Foundation, LWN's travel sponsor, for supporting my trip to Amsterdam for Linux Security Summit Europe.]

Comments (3 posted)

The end of the 6.18 merge window

By Daroc Alden
October 14, 2025

The 6.18 merge window has come to an end, bringing with it a total of 11,974 non-merge commits, 3,499 of which came in after LWN's first-half summary. The total is a little higher than the 6.17 merge window, which saw 11,404 non-merge commits. There are once again a good number of changes and new features included in this release.

The most important changes included in the second part of the 6.18 merge window were:

Architecture-specific

Core kernel

  • The Rust Binder driver has finally been merged, after many necessary supporting APIs were added. The Binder maintainers say that the C implementation will remain in the kernel for several more releases while they verify that the Rust implementation precisely matches the existing user-space interface. Once that happens, it will be a true point of no return for Rust in the kernel, given that Binder is both essential to Android and receiving constant development.

Filesystems and block I/O

Hardware support

  • Clock: Allwinner A523/T527 MCU clock control units, Qualcomm ipq5424 APSS clock controllers, Qualcomm Glymur SoC TCSR, global, and display clock controllers, Qualcomm MSM8937 SoC clocks, STMicroelectronics STM32MP21 clocks, MediaTek MT8195 vencsys clocks, MediaTek MT8196 vencsys, vdecsys, ufssys, pextpsys, I2C, mcu, mdpsys, mfg, and disp0 clock controllers, and SpacemiT P1 real-time clocks.
  • Graphics: Qualcomm QCS8300 eDP PHYs.
  • Industrial I/O: Vishay VEML6046X00 high accuracy RGBIR color sensors, Intel Bay Trail/Cherry Trail Dollar Cove TI PMIC ADC devices, Infineon TLV493D 3D magnetometers, Analog Devices ADE9000 multiphase power monitors, Marvell 88PM886 PMIC analog-digital converters, and Rohm BD79112 analog-digital converters.
  • Input: Awinic AW86927 haptic chips, Hynitron CST816x series controllers, and Himax HX852x(ES) touchscreen controllers.
  • Miscellaneous: ECB and CBC modes of Texas Instruments DTHE V2 hardware AES engines, Xilinx Versal random-number generators, ThinkPad T14s Gen6 Snapdragon embedded controllers, Acer A1-840 tablets, Loongson-2K100 and 2K0500 NAND controllers, FudanMicro FM25S01A NAND controllers, Realtek RTL93xx switch SoC ECC controllers, STMicroelectronics M2LRxx I2C devices, ADLink PCI-7250, LPCI-7250, and LPCIe-7250 PCI/PCIe boards, Airoha AN8855 switch electronic fuses, SpacemiT K1 PDMA controllers, Sophgo SG2042 SoC PCIe controllers, STMicroelectronics STM32MP25 PCIe controllers, and MediaTek MT8196 SoC GPUEB IPI mailboxes.
  • Networking: Pensando Ethernet RDMA devices and MediaTek PCIe 5G HP DRMR-H01 networking cards.
  • Sound: DualSense Playstation controller audio jacks.
  • USB: Intel USBIO USB IO-expanders, Renesas RZ/G3E SoC USB host controllers, Maxim MAX14526 MUIC devices, Qualcomm PMIV0104 eUSB2 repeaters, and Sophgo CV18XX/SG200X USB PHYs.

Miscellaneous

Security-related

  • Following feedback suggesting that it is broken, the new HMAC-encrypted-transaction support on the trusted-platform module (TPM) bus has been disabled by default.

Virtualization and containers

  • x86 hosts can now enable SEV-SNP CipherText Hiding, which prevents unauthorized CPU accesses from reading the ciphertext of secure nested paging (SNP) guests' private memory.
  • KVM supports virtualizing control-flow enforcement technology (CET) on Intel (in the form of shadow stacks and indirect branch tracking) and AMD (in the form of shadow stacks) chips. While shadow stacks and indirect branch tracking can theoretically be enabled separately, in practice that has performance problems, so KVM will enable both if either one is requested.

Internal kernel changes

  • The build documentation now clarifies that Python is a required dependency for many kernel configurations.
  • The DMA-mapping API continues the migration to physical addresses as the primary interface, rather than page pointers and offsets. The patch set gives the justification, and another commit in this series shows a sample conversion.
  • Relatedly, Greg Kroah-Hartman has said that all of the pieces needed to implement a typical USB driver in Rust are now in place — although, with only a sample driver using the abstractions, the bindings themselves are disabled in the build until someone submits an actual, working USB driver and any remaining problems are fixed.
  • The work to support memory descriptors continues; the kernel now has a new memdesc_flags_t structure to separate out the flags from slab and folio structures.
  • The work on huge zero folios has been merged.
  • The prctl() interface for transparent huge pages has also made it into this release.
  • The perf utility now handles "N" (debugging) symbols produced by the Rust compiler. It also has a new "X" modifier that prevents events from being grouped automatically.
  • There were 190 exported symbols removed during this merge window, and 353 added; see this page for the full list.

  • There were also ten new kfuncs added: bpf_dynptr_from_skb_meta(), bpf_kfunc_multi_st_ops_test_1(), bpf_kfunc_ret_rcu_test(), bpf_kfunc_ret_rcu_test_nostruct(), bpf_strcasecmp(), bpf_task_work_schedule_resume(), bpf_task_work_schedule_signal(), bpf_xdp_pull_data(), scx_bpf_cpu_curr(), and scx_bpf_locked_rq().

Linus Torvalds seemed pleased that this merge window didn't introduce any problems on his test machines; other than that happy occurrence, the 6.18-rc1 release was fairly typical. 6.18 is expected to become the next long-term support release of the kernel. If the release-candidate process takes the same amount of time as previous releases, 6.18 proper will likely be released around the end of November.

Comments (2 posted)

A new API for interrupt-aware spinlocks

By Daroc Alden
October 15, 2025

Kangrejos

Boqun Feng spoke at Kangrejos 2025 about adding a frequently needed API for Rust drivers that need to handle interrupts: interrupt-aware spinlocks. Most drivers will need to communicate information from interrupt handlers to main driver code, and this exchange is frequently synchronized with the use of spinlocks. While his first attempts ran into problems, Feng's ultimate solution could help prevent bugs in C code as well, by tracking the number of nested scopes that have disabled interrupts. The patch set, which contains work from Feng and Lyude Paul, is still under review.

Code that acquires a spinlock needs to disable preemption: otherwise, if it were preempted, everything else contending for the lock would just pointlessly burn CPU time. The same thing (almost) applies to using spinlocks in interrupts, Feng said. If kernel code acquires a spinlock in process context, and then an interrupt arrives, the handler for which tries to acquire the same lock, the system will deadlock. The simple rule in the kernel is that if a lock is ever used in both interrupt handler and process context, the lock must be "irq-safe" (meaning that the acquirer disables interrupts when appropriate). The kernel's lockdep tool will check that this rule is followed.
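
A contrived kernel-style fragment (not from any real driver) shows the deadlock: if the process-context side takes the lock without disabling interrupts, the handler can interrupt it on the same CPU and then spin forever on a lock that will never be released.

    #include <linux/spinlock.h>
    #include <linux/interrupt.h>

    static DEFINE_SPINLOCK(dev_lock);

    /* Process context: takes the lock without disabling interrupts. */
    void update_state(void)
    {
        spin_lock(&dev_lock);
        /* ... an interrupt can arrive here, on this same CPU ... */
        spin_unlock(&dev_lock);
    }

    /* Interrupt handler: spins waiting for the lock held by the code it just
     * interrupted. Using spin_lock_irqsave() in update_state() avoids this. */
    static irqreturn_t dev_irq(int irq, void *data)
    {
        spin_lock(&dev_lock);
        /* ... */
        spin_unlock(&dev_lock);
        return IRQ_HANDLED;
    }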

[Boqun Feng]

Every sufficiently complex driver will need an interrupt handler, Feng continued. In order to write drivers in Rust, kernel programmers will need an abstraction to deal with interrupts and irq-safe locks. The existing irq-safe spinlocks in C use either spin_lock_irq()/spin_unlock_irq() or spin_lock_irqsave()/spin_unlock_irqrestore(), depending on whether the code is expected to run in an interrupt-enabled context or not. The former pair of functions leaves interrupts unconditionally enabled after the lock is released, while the latter pair stores the current interrupt state and restores it afterward. There are also some scope guards that can handle unlocking these spinlocks automatically at the end of a function.

The first attempt at a Rust API was actually analogous to the kernel's scope guards. In Rust, having a special guard type that, when dropped, releases a lock is a common pattern. The problem that Feng ran into was one that the C API struggles with as well: what happens if two locks are dropped in the wrong order? Consider a piece of code that starts out with interrupts enabled, acquires lock A using spin_lock_irqsave() (resulting in interrupts being temporarily disabled), and then acquires lock B, again using spin_lock_irqsave(). The second call will store "disabled" as the current interrupt state. Releasing lock A and then lock B ends with interrupts being permanently disabled, which "is not ideal".
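
In C, that sequence looks roughly like the following sketch (contrived; the flags variables hold the interrupt state saved at acquisition time):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(lock_a);
    static DEFINE_SPINLOCK(lock_b);

    void wrong_unlock_order(void)
    {
        unsigned long flags_a, flags_b;

        spin_lock_irqsave(&lock_a, flags_a);   /* saves "enabled", disables interrupts */
        spin_lock_irqsave(&lock_b, flags_b);   /* interrupts already off: saves "disabled" */

        spin_unlock_irqrestore(&lock_a, flags_a);  /* restores "enabled" too early... */
        spin_unlock_irqrestore(&lock_b, flags_b);  /* ...then "disabled": interrupts stay
                                                      off with no lock held */
    }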

Feng looked at a few different ways to solve the problem, including using a token type to track whether the code was called in an interruptible context at compile time, but that proved to be too complex. Ultimately, in a mailing list discussion of the problem in early September, Thomas Gleixner commented:

Thinking more about this. I think there is a more general problem here.

Much of the Rust effort today is trying to emulate the existing way how the C implementations work.

I think that's fundamentally wrong because a lot of the programming patterns in the kernel are fundamentally wrong in C as well. They are just proliferated technical debt.

What should be done is to look at it from the Rust perspective in the first place: How should this stuff be implemented correctly?

That sent Feng back to the drawing board; he eventually settled on an approach that tracks the number of nested scopes with interrupts disabled. When the number changes between zero and one, the code can enable or disable interrupts as appropriate, but it no longer matters what order locks are dropped in. That code is written in C, so it can benefit the rest of the kernel as well.
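
A minimal sketch of the counting idea (greatly simplified; the real patch set keeps the count per CPU alongside preempt_count(), and the function names used here are the ones mentioned later in the article):

    #include <linux/irqflags.h>

    /* Illustration only; not the actual implementation. */
    static unsigned int irq_disable_depth;

    void local_interrupt_disable(void)
    {
        if (irq_disable_depth++ == 0)
            local_irq_disable();    /* 0 -> 1: actually mask interrupts */
    }

    void local_interrupt_enable(void)
    {
        if (--irq_disable_depth == 0)
            local_irq_enable();     /* 1 -> 0: actually unmask interrupts */
    }

With the count in place, guards and locks can be released in any order; interrupts are re-enabled only when the outermost interrupt-disabling scope exits.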

Andreas Hindborg asked whether all of the existing C users of spin_lock_irqsave() would be adapted to Feng's approach, or whether this was a new API. Feng clarified that it was a new API, but that Rust code should only use this one. He said that in the mailing list discussion of the change, scheduling maintainer Peter Zijlstra was opposed to ending up with three ways to lock and unlock interrupt-aware spinlocks, and ran into problems when he attempted to mechanically convert the existing C users, but Feng was optimistic that Zijlstra would accept a slower transition.

Joel Fernandes asked how this API interacts with CondVar::wait(), which temporarily releases a spinlock (and reenables interrupts) while waiting for a notification. Feng said that CondVar::wait() would save the current depth of interrupt-disabling-nesting, and restore it afterward. Fernandes pointed out that this would only work with one lock held, which Feng agreed with. Waiting on a conditional variable with two locks held is broken anyway "and lockdep would yell at you".

The approach Feng presented nicely enables Rust code to interact with interrupt-aware spinlocks, but there's another small API change that it would enable: Rust could have a separate guard type to enable and disable interrupts (on the local CPU) for non-spinlock reasons. If the running code already has an instance of that type (indicating that it must have disabled interrupts at some point), then it can lock a spinlock without incrementing the count. That would be "useful in a few obscure cases", as well as for performance optimization.

Gary Guo, Benno Lossin, and Feng then launched into a detailed discussion of whether it would be better to let the existing interrupt-aware-spinlock guards serve that purpose, rather than creating a new type. The back and forth quickly escaped me, but it ended with the conclusion that Feng's API is the right path forward, although there may be some ways to avoid the performance overhead of an atomic write in the hot path in more cases in the future.

At that point, Feng went into the implementation details of the new API. There is a 32-bit, per-CPU field accessed via the preempt_count() function that is faster than a normal per-CPU variable; Feng tried seeing if he could steal bits 16 through 23 for the interrupt-disabled nesting count, but that reduced the level of non-maskable interrupt (NMI) nesting that could be tracked from 16 bits to eight bits. Feng wasn't sure why the kernel needs 32,000 levels of NMI nesting, but the maintainer insisted. Since most code only needs to care whether any NMI nesting is happening, not the exact level, Feng and Fernandes ended up reducing the NMI-nesting count to one bit in preempt_count(), plus a normal per-CPU variable for the actual depth.

The existing C users of interrupt-aware spinlocks should, in many cases, be able to just use the new API. Whenever a function has a paired disable and enable, that can safely be replaced with calls to the new local_interrupt_disable() and local_interrupt_enable() functions. "But there are some 'creative' unpaired uses."

Feng wrote a tool to find unpaired uses, in order to figure out how big a hassle migration would be. In the process, he actually found a latent bug in the kernel where someone had used the unconditional API in a callback when it should have used local_irq_save(). He doesn't think that the Rust API should be blocked on rewriting the existing code, though; that can be done more gradually afterward. "The new API is nicer; so maybe people will voluntarily switch."

If Zijlstra and other maintainers aren't happy with that answer, Feng's backup plan is to expose per-CPU variables to Rust, and then write the new API entirely in Rust — leaving the new API unavailable to C users. If most of the existing C users could have been trivially moved to the new, safer API, it would be a shame to miss out on improving the kernel's existing C code. Time will tell whether Feng's proposal is adopted, but, either way, the API for Rust drivers should soon be one step closer to parity with its counterpart for C drivers.

Comments (3 posted)

Last-minute /boot boost for Fedora 43

By Joe Brockmeier
October 9, 2025

Sudden increases in the size of Fedora's initramfs files have prompted the project to fast-track a proposal to increase the default size of the /boot partition for new installs of Fedora 43 and later. The project has also walked back a few changes that have contributed to larger initramfs files, but the ever-increasing size of firmware means that the need for more room is unavoidable. The Fedora Engineering Steering Committee (FESCo) has approved a last-minute change just before the final freeze for Fedora 43 to increase the default size of the /boot partition from 1GB to 2GB; this will leave plenty of space for kernels and initramfs images if a user is installing from scratch, but it is of no help for users upgrading from Fedora 42.

Dracut changes

The current default for Fedora, at least for Fedora Workstation on x86_64 and most other editions and spins, is to have a separate 1GB /boot partition. A system will have up to three kernels and initramfs images installed by default, plus a rescue kernel and initramfs that are generated during system installation.

In early September, Chris Murphy started a discussion on Fedora's developer mailing list about a recent change that he had noticed in the size of the initramfs files; the images had ballooned from 35MB to 59MB. Adam Williamson said that he had recently noticed test failures caused by a warning that /boot was nearly full, but he solved them by doubling the space allocated for the /boot partition. Fabio Valentini said that he had also encountered a big increase in initramfs sizes, though he had not figured out the cause.

Murphy noted that the smaller images had been created with dracut version 105, while the larger ones had been created using version 107. He filed a bug on September 9; the problem stemmed from a change in dracut to include additional drivers and modules in the initramfs. That change was made so that a rescue image would be portable between systems: if, for example, a user needed to move an NVMe drive to a new system because of a hardware failure, the extra drivers would help ensure that the drive could boot on the new hardware.

The bug was accepted as a blocker for the Fedora 43 release, and Pavel Valena began working on reverting the change.

Firmware increases

On September 21, Murphy started a separate discussion about increasing the size of Fedora's boot partition. This time he was concerned about the increasing size of the rescue image's initramfs. He said he was unsure when the rescue image had grown, but shared that a rescue initramfs containing NVIDIA and AMD firmware was more than 256MB. Removing the NVIDIA firmware shrank the image to a more manageable 130MB.

Murphy noted that Fedora last increased the size of /boot from 512MB to 1GB in 2016. Since firmware is getting bigger "at a faster rate than most other things", he wondered if Fedora should increase the volume size.

Luya Tshimbalanga suggested that Fedora should consider making /boot a Btrfs subvolume rather than a dedicated LVM partition. Unlike LVM partitions, Btrfs subvolumes share the storage pool of the entire Btrfs filesystem; therefore, if a Btrfs filesystem has, say, 2TB of space, a /boot subvolume would also have up to that amount available. (Subvolumes can have capacity limits, but not by default.)

Neal Gompa replied that he would like to see that happen, but it is not currently supported by GRUB. SUSE has created a patch for GRUB that allows configuring /boot as a Btrfs subvolume; the patch has not been accepted upstream yet, though. The most recent version of the patch was sent by Michael Chang to the grub-devel list on October 2. There is a bug tracking progress of this feature for Fedora, but it seems clear that it will not be ready in time for Fedora 43; it would no doubt require more testing than simply bumping up the size of /boot.

Rather than making /boot bigger, Zbigniew Jędrzejewski-Szmek asked if it might make sense to get rid of the rescue images instead:

I have been using Fedora for ... a few years now, and I have never used one of those either. If somebody is using the Rescue images, please speak up.

Otherwise, I think we should get a proposal filed for F44 and get rid of them.

Vitaly Zaitsev liked the idea, and said that users could make use of a Fedora live image to repair their system if needed. Marius Schwarz said that was technically true, "but in reality only experts can do this". He argued that the Linux community was continuing to expand, and that there were many users who wanted to use Linux without having to know everything about it. "We have to accept this, or we will never be 'the next coming pc desktop os'." Ultimately, the idea to dump Fedora's rescue image did not gain traction.

On October 1, Murphy pointed out that a fix had been applied to dracut that would reduce the size of the regular initramfs files by about 50%, but the rescue initramfs was still a problem. The NVIDIA firmware alone, he said, was consuming more than 99MB. He proposed that Fedora should set the /boot partition size with the goal that a user could upgrade for five years without having to resize the partition.

I don't have a completely logical process for imagining what the size should be, because I don't know the current let alone the future growth rate of firmware that might need to be in the initramfs. If a lofty goal is that a given partition layout should be possible to upgrade for 5 years without reprovisioning, then maybe 50% free space today is sufficient room to grow over 5 years? That might still be tight but then maybe we'll find a better solution to the firmware problem than stuffing it all in the initramfs?

Schwarz noted that the NVIDIA firmware contained two major releases; splitting the firmware package into two versions could reduce the size significantly. Williamson said that Fedora probably didn't need one of the versions at all; David Airlie replied that he had sent a patch to allow picking only the most recent firmware.

Emergency change

Even with some of the ideas to reduce initramfs size, it was clear that the 1GB constraint was going to continue to be a problem. Despite the fact that it was late in the cycle for such a change, Murphy submitted an emergency system-wide change ticket for FESCo to consider increasing the partition size to 2GB. Simon de Vlieger wondered if the change applied to all Fedora versions or just the x86_64 desktop versions of Fedora. Gompa clarified that it was universal, "because the firmware problem isn't x86-specific. It's way more visible on x86, but it's not an x86-only problem."

Kevin Fenzi said that he was generally in favor of moving to a 2GB partition but did not like doing it at the last minute. He still voted in favor of the change, with the hope it would not cause problems that FESCo had not thought of. Fabio Alessandro Locati also expressed concern but voted in favor as well. Gompa announced that the change had reached seven +1 votes and was approved on October 6.

Upgrades

With the change approved, users installing Fedora 43 should have ample space for upgrades for years to come. However, it does little to help users who installed Fedora previously and hope to upgrade to 43 when it is released. The documentation of the change simply suggests that users of older releases "may be advised to consider reinstalling instead of upgrading".

Users who are reluctant to reinstall may wish to reduce the number of kernels kept on the system, remove the rescue kernel and initramfs image, or both. To retain two kernels rather than three, users can set installonly_limit=2 in /etc/dnf/dnf.conf. The rescue kernel and its initramfs are generated during installation and are not installed as a package, so they would need to be removed manually; setting dracut_rescue_image="no" in /usr/lib/dracut/dracut.conf.d/02-rescue.conf keeps a new rescue image from being generated.
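
For reference, a minimal sketch of those two settings, assuming the stock file locations mentioned above and leaving the rest of each file alone, might look like this:

    # /etc/dnf/dnf.conf: keep two installed kernels instead of three
    [main]
    installonly_limit=2

    # /usr/lib/dracut/dracut.conf.d/02-rescue.conf: do not generate a
    # rescue initramfs
    dracut_rescue_image="no"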

Fedora's final freeze for the 43 release started on October 7; according to the tracking bug for the change, updates have been submitted to ensure that the Anaconda installer and Fedora's image-builder set the partition size correctly. Interested users may wish to help test some of the nightly composes to see if the late change introduces any unexpected problems.

Comments (60 posted)

Page editor: Jonathan Corbet


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds