
Leading items

Welcome to the LWN.net Weekly Edition for December 11, 2025

This edition contains the following feature content:

  • Eventual Rust in CPython
  • A "frozen" dictionary for Python
  • Bazzite: a gem for Linux gamers
  • Disagreements over post-quantum encryption for TLS

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

Eventual Rust in CPython

By Daroc Alden
December 5, 2025

Emma Smith and Kirill Podoprigora, two of Python's core developers, have opened a discussion about including Rust code in CPython, the reference implementation of the Python programming language. Initially, Rust would only be used for optional extension modules, but they would like to see Rust become a required dependency over time. The initial plan was to make Rust required by 2028, but Smith and Podoprigora indefinitely postponed that goal in response to concerns raised in the discussion.

The proposal

The timeline given in their pre-PEP called for Python 3.15 (expected in October 2026) to add a warning to Python's configure script if Rust is not available when Python is built. Any use of Rust would be strictly optional at that point, so the build would not fail if the toolchain were missing. At this stage, Rust would be used in the standard library to provide native versions of Python modules that are important to the performance of Python applications, such as base64. Example code to accomplish this was included in the proposal. In 3.16, the configure script would fail if Rust is missing unless users explicitly provide the "--with-rust=no" flag. In 3.17 (expected in 2028), Python could begin strictly requiring Rust at build time, although it would not be required at run time for users who get their Python installation in binary form.
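
Under that timeline, building 3.16 might look something like this. This is a sketch: only the "--with-rust=no" flag comes from the pre-PEP, and the exact configure behavior could still change.

```shell
# In a CPython 3.16 source tree, per the proposed timeline:
./configure                  # would fail if no Rust toolchain is found
./configure --with-rust=no   # explicitly opt out of the optional Rust modules
make
```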

Besides Rust's appeal as a solution to memory-safety problems, Smith cited the increasing number of third-party Python extensions written in Rust as a reason to bring the proposal forward. Perhaps, if Rust code could be included directly in the CPython repository, the project would attract more contributors interested in bringing their extensions into the standard library, she said. The example in the pre-PEP was the base64 module, but she expressed hope that many areas of the standard library could see improvement. She also highlighted the Rust for Linux project as an example of this kind of integration going well.

Cornelius Krupp was apprehensive; he thought that the Rust-for-Linux project was more of a cautionary tale, given the public disagreements between maintainers. Those disagreements have settled down over time, and the kernel community is currently integrating Rust with reasonable tranquility, but the Rust for Linux project still reminds many people of how intense disagreements over programming languages can get in an established software project. Jacopo Abramo had the same worry, but thought that the Python community might weather that kind of disagreement better than the kernel community has. Smith agreed with Abramo, saying that she expected the experience to be "altogether different" for Python.

Steve Dower had a different reason to oppose the proposal: he wasn't against the Rust part, but he was against adding more optional modules to Python's core code. In his view, optional extensions should really live in a separate repository. Da Woods pointed out that the proposal wouldn't bring any new features or capabilities to Python. Smith replied (in the same message linked above) that the goal was to eventually introduce Rust into the core of Python in a controlled way; the proposal wasn't only about enabling extension modules. That didn't satisfy Dower, however: he said that his experience with Rust, mixed-language code, and teams forced him to disapprove of the entire proposal. Several other community members agreed with his disapproval for reasons of their own.

Chris Angelico expressed concern that Rust might be more susceptible to a "trusting trust" attack (where a compiler is invisibly subverted to introduce targeted backdoors) than C, since right now Rust only has one usable compiler. Sergey Davidoff linked to the mrustc project, which can be used to show that the Rust compiler (rustc) is free of such attacks by comparing the artifacts produced from rustc and mrustc. Dower agreed that Rust didn't pose any more security risk than C, but also wasn't sure how it would provide any security benefits, given that CPython is full of low-level C code that any Rust code will need to interoperate with. Aria Desires pointed to the recent Android Security post about the adoption of Rust as evidence that mixed code bases adopting Rust do end up with fewer security vulnerabilities.

Not everyone was against the proposal, however. Alex Gaynor and James Webber both spoke up in favor. Guido van Rossum also approved, calling the proposal a great development and saying that he trusted Smith and others to guide the discussion.

Stephan Sokolow pointed out that many people were treating the discussion as being about "Rust vs. C", but that in reality it might be "Rust vs. wear out and stop contributing". Paul Moore thought that was an insightful point, and that the project should be willing to put in some work now in order to make contributing to the project easier in the future.

Nathan Goldbaum is a maintainer of the PyO3 project, which provides Rust bindings to the Python interpreter to support embedding Python in Rust applications and writing Python extensions in Rust. He said that having official Rust bindings would significantly reduce the amount of work he has to do to support new Python versions. Another PyO3 maintainer, David Hewitt, agreed, going on to suggest that perhaps CPython would benefit from looking at the API that PyO3 has developed over time and picking "the bits that work best".

Raphael Gaschignard thought that the example Rust code Smith had provided would be a more compelling argument for adopting Rust if it demonstrated how using the language could simplify error handling and memory management compared to C code. Smith pointed out one such example, but concurred that the current proof-of-concept code wasn't a great demonstration of Rust's benefits in this area.

Gentoo developer Michał Górny said that the inclusion of Rust in CPython would be unfortunate for Gentoo, which supports many niche architectures that other distributions don't:

I do realize that these platforms are not "supported" by CPython right now. Nevertheless, even though there historically were efforts to block building on them, they currently work and require comparatively little maintenance effort to keep them working. Admittedly, the wider Python ecosystem with its Rust adoption puts quite a strain on us and the user experience worsens every few months, we still manage to provide a working setup.

[...]

That said, I do realize that we're basically obsolete and it's just a matter of time until some projects pulls the switch and force us to tell our users "sorry, we are no longer able to provide a working system for you".

Hewitt offered assistance with Górny's integration problems. "I build PyO3 [...] to empower more people to write software, not to alienate." Górny appreciated the thought, but reiterated that the problem here was Rust itself and its platform support.

Scaling back

In response to the concerns raised in the discussion, Smith and Podoprigora scaled back the goals of the proposal, saying that it should be limited to using Rust for optional extension modules (i.e. speeding up parts of the standard library) for the foreseeable future. They still want to see Rust adopted in CPython's core eventually, but a more gradual approach should help address the bootstrapping, language-portability, and related concerns that people raised in the thread, Smith said.

That struck some people as too conservative. Jelle Zijlstra said that if the proposal were limited to optional extension modules, it would bring complexity to the implementation of the standard library for a marginal benefit. Many people are excited about bringing Rust to CPython, Zijlstra said, but restricting Rust code to optional modules means putting Rust in the place that it will do the least good for the CPython code. Several other commenters agreed.

Smith pushed back, saying that moving to Rust was a long-term investment in the quality of the code, and that a slow, conservative early period of the transition would help build the knowledge and experience necessary to make it succeed. She later clarified that much of the benefit she saw in the deliberately careful proposal was in doing the groundwork to make using Rust possible at all: sorting out the build-system integration, starting to gather feedback from users and maintainers, and prototyping what a native Rust API for Python could look like. All of that has to happen before it makes sense to consider Rust in the core code, so even though she eventually wants to reach that state, it makes sense to start here.

At the time of writing, the discussion is still ongoing. The Python community has not reached a firm conclusion about the adoption of Rust — but it has definitely ruled out a fast adoption. If Smith and Podoprigora's proposal moves forward, it still seems like it will be several years before Rust is adopted in CPython's core code, if it ever is. Still, the discussion also revealed a lot of enthusiasm for Rust — and that many people would rather contribute code written in Rust than attempt to wrestle with CPython's existing C code.

Comments (61 posted)

A "frozen" dictionary for Python

By Jake Edge
December 4, 2025

Dictionaries are ubiquitous in Python code; they are the data structure of choice for a wide variety of tasks. But dictionaries are mutable, which makes them problematic for sharing data in concurrent code. Python has added various concurrency features to the language over the last decade or so—async, free threading without the global interpreter lock (GIL), and independent subinterpreters—but users must work out their own solution for an immutable dictionary that can be safely shared by concurrent code. There are existing modules that could be used, but a recent proposal, PEP 814 ("Add frozendict built-in type"), looks to bring the feature to the language itself.
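
One workaround available today is types.MappingProxyType, which wraps a dict in a read-only view; a short sketch shows why it falls short of a true immutable dictionary for sharing between threads:

```python
from types import MappingProxyType

config = {'retries': 3}
view = MappingProxyType(config)   # a read-only *view*, not an immutable copy

try:
    view['retries'] = 5           # writes through the view are rejected
except TypeError:
    rejected = True
assert rejected

# ...but whoever holds the underlying dict can still mutate it, and the
# change is visible through the view, so sharing it concurrently is unsafe:
config['retries'] = 5
assert view['retries'] == 5
```

The proxy also cannot be hashed, so it cannot serve as a dictionary key or set element the way a truly immutable mapping could.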

Victor Stinner announced the PEP that he and Donghee Na have authored in a post to the PEPs category of the Python discussion forum on November 13. The idea has come up before, including in PEP 416, which has essentially the same title as 814 and was authored by Stinner back in 2012. It was rejected by Guido van Rossum at the time, in part due to its target: a Python sandbox that never really panned out.

frozendict

The idea is fairly straightforward: add frozendict as a new immutable type to the language's builtins module. As Stinner put it:

We expect frozendict to be safe by design, as it prevents any unintended modifications. This addition benefits not only CPython's standard library, but also third-party maintainers who can take advantage of a reliable, immutable dictionary type.

While frozendict has a lot in common with the dict built-in type, it is not a subclass of dict; instead, it is a subclass of the base object type. The frozendict() constructor can be used to create one in various ways:

    fd = frozendict()            # empty
    fd = frozendict(a=1, b=2)    # frozen {'a': 1, 'b': 2}
    d = {'a': 1, 'b': 2}
    fd = frozendict(d)           # same contents
    l = [('a', 1), ('b', 2)]
    fd = frozendict(l)           # same contents
    fd2 = frozendict(fd)         # same contents
    assert d == fd == fd2        # passes

As with dictionaries, the keys of a frozendict must be hashable, which in practice means immutable; the values, however, may or may not be. For example, a list is a legitimate value type in either kind of dictionary, but it is mutable, so the data reachable from the dictionary (frozen or not) can still change. If all of the values stored in a frozendict are immutable, though, the frozendict as a whole is immutable, so it can be hashed and used in places where that is required (e.g. as a dictionary key, a set element, or an argument to a functools.lru_cache-cached function).
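
The hashing rule can be sketched in pure Python. This is a rough emulation for illustration only; PEP 814's frozendict would be a C-level builtin with more differences than shown here:

```python
from collections.abc import Mapping

class FrozenDict(Mapping):
    """Pure-Python sketch of the proposed frozendict semantics."""

    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)

    def __getitem__(self, key):
        return self._d[key]

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)

    def __hash__(self):
        # Order-independent: hashing the *set* of items ignores insertion
        # order. Raises TypeError if any value is unhashable (e.g. a list).
        return hash(frozenset(self._d.items()))

fd1 = FrozenDict(a=1, b=2)
fd2 = FrozenDict({'b': 2, 'a': 1})          # different insertion order
assert fd1 == fd2 and hash(fd1) == hash(fd2)

cache = {fd1: "cached result"}              # hashable, so usable as a key
assert cache[fd2] == "cached result"

try:
    hash(FrozenDict(a=[1, 2]))              # mutable value -> unhashable
except TypeError:
    pass
```

The equality behavior comes for free from the collections.abc.Mapping mixin, which also makes comparisons against plain dict objects work.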

As might be guessed from the last line of the example above, frozen dictionaries that are hashable can be compared for equality with other dictionaries of either type. In addition, neither the hash() value nor the equality test depends on the insertion order of the dictionary, though that order is preserved in a frozen dictionary (as it is in the regular variety). So:

    d = {'a': 1, 'b': 2}
    fd = frozendict(d)
    d2 = {'b': 2, 'a': 1}
    fd2 = frozendict(d2)
    assert d == d2 == fd == fd2

Unions also work, as these examples from the PEP show:

    >>> frozendict(x=1) | frozendict(y=1)
    frozendict({'x': 1, 'y': 1})
    >>> frozendict(x=1) | dict(y=1)
    frozendict({'x': 1, 'y': 1})

For the unions, a new frozen dictionary is created in both cases; the "|=" union-assignment operator also works, by generating a new frozendict for the result.
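
That rebinding behavior for augmented assignment on an immutable type can be seen today with tuple, which works the same way: the right-hand side builds a new object, and the name is rebound to it.

```python
t = (1, 2)
before = id(t)
t += (3,)                  # builds a new tuple; nothing is mutated in place
assert t == (1, 2, 3)
assert id(t) != before     # the name now refers to a different object
```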

Iteration over a frozendict works as expected; the type implements the collections.abc.Mapping abstract base class, so .items() returns an iterable of key-value tuples, while .keys() and .values() provide the keys and values of the frozen dictionary. For the most part, a frozendict acts like a dict that cannot change; the specific differences between the two are listed in the PEP. It also contains a lengthy list of places in the standard library where a dict could be switched to a frozendict to "enhance safety and prevent unintended modifications".

Discussion

The reaction to the PEP was generally positive, with the usual suggestions for tweaks and more substantive additions to the proposal. Stinner kept the discussion focused on the proposal at hand for the most part. One part of the proposal was troubling to some: converting a dict to a frozendict was described as an O(n) shallow copy. Daniel F Moisset thought that it would make sense to have an in-place transformation that could be O(1) instead. He proposed adding a .freeze() method that would essentially just change the type of a dict object to frozendict.

However, changing the type of an existing object is fraught with peril, as Brett Cannon described:

But now you have made that dictionary frozen for everyone who holds a reference to it, which means side-effects at a distance in a way that could be unexpected (e.g. context switch in a thread and now suddenly you're going to get an exception trying to mutate what was a dict a microsecond ago but is now frozen). That seems like asking for really nasty debugging issues just to optimize some creation time.

The PEP is not aimed at performance, he continued, but is meant to help "lessen bugs in concurrent code". Moisset noted that dictionaries can already change in unexpected ways via .clear() or .update(), so the debugging issues already exist. He recognized that the authors may not want to tackle that as part of the PEP, but wanted to ensure that an O(1) transformation was not precluded in the future.

Cannon's strong objection was to changing the type of the object directly. Ben Hsing and "Nice Zombies" proposed ways to construct a new frozendict without the shallow copy—thus O(1)—either by moving the hash table into a newly created frozendict (clearing the source dictionary in the process) or by using a copy-on-write scheme for the table. As Steve Dower noted, that optimization can be added later as long as the PEP does not specify that the operation must be O(n); specifying that would be a silly thing to do, but it sometimes happens "because it makes people stop complaining", he said in a footnote. In light of the discussion, the PEP now specifically defers that optimization to a later time, suggesting that it could also be done for other frozen types (tuple and frozenset), perhaps by resurrecting PEP 351 ("The freeze protocol").

On December 1, Stinner announced that the PEP had been submitted to the steering council for pronouncement. Given that Na is on the council (though he will presumably recuse himself from deciding on this PEP), he probably has a pretty good sense of how it might be received by the group, so it seems likely that the PEP has a good chance of being approved. The availability of the free-threaded version of the language (i.e. without the GIL) means that more multithreaded Python programs are being created, so having a safe way to share dictionaries between threads will be a boon.

Comments (12 posted)

Bazzite: a gem for Linux gamers

By Joe Brockmeier
December 9, 2025

One of the things that has historically stood between Linux and the fabled "year of the Linux desktop" is its lack of support for video games. Many users who would have happily abandoned Windows have, reluctantly, stayed for the video games or had to deal with dual booting. In the past few years, though, Linux support for games—including those that only have Windows versions—has improved dramatically, if one is willing to put the pieces together. Bazzite, an image-based Fedora derivative, is a project that aims to let users play games and use the Linux desktop with almost no assembly required.

Bazzite is part of the Universal Blue project, which aims to provide "the reliability of a Chromebook, but with the flexibility and power of a traditional Linux desktop" via immutable Linux images based on Fedora. LWN covered another Universal Blue project, Bluefin, in December 2023. Each Universal Blue project is built from bootc images derived from the project's common base images, which in turn are based on Fedora Atomic Desktops (or on CentOS Stream, in the case of the long-term-support (LTS) release of Bluefin).

Bazzite is relatively new to the scene: its first release was in November 2023, based on Fedora 38. The project takes its name from bazzite: a mineral that forms small blue crystals. It is designed to be similar to Valve's SteamOS, but more suitable for general desktop use. It is largely based on stock software from Fedora, but adds a number of packages to provide hardware drivers for gaming, including support for various game controllers, keyboards, and proprietary NVIDIA drivers.

Bazzite also provides a custom kernel: the project starts from the Always Ready Kernel (kernel-ark) repository that is used to develop Fedora Linux's kernels, then applies a set of patches for device support and performance optimizations for games. The project recently announced its Fall 2025 update based on Fedora 43, with support for new handheld gaming PCs and steering-wheel hardware.

Choose your character

The common wisdom for many years has been that too many choices overwhelm users who are new to Linux; the Bazzite project has largely chosen to ignore that wisdom. The biggest hurdle for getting started with Bazzite may be deciding which image to start with. The first step in the quest to install Bazzite begins with a web form that helps users choose the right image by selecting the type of hardware (desktop, laptop, handheld, tablet, or virtual machine), GPU vendor and model, desktop environment (GNOME or KDE), and whether the user wants the Steam gaming mode or not.

To add to the confusion, there is also a Bazzite Developer eXperience (DX) option for game developers. Choosing the right image for Bazzite is similar to the beginning of many video games, where users are asked to make choices about the character they will play in the game.

Once users have made their choices, they are offered two downloads: an ISO image that boots into the installer, or a live image that allows them to run a Bazzite desktop and install from there if they decide to do so. I installed Bazzite with the KDE desktop on a gaming PC that I acquired in 2022; it was not the top of the line for gaming then and has fallen further behind in the interim. However, it still has enough horsepower for current games and more than enough for the titles from the late 1990s and early 2000s that I prefer.

Installing Bazzite is simple enough; since it is based on Fedora, it uses the Anaconda installer and requires little decision-making on the part of users.

Managing software

The nice thing about image-based distributions is the ease of managing the base system; when there is an update for Bazzite, it automatically downloads the image for the update and queues the image for use on the next reboot. If there is a problem of some kind with the new image, one can revert to the previous known-good release and catch up with updates later. Like Fedora, Bazzite uses the OSTree tools for this under the hood, but Bazzite provides a helper utility to make the process as simple as possible.

Bazzite has three channels for its images: stable, testing, and unstable. Stable is, of course, the one that's recommended for most users. The testing image has, as one might expect, changes that are expected to land in the stable branch at some point. The unstable image is described as a "playground for developers/contributors" and not recommended. Users can switch between the different channels by rebasing with rpm-ostree. For example, to move to testing, one would run this command:

    rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/bazzite:testing

The project publishes new images about twice a week for its stable channel, and more often for the testing channel. These roll up any changes from Bazzite as well as Fedora. Upgrades between major Fedora releases, such as from Fedora 42 to Fedora 43, are automatic; users on the stable channel will simply get Fedora 43 when the project moves to that release.

A maze of twisty little passages

The downside to image-based distributions is that installing software not included with the image is a bit more complicated. The project suggests seven different ways to install software, ranked from most to least recommended. Again, this might be a bit confusing for users who are new to Linux. The good news is that Bazzite's documentation is clear and current.

The project's top-ranked choice is to use ujust, which is Bazzite's wrapper for the just command runner. The software available via ujust is limited to a handful of just recipes to install drivers, programs, and utilities. For example, "ujust install-steamcmd" will install the Steam Console Client, "ujust install-openrazer" will install the OpenRazer drivers for Razer gaming hardware, and "ujust install-coolercontrol" adds CoolerControl for monitoring and controlling cooling devices on gaming systems.

Note that Bazzite depends on ujust for more than installing software. For example, the "ujust changelogs" command will display the changelog for the current release. That includes the packages that have changed, commits to the Bazzite repository, and instructions on how to rebase to a different image if needed. The "ujust pick" command brings up a handy text-user interface dialog with ujust recipes organized by category.

The next recommendation on Bazzite's list is to use Flatpak to install applications. The distribution includes the Bazaar app store, along with Flatseal and Warehouse for managing Flatpak permissions and data. By default, Bazaar is pointed at Flathub as its source of applications, and there does not seem to be a way to add additional Flatpak remotes through Bazaar itself. It is possible to use the "flatpak remote-add" utility if one wishes to enable additional remote repositories, though. Once the remote is enabled, the available Flatpaks will show up in Bazaar.

Bazaar allows users to filter out applications that have stopped receiving updates, and to hide proprietary software from search results. Unfortunately, it does not have a setting to filter applications by whether they are verified as being supplied by the original developer or vendor. For example, KDE applications on Flathub like Okular have a checkmark and note that Flathub has verified the application is from KDE. One of the first things I do when using GNOME Software on Fedora is to turn off the Fedora Flatpak repositories and to filter out unverified applications. It'd be reassuring if Bazaar also allowed filtering unverified applications.

If one wants to install Google Chrome, for example, it seems like a good idea to get a package directly from Google rather than a third-party Flatpak that may well be out of date with respect to the upstream release. The current Flathub version is 142.0.7444.175, from November 17, while the most recent release from Google is 143.0.7499.40; that release has 13 security fixes that users would no doubt like to have immediately. Google provides Debian packages and RPMs, but it does not provide a Flatpak; users are left with the unofficial package. That's fine if users understand that it's not coming from Google, but many users will not know to check whether software is verified or not.

There is not a lot to say about using Bazaar itself, really. It has a simple app-store interface that should be immediately intuitive for long-time Linux users as well as folks fresh to Linux from macOS and Windows.

Whether Flatpaks are the best option, however, is a matter of taste. The project also recommends using Distrobox, though it's only ranked fifth. Users can employ Distrobox to create a container with their preferred Linux distribution (or distributions) and install applications in the container. Distrobox integrates with the host system so that it is largely transparent to the user that the application is running in a container. DistroShelf, a graphical interface for managing Distrobox containers, is included with Bazzite and makes setting up the containers and exporting applications from the container a breeze.

I have found that I prefer using Distrobox on Bazzite in lieu of Flatpaks. For one thing, I prefer using APT or DNF to install applications over the app-store model. Flatpaks also have a number of limitations and irritations due to sandboxing that can usually be overcome, but Distrobox has fewer of those problems. For example, the Kid3 audio tagger can only access a few directories when running as a Flatpak. One could use Flatseal to modify this, but running Kid3 in a Fedora container with Distrobox avoids the problem entirely. It is also apparent where an application's configuration files are when installed from a distribution package. Admittedly, this may also be a reluctance to adapt to a new technology on my part. However, my suggestion to those trying Bazzite after using a package-based Linux distribution would be to take the Distrobox route.
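
As a sketch of that route, the commands below create a Fedora container and install the Kid3 example from above (the container name is illustrative, and package names may differ):

```shell
# Create a Fedora 43 container and enter it:
distrobox create --name fedora-box --image registry.fedoraproject.org/fedora:43
distrobox enter fedora-box

# Inside the container, install software with DNF as usual:
sudo dnf install -y kid3

# Still inside the container, export the app so it shows up in the
# host's application menu alongside native applications:
distrobox-export --app kid3
```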

[Bazzite desktop]

Homebrew is also included with Bazzite, and it is the third-choice option suggested by the project. LWN covered Homebrew in November 2025. Homebrew is easy enough to use, but its package selection is mostly limited to command-line applications; if a user wants LibreOffice or to use Emacs with a GUI, Homebrew is not an option.

In order to set up a service on Bazzite, the project recommends using Podman to set up a Quadlet. This is a method of orchestrating a container or set of containers using systemd without the overhead of Kubernetes.
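
A minimal Quadlet is just an INI-style unit file that systemd's Quadlet generator turns into a service; the file name, image, and port below are illustrative:

```ini
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example web service run as a Quadlet

[Container]
Image=docker.io/library/nginx:stable
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a "systemctl --user daemon-reload", the generated web.service unit can be started and managed like any other systemd service.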

If those methods aren't suitable for some reason, the project recommends the AppImage format or (as a last resort) using rpm-ostree to layer packages onto the image. The project warns that layering packages "can be destructive and may prevent updates as well as other issues until the layered packages are uninstalled". Note that some ujust tasks that the project includes with Bazzite do make use of package layering, though one presumes that those operations are tested extensively. As a rule, though, users should probably avoid package layering unless it's necessary to add packages with drivers.

The obvious problem with mixing and matching multiple methods of application installation, compared with a single package manager to rule them all, is that keeping software up to date could be a hassle. However, Bazzite makes this simple; it has a "ujust update" recipe that uses the Topgrade utility to take care of updating the base image, Flatpaks, distroboxes, and Homebrew packages all at once. Users are still on their own when it comes to maintaining AppImages, though.

Playing games

There are some games, proprietary and open source, that have native builds for Linux; but the number of native Linux ports is eclipsed by the number of popular PC games that are still exclusively for Windows. However, many of the Windows-native games will play on Linux thanks to the work that has gone into projects like Wine and its vkd3d project for Microsoft's Direct3D version 12 API, as well as the DXVK project, which is a Vulkan-based translation layer for Direct3D versions 8 through 11.

Those projects are used by Proton, which is incorporated in the proprietary Steam Client, as well as the Lutris open-gaming platform for Linux; both of these are pre-installed with Bazzite. The Steam package is provided via a third-party repository since it cannot be included with Fedora. The Lutris package comes from Bazzite's repository on Fedora's Copr build service.

If one purchases games through Steam, the Steam Client is obviously the best choice. I spent several hours over the past few weeks, especially during the US Thanksgiving holiday, trying out a number of different games, ranging from games released for Windows nearly 20 years ago to "Doom: The Dark Ages", which was released in May 2025. In all, I tried about ten different games with Steam on Bazzite; some I played for about 20 minutes, and others I've logged several hours with over that time period. There are still some games that do not work well on Linux—but I did not encounter any problems during my testing. Not with the games themselves, anyway—I cannot say the same for my video-game skills, which have clearly deteriorated from disuse.

Lutris, on the other hand, proved to be frustrating. It supports games from the Steam library, but also works with other vendors such as Battle.net, Epic Games, and Humble Bundle. In addition, it lets users install Windows games that were not acquired through one of the online stores, and it acts as a launcher for gaming emulators, which it calls "runners", as well. For example, Lutris claims support for DOSBox for MS-DOS games, the MAME arcade-game emulator, and more than 50 others. It even allows users to install runners directly from Lutris rather than having to find and install them separately.

I tried installing a few Windows games from CD-ROM; each time the installation seemed to work perfectly, but the games failed to launch. Unfortunately, Lutris does not have much in the way of documentation: it has a forum and a Discord channel, but little else. The forum, alas, currently seems to be overwhelmed with spam about customer-service-contact numbers for airlines. The Bazzite project has some minimal documentation for Lutris, but it did not address the problems I was seeing.

Lutris was the only real sour note I had using Bazzite; I can imagine that users coming from Windows would be equally disappointed if their experience matches mine. My experience may have been an outlier and others might have better success.

I had better luck with PortProton, which is more limited in scope than Lutris but worked well at what it offers. It lets users install various game launchers written for Windows, and then use those to install games. PortProton is not installed by default with Bazzite but is available as a Flatpak. I was able to easily install games from other services, like Itch.io, whether they were Windows-only or Linux-native. It was also easy to get MAME running with PortProton, though I did not spend a lot of time (yet) finding ROMs to test it out with.

It's not all fun and games

All work and no play may make one dull, but all play and no work can have undesirable effects when it comes time to pay the bills. Unless a user has a dedicated machine for gaming, they'll want the distribution to be good for regular desktop use as well. Since Bazzite is basically Fedora under the hood, it serves perfectly well as a desktop for productivity use (e.g., using LibreOffice or web-based work tools), as well as a platform for Linux development or system administration. It took no time at all to set up a Fedora 43 Distrobox container with my usual set of applications.

The missing piece, for me, was the choice of desktop. KDE is fine, but I've come to prefer using the niri Wayland compositor. Installing applications in a Distrobox is a piece of cake, but installing a desktop is a bit trickier. It may be possible, but it is not supported.

Since Bazzite is built using bootc, though, it is possible to create a custom image with a different selection of software. The Universal Blue project even provides an image template with instructions on how to do so, and encourages users to share examples.

Using that template, it was relatively easy to set up a custom image with niri and additional packages included so that I could use my preferred desktop. The downside is that the template and tools depend on GitHub Actions to build the image, which may not sit well with users who would rather not rely on GitHub's infrastructure. It is possible to build the containers locally and use those, but I'm still tinkering with that workflow. When using the template and GitHub Actions, new images will, by default, be created on a schedule; those will include updates to Bazzite plus any additional packages.
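For the curious, building such an image amounts to a short Containerfile layered on top of the Bazzite base image. The sketch below is a minimal, hypothetical example; the base-image reference and package names are my own assumptions, not taken from the template:

```dockerfile
# Hypothetical Containerfile for a Bazzite-based image with niri added.
# The base image and package names are illustrative assumptions; see the
# Universal Blue image template for the canonical starting point.
FROM ghcr.io/ublue-os/bazzite:stable

# Layer extra packages into the image, then commit the result so that
# the container is suitable for use as a bootable (bootc) image.
RUN rpm-ostree install niri fuzzel waybar && \
    ostree container commit
```

Once built and pushed to a registry, a running system can be rebased onto the new image with "bootc switch" (or "rpm-ostree rebase" on older setups).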

Overall, Bazzite seems to be a good choice for users who prioritize PC gaming and are coming from Windows to Linux—especially those users who use Steam heavily, which seems to be the case for most of the gaming enthusiasts in my social circles.

Comments (32 posted)

Disagreements over post-quantum encryption for TLS

By Daroc Alden
December 8, 2025

The Internet Engineering Task Force (IETF) is the standards body responsible for the TLS encryption standard — which your browser is using right now to allow you to read LWN.net. As part of its work to keep TLS secure, the IETF has been entertaining proposals to adopt "post-quantum" cryptography (that is, cryptography that is not known to be easily broken by a quantum computer) for TLS version 1.3. Discussion of the proposal has exposed a large disagreement between participants who worried about weakened security and others who worried about weakened marketability.

What is post-quantum cryptography?

In 1994, Peter Shor developed Shor's algorithm, which can use a quantum computer to factor large numbers asymptotically faster (i.e. faster by a proportion that grows as the size of the input does) than a classical computer can. This was a huge blow to the theoretical security of the then-common RSA public-key encryption algorithm, which depends on the factoring of numbers being hard in order to guarantee security. Later work extended Shor's algorithm to apply to other key-exchange algorithms, such as elliptic-curve Diffie-Hellman, the most common key-exchange algorithm on the modern internet. There are doubts that any attack using a quantum computer could actually be made practical — but given that the field of cryptography moves slowly, it could still be worth getting ahead of the curve.

Quantum computing is sometimes explained as trying all possible answers to a problem at once, but that is incorrect. If that were the case, quantum computers could trivially break any possible encryption algorithm. Instead, quantum computers work by applying a limited set of transformations to a quantum state that can be thought of as a high-dimensional unit-length vector. The beauty of Shor's algorithm is that he showed how to use these extremely limited operations to reliably factor numbers.
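The classical scaffolding around that quantum core is easy to demonstrate: Shor's algorithm reduces factoring to finding the multiplicative order of a random base, and everything else is ordinary number theory. The toy sketch below (with brute-force order finding standing in for the quantum step) shows that post-processing:

```python
from math import gcd

def find_order(a, n):
    """Return the multiplicative order r of a modulo n (a^r = 1 mod n).
    A quantum computer does this step efficiently; brute force stands in
    for it here."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Given a random base a, try to split n using the order of a,
    exactly as in Shor's classical post-processing."""
    if gcd(a, n) != 1:
        return gcd(a, n)   # lucky guess: a already shares a factor with n
    r = find_order(a, n)
    if r % 2 == 1:
        return None        # odd order: retry with a different base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None        # trivial square root: retry with another base
    return gcd(y - 1, n)   # a nontrivial factor of n

print(shor_classical_part(15, 7))  # order of 7 mod 15 is 4 -> prints 3
```

Only find_order() needs the quantum computer; for large numbers, no efficient classical method for it is known.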

The study of post-quantum cryptography is about finding encryption mechanisms to which none of the generalizations of Shor's algorithm or related quantum algorithms apply: techniques for which there is no known way for a quantum computer to break them meaningfully faster than a classical computer can. While attackers may not be breaking encryption with quantum computers today, the worry is that they could mount a "store now, decrypt later" attack: record today's encrypted traffic and break it with the theoretically much more capable quantum computers of tomorrow.

For TLS, the question is specifically how to make a post-quantum key-exchange mechanism. When a TLS connection is established, the server and client use public-key cryptography to agree on a shared encryption key without leaking that key to any eavesdroppers. Then they can use that shared key with (much less computationally expensive) symmetric encryption to secure the rest of the connection. Current symmetric encryption schemes are almost certainly not vulnerable to attack by quantum computers because of their radically different design, so the only part of TLS's security that needs to be upgraded to resist a quantum attacker is the key-exchange mechanism.

Belt and suspenders

The problem, of course, is that trying to come up with novel, hard mathematical problems that can be used as the basis of an encryption scheme does not always work. Sometimes, cryptographers will pose a problem believing it to be sufficiently hard, and then a mathematician will come along and discover a new approach that makes attacking the problem feasible. That is exactly what happened to the SIKE protocol in 2022. Even when a cryptosystem is not completely broken, a particular implementation can still suffer from side-channel attacks or other problematic behaviors, as has happened with the post-quantum encryption standard Kyber/ML-KEM multiple times between its initial draft in 2017 and the present.

That's why, when the US National Institute of Standards and Technology (NIST) standardized Kyber/ML-KEM as its recommended post-quantum key-exchange mechanism in August 2024, it provided approved ways to combine a traditional key-exchange mechanism with a post-quantum key-exchange mechanism. When these algorithms are properly combined (which is not too difficult, although cryptographic implementations always require some care), the result is a hybrid scheme that remains secure so long as either one of its components remains secure.
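To see why the hybrid construction has that property, consider a sketch of the combination step. The example below is illustrative only: it runs a plain HKDF (RFC 5869) over the concatenated secrets, with placeholder values standing in for the outputs of the classical and post-quantum exchanges; the real TLS key schedule specifies the details precisely.

```python
import hashlib
import hmac
import os

def hkdf_extract(salt, ikm):
    """HKDF-Extract (RFC 5869) using SHA-256."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk, info, length):
    """HKDF-Expand (RFC 5869) using SHA-256."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder shared secrets standing in for the outputs of a classical
# (e.g. X25519) and a post-quantum (e.g. ML-KEM) key exchange.
ecdh_secret = os.urandom(32)
mlkem_secret = os.urandom(32)

# Hybrid derivation: both secrets are fed into the extraction step, so
# the result stays unpredictable unless BOTH components are broken.
prk = hkdf_extract(b"\x00" * 32, ecdh_secret + mlkem_secret)
session_key = hkdf_expand(prk, b"hybrid tls key", 32)
print(len(session_key))  # prints 32: a 32-byte session key
```

An attacker who recovers only one of the two input secrets (say, by breaking ML-KEM mathematically) still faces a key derived from data it does not know.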

The Linux Foundation's Open Quantum Safe project, which provides open-source implementations of post-quantum cryptography, fully supports this kind of hybrid scheme. The IETF's initial draft recommendation in 2023 for how to use post-quantum cryptography in TLS specifically said that TLS should use this kind of hybrid approach:

The migration to [post-quantum cryptography] is unique in the history of modern digital cryptography in that neither the traditional algorithms nor the post-quantum algorithms are fully trusted to protect data for the required data lifetimes. The traditional algorithms, such as RSA and elliptic curve, will fall to quantum cryptanalysis, while the post-quantum algorithms face uncertainty about the underlying mathematics, compliance issues (when certified implementations will be commercially available), unknown vulnerabilities, hardware and software implementations that have not had sufficient maturing time to rule out classical cryptanalytic attacks and implementation bugs.

During the transition from traditional to post-quantum algorithms, there is a desire or a requirement for protocols that use both algorithm types. The primary goal of a hybrid key exchange mechanism is to facilitate the establishment of a shared secret which remains secure as long as one of the component key exchange mechanisms remains unbroken.

But the most recent draft from September 2025, which was ultimately adopted as a working-group document, relaxes that requirement, noting:

However, Pure PQC Key Exchange may be required for specific deployments with regulatory or compliance mandates that necessitate the exclusive use of post-quantum cryptography. Examples include sectors governed by stringent cryptographic standards.

This refers to the US National Security Agency (NSA) requirements for products purchased by the US government. The requirements "will effectively deprecate the use of RSA, Diffie-Hellman (DH), and elliptic curve cryptography (ECDH and ECDSA) when mandated." The NSA has a history of publicly endorsing weak (and plausibly, internally, already broken) cryptography in order to make its job — monitoring internet communications — easier. If the draft were to become an internet standard, the fact that it optionally permits the use of non-hybrid post-quantum cryptography might make some people feel that such cryptography is safe, when that is not the current academic consensus.

There are other arguments for allowing non-hybrid post-quantum encryption — mostly boiling down to the implementation and performance costs of supporting a more complex scheme. But when Firefox, Chrome, and the Open Quantum Safe project all already support and use hybrid post-quantum encryption, that motivation didn't ring true for other IETF participants.

Some proponents of the change argued that allowing non-hybrid post-quantum encryption would be simpler, since a standalone scheme has fewer moving parts than a hybrid one. Opponents said that was focusing on the wrong kind of simplicity; adding another method of encryption to TLS makes implementations more complex, not less. They also pointed out that the cost of modern elliptic-curve cryptography is so much smaller than the cost of post-quantum cryptography that using both would not have a major impact on the performance of TLS.

From substance to process

The disagreement came to a head when Sean Turner, one of the chairs of the IETF working group discussing the topic, declared in March 2025 that consensus had been reached and the proposal ought to move to the next phase of standardization: adoption as a working-group document. Once a draft document is adopted, it enters a phase of editing by the members of the working group to ensure that it is clearly written and technically accurate, before being sent to the Internet Engineering Steering Group (IESG) to possibly become an internet standard.

Turner's decision to adopt the draft came as a surprise to some of the participants in the discussion, such as Daniel J. Bernstein, who strongly disagreed with weakening the requirements for TLS 1.3 to allow non-hybrid key-exchange mechanisms and had repeatedly said as much. The IETF operates on a consensus model where, in theory, objections raised on the mailing list need to be responded to and either refuted or used to improve the standard under discussion.

In practice, the other 23 participants in the discussion acknowledged the concerns of the six people who objected to the inclusion of non-hybrid post-quantum key-exchange mechanisms in the standard. The group that wanted to see the draft accepted just disagreed that it was an important weakening in the face of regulatory and maintenance concerns, and wanted to adopt the standard as written anyway.

From there, the discussion turned on the question of whether the working-group charter allowed for adopting a draft that reduced the security of TLS in this context. That question never reached a consensus either. After repeated appeals from Bernstein over the next several months, the IESG, which handles the IETF's internal policies and procedures, asked Paul Wouters and Deb Cooley, the IETF's area directors responsible for the TLS working group, whether Turner's declaration of consensus had been made correctly.

Wouters declared that Turner had made the right call, based on the state of the discussion at the time. He pointed out that while the draft permits TLS to use non-hybrid post-quantum key-exchange algorithms, it doesn't recommend them: the recommendation remains to use the hybrid versions where possible. He also noted that the many voices calling for adoption indicated that there was a market segment being served by the ability to use non-hybrid algorithms.

A few days after Wouters's response, on November 5, Turner called for last objections to adopting the draft as a working-group document. Employees of the NSA, the United Kingdom's Government Communications Headquarters (GCHQ), and Canada's Communications Security Establishment Canada (CSEC) all wrote in with their support, as did employees of several companies working on US military contracts. Quynh Dang, an employee of NIST, also supported publication as a working-group document, although she said that she was not representing NIST in the matter. Among others, Stephen Farrell disagreed, calling for the standard to at least add language addressing the fact that security experts in the working group thought that the hybrid approach was more secure: "Absent that, I think producing an RFC based on this draft provides a misleading signal to the community."

As it stands now, the working group has adopted the draft that allows for non-hybrid post-quantum key-exchange mechanisms to be used in TLS. According to the IETF process, the draft will now be edited by the working-group members for clarity and technical accuracy, before being presented to the IESG for approval as an internet standard. At that point, companies wishing to sell their devices and applications to the US government will certainly enable the use of these less-secure mechanisms — and be able to truthfully advertise their products as meeting NIST, NSA, and IETF standards for security.

[ Thanks to Thomas Dalichow for bringing this topic to our attention. ]

Comments (52 posted)

Mix and match Linux distributions with Distrobox

By Joe Brockmeier
December 10, 2025

Linux containers have made it reasonably easy to develop, distribute, and deploy server applications along with all the distribution dependencies that they need. For example, anyone can deploy and run a Debian-based PostgreSQL container on a Fedora Linux host. Distrobox is a project that is designed to bring that cross-distribution compatibility to the desktop and allow users to mix and match Linux distributions without fussing with dual-booting, virtual machines, or multiple computers. It is an ideal way to install additional software on image-based systems, such as Fedora's Atomic Desktops or Bazzite, and also provides a convenient way to move a development environment or favorite applications to a new system.

Distrobox creator Luca Di Maio was inspired by the Toolbx project (formerly Container Toolbox) for Fedora. Generally, the idea with Linux containers is to run processes in their own environment to isolate them as much as possible from the host without having to resort to virtual machines with their additional overhead. It is possible, though, to set up a container to give it privileged access to the system with little to no isolation. This is typically referred to as a privileged container. It is possible to set up privileged containers manually, but it requires the user to know a great deal about working with containers and some fairly involved setup. The original goal for Toolbx was to let users run a privileged container "toolbox" on image-based systems that could be used for system administration and troubleshooting without having to include administration utilities in the image itself.

Toolbx has a somewhat limited scope: it works best with specific container images and was primarily developed to provide a software toolbox for image-based Fedora editions like CoreOS and Silverblue. Toolbx is also limited to the Podman container manager, which is not ideal for those who prefer to use Docker. Di Maio wanted to have broader compatibility with other Linux distributions as host operating systems as well as the ability to use a wide range of container images. He published the first release, Distrobox 1.0.0, in December 2021, under the GPLv3.

The Distrobox project is, as its homepage notes, "a fancy wrapper" that works with Docker and Podman, with Di Maio's own minimal container manager, Lilipod, as a fallback if the others are not present. Distrobox is written in POSIX shell, so that it is as portable as possible. The project's primary goal is to allow containers of any Linux distribution to be used on any Linux distribution host running one of the supported container managers. It also has a goal of entering a container as quickly as possible: "every millisecond adds up if you use the container as your default environment for your terminal".

Di Maio notes in the project's README that isolation and sandboxing of applications are non-goals: "on the contrary it aims to tightly integrate the container with the host". This means that containers run by Distrobox have access to the user's home directory, external storage, USB devices, audio devices, and more. The idea of a sandboxed mode for Distrobox was discussed, but Di Maio ultimately decided against implementing one and has opted not to take patches attempting to add isolation features: "the focus of distrobox is not isolation, security or stuff, but integration and transparency with the host".

Getting started

Distrobox is packaged for most popular Linux distributions, and the project has a compatibility table of host distributions for those it has been tested on. The table has notes with additional instructions if necessary. There is a table of guest distributions as well that lists the container images that have been successfully tested with Distrobox. If a distribution is not mentioned in the host or guest table it does not mean that Distrobox will not work; it simply indicates that the combination has not been tested. Users will also need one of the supported container managers installed.

Here is an example of how and why one might use Distrobox: the Signal project only provides packages for Debian-based distributions, so there are no official packages for Arch, Fedora, openSUSE, or other non-Debian systems. There is a Flatpak available, but it is not maintained by the Signal project itself. It is possible to compile Signal from source, but it is far easier to create a Debian Distrobox and install the official Signal package there:

    $ distrobox create -i debian:trixie -n signal -H ~/distrobox/signal
    $ distrobox enter signal

The first command tells distrobox to create a new container using the debian:trixie image with its home directory set to ~/distrobox/signal, and to name it "signal". If the debian:trixie container image is not already present on the system, Distrobox will ask if you want to download it. Note that Distrobox uses the system's settings for short-name container lookup; it is also possible to pass the full registry path to distrobox if one prefers. The second command tells it to open a shell inside the container (or enter the container) so one can work within its environment.

All of that will probably look quite familiar to folks who work with Linux containers regularly, but Distrobox is doing quite a bit more behind the scenes. The "--dry-run" (or just "-d") option will print out the command that "distrobox create" is passing to the container manager for inspection, without taking any actions. It's worth eyeballing the output to see what goes into setting up a container that's well-integrated with the host system. In the case of the example command here, there are more than 50 options passed to podman, including 17 volume mounts. That is a lot of container magic that the user is not required to fuss with or know about, unless they want to.

One of the things it does is to set up passwordless sudo inside the container, so one can immediately install packages in the container and so forth. Note that, by default, Distrobox executes the user's login shell when entering a container, but this can be changed if one wants to use (say) the Fish shell on the host system but GNU Bash inside a Distrobox container.

The purpose of setting a home directory is not to provide isolation from the user's home directory, but to avoid clutter. If no home directory is specified, the guest container will use ~/; that may be undesirable if one wants to maintain a separate set of configuration files or project directories in a Distrobox. Applications and processes in the Distrobox will still be able to access files under the user's normal home directory. For example, if a Distrobox container is set up with its home as /home/user/distrobox/signal, then that will be set as $HOME for applications within the container. However, those applications can still access files in /home/user, /home/user/Documents, and so forth.

The distrobox create man page provides a number of useful examples to help users get started. The "distrobox create -C" command will provide a current list of images that are considered compatible with Distrobox. If a container image is not listed as compatible, that does not mean it won't work, just that it has not been tested and confirmed to work.

Naturally, Distroboxes can be destroyed as well as created: "distrobox rm name" will remove a container. That does not, however, remove the container image from the system. If the goal is to completely purge the container image as well, then it's necessary to use the container manager to do so, e.g. "podman rmi imagename".

Exporting applications and moving containers

Since Distrobox is often used to run desktop applications, it does not make sense to have to enter the Distrobox container each time one wants to run Signal or some other application. The project provides a distrobox-export command that sets up a .desktop file to launch the command directly from one's desktop. For example, this will create a launcher for the Signal desktop client:

    $ distrobox-export --app signal-desktop

Note that the command should be run from within the container created by Distrobox, not on the host. Assuming that the user is running a desktop environment like GNOME, KDE, or Xfce, an entry for Signal should show up in whatever menu or launcher is being used. Now it's not necessary to use "distrobox enter" at all to run Signal; it can be run like any other desktop application that is installed on the host. Even though Distrobox prioritizes speed in starting containers, there is usually a small but noticeable lag when launching an application from a container that hasn't started yet.
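Under the hood, distrobox-export simply drops a .desktop file into ~/.local/share/applications/ whose Exec line re-enters the container. The entry below is an illustrative reconstruction, not verbatim output; the exact field values vary with the application and the Distrobox version:

```ini
[Desktop Entry]
Type=Application
Name=Signal (on signal)
# The Exec line wraps the application in a container-entry command, so
# clicking the launcher starts the container if needed and then runs
# Signal inside it.
Exec=/usr/bin/distrobox-enter -n signal -- signal-desktop
Icon=signal-desktop
Categories=Network;InstantMessaging;
```

Deleting the container does not automatically remove such launchers; distrobox-export has a "--delete" option for cleaning them up.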

Alternatively, one might want to use a Distrobox container as a primary command-line environment. Some terminal emulators, such as Ptyxis, offer integration with Distrobox and other container tools; a user can set Ptyxis to open new terminal windows in the container automatically. This is particularly handy on image-based systems where one cannot install software directly on the host using APT or DNF. I set Ptyxis to use a Fedora container by default, do my work in the container, and install additional software therein. To distinguish between working in the host terminal and in my default container, I set the container's color palette to "Borland" from the Gogh set of color schemes for terminals. The old-school look is appealing, and I don't confuse it with the black-on-green theme of my regular terminals.

The "distrobox assemble" command can automate the creation of one or more Distroboxes. It takes an INI file describing the container, or containers, to create. For example, this is a simple distrobox.ini that would create a container using a Fedora Rawhide image with several packages that I use regularly. The pull=true option tells distrobox to go ahead and pull the container image if it is not already available on the system.

    [rawhide]
    additional_packages="emacs emacs-pgtk aerc tmux git lolcat bat"
    image=fedora:rawhide
    pull=true
    home=/home/jzb/distrobox/rawhide

Jorge Castro has written a good blog post on using "distrobox assemble" that is worth reading for anyone using Distrobox.

The project is not on a time-based schedule, so releases are somewhat sporadic; there is also no roadmap for future releases. Generally, Distrobox seems mostly feature-complete, so new versions tend to consist of bug fixes and catching up with changes in Linux distributions and container tooling. The most recent feature release was version 1.8.2.0 on October 28, which included Bash-completion enhancements, added support for new guest distributions, and included quite a few documentation changes as well as bug fixes. With that release, Di Maio also announced the addition of Alessio Biancalana as a maintainer, and Biancalana has headed up the recent minor releases. The most recent is 1.8.2.2.

The project does not, unfortunately, have a mailing list or even a web forum: the documentation points users to a Matrix room and Telegram group instead. Users interested in making contributions or reporting bugs should check CONTRIBUTING.md for guidance.

I've found Distrobox to be enormously useful for providing an easy and quick way to run multiple distributions on a single system. Distrobox is probably one of my most-used applications after Firefox and Emacs. Multiple versions of Alpine Linux, CentOS Stream, Debian, Fedora, openSUSE, Ubuntu, and others are just a few keystrokes away any time I need them.

Comments (27 posted)

The beginning of the 6.19 merge window

By Jonathan Corbet
December 4, 2025
As of this writing, 4,124 non-merge commits have been pulled into the mainline repository for the 6.19 kernel development cycle. That is a relatively small fraction of what can be expected this time around, but it contains quite a bit of significant work, with changes to many core kernel subsystems. Read on for a summary of the first part of the 6.19 merge window.

The most significant changes merged so far include:

Architecture-specific

  • Support has been added for AMD's "smart data cache injection" feature, which allows I/O devices to place data directly in the L3 cache without the need to go through RAM first. Documentation for this feature's control knobs is spread through Documentation/filesystems/resctrl.rst; interested readers can find it by searching for "io_alloc".
  • Nearly three years after LWN first looked at the patches, basic support for Intel's linear address-space separation (LASS) feature has been merged. LASS implements the separation between the kernel and user-space address ranges in the hardware itself, improving security and closing off the sort of side channels that can be used by speculative attacks. In 6.19, LASS support will be disabled on most systems while some remaining problems are worked out.
  • The s390 architecture has a new interface for the configuration and management of hotplug memory; see this commit for an overview.
  • Support for 31-bit binaries in compatibility mode on s390 systems has been removed; from the merge message: "To the best of our knowledge there aren't any 31 bit binaries out in the world anymore that would matter for newer kernels or newer distributions".
  • S390 systems have long lacked stack-protector support; that will change in 6.19 thanks to better support in the upcoming GCC 16 compiler release. See this commit for some background.
  • Support for the Arm Memory System Resource Partitioning and Monitoring (MPAM) capability has been added for 64-bit systems. From the KConfig commit:
    Memory System Resource Partitioning and Monitoring (MPAM) is an optional extension to the Arm architecture that allows each transaction issued to the memory system to be labeled with a Partition identifier (PARTID) and Performance Monitoring Group identifier (PMG).

    Memory system components, such as the caches, can be configured with policies to control how much of various physical resources (such as memory bandwidth or cache memory) the transactions labelled with each PARTID can consume. Depending on the capabilities of the hardware, the PARTID and PMG can also be used as filtering criteria to measure the memory system resource consumption of different parts of a workload.

    This feature is strenuously undocumented within the kernel, but some information on its use can be found on this page.

Core kernel

  • The new listns() system call allows user space to iterate through the namespaces on the system. The handling of reference counts for namespaces has also changed, making it impossible for user space to resurrect a namespace that is at the end of its life.
  • If a process dumps core as the result of a signal, another process holding a pidfd for the signaled process can now learn which signal brought about the end.
  • Support for deferred unwinding of user-space call stacks has been merged; this is part of the larger project of enabling support for unwinding using SFrame. See this article for details.
  • The restartable sequences implementation has been massively reworked, leading to better performance throughout.
  • BPF programs can now contain indirect jumps, which are accessed by way of a dedicated "instruction set" map type. Some information on the new maps can be found in this commit, while information on indirect jumps is in this commit. This feature is x86-only for now.

Filesystems and block I/O

  • The filesystem in userspace (FUSE) subsystem has improved support for buffered reads when large folios are in use. The iomap layer has gained the ability to track folios that are partially up to date so that reads on such folios only need to bring in the data that is missing.
  • The virtual filesystem layer has gained the ability to create recallable directory delegations, intended to support NFS directory delegations. This feature allows an NFS server to hand responsibility for the management of a directory to a client. That delegation can be recalled should a second client also try to make changes in the delegated directory.
  • There is a new "file dynptr" abstraction that allows BPF programs to read data from structured files. Documentation is scarce, but this page covers the dynptr concept in general, and this commit contains tests for the new feature.
  • The Btrfs filesystem now implements a "shutdown" state that attempts to complete outstanding operations but rejects any new operations. There is an ioctl() operation to enter this state, but it is marked as experimental for this release.
  • The ext4 implementation can now manage filesystems with a block size greater than the system page size.

Hardware support

  • Clock: Realtek system timers.
  • Miscellaneous: Intel integrated memory/IO hub memory controllers and NXP i.MX91 thermal monitoring units.
  • Networking: Eswin EIC7700 Ethernet controllers, Motorcomm YT9215 Ethernet switches, Mucse 1GbE PCI Express adapters, MaxLinear GSW1xx Ethernet switches, and Realtek 8852CU USB wireless network adapters.

Miscellaneous

  • The documentation build system has been reworked, replacing a bunch of makefile logic with a separate build-wrapper script written in Python.

Networking

  • A locking change deep within the TCP transmit code has yielded "a 300 % (4x) improvement on heavy TX workloads, sending twice the number of packets per second, for half the cpu cycles".
  • It is now possible to mark network sockets as exempt from the system's global memory limits; the idea is that limits will be imposed within a container instead. This commit describes the reasoning behind this feature, and this one introduces a sysctl knob to control the feature. There is also a mechanism to allow BPF programs to control the accounting flag.
  • The system now supports RFC 5837, providing more information via ICMP that will improve route tracing. See this commit for more information.
  • There is improved support for continuous busy polling within network drivers; see this commit for details.
  • The CAN XL protocol is now supported.
  • The getsockname() and getpeername() system calls are now supported by io_uring.

Security-related

  • The kernel's internal cryptographic library has gained support for the SHA-3 and BLAKE2b hash algorithms. This commit includes documentation for the SHA-3 implementation.
  • Security modules will now be notified when a memfd is created, allowing policies specific to those files to be implemented; SELinux has gained support for this new hook. See this commit for more information.
  • The Integrity Policy Enforcement security module has gained support for the AT_EXECVE_CHECK flag for execveat(). This flag will cause integrity checks to be performed on a script file before it is passed to an interpreter for execution.
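The kernel's crypto library is only reachable from kernel code, but the algorithms it has gained are standard ones; a quick user-space sketch with Python's hashlib (an assumption of convenience, not the in-kernel interface) shows what SHA-3 and BLAKE2b produce:

```python
import hashlib

# User-space illustration of the newly supported algorithms; this uses
# Python's hashlib, not the kernel's internal cryptographic library.
message = b"The quick brown fox jumps over the lazy dog"

# SHA-3 comes in the usual digest sizes (224, 256, 384, and 512 bits).
sha3 = hashlib.sha3_256(message).hexdigest()

# BLAKE2b produces digests of up to 64 bytes and supports keyed hashing,
# so it can serve as a MAC without a separate HMAC construction.
blake2 = hashlib.blake2b(message, digest_size=32).hexdigest()
keyed = hashlib.blake2b(message, key=b"secret", digest_size=32).hexdigest()

print("SHA3-256:   ", sha3)
print("BLAKE2b-256:", blake2)
```

The keyed BLAKE2b digest differs from the unkeyed one, which is what makes the keyed mode useful for authentication.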

Internal kernel changes

  • The inode i_state field has been hidden and must now be manipulated using accessor functions; see this commit for an overview of the new way of doing things.
  • There are a couple of new filesystem-layer primitive operations, FD_ADD() and FD_PREPARE(), that handle a lot of the boilerplate of creating and installing files. This commit provides an overview.
  • The new scoped_seqlock_read() macro simplifies common uses of seqlocks; see this commit for some more information.
  • The objtool utility has been reworked to facilitate the creation of live patches; see this commit for details.
  • The scoped user-space access primitives have been merged. They provide a way to access user-space data in hot code paths that is safe from speculative attacks.

    Before getting into the mainline, this code first had to work around a deficiency in both the GCC and Clang compilers: a jump instruction in assembly code that jumps out of the scope will bypass the cleanup code (which is the whole reason for using scopes in the first place). This commit shows the workaround: a two-step jump, with the assembly goto remaining within the scope.

  • The at_least marker, which can indicate a minimum allowable size for an array passed into a function, has been added. See this article for more information.
  • The directory tools/lib/python has been established as a central location for Python modules used by code shipped with the kernel. For now, its use is mostly limited to the documentation subsystem, but there are other Python utilities in the kernel that may eventually benefit from a move there as well.
  • The new mempool function mempool_alloc_bulk() can be used to safely allocate multiple objects in a single call.
  • The new %pt5p specifier for printk() can be used to print a timespec structure in a human-readable form.
  • The large "syn" Rust crate is now packaged with the kernel source; it is a library for parsing Rust code into a syntax tree. See this merge message for more information.
  • Support for struct sockaddr_unsized, which is intended to improve memory safety for the struct sockaddr flexible array, has been merged.

There are, as of this writing, still over 8,000 commits waiting in linux-next; most of those can be expected to spill into the mainline over the remainder of the merge window. Normally, the merge window would be expected to close on December 14, but Linus Torvalds suggested in the 6.18 release announcement that the closing might be delayed somewhat. Your editor, suffering through the third conference-overlapping merge window this year, suspects that the usual schedule will hold, though. Either way, we will provide the usual summary once the merge window closes.

Comments (2 posted)

An open seat on the TAB

By Jonathan Corbet
December 8, 2025
As has been recently announced, nominations are open for the 2025 Linux Foundation Technical Advisory Board (TAB) elections. I am one of the TAB members whose term is coming to an end, but I have decided that, after 18 years on the board, I will not be seeking re-election; instead, I will step aside and make room for a fresh voice. My time on the TAB has been rewarding, and I will be sad to leave; the TAB has an important role to play in the functioning of the kernel community.

Some history

Back in the early 2000s, an industry group was formed under the name of the Open Source Development Laboratory, or OSDL; its role was to help encourage the development of Linux in ways that suited its members. It is fair to say that OSDL got off to a rough start; it had a habit of releasing "specifications", starting with Carrier Grade Linux, that came across as marching orders to the kernel community. Said community was even less receptive to marching orders back then than it is now; the result was a lot of friction, with OSDL failing to make the sort of headway that it (and its sponsors) would have liked.

That friction can be seen, for example, in our coverage of the 2004 Kernel Summit, where various frustrations were raised. Kernel developers asked for community representation on the OSDL board and a more open process for the development of specifications, for example. In 2005, Greg Kroah-Hartman pulled together a group of kernel developers to try to push OSDL in a more community-friendly direction. He followed up in 2006 with a long list, produced by that group of developers, of changes OSDL could make to improve the situation. That list included facilitating access to proprietary hardware documentation, creating a travel fund for developers to get to events, creating a US-based technical conference, and:

We really need a technical writer dedicated to help keeping the in-kernel documentation up to date and relevant. The kernel developers are much better at writing code than documenting it, and if there was someone watching over this, and bugging the developers to keep it up to date when it lapses, it would help out immensely.

That, alas, has never really come to be, though some attempts were made. But OSDL did eventually accede to most of the group's other requests, including establishing the travel fund and assisting with the creation of the Linux Plumbers Conference, which was indeed US-based for many years.

There was another item on the wishlist, though: the creation of a technical advisory board made up of community members as a way for the community to be heard within OSDL. Tied to that was a request for the community to hold a seat on the OSDL board of directors. In March 2006, OSDL announced the creation of the TAB, with an initial membership of James Bottomley, Wim Coekaerts, Randy Dunlap, Kroah-Hartman, Christoph Lameter, Olivia Mackall, Ted Ts'o, Arjan van de Ven, and Chris Wright. The TAB, in turn, would hold the asked-for seat on the OSDL board. After years of requests, the kernel community had a say in how OSDL made its decisions.

At the beginning of 2007, though, OSDL was no more; it was folded into the Free Standards Group, run at the time by Jim Zemlin, and renamed the Linux Foundation. The combination of the pre-merger changes and the new leadership led to a refocused organization that has served the community well ever since.

Yes, one need not go too far to hear grumbles about the Linux Foundation, and many of them are deserved. But one should not lose track of the many things the Linux Foundation does to benefit our community, including:

  • The Linux Foundation employs Linus Torvalds, Kroah-Hartman, and Shuah Khan as fellows, allowing them to focus on the kernel full time and isolating them from any vendor's agenda. It also supports other developers to carry out specific tasks for the kernel.
  • It organizes successful conferences around the world, from trade shows to highly technical gatherings; the days of holding the Kernel Summit in a shopping-mall basement are a distant memory now.
  • The Linux Foundation funds and operates the kernel.org infrastructure and supports the development of tools like b4.
  • It still maintains the travel fund, which enables developers to attend events that would otherwise be inaccessible to them.
  • The Linux Foundation has done a great deal of work to educate companies on how to work with our process and make the kernel better. Its mentorship program has brought a number of new developers into our community.
  • It has provided legal expertise in many areas, including influencing the European Cyber Resilience Act to be more friendly toward free software.

And so on. It should also be mentioned that the Linux Foundation has long sponsored LWN's travel to events all over the world so that we can all keep up with what is happening in our community.

Election

In 2007, I sent in a nomination for the first actual election of members to the Linux Foundation TAB; the results saw me elected, along with Van de Ven, Kroah-Hartman, Lameter, and Olaf Kirch. I have served on the TAB continuously since then, for a total of 18 years now.

That time has been interesting in many ways. We have, over the years, responded to the kernel.org break-in, the disclosure of Meltdown and Spectre, various ups and downs of the Linux Plumbers Conference, the University of Minnesota scandal, the events leading to the community's code of conduct, the kernel's response to international sanctions, numerous disagreements between the community and various corporations, and disagreements within the community itself, among many other things. We even tried to bring in a paid documentation writer (funded by the Linux Foundation) for a while.

The role of the TAB now is somewhat different from what was originally envisioned. The Linux Foundation rarely seeks our advice, so our "technical advisory" role is limited. The organization does listen when we have something to say, but does not often start the conversation. Instead, the TAB has become a sort of governing board in a community that has no governing boards. It tries to chart a path through difficult situations, such as the adoption of a code of conduct or, more recently, the process of deciding what the community's policy toward machine-generated contributions should be. The TAB talks to people to de-escalate conflicts. And it still oversees the Linux Plumbers Conference.

The TAB has almost no power at all; it cannot compel anybody to do anything. But it does have influence. At any given time, the TAB membership represents many decades (if not centuries) of community experience; somebody generally knows the right person to talk to when there is a situation to address. It has, and will hopefully maintain, sufficient respect within the community that its members will be listened to. The TAB also plays an important information-exchange role, with members informing the group of developments that it should be aware of.

My role there has often been on the information-sharing side, since I spend a lot of time keeping an eye on obscure corners of the community. The TAB has, at times, been happy to let me do some of its writing. I have run elections, helped organize the Linux Plumbers Conference, and more. It has been a good time.

While the role of the TAB has evolved over the years, I believe that it is as important as it ever was, and I am proud to have been a part of it. After 18 years, though, I have begun to think that perhaps I have occupied that seat for long enough, and that it is time to let another member of the community take that place. Lest anybody worry: my retirement from the TAB is not a retirement from the community or from LWN; I expect to be around for a while yet.

I would encourage all members of our community to consider putting in their nomination to be a part of the TAB going forward. It is a way to help keep our community strong and to widen your view of how the whole system works. The TAB still has the important role of helping the Linux Foundation stay relevant and useful for the kernel community. And the members of the TAB are a great group of people to work with. I will be sad to leave this group, and wish my best to my successor. Many thanks to all of you for 18 great years.

Comments (6 posted)

Page editor: Joe Brockmeier
Next page: Brief items>>


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds