LWN.net Weekly Edition for September 15, 2022
Welcome to the LWN.net Weekly Edition for September 15, 2022
This edition contains the following feature content:
- A Python security fix breaks (some) bignums: a quickly-done security patch creates some perhaps surprising regressions among users who deal in large numbers.
- Compiling Rust with GCC: an update: a progress report on two GCC-based alternatives for Rust compilation.
- A pair of Rust kernel modules: efforts to create an NVMe driver and 9P filesystem server in Rust have both been surprisingly successful.
- The transparent huge page shrinker: how to recover memory from partially used huge pages.
- LXC and LXD: a different container story: an overview of one of the oldest approaches to Linux containers.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
A Python security fix breaks (some) bignums
Typically, an urgent security release of a project is not for a two-year-old CVE, but such is the case for a recent set of Python releases covering four versions of the language. The bug is a denial of service (DoS) that can be caused by converting enormous numbers to strings—or vice versa—but it was not deemed serious enough to fix when it was first reported. Evidently more recent reports, including a remote exploit of the bug, have raised its importance—causing a rushed-out fix. But the fix breaks some existing Python code, and the process of handling the incident has left something to be desired, leading the project to look at ways to improve its processes.
Python integers can have an arbitrary size; once they are larger than can be stored in a native integer, they are stored as arbitrary-length "bignum" values. So Python can generally handle much larger integer values than some other languages. Up until recently, Python would happily output a one followed by 10,000 zeroes for print(10**10000). But as a GitHub issue describes, that behavior has been changed to address CVE-2020-10735, which can be triggered by converting large values to and from strings in bases other than those that are a power of two—the default is base 10, of course.
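What makes these conversions exploitable is that CPython's algorithm for base-10 conversion takes time quadratic in the number of digits. A minimal sketch, run on a fixed interpreter (where the limit described below must first be disabled), shows the growth; the digit counts here are arbitrary:

    import sys
    import time

    sys.set_int_max_str_digits(0)      # disable the new limit for this test

    for n in (20_000, 40_000, 80_000):
        s = "7" * n
        start = time.perf_counter()
        int(s)                         # base-10 parsing is quadratic in CPython
        print(f"{n} digits: {time.perf_counter() - start:.3f}s")

Doubling the number of digits roughly quadruples the conversion time, which is why an attacker who controls the input size can consume arbitrary amounts of CPU.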
The fix is to restrict the number of digits in strings that are being converted to integers (or that can result from the conversion of integers) to 4300 digits for those bases. If the limit is exceeded, a ValueError is raised. There are mechanisms that can be used to change the limit, as described in the documentation for the Python standard types. The value for the maximum allowable digits can be set with an environment variable (PYTHONINTMAXSTRDIGITS), a command-line argument (-X int_max_str_digits), or from within code using sys.set_int_max_str_digits(). It can be set to zero, meaning there is no limit, or any number greater than or equal to a lower-limit threshold value, which is set to 640.
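In practice, the new behavior looks something like the following sketch; the error text in the comment is approximate:

    import sys

    big = 10 ** 5000                   # creating the value itself is fine
    try:
        str(big)                       # base-10 conversion trips the limit
    except ValueError as err:
        print(err)                     # e.g. "Exceeds the limit (4300 digits) ..."

    print(len(hex(big)))               # power-of-two bases are unaffected

    sys.set_int_max_str_digits(6000)   # raise the limit (a global setting)
    print(len(str(big)))               # now succeeds: 5001 digits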
Collateral damage
After the release was made on September 7, Oscar Benjamin posted a complaint about it to the Core Development section of the Python discussion forum. He noted that the new limit causes problems for the SageMath mathematics application (bug report) and the SymPy math library (Benjamin's bug report). The fix for the bug is a significant backward-compatibility break, because of the "fact that int(string) and str(integer) work with arbitrarily large integers has been a clearly intended and documented feature of Python for many years".
Furthermore, the ways to alter the limit are all global in nature; a library cannot really do much about it, since changing it would affect the rest of the program. Applications like SageMath can probably just set the limit (back) to zero, but not SymPy:
For Sage that is probably a manageable fix because Sage is mostly an application. SymPy on the other hand is mostly a library. A library should not generally alter global state like this so it isn't reasonable to have import sympy call sys.set_int_max_str_digits because the library needs to cooperate with other libraries and the application itself. Even worse the premise of this change in Python is that calling sys.set_int_max_str_digits(0) reopens the vulnerability described in the CVE so any library that does that should then presumably be considered a vulnerability in itself. The docs also say that max_str_digits is set on a per interpreter basis but is it threadsafe? What happens if I want to change the limit so that I can call str and then change it back again afterwards?
He wondered why the choice was made to switch the longstanding behavior of int() and str(), rather than creating new variants, such as safe_int(), that enforced this limit. Programs that wanted the new behavior could switch to using those functions, but the default would remain unchanged. Benjamin was hoping that the change could be reconsidered before it became "established as the status quo", but that is not going to happen.
Steve Dower apologized for how the bug fix was handled. The problem had been reported to various security teams, which then converged on the Python Security Response Team (PSRT), he said. That group agreed that fixing it in the Python runtime was the right approach, but making that actually happen took too long:
The delay between report and fix is entirely our fault. The security team is made up of volunteers, our availability isn't always reliable, and there's nobody "in charge" to coordinate work. We've been discussing how to improve our processes. However, we did agree that the potential for exploitation is high enough that we didn't want to disclose the issue without a fix available and ready for use.
As might be guessed, the problem stems from untrusted input. The PSRT tried a number of different approaches but settled on forcing the limit, he said:
The code doing int(gigabyte_long_untrusted_string) could be anywhere inside a json.load or HTTP header parser, and can run very deep. Parsing libraries are everywhere, and tend to use int indiscriminately (though they usually handle ValueError already). Expecting every library to add a new argument to every int() call would have led to thousands of vulnerabilities being filed, and made it impossible for users to ever trust that their systems could not be DoS'd. We agree it's a heavy hammer to do it in the core, but it's also the only hammer that has a chance of giving users the confidence to keep running Python at the boundary of their apps.
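That concern is easy to demonstrate: on a fixed interpreter, the limit surfaces even in code that never calls int() directly, as in this small example:

    import json

    try:
        json.loads("9" * 5000)    # a single 5,000-digit JSON number
    except ValueError as err:
        print(err)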
Gregory P. Smith, who opened the GitHub issue and authored the fix, noted that the int_max_str_digits setting is not specific to a particular thread, so changing it in one thread may affect others, as Benjamin suggested. Because the scope of the change was simply a security fix, no new features could be added, Smith said, but he expects that some kind of execution context to contain the setting (and perhaps others) will be added for Python 3.12, which is due in 2023. Meanwhile, he understands that some libraries may need to change the setting even though that has more wide-ranging effects:
If a project/framework/library decides to say, "we can't live with this limit", and include a Warning in their documentation telling users that the limit has been disabled, the consequences, why, how to opt-out of that or re-enable it, and what different limits may be more appropriate for what purposes if they choose to do so, that is entirely up to them. I consider it reasonable even though I fully agree with the philosophy that changing a global setting from a library is an anti-pattern.
Beyond that, Smith is sympathetic to the complaints, but the intent was "to pick something that limited the global amount of pain caused". They saw no way to satisfy all of the different users of Python, so "we chose the 'secure by default' approach as that flips the story such that only the less common code that actually wants to deal with huge integers in decimal needs to do something" to deal with the change.
In addition, it had gotten to the point where the PSRT was concerned that other Python distributors would decide to apply their own fixes, which could lead to fragmentation within the ecosystem. Smith is also a member of the Python steering council; he and the other members felt that this was the only reasonable path:
We don't take this kind of change lightly. We weren't happy with any of the available options.
But those also included not being happy with continuing to do nothing and sitting on it for longer, or continuing to do nothing and letting it become public directly or via any of the disappointed at our lack of action reporters.
But Benjamin noted that PEP 387 ("Backwards Compatibility Policy") seems to preclude removing a feature without notice, though it does provide a steering-council escape hatch, which was used here. While security is a reasonable concern, he was unhappy that "security apparently trumps compatibility to the point that even providing a reasonable alternative for a long standing very much core feature can be neglected". In particular, he complained that no alternative conversion function (e.g. large_int()) was added for code that wants to be able to convert arbitrarily long strings and integers. He wrote a version of that function to demonstrate that the tools being provided are inadequate to allow libraries like SymPy to write their own:
    def large_int(s: str):
        # This function is not thread-safe.
        # It could fail to parse a large integer.
        # It could also undermine security of other threads
        old_val = sys.get_int_max_str_digits()
        try:
            sys.set_int_max_str_digits(0)
            return int(s)
        finally:
            sys.set_int_max_str_digits(old_val)
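The comments in Benjamin's version already point at the problem. A library could serialize its own changes to the limit with a lock, as in the hypothetical variant sketched below, but a lock only protects callers that take it; nothing stops unrelated code from reading or resetting the limit concurrently:

    import sys
    import threading

    _limit_lock = threading.Lock()

    def locked_large_int(s: str) -> int:
        # Serializes limit changes among callers of this function only;
        # threads that read or set the limit directly remain racy.
        with _limit_lock:
            old_val = sys.get_int_max_str_digits()
            try:
                sys.set_int_max_str_digits(0)
                return int(s)
            finally:
                sys.set_int_max_str_digits(old_val)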
Since another thread could be changing the limit at the same time, all bets are off for a multi-threaded Python program when calling either function. Benjamin and others also brought up the performance of the current conversion code, better algorithms for conversions, and other plans for the future, all of which may well deserve more coverage down the road. But much of the focus of the discussion was about the process of making a change like this, with no warning, and its effects on other projects in the Python community.
For example, "Damian" pointed
out that there is a community using Python for studying number theory;
the language has a low barrier to entry "because of the unbounded nature
of Python integers and the very easy syntax to learn
". But this change
has erected a new barrier and the way to restore the previous behavior is
buried; "it took me ~15 paragraphs of reading from the release notes to
the documentation, to subtopics in the documentation
" to find what is
needed. Smith agreed
("yuck
") and filed a bug to improve
the documentation.
Why 4300?
The limit of 4300 digits seems rather arbitrary—why not 4000 or 5000, for example? Tim Peters said that it did not make sense to him; "I haven't been able to find the reasoning behind it". Tests on his system showed that numbers that large could be converted "well over a thousand times per second", which seemed like it would not really create a denial-of-service problem. But Smith said that the limit was chosen for just that reason; he filled in a bit more background on the attack, though the full details of it still have not been disclosed as of this writing:
Picture a CPU core handling N things [per] second, which can include a few megabytes of data which could be something full of a thousand integer strings (ie: JSON). It's still possible to get something to spin an entire CPU core for a second in an application like that even with the 4300 digit limit in place. What isn't possible anymore is getting it to spend infinite time by sending the same amount of data. With mitigation in place an attack now [needs] to scale the data volume linearly with desired cpu wastage just like other existing boring avenues of resource waste.
Steven D'Aprano lamented that secrecy, however, noting that there have been multiple disclosures of similar bugs along the way.
I don't buy the need to push out a fix to this without public discussion. This class of vulnerability was already public, and has been for years. Even if it turns out that the chosen solution is the best, or only, solution, the secrecy and "trust us, we know best" from the security team was IMO not justified.
Smith said that the vulnerability is not "in any way exciting, new, or novel". The decision was made to keep it private "because it hadn't publicly been associated with the Python ecosystem". That may turn out to be the wrong choice, in truth, but we will not know that until we see the exploit(s) the security team was working with.
Peters asked again about the value of the digit limit. Neil Schemenauer said that it "was chosen to not break a numpy unit test". Ofek Lev called that "extraordinarily arbitrary" and wondered if there was proof that was actually the case. Andre Müller was in favor of the change "because it makes Python safer", but he also wondered about 4300. Christian Heimes filled in the details:
4,300 happens to be small enough to have int(s) in under 100 usec on modern hardware and large enough to pass unit tests of popular projects that were in our internal test [systems]. My initial value was 2,000, which is easier to remember and more than large enough for a 4k RSA key in decimal notation. We later bumped it up to 4,300 because the test suites of some popular Python projects like NumPy were failing with 2,000.
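Heimes's RSA reference point is easy to check:

    # A 4096-bit value is about 1,234 decimal digits, well under the
    # 4300-digit limit, so this works even on a fixed interpreter.
    print(len(str(2 ** 4096)))    # -> 1234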
Chris Angelico thought that the limit was "extremely aggressively chosen"; it could be quite a bit higher without causing major problems, he said. Dower agreed that was possible, "but as we've seen from the examples above, the people who are unhappy about the limit won't be happy until it's completely gone". Given that, having a lower limit means those people will encounter and work around the problem that much sooner.
The discussion continues as of this writing and it looks like there are some real efforts toward better fixes down the road, but for now users of lengthy bignums need to make adjustments in their code. It is unfortunate that the community did not get to pitch in on the solution, as there are a few different suggestions in the thread that seem at least plausibly workable—with less impact. We will have to wait and see what the original vulnerability report(s) have to say; no one has said so specifically, but one hopes those will be released at some point soon. The two-year gap between report and fix is worrisome as well, but the security team is aware of that problem and, with luck, working on a fix for that as well.
Compiling Rust with GCC: an update
While the Rust language has appeal for kernel development, many developers are concerned by the fact that there is only one compiler available; there are many reasons why a second implementation would be desirable. At the 2022 Kangrejos gathering, three developers described projects to build Rust programs with GCC in two different ways. A fully featured, GCC-based Rust implementation is still going to take some time, but rapid progress is being made.
gccrs
Philip Herron and Arthur Cohen started off with a discussion of gccrs, which is a Rust front-end for the GCC compiler. This project, Herron said, was initially started in 2014, though it subsequently stalled. It was restarted in 2019 and moved slowly until funding for two developers arrived thanks to Open Source Security and Embecosm. Since then, development has been happening much more quickly.
Currently, the developers are targeting Rust 1.49, which is a bit behind the state of the art — it was released at the end of 2020. In some ways the developers are still playing catch-up with even that older version; there are a number of intrinsics missing, for example. In other ways they are trying to get ahead of the game; there has been some work done on const generics that, so far, is really only a parser, "but it's a start". An experimental release of gccrs as part of GCC can be expected in May or June 2023.
Cohen talked about building the Rust for Linux project (the integration of Rust with the Linux kernel) specifically. That project is currently targeting Rust 1.62, which is rather more recent than the 1.49 that gccrs is aiming at; there is thus a fair amount of ground yet to cover even once gccrs hits its target. There are not many differences in the language itself, he said, but there are more in the libraries. Even with the official compiler, Rust for Linux has to set the RUSTC_BOOTSTRAP variable to gain access to unstable features; gccrs is trying to implement the ones that are needed for the kernel. Generic associated types are also needed.
Eventually, the goal is for gccrs to be able to compile Rust for Linux.
One thing he pointed out is that gccrs is making no attempt to implement the same compiler flags that the existing rustc compiler uses. That would be a difficult task and those options are "not GCCish". A separate wrapper is being implemented for the Cargo build system to allow it to invoke gccrs rather than rustc.
An important component for the kernel — and just about everything else — is the libcore library. It includes fundamental types like Option and Result, without which little can be done. The liballoc library, which implements the Vec and Box types among others, is also needed. Cohen noted that this library has been customized for the kernel, but Rust for Linux developer Miguel Ojeda said that the changes are minimal.
Testing is currently done by compiling various projects with gccrs; these include the Blake3 crypto library and libcore 1.49. The rustc test suite is also being used. Plans are to add building Rust for Linux to the testing regime as well.
What else is missing at this point? Herron said that borrow checking is a big missing feature in current gccrs. Opaque types are not yet implemented. Plus, of course, there are a lot of bugs. Cohen added that the test suite needs work. A lot of the tests are intended to fail, so gccrs "passes" them, but for the wrong reason. He is working on adding the proper use of error codes so that only the right kinds of failures are seen as the correct behavior.
Future plans include a lot of cross-compiler testing. Eventually it would be good to start testing with Crater, which attempts to compile all of the crates found on crates.io, but that will take longer. With regard to borrow checking, Cohen added, they are not even trying to come up with their own implementation; instead they will be integrating Polonius. This, it is hoped, is a harbinger of more code sharing to be done with the Rust community in the future.
The code repository can be found on GitHub.
rust_codegen_gcc
Antoni Boucher then gave an update on his project, which is called rust_codegen_gcc. The rustc compiler is built on LLVM, he began, but it includes an API that allows the substitution of a different backend for code generation. That API can be used to hook libgccjit into rustc, enabling code generation with GCC. The biggest motivation for this work is to support architectures that LLVM cannot compile for. That is needed for Rust for Linux support across all of the architectures the kernel can be built for, and should be useful for other embedded targets as well.
Over the last year, rust_codegen_gcc has been merged into the rustc repository. It has gained support for global variables and 128-bit integers (though not yet in big-endian format). Support for SIMD operations has improved; it can compile the tests for x86-64 and pass most of them. It is also now possible to bootstrap rustc with rust_codegen_gcc, which is a big milestone — but some programs don't compile yet. Alignment support has improved, packed structs are supported, inline assembly support is getting better, and some function and variable attributes are supported. Boucher has added many intrinsics that GCC lacks and fixed a lot of crashes.
Almost all of the rustc tests pass now, and the situation is getting better; most of the failures are with SIMD or with unimplemented features like link-time optimization (LTO). With regard to SIMD, the necessary additions to libgccjit are done, as is the "vector shuffle" instruction. About 99% of the LLVM SIMD intrinsics and half of the Rust SIMD intrinsics have been implemented. A lot of fixes and improvements (128-bit integers, for example) have gone into GCC from this work.
There is, of course, a lot still to be done. Outstanding tasks include unwinding for panic() support, proper debuginfo, LTO, and big-endian 128-bit integers. Support for the new architectures has to be added to the libraries, and SIMD support needs to be expanded beyond x86. There are function and variable attributes, including inline, that are yet to be implemented. Supporting distribution via rustup is also on the list.
There are a number of things that could be improved in the rustc API. For example, GCC distinguishes between lvalues (assignment targets) and rvalues, while LLVM just has "values"; that mismatch creates difficulties in places. LLVM has a concept of "landing pads" for exceptions, while GCC uses a try/catch mechanism. The handling of basic blocks is implicit in the API, but needs to be explicit in places. More fundamentally, GCC's API is based on the abstract syntax tree, while LLVM's is based on instructions, leading to confusion at times. LLVM uses the same aggregate operations for structs and arrays, while they are different in GCC.
On the libgccjit side, there is room for improvement in type introspection, including attributes. The time required for code generation and the resulting binary size are both worse than with LLVM. There are optimizations that are missed by libgccjit as well.
Boucher demonstrated the compilation of some basic Rust kernel modules with the GCC-backed compiler. Wedson Almeida Filho asked whether any testing had been done with an architecture that is not supported by LLVM, but that has not yet happened. There will probably be "details to deal with" when that test is done, Boucher said.
There are some potential complications for Rust for Linux. The ABI used by the generated code differs on some platforms. There is also the question of whether backports should be done to support older versions of the compiler. That is complicated by the fact that patches to GCC are needed to make rust_codegen_gcc work now.
Herron and Cohen joined Boucher at the end of the session, where they were asked about their timelines. Herron answered that it will require most of a year for gccrs to get to the point where rust_codegen_gcc is now. When asked about compilation times, Cohen said that benchmarks would not be meaningful now, when the focus is still on just getting something to work. He expects that gccrs is "probably very slow" at this point. Boucher said that rust_codegen_gcc is slower than rustc, but that there are optimizations yet to be done to improve the situation.
[Thanks to LWN subscribers for supporting my travel to this event.]
A pair of Rust kernel modules
The idea of being able to write kernel code in the Rust language has a certain appeal, but it is hard to judge how well that would actually work in the absence of examples to look at. Those examples, especially for modules beyond the "hello world" level of complexity, have been somewhat scarce, but that is beginning to change. At the 2022 Kangrejos gathering in Oviedo, Spain, two developers presented the modules they have developed and some lessons that have been learned from this exercise.
An NVMe driver
Andreas Hindborg was up first to talk about an NVM Express driver written in Rust. The primary reason for this project, he said, was to take advantage of the memory-safety guarantees that Rust offers and to gain some real-world experience with the language. His conclusions from this project include that Rust comes with a lot of nice tooling and that its type system is helpful for writing correct code. It is, he said, easier to write a kernel driver in Rust than in C.
Why write an NVMe driver when the kernel already has one that works well? There are no problems with the existing driver, he said, but NVMe is a good target for experiments with driver abstractions. NVMe itself is relatively simple, but it has high performance requirements. It is widely deployed, and the existing driver provides a mature reference implementation to compare against.
Hindborg talked for a while about the internals of the NVMe interface; in short, communications between the interface and the computer go through a set of queues. Often the driver will configure an I/O queue for each core in the system if the interface can handle it. Creating data structures in Rust to model these queues is a relatively straightforward task. In the end, the Rust driver, when tested with the FIO tool, performs almost as well as the existing C driver. The difference, Hindborg said, is that the C driver has already been highly tuned, while the Rust driver has not; it should be able to get to the same level of performance eventually.
He concluded by saying that the Rust NVMe driver is still "a playground" and not production-ready at this point. To move things forward, he would like to create more abstractions that would allow the removal of the remaining unsafe blocks in the driver. It doesn't yet support device removal or the sysfs knobs for the nvme-cli tool. He would also like to look into using the Rust async model, which would "simplify a lot of things" in the driver, but possibly at the cost of performance.
At the end of Hindborg's talk, Paul McKenney asked if there was any available information on the relative bug rates between the C and Rust drivers. Hindborg answered that there have certainly been some bugs; building Rust abstractions around existing C code can be hard to do correctly. That work needs a lot of care and review, but once it works, drivers built on it tend to show few problems.
A 9P filesystem server
Last year, Linus Walleij suggested that, rather than writing drivers in Rust, developers should target areas with a higher attack surface — network protocols, for example. Wedson Almeida Filho has taken that advice and written an in-kernel server for the 9P filesystem protocol in the hopes that this project would demonstrate the productivity gains and security benefits that Rust can provide. Initially, he had started trying to replace the ksmbd server, but that turned out to not be an ideal project. The SMB protocol is too complex and the server needs some significant user-space components to work. He wanted something simpler; 9P fit the bill.
The 9P file protocol, he said, comes from the Plan 9 operating system. The kernel has a 9P client, but no 9P server. There is a 9P server in QEMU that can be used to export host filesystems into a guest. The protocol is simple, Almeida said, defining a set of only ten operations. His 9P server implementation works now, in a read-only mode, and required just over 1,000 lines of code.
Almeida was also looking for a way to experiment with async Rust in the kernel. In the async model, the compiler takes thread-like code and turns it into a state machine that can be implemented with "executors" and "reactors", which are implemented in the kernel crate. He created an executor that can run async code in a kernel workqueue; anywhere such code would block, it will release the workqueue thread for another task. There is also a socket reactor that is called for socket-state changes; it will call Waker::wake() from the Rust kernel crate to get the appropriate executor going again.
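The executor/reactor split that Almeida describes is the same one that user-space Python programmers know from asyncio, which can serve as a rough analogy (nothing here is the kernel crate's actual API, and the port number is made up): the event loop plays the executor, and its I/O machinery plays the socket reactor.

    import asyncio

    async def serve_client(reader, writer):
        # Thread-like code: at each await, the executor reclaims this
        # worker until the reactor reports that the socket is ready.
        data = await reader.read(4096)
        writer.write(data)
        await writer.drain()
        writer.close()

    async def main():
        server = await asyncio.start_server(serve_client, "127.0.0.1", 5640)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

In the kernel version, the workqueue thread takes the place of the event loop's thread, and Waker::wake() takes the place of the loop's readiness callback.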
There is, of course, plenty of work yet to be done. He would like to implement reactors for other I/O submission paths, including KIOCBs (asynchronous I/O), URBs (USB devices), and BIOs (block devices). Memory allocation can still use some work; it would be good if a GFP_KERNEL allocation could give up its thread while waiting for the memory-management subsystem to do complicated things.
At the end, I asked whether the objective of demonstrating the security benefits of Rust had been achieved; has there been, for example, any fuzz testing of the server? Almeida answered that the Rust-based parsing interface makes a lot of mistakes impossible. No fuzz testing has been done — the server has only been working for a couple of weeks — but he will do it. He concluded that he will be interested to see how his server fares in such testing relative to the QEMU implementation.
[Thanks to LWN subscribers for supporting my travel to this event.]
The transparent huge page shrinker
Huge pages are a mechanism implemented by the CPU that allows the management of memory in larger chunks. Use of huge pages can increase performance significantly, which is why the kernel has a "transparent huge page" mechanism to try to create them when possible. But a huge page will only be helpful if most of the memory contained within it is actually in use; otherwise it is just an expensive waste of memory. This patch set from Alexander Zhu implements a mechanism to detect underutilized huge pages and recover that wasted memory for other uses.

The base page size on most systems running Linux is 4,096 bytes, a number which has remained unchanged for many years even as the amount of memory installed in those systems has grown. By grouping (typically) 512 physically contiguous base pages into a huge page, it is possible to reduce the overhead of managing those pages. More importantly, though, huge pages take far fewer of the processor's scarce translation lookaside buffer (TLB) slots, which cache the results of virtual-to-physical address translations. TLB misses can be quite expensive, so expanding the amount of memory that can be covered by the TLB (as huge pages do) can improve performance significantly.
The downside of huge pages (as with larger page sizes in general) is internal fragmentation. If only part of a huge page is actually being used, the rest is wasted memory that cannot be used for any other purpose. Since such a page contains little useful memory, the hoped-for TLB-related performance improvements will not be realized. In the worst cases, it would clearly make sense to break a poorly utilized huge page back into base pages and only keep those that are clearly in use. The kernel's memory-management subsystem can break up huge pages to, among other things, facilitate reclaim, but it is not equipped to focus its attention specifically on underutilized huge pages.
Zhu's patch set aims to fill that gap in a few steps, the first being figuring out which of the huge pages in the system are being fully utilized — and which are not. To that end, a scanning function is run every second from a kernel workqueue; each run will look at up to 256 huge pages to determine how fully each is utilized. Only anonymous huge pages are scanned; this work doesn't address file-backed huge pages. The results can be read out of /sys/kernel/debug/thp_utilization in the form of a table like this:
    Utilized[0-50]:    1331  680884
    Utilized[51-101]:     9    3983
    Utilized[102-152]:    3    1187
    Utilized[153-203]:    0       0
    Utilized[204-255]:    2     539
    Utilized[256-306]:    5    1135
    Utilized[307-357]:    1     192
    Utilized[358-408]:    0       0
    Utilized[409-459]:    1      57
    Utilized[460-512]:  400      13
    Last Scan Time: 223.98
    Last Scan Duration: 70.65
This output (taken from the cover letter) is a histogram showing the number of huge pages containing a given number of utilized base pages. The first line, for example, shows the number of huge pages for which no more than 50 base pages are in active use. There are 1,331 of those pages, containing 680,884 unused base pages. There is a clear shape to the results: nearly all pages fall into one of the two extremes. As a general rule, a huge page is either fully utilized or almost entirely unused.
An important question to answer when interpreting this data is: how does the code determine which base pages within a huge page are actually used? The CPU and memory-management unit do not provide much help in this task; if the memory is mapped as a huge page, there is no per-base-page "accessed" bit to look at. Instead, Zhu's patch scans through the memory itself to see what is stored there. Any base pages that contain only zeroes are deemed to be unused, while those containing non-zero data are counted as being used. It is clearly not a perfect heuristic; a program could initialize pages with non-zero data then never touch them again. But it may be difficult to design a better one that doesn't involve actively breaking apart huge pages into base pages.
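Expressed in Python, purely as an illustration of the heuristic rather than of the kernel's C implementation, the per-huge-page scan amounts to something like:

    PAGE_SIZE = 4096
    HPAGE_PAGES = 512    # base pages in one 2MB huge page

    def utilized_pages(huge_page: bytes) -> int:
        # Count the base pages that hold any non-zero data; all-zero
        # base pages are deemed unused and eligible for dropping.
        assert len(huge_page) == PAGE_SIZE * HPAGE_PAGES
        zero_page = bytes(PAGE_SIZE)
        return sum(
            huge_page[i:i + PAGE_SIZE] != zero_page
            for i in range(0, len(huge_page), PAGE_SIZE)
        )

The result, between 0 and 512, determines which histogram bucket the huge page falls into.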
The results of this scanning identify a clear subset of the huge pages in a system that should perhaps be broken apart. In current kernels, though, splitting a zero-filled huge page will result in the creation of a lot of zero-filled base pages — and no immediate recovery of the unused memory. Zhu's patch set changes the splitting of huge pages so that it simply drops zero-filled base pages rather than remapping them into the process's address space. Since these are anonymous pages, the kernel will quickly substitute a new, zero-filled page should the process eventually reference one of the dropped pages.
The final step is to actively break up underutilized huge pages when the kernel is looking for memory to reclaim. To that end, the scanner will add the least-utilized pages (those in the 0-50 bucket shown above) to a new linked list so that they can be found quickly. A new shrinker is registered with the memory-management subsystem that can be called when memory is tight. When invoked, that shrinker will pull some entries from the list of underutilized huge pages and split them, resulting in the return of all zero-filled base pages found there to the system.
Most of the comments on this patch set have been focused on details rather than the overall premise. David Hildenbrand expressed a concern that unmapping zero-filled pages in an area managed by a userfaultfd() handler could create confusion if that handler subsequently receives page faults it was not expecting. Zhu answered that, if this is a concern, zero-filled base pages in userfaultfd()-managed areas could be mapped to the kernel's shared zero page instead.
The kernel has supported transparent huge pages since the feature was merged into the 2.6.38 kernel in 2011, but it is still not enabled for all processes. One of the reasons for holding back is the internal-fragmentation problem, which can outweigh the benefits that transparent huge pages provide. Zhu's explicit goal is to make that problem go away, allowing the enabling of transparent huge pages by default. If this work is successful, it could represent an important step for a longstanding kernel feature that, arguably, has never quite lived up to its potential.
LXC and LXD: a different container story
OCI containers are the most popular type of Linux container, but they are not the only type, nor were they the first. LXC (short for "LinuX Containers") predates Docker by several years, though it was also not the first. LXC dates back to its first release in 2008; the earliest version of Docker, which was tagged in 2013, was actually a wrapper around LXC. The LXC project is still going strong and shows no signs of winding down; LXC 5.0 was released in July and comes with a promise of support until 2027.
LXC
LXC was initially developed by IBM, and was part of a collaboration between several parties looking to add namespaces to the kernel. Eventually, Canonical took over stewardship of the project, and now hosts its infrastructure and employs many of its maintainers. The project includes a C library called liblxc and a collection of command-line tools built on top of it that can be used to create, interact with, and destroy containers. LXC does not provide or require a daemon to manage containers; the tools it includes act directly on container processes.
LXC was the first container implementation to be built entirely on capabilities found in the mainline kernel; predecessors required out-of-tree patches to work. Like Docker, LXC containers are created using a combination of control groups and namespaces. Because LXC was developed in parallel with the effort to add namespaces to the kernel, it could be considered a sort of reference implementation of using namespaces for containers on Linux.
Unlike Docker, LXC does not presume to espouse an opinion about what kinds of processes should run in a container. By default, it will try to launch an init system inside of the container, which can then launch other processes — something that is notoriously hard to do in a Docker container. With the correct configuration, though, it is even possible to run LXC containers nested within another LXC container, or to run the Docker daemon inside of an LXC container.
LXC containers are defined using a configuration file, which offers a great deal of control over how the container is constructed. The lxc-create utility is used to create containers. LXC does not bundle container configurations and images together; instead, the container configuration specifies a directory or block device to use for the container's root filesystem. LXC can use an existing root filesystem, or lxc-create can construct one on the fly using a template.
An LXC template is a shell script that constructs a root filesystem using a few key variables that lxc-create replaces before the template is run. A handful of templates are included; among them is an OCI template that uses SUSE's umoci utility to download and unpack a container image from an OCI container registry, which gives LXC the ability to run all of the same containers that Docker and other OCI runtimes can.
A separate collection of templates that can build root filesystems for a variety of popular distributions is available, but this approach has fallen out of favor because the tools that these templates use often require root privileges. These days, pre-built images are preferred because they can more easily be used by unprivileged users.
The LXC project has developed a tool called distrobuilder to create these pre-built images, which are made available on an image server hosted by Canonical. The lxc-download template can be used to create a container based on an image from an image server.
In theory, anybody can host their own image server, but in practice, few seem to do so, at least in public. There does not appear to be a large library of pre-packaged applications in this format like a user of Docker or Helm might be accustomed to. Canonical's image server only contains base images for an assortment of distributions; any additional software must be bundled into a custom image, or installed using the package manager inside the container.
Among the various options for running containers on Linux, LXC appears to be the most flexible. It comes with reasonable defaults, but it makes no effort to hide the complexity of creating a container from the user; every detail of the containers that it creates can be customized and adjusted to taste. Docker has found much popularity in papering over these details, but at the cost of flexibility compared to LXC.
LXD
LXD is a more specialized sister (or perhaps daughter) project of LXC; its development is also sponsored by Canonical. LXD was initially released in 2015; version 5.5 came out in August. Like LXC, LXD also has long-term support branches; the most recent long-term support release is LXD 5.0.x, which will be supported until 2027. As might be inferred from the name, LXD includes a daemon, which is built on top of liblxc.
LXD does away with LXC's template system in favor of being purely image-based. Because of this, Docker container images cannot be used with LXD — there is no LXD equivalent to LXC's OCI template. LXD uses the same image servers as the lxc-download template but requires a different image format; distrobuilder contains support for building images of both types (as well as plain .tar.gz images), though, and Canonical's image server carries both LXC and LXD versions of all of the images it hosts.
Like the Docker daemon, LXD is controlled by an HTTP-based API. LXD also comes with a command-line client for this API called lxc (not to be confused with the tools that come with LXC, which are named lxc-*). Also like Docker, LXD can listen on a UNIX socket; in this mode, authentication is largely nonexistent, and access to the API socket is controlled using filesystem permissions.
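As a rough sketch of how little ceremony the socket API requires (the socket path shown is the classic non-snap location and may differ on other systems), a client just needs to speak HTTP over a UNIX socket:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Minimal HTTP-over-UNIX-socket connection for talking to LXD.
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/var/lib/lxd/unix.socket")
    conn.request("GET", "/1.0")    # the server-information endpoint
    reply = json.loads(conn.getresponse().read())
    print(reply["metadata"]["api_version"])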
As part of its Landscape suite of server-management tools, Canonical offers a role-based access control (RBAC) service that LXD can integrate with for more fine-grained access control. Landscape is only free for personal use or evaluation purposes, though; enterprises that want the additional security controls provided by this feature must subscribe to Canonical's Ubuntu Advantage service.
LXD can also be used to run virtual machines. Working with virtual machines in LXD is more or less identical to working with containers, and the same images can be used for both; all that needs to be done to create a VM is to pass the --vm flag to the lxc create command (once again, not to be confused with the lxc-create command from LXC). LXD uses KVM and the QEMU emulator to run its virtual machines.
Several hosts running LXD can be combined into a cluster. LXD cluster nodes coordinate with each other using a protocol based on the Raft consensus algorithm, much like some OCI container orchestrators do. Containers and virtual machines can be launched on a specific cluster node, or jobs can be distributed to arbitrary groups of nodes. Like Swarm and Kubernetes, LXD bridges cluster networks between nodes so that containers or VMs running on different nodes can communicate with each other.
LXD is an interesting project; the set of features it offers would seem to make it a viable alternative to Swarm or Kubernetes, but for the lack of compatibility with OCI containers. This seems like a curious oversight; LXC's OCI template demonstrates that it should be possible, and LXD appears to have everything else it would need to compete in that arena, but its developers are not interested. As it stands, LXD has deliberately limited its audience to the set of people interested in running system containers or virtual machines. The tools that it offers to its chosen audience are powerful; people who are weary of struggling with configuring other virtual-machine managers would be well-advised to have a look at LXD.
Conclusion
The Linux Containers project as a whole seems healthy, with committed maintainers backed by a corporate sponsor, regular releases, and long-term support. LXC offers a mature and stable set of tools, while LXD offers a more "modern" feeling user interface to the same technology, and throws in virtual machines and clustering for good measure. LXC can be made to run OCI containers, but LXD cannot; people who are deeply immersed in the world of OCI might be better-served looking for something more firmly rooted in that ecosystem. For people looking for a different kind of container, though, LXC and LXD are both solid options.
Brief items
Kernel development
Kernel release status
The current development kernel is 6.0-rc5, released on September 11. Linus said: "Nothing looks particularly scary, so jump right in".
Stable updates: 5.19.8, 5.15.66, and 5.10.142 were released on September 8; 5.15.67 added a quick fix shortly thereafter.
The 5.19.9, 5.15.68, 5.10.143, 5.4.212, 4.19.257, 4.14.293, and 4.9.328 stable updates are all in the review process; they are due on September 15.
Quote of the week
The big issue is that so few Linux kernel developers understand what the Linux security policy is. So much emphasis has gone into "least privilege" and "fine granularity" that it is rare to find someone who is willing to think about the big picture.
— Casey Schaufler
Development
Unicode 15 released
Version 15 of the Unicode standard has been released.
This version adds 4,489 characters, bringing the total to 149,186 characters. These additions include two new scripts, for a total of 161 scripts, along with 20 new emoji characters, and 4,193 CJK (Chinese, Japanese, and Korean) ideographs.
Scaling Git’s garbage collection (GitHub blog)
The GitHub blog has a detailed look at garbage collection in Git and the work that has been done to make it faster.
To solve this problem, we turned to a long-discussed idea on the Git mailing list: cruft packs. The idea is simple: store an auxiliary list of mtime data alongside a pack containing just unreachable objects. To garbage collect a repository, Git places the unreachable objects in a pack. That pack is designated as a “cruft pack” because Git also writes the mtime data corresponding to each object in a separate file alongside that pack. This makes it possible to update the mtime of a single unreachable object without changing the mtimes of any other unreachable object.
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Calls for Presentations
CFP Deadlines: September 15, 2022 to November 14, 2022
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location
---|---|---|---
September 28 | November 26-27 | UbuCon Asia 2022 | Seoul, South Korea
September 30 | November 7-9 | Ubuntu Summit 2022 | Prague, Czech Republic
September 30 | October 20-21 | Netfilter workshop | Seville, Spain
October 15 | December 2-3 | OLF Conference 2022 | Columbus, OH, USA
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: September 15, 2022 to November 14, 2022
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location
---|---|---
September 13-16 | Open Source Summit Europe | Dublin, Ireland
September 14-15 | Git Merge 2022 | Chicago, USA
September 15-18 | EuroBSDCon 2022 | Vienna, Austria
September 15-16 | Linux Security Summit Europe | Dublin, Ireland
September 16-18 | GNU Tools Cauldron | Prague, CZ
September 16-18 | Ten Years of Guix | Paris, France
September 19-21 | Open Source Firmware Conference | Gothenburg, Sweden
September 21-22 | Open Mainframe Summit | Philadelphia, US
September 21-22 | DevOpsDays Berlin 2022 | Berlin, Germany
September 26-October 2 | openSUSE Asia Summit 2022 | Online
September 28-October 1 | LibreOffice Conference 2022 | Milan, Italy
October 1-7 | Akademy 2022 | Barcelona, Spain
October 4-6 | X.Org Developers Conference | Minneapolis, MN, USA
October 11-13 | Tracing Summit 2022 | London, UK
October 13-14 | PyConZA 2022 | Durban, South Africa
October 15-16 | OpenFest 2022 | Sofia, Bulgaria
October 16-21 | DjangoCon US 2022 | San Diego, USA
October 19-21 | ROSCon 2022 | Kyoto, Japan
October 20-21 | Netfilter workshop | Seville, Spain
October 24-28 | Netdev 0x16 | Lisbon, Portugal
October 24-28 | KubeCon / CloudNativeCon North America | Detroit, United States
October 25-28 | PostgreSQL Conference Europe | Berlin, Germany
October 29 | PyCon Hong Kong 2022 | Hong Kong, Hong Kong
October 29 | Minsk Linux User Group meeting | Minsk, Belarus
November 1-3 | Reproducible Builds Summit 2022 | Venice, Italy
November 3-14 | Ceph Virtual 2022 | Online
November 4-5 | SeaGL 2022 | Seattle, US
November 7-9 | Ubuntu Summit 2022 | Prague, Czech Republic
November 8-9 | Open Source Experience Paris | Paris, France
November 11-12 | South Tyrol Free Software Conference | Bolzano, Italy
November 11 | Software Supply Chain Offensive Research and Ecosystem Defenses 2022 | Los Angeles, USA
If your event does not appear here, please tell us about it.
Security updates
Alert summary September 8, 2022 to September 14, 2022
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
CentOS | CESA-2022:6381 | C7 | open-vm-tools | 2022-09-13 |
Debian | DLA-3105-1 | LTS | connman | 2022-09-13 |
Debian | DSA-5229-1 | stable | freecad | 2022-09-13 |
Debian | DSA-5228-1 | stable | gdk-pixbuf | 2022-09-11 |
Debian | DSA-5227-1 | stable | libgoogle-gson-java | 2022-09-07 |
Debian | DLA-3101-1 | LTS | libxslt | 2022-09-09 |
Debian | DLA-3102-1 | LTS | linux-5.10 | 2022-09-11 |
Debian | DLA-3104-1 | LTS | paramiko | 2022-09-12 |
Debian | DLA-3106-1 | LTS | python-oslo.utils | 2022-09-13 |
Debian | DLA-3107-1 | LTS | sqlite3 | 2022-09-13 |
Debian | DLA-3103-1 | LTS | zlib | 2022-09-12 |
Fedora | FEDORA-2022-6813a0eb99 | F36 | autotrace | 2022-09-08 |
Fedora | FEDORA-2022-8e1df11a7a | F35 | insight | 2022-09-08 |
Fedora | FEDORA-2022-cf658a432f | F35 | libapreq2 | 2022-09-13 |
Fedora | FEDORA-2022-61f5b492b7 | F36 | libapreq2 | 2022-09-13 |
Fedora | FEDORA-2022-f83aec6d57 | F36 | mediawiki | 2022-09-09 |
Fedora | FEDORA-2022-cd23eac6f4 | F36 | open-vm-tools | 2022-09-08 |
Fedora | FEDORA-2022-ae75c0ca4f | F35 | qt5-qtwebengine | 2022-09-14 |
Fedora | FEDORA-2022-3f5099bcc9 | F35 | vim | 2022-09-14 |
Fedora | FEDORA-2022-c28b637883 | F36 | vim | 2022-09-14 |
Fedora | FEDORA-2022-ddfeee50c9 | F35 | webkit2gtk3 | 2022-09-10 |
Mageia | MGASA-2022-0322 | 8 | gstreamer1.0-plugins-good | 2022-09-10 |
Mageia | MGASA-2022-0323 | 8 | jupyter-notebook | 2022-09-10 |
Mageia | MGASA-2022-0324 | 8 | kernel | 2022-09-10 |
Mageia | MGASA-2022-0321 | 8 | rpm | 2022-09-10 |
Oracle | ELSA-2022-6381 | OL7 | open-vm-tools | 2022-09-07 |
Oracle | ELSA-2022-6358 | OL9 | open-vm-tools | 2022-09-07 |
Red Hat | RHSA-2022:6439-01 | EL8 | booth | 2022-09-13 |
Red Hat | RHSA-2022:6463-01 | EL8 | gnupg2 | 2022-09-13 |
Red Hat | RHSA-2022:6432-01 | EL7.6 | kernel | 2022-09-13 |
Red Hat | RHSA-2022:6460-01 | EL8 | kernel | 2022-09-13 |
Red Hat | RHSA-2022:6437-01 | EL8 | kernel-rt | 2022-09-13 |
Red Hat | RHSA-2022:6443-01 | EL8 | mariadb:10.3 | 2022-09-13 |
Red Hat | RHSA-2022:6448-01 | EL8 | nodejs:14 | 2022-09-13 |
Red Hat | RHSA-2022:6449-01 | EL8 | nodejs:16 | 2022-09-13 |
Red Hat | RHSA-2022:6381-01 | EL7 | open-vm-tools | 2022-09-07 |
Red Hat | RHSA-2022:6384-01 | EL8 | openvswitch2.13 | 2022-09-07 |
Red Hat | RHSA-2022:6385-01 | EL8 | openvswitch2.15 | 2022-09-07 |
Red Hat | RHSA-2022:6382-01 | EL8 | openvswitch2.16 | 2022-09-07 |
Red Hat | RHSA-2022:6383-01 | EL8 | openvswitch2.17 | 2022-09-07 |
Red Hat | RHSA-2022:6386-01 | EL9 | openvswitch2.17 | 2022-09-07 |
Red Hat | RHSA-2022:6392-01 | EL7 EL8 | ovirt-host | 2022-09-08 |
Red Hat | RHSA-2022:6457-01 | EL8 | python3 | 2022-09-13 |
Red Hat | RHSA-2022:6389-01 | EL7 | rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon | 2022-09-08 |
Red Hat | RHSA-2022:6447-01 | EL8 | ruby:2.7 | 2022-09-13 |
Red Hat | RHSA-2022:6450-01 | EL8 | ruby:3.0 | 2022-09-13 |
Scientific Linux | SLSA-2022:6381-1 | SL7 | open-vm-tools | 2022-09-08 |
Slackware | SSA:2022-250-01 | | python3 | 2022-09-07 |
Slackware | SSA:2022-252-01 | | vim | 2022-09-09 |
SUSE | SUSE-SU-2022:3138-1 | SLE12 | ImageMagick | 2022-09-07 |
SUSE | SUSE-SU-2022:3247-1 | MP4.3 SLE15 oS15.4 | bluez | 2022-09-12 |
SUSE | openSUSE-SU-2022:10120-1 | osB15 | chromium | 2022-09-12 |
SUSE | openSUSE-SU-2022:10119-1 | osB15 | chromium | 2022-09-12 |
SUSE | SUSE-SU-2022:3249-1 | MP4.1 MP4.2 MP4.3 SLE15 SES6 SES7 oS15.3 oS15.4 | clamav | 2022-09-12 |
SUSE | SUSE-SU-2022:3139-1 | OS9 SLE12 | clamav | 2022-09-07 |
SUSE | SUSE-SU-2022:3273-1 | OS9 SLE12 | firefox | 2022-09-14 |
SUSE | SUSE-SU-2022:3272-1 | SLE15 SES6 | firefox | 2022-09-14 |
SUSE | SUSE-SU-2022:3252-1 | MP4.2 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 oS15.3 oS15.4 | freetype2 | 2022-09-12 |
SUSE | SUSE-SU-2022:3246-1 | MP4.2 MP4.3 SLE15 oS15.3 oS15.4 | frr | 2022-09-12 |
SUSE | SUSE-SU-2022:3230-1 | MP4.1 MP4.2 SLE15 SLE-m5.2 SES7 oS15.3 | gdk-pixbuf | 2022-09-09 |
SUSE | SUSE-SU-2022:3153-1 | MP4.3 SLE15 oS15.4 | gdk-pixbuf | 2022-09-07 |
SUSE | SUSE-SU-2022:3144-1 | MP4.1 SLE15 SES6 SES7 | gpg2 | 2022-09-07 |
SUSE | SUSE-SU-2022:3142-1 | MP4.2 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 oS15.3 oS15.4 osM5.2 | icu | 2022-09-07 |
SUSE | SUSE-SU-2022:3141-1 | MP4.2 MP4.3 SLE15 oS15.3 oS15.4 | icu | 2022-09-07 |
SUSE | SUSE-SU-2022:3140-1 | SLE12 | icu | 2022-09-07 |
SUSE | SUSE-SU-2022:3152-1 | OS9 SLE12 | java-1_8_0-ibm | 2022-09-07 |
SUSE | SUSE-SU-2022:3232-1 | MP4.3 SLE15 oS15.4 | keepalived | 2022-09-09 |
SUSE | SUSE-SU-2022:3234-1 | OS8 | keepalived | 2022-09-09 |
SUSE | SUSE-SU-2022:3235-1 | OS9 | keepalived | 2022-09-09 |
SUSE | SUSE-SU-2022:3264-1 | MP4.2 SLE15 SLE-m5.1 SLE-m5.2 oS15.3 oS15.4 osM5.2 | kernel | 2022-09-14 |
SUSE | SUSE-SU-2022:3263-1 | SLE12 | kernel | 2022-09-14 |
SUSE | SUSE-SU-2022:3265-1 | SLE12 | kernel | 2022-09-14 |
SUSE | SUSE-SU-2022:3190-1 | SLE12 | libEMF | 2022-09-08 |
SUSE | SUSE-SU-2022:3191-1 | SLE15 oS15.3 | libEMF | 2022-09-08 |
SUSE | SUSE-SU-2022:3207-1 | SLE12 | libnl-1_1 | 2022-09-08 |
SUSE | SUSE-SU-2022:3208-1 | SLE12 | libnl3 | 2022-09-08 |
SUSE | SUSE-SU-2022:3162-1 | MP4.2 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 oS15.3 oS15.4 osM5.2 | libyajl | 2022-09-07 |
SUSE | SUSE-SU-2022:3245-1 | MP4.2 MP4.3 SLE15 oS15.3 oS15.4 | libyang | 2022-09-12 |
SUSE | SUSE-SU-2022:3267-1 | MP4.2 SLE15 oS15.3 | libzapojit | 2022-09-14 |
SUSE | SUSE-SU-2022:3266-1 | SLE12 | libzapojit | 2022-09-14 |
SUSE | SUSE-SU-2022:3225-1 | MP4.1 SLE15 SES7 | mariadb | 2022-09-09 |
SUSE | SUSE-SU-2022:3159-1 | MP4.3 SLE15 oS15.4 | mariadb | 2022-09-07 |
SUSE | SUSE-SU-2022:3251-1 | MP4.2 SLE15 oS15.3 | nodejs16 | 2022-09-12 |
SUSE | SUSE-SU-2022:3250-1 | MP4.3 SLE15 oS15.4 | nodejs16 | 2022-09-12 |
SUSE | SUSE-SU-2022:3196-1 | SLE12 | nodejs16 | 2022-09-08 |
SUSE | openSUSE-SU-2022:10117-1 | oS15.3 | opera | 2022-09-12 |
SUSE | openSUSE-SU-2022:10118-1 | oS15.4 | opera | 2022-09-12 |
SUSE | SUSE-SU-2022:3271-1 | MP4.2 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 oS15.3 oS15.4 osM5.2 | perl | 2022-09-14 |
SUSE | SUSE-SU-2022:3198-1 | MP4.3 SLE15 | php8-pear | 2022-09-08 |
SUSE | SUSE-SU-2022:3193-1 | OS9 SLE12 | postgresql12 | 2022-09-08 |
SUSE | SUSE-SU-2022:3269-1 | OS9 SLE12 | postgresql14 | 2022-09-14 |
SUSE | SUSE-SU-2022:3231-1 | SLE12 | python-PyYAML | 2022-09-09 |
SUSE | SUSE-SU-2022:1064-2 | MP4.1 SLE15 SES7 oS15.4 | python2-numpy | 2022-09-12 |
SUSE | SUSE-SU-2022:3248-1 | MP4.1 MP4.2 MP4.3 SLE15 SES7 oS15.3 oS15.4 | qpdf | 2022-09-12 |
SUSE | SUSE-SU-2022:3259-1 | MP4.0 MP4.1 MP4.2 MP4.3 SLE15 oS15.3 oS15.4 | rubygem-kramdown | 2022-09-12 |
SUSE | SUSE-SU-2022:3212-1 | SLE12 | rubygem-rake | 2022-09-08 |
SUSE | SUSE-SU-2022:3244-1 | MP4.3 SLE15 oS15.4 | samba | 2022-09-12 |
SUSE | SUSE-SU-2022:3270-1 | SLE12 | samba | 2022-09-14 |
SUSE | SUSE-SU-2022:3154-1 | MP4.2 SLE15 oS15.3 | udisks2 | 2022-09-07 |
SUSE | SUSE-SU-2022:3160-1 | OS9 SLE12 | udisks2 | 2022-09-07 |
SUSE | SUSE-SU-2022:3229-1 | MP4.1 MP4.2 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 SES6 SES7 oS15.3 oS15.4 | vim | 2022-09-09 |
SUSE | SUSE-SU-2022:3137-1 | MP4.3 SLE15 oS15.4 | webkit2gtk3 | 2022-09-07 |
SUSE | SUSE-SU-2022:3136-1 | OS9 SLE12 | webkit2gtk3 | 2022-09-07 |
SUSE | SUSE-SU-2022:3199-1 | MP4.3 SLE15 oS15.4 | yast2-samba-provision | 2022-09-08 |
Ubuntu | USN-4976-2 | 16.04 | dnsmasq | 2022-09-07 |
Ubuntu | USN-5609-1 | 22.04 | dotnet6 | 2022-09-13 |
Ubuntu | USN-5608-1 | 18.04 20.04 22.04 | dpdk | 2022-09-13 |
Ubuntu | USN-5607-1 | 20.04 22.04 | gdk-pixbuf | 2022-09-13 |
Ubuntu | USN-5605-1 | 20.04 | linux-azure-fde | 2022-09-09 |
Ubuntu | USN-5602-1 | 22.04 | linux-raspi | 2022-09-08 |
Ubuntu | USN-5603-1 | 18.04 | linux-raspi-5.4 | 2022-09-08 |
Ubuntu | USN-5606-1 | 16.04 18.04 20.04 22.04 | poppler | 2022-09-12 |
Ubuntu | USN-5610-1 | 20.04 22.04 | rust-regex | 2022-09-14 |
Ubuntu | USN-5583-2 | 18.04 | systemd | 2022-09-14 |
Ubuntu | USN-5604-1 | 14.04 16.04 | tiff | 2022-09-08 |
Ubuntu | USN-5523-2 | 18.04 20.04 | tiff | 2022-09-12 |
Kernel patches of interest
Kernel releases
Architecture-specific
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet