LWN.net Weekly Edition for February 23, 2023
Welcome to the LWN.net Weekly Edition for February 23, 2023
This edition contains the following feature content:
- Python packaging targets: the Python packaging discussion continues: who are the intended users of a packaging system, and what are they looking for?
- Debating composefs: a discussion on whether the kernel needs composefs or, instead, improvements to existing filesystems.
- Rethinking splice(): the splice() system call offers a lot of promise, but that promise tends not to be realized in the real world.
- Some development statistics for 6.2: where the code in the 6.2 kernel release came from.
- Passwordless authentication with FIDO2—beyond just the web: a FOSDEM talk on using FIDO2 in various settings.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Python packaging targets
As we have seen in earlier articles, the packaging landscape for Python is fragmented and complex, though users of the language have been clamoring for some kind of unification for a decade or more. The developers behind pip and other packaging tools would like to find a way to satisfy this wish from Python-language users and developers, so they have been discussing possible solutions, with seemingly increasing urgency, of late. In order to do that, though, it is important to understand which specific problems, and which types of Python users, to target.
Integrator or user?
Steve Dower framed the overarching problems of Python packaging—and the discussion thereof—in terms of the types of users that are being targeted. There are distinct groups of users, but the Python packaging community has not defined its audience "so we keep trying to build solutions that Just Work™ for groups of people who aren't even trying to do the same thing, let alone using the same workflow or environment/platform". There are (at least) two roles, but they get conflated:
One concrete example that some of us have been throwing around for years: the difference between an "system integrator" and an "end user". That is, the person who chooses the set of package versions and assembles an environment, and the person who runs the Python command. They don't have to be the same person, and they're pretty clearly totally distinct jobs, but we always seem to act as if end users must act like system integrators (i.e. know how to install compilers, define constraints, resolve conflicts, execute shell commands) whether they want to or not. And then we rile against system integrators who are trying to offer their users a usable experience (e.g. Red Hat, Debian, Anaconda) for not forcing their users to also be system integrators.[...] To end with a single, discussion-worthy question, and bearing in mind that we don't just set the technology but also the culture of Python packaging: should we be trying to make each Python user be their own system integrator, supporting the existing integrators, or become the sole integrator ourselves?
Paul Moore replied that the packaging community "should be supporting Python users". That means helping many of them not to have to be integrators; for others it means providing a path "to do a simplified integration job". In the dichotomy that Dower outlined, integrators (and distributors) build the binary artifacts needed for the packages that are required, while users consume those artifacts, typically via some form of package manager.
But it is also important that all of those users are able to get Python and its packages in a form that meets their needs, Moore said. He thinks that the community should be supporting the existing integrators (e.g. Linux distributions, Anaconda) and encouraging new ones for niches that are not well served, like "Windows users for whom conda isn't the right solution". But he does not think it makes sense for the Python community to become the sole integrator, unless the steering council wants to bring packaging inside the core-development tent.
If the packaging community wants to better support integrators/distributors, though, it needs cooperation from those integrators, Moore said. There also needs to be better documentation on the Python home page about how to get Python and how to choose an integrator if one is needed. Though the push from users seems to be for something that would include virtual-environment management and more (a la Rust's cargo), that may be a step too far for Python at this point; a unified package installer might be possible, however:
A common install command - either implemented in core Python with "hooks" for integrators to link to their own implementations, or a defined command like pyinstall which all integrators are required to implement with the same interface, so that the user experience is identical.
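To make the shape of that idea concrete, here is a hypothetical sketch of such a front end; no pyinstall tool exists, and the backend table, the PATH-probing heuristic, and the command names are all invented for illustration:

```python
#!/usr/bin/env python3
# Hypothetical sketch only: a unified "pyinstall" front end with a single
# user-visible interface that hands off to whichever integrator provided
# the running Python. Everything here is invented for illustration.
import shutil
import sys

# Invented mapping from integrator to the install command it exposes.
BACKENDS = {
    "conda": ["conda", "install", "--yes"],
    "apt":   ["apt-get", "install", "-y"],
    "pip":   [sys.executable, "-m", "pip", "install"],
}

def pick_backend():
    """Choose an integrator. A real design would consult a marker file
    shipped by the distributor rather than probing PATH like this."""
    for name in ("conda", "apt"):
        if shutil.which(BACKENDS[name][0]):
            return BACKENDS[name]
    return BACKENDS["pip"]

def pyinstall(packages):
    """The unified command: identical interface everywhere, integrator-
    specific behavior behind it. Returns the command that would be run;
    a real tool would execute it via subprocess."""
    return pick_backend() + list(packages)

print(pyinstall(["requests"]))
```

The point of the sketch is the separation: the user types one command everywhere, and only the hidden dispatch differs per integrator.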
Python developers who want to do their own integration are important, Ralf Gommers said, but they are also over-represented in the discussion; most language users "want things to work well without too much trouble". He agreed with Moore that better supporting the existing integrators was the right goal.
Moore pointed out that unifying the install command would allow the Python Package Index (PyPI) to switch from recommending "pip install" to something more generic; beyond that, though, "if we don't unify interfaces, how do we address this aspect of what users see as 'too many options'"? But Dower said that pip is the right choice for the unified command "because it is how you get packages from PyPI"; the missing piece is a way for pip to know that it should not be installing a package because it should come from the distributor/integrator:
I can imagine some kind of data file in the base Python install listing packages (or referencing a URL listing packages) that pip should warn before installing, and probably messages to display telling the user what they should prefer. Distributors could bundle this file in with their base Python package so that anyone using it gets redirected to the right source.
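As a sketch of what such a data file might look like, the following is purely hypothetical; the file name, JSON format, and messages are invented, and nothing like this shipped at the time of the discussion:

```python
# Hypothetical sketch of the "data file in the base Python install" idea:
# the distributor ships a list of packages the installer should warn
# about, with a message pointing at the right source. All names invented.
import json
from pathlib import Path

def load_redirects(base: Path) -> dict:
    """Read a distributor-shipped mapping of package name to the message
    the user should see instead of a plain install."""
    marker = base / "distributor-packages.json"   # invented file name
    if not marker.exists():
        return {}
    return json.loads(marker.read_text())

def check_before_install(package: str, redirects: dict):
    """Return a warning if the package should come from the distributor,
    or None if installation from PyPI can proceed normally."""
    if package in redirects:
        return f"warning: {package} is provided by your distributor: {redirects[package]}"
    return None
```

An installer consulting this before acting would get exactly the redirection Dower describes, at the cost of the coordination problem Moore raises next.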
That would require more coordination between the developers of pip and those for other environments, such as conda and the Linux distributions, Moore said; the longtime message has been that pip is the wrong way to install packages system-wide on Linux, for example. He is not against the idea, but worries that it is not going to be user-friendly:
So users would go to PyPI, find instructions to do pip install X, try that and get told "nope, you should use apt-get install python-X instead"? That doesn't seem like it's going to reduce user confusion, I'm afraid... (It's OK for experienced users, but those are the sort of people for whom the current "be your own integrator" model is acceptable).
No matter what path is chosen, it is important to ensure that the community knows about it, Russell Keith-Magee said. In fact, the communication about the strategy "is just as important (if not more so) than the strategy itself", especially since "it's going to take months or years" to get to the desired end point. While the Python Packaging Authority (PyPA) is not exactly a standards body, it can help guide the community in the right direction:
However, even an informal statement declaring a vision and desired future state for Python packaging would be invaluable in terms of setting community expectations, generating convergence towards that desired future state, and guiding the contributions of those who might be inclined to help get there.
Building versus installing
Installing a package from PyPI can range from a simple, straightforward task, for pure Python packages, to extremely complex, requiring multiple compilers, build systems, and more. In the easiest cases, pip (or something like it) can simply copy a source distribution (sdist), which is a .tar.gz file, and install its files in the proper location so that import can find them. In more complicated cases, where there are dependencies on other PyPI packages, the pyproject.toml file can be consulted in order to pick up those dependencies (and, naturally, their dependencies) before installation. The sdist and pyproject.toml formats are described in PEP 517 ("A build-system independent format for source trees") and PEP 518 ("Specifying Minimum Build System Requirements for Python Projects").
If one or more of the dependencies requires a compiler in order to build a Python extension written in C, C++, Rust, or some other language, it is often up to the user to figure that out and install the right pieces, unless there is a wheel file available on PyPI for the installer's environment. The binary wheel file will contain a library that has already been built, so the build tools are not required. But the combinatorial explosion of environments makes it difficult for PyPI to have wheels available for all of them; this is especially true for packages like NumPy and SciPy, where speed is critical, so the libraries need to be optimized for the specific CPU instruction set and GPUs available.
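That environment specificity is visible in the wheel file name itself, which encodes interpreter, ABI, and platform tags per the wheel binary-distribution format (PEP 427). A small sketch to pull the tags apart; the file names are real examples of the format, and the parser ignores the optional build tag for simplicity:

```python
# The wheel file-name format from the binary-distribution spec:
#   name-version[-build]-pythontag-abitag-platformtag.whl
# This tiny parser ignores the optional build tag for simplicity.

def parse_wheel_name(filename: str) -> dict:
    parts = filename.removesuffix(".whl").split("-")
    python_tag, abi_tag, platform_tag = parts[-3:]  # always the last three
    return {"name": parts[0], "version": parts[1],
            "python": python_tag, "abi": abi_tag, "platform": platform_tag}

# A pure-Python wheel runs anywhere...
print(parse_wheel_name("requests-2.28.2-py3-none-any.whl"))
# ...while a compiled one is pinned to an interpreter, ABI, and platform.
print(parse_wheel_name("numpy-1.24.2-cp311-cp311-manylinux_2_17_x86_64.whl"))
```

The combinatorial explosion in the text is exactly the cross-product of those last three tags.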
One way to arrange for that is to split the job into two parts, as Hatch maintainer Ofek Lev described:
To be super specific here, for building a wheel with compiled bits you need 2 distinct things: a build backend and an extension module builder. The build backend is the thing that users configure to ship the desired files in a zip archive which we call a wheel. The extension module builder is the thing in charge of compiling like cmake that will then communicate with the build backend to say what extra files should be included.
Lev thinks that his extensionlib toolkit provides the right model, though Gommers disagreed. The problem is that Lev is oversimplifying, Gommers said:
You seem to be under the impression that it's a matter of finding some compiler, pointing it at a few source files, and producing a .so/.dll. In reality, dealing with compiled code is (at least) an order of magnitude more complex than anything that any pure Python build backend does.
Henry Schreiner, who maintains several different packaging tools including the scikit-build build-system generator, noted that the packaging landscape has improved a great deal since the days of the "setuptools/distutils monopoly". In particular, setuptools and the now-deprecated distutils are painful for some kinds of projects to use (it "requires up to thousands of lines (14K in numpy [distutils], IIRC) to work"), which is not really maintainable. Instead of having a Python-based builder for extension modules, it makes sense to use external build tools, he said:
Certainly, in the compiled space, many/most users will want a build system like CMake or Meson - building a compiled extension from scratch is really hard, and not something I think we want to compete on. Reusing the years of work and thousands of pre-existing library integrations is valuable.
Gommers put together a lengthy blog post that described his vision for the path forward. That plan would severely restrict the kinds of packages that would come directly from PyPI ("only pure Python (or -any) packages from PyPI, and everything else from a system package manager"). There was some pushback on that, but Dower suggested softening the restriction somewhat, while looking for some consensus:
I think this thread is the place to agree that when a distributor has prebuilt binaries for their build of Python, those should be preferred over unmatched binaries. If we agree that the default workflows should somehow support that, we can do details in new threads.
So an Anaconda or Linux environment that has a binary version of some package available would use that binary in preference to any PyPI wheel that might—or might not—work in that environment. There are lots of wheels at PyPI that work fine outside of the specific environment they were built for; those that do not can fail "either in obvious ways (fails to import) or subtle ways (crashes at runtime, silent data corruption, etc.)", Dower said. The problem with the "get your binaries from a distributor" model, Moore said, is that it generally means users need to get their Python from the same source:
And that means "switch to a different stack", which for many users is simply not acceptable (especially if they don't know this is going to happen in advance, which they typically don't). If we were talking about distributors/integrators shipping binaries which worked with whatever Python installation the user had, there would be a lot less heat in this discussion. If I could install the Windows Store Python, and then find I needed to get (say) scipy from somewhere other than PyPI, and there was somewhere that shipped binaries that worked with the Windows Store Python, I wouldn't have the slightest problem (well, the UI of "pip fails, find the right command" needs to be worked out but that's a detail). But I've seen no sign of anyone offering that.
There's a reason no one is offering it, Dower said; it simply is not possible to do so:
The reason that binaries don't work with "whatever installation of Python the user has" is because you need to know about the installation in order to produce the binaries.[...] Most of the time on many modern desktop and server machines, the ABI is usually close enough that you can squint and get away with it ("wheels which work for "most" users"). Once you get into the territory where that doesn't work, you do need to know all the details of the ABI in order to build a compatible binary for that interpreter. The easiest way to do this is to get the interpreter from the same builder you get the binary from, because they'll have ensured it.
In general, Dower's proposed agreement was well-received, but Moore thought that it was only handling the easy part: "The difficult question is when a distributor doesn't have prebuilt binaries, what should happen?" The right answer, according to Dower, is to request that the distributor provide the package and, failing that, to build it from source "using the distributor's environment as the target"; that way the resulting binary module is compatible with the distribution.
The discussion roared on, but along the way Moore stepped back to look at the overall goal of the discussion. If someone were to wave a magic wand that resulted in a consensus on the single tool to use: "What would we then actually do to make that consensus effective?" He does not have a good answer to that question, but it feels like the community "expects the PyPA to have some sort of influence (or authority) over this, and our biggest issue is basically that we don't..."
Clearly communicating the recommendation would be what many users are looking for, Oscar Benjamin said. "If PyPA said 'we now recommend tool X' and there was clear documentation for using it and migrating to it then I would be happy to go along with that. I think that's where most users are at."
Moore was unsure how the PyPA could actually accomplish that: "I genuinely don't know how the average user distinguishes between a formal recommendation from the PyPA and a bunch of random documentation they found on the internet. Is packaging.python.org genuinely that influential?" But Dower thought there was a straightforward answer to that: "I suspect for most users, 'preinstalled with the python.org installer and on PATH by default' is what it would take." Moore agreed; "Now all we have to do is agree what tool to distribute ;)"
There are, of course, several possibilities for which tool that might be, but obviously pip is in a rather privileged position, since it is already shipped with Python. Whether pip can be adapted to be the grand, unified tool was discussed further as the strategy thread began to wind down—after nearly 300, often lengthy, posts. The rest of the discussion on which tool—and how to make that decision—will come in our next installment.
Debating composefs
When LWN looked at the composefs filesystem in December, we reported that there had been "little response" to the patches. That is no longer the case. Whether composefs (or something like it) should be merged has become the subject of an extended debate; at its core, the discussion is over just how Linux should support certain types of container workloads.
Composefs is an interesting sort of filesystem, in that a mounted instance is an assembly of two independent parts. One of those, an "object store", is a directory tree filled with files of interest, perhaps with names that reflect the hash of their contents; the object store normally lives on a read-only filesystem of its own. The other is a "manifest" that maps human-readable names to the names of files in the object store. Composefs uses the manifest to create the filesystem that is visible to users while keeping the object store hidden from view. The resulting filesystem is read-only.
This mechanism is intended to create the system image for containers. When designing a container, one might start with a base operating-system image, then add a number of layers containing the packages needed for that specific container's job. With composefs, the object store contains all of the files that might be of interest to any container the system might run, and the composition of the image used for any specific container is done in the manifest file. The result is a flexible mechanism that can mount a system image more quickly than the alternatives while allowing the object store to be verified with fs-verity and shared across all containers in the system.
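The object-store/manifest split can be modeled in a few lines of userspace Python; this is an illustration of the data model only, not of the kernel implementation:

```python
# A toy model of the composefs split: a content-addressed object store
# plus a manifest mapping user-visible paths onto it. Data-model
# illustration only, not kernel code.
import hashlib

class Image:
    def __init__(self):
        self.objects = {}    # content hash -> data (the "object store")
        self.manifest = {}   # user-visible path -> content hash

    def add_object(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.objects[digest] = data
        return digest

    def link(self, path: str, digest: str) -> None:
        self.manifest[path] = digest

    def read(self, path: str) -> bytes:
        # Name resolution goes through the manifest; the store stays hidden.
        return self.objects[self.manifest[path]]

img = Image()
libc = img.add_object(b"libc contents")
# Two containers can name the same object; the data exists only once,
# which is what enables sharing and whole-store fs-verity checking.
img.link("/container-a/lib/libc.so", libc)
img.link("/container-b/lib/libc.so", libc)
```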
The v3 discussion
Version 3 of the composefs patches was posted in late January; it included a number of improvements requested by reviewers of the previous versions. Amir Goldstein was not entirely happy with the work that had been done, though; he suggested that, rather than proposing composefs, the developers (Alexander Larsson and Giuseppe Scrivano) should put their efforts into improving the existing kernel subsystems (specifically overlayfs and the EROFS filesystem) instead.
A long discussion of the value (or lack thereof) of composefs followed. There are currently two alternatives to composefs that are used for container use cases:
- At container boot time, the system image is assembled by creating a large set of symbolic links to the places where the target files actually live. This approach suffers from a lengthy startup time; at least one system call is required to create each of thousands of symbolic links, and that takes time.
- The container is given a base image on a read-only filesystem; overlayfs is then used to overlay one or more layers on top to create the container-specific image.
The consensus seems to be that the symbolic-link approach, due to its startup cost, is not a viable alternative to composefs. Goldstein and others do think that the overlayfs approach could be viable, though, perhaps with a few changes to that filesystem. The composefs developers are not so sure.
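For comparison, here is a sketch of the symbolic-link approach described above; the manifest format is invented for illustration, but the cost is visible in the loop: at least one symlink() system call per file, repeated for the thousands of files in a typical image:

```python
# Sketch of symlink-farm image assembly (invented manifest format).
# The startup cost the discussion dismisses lives in the loop below:
# one symlink() system call per file in the image.
import os

def assemble_with_symlinks(manifest: dict, root: str) -> int:
    """manifest maps image-relative paths to object-store paths; returns
    the number of symlink() calls that were needed."""
    calls = 0
    for rel, target in manifest.items():
        dest = os.path.join(root, rel)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        os.symlink(target, dest)   # one syscall here, thousands per image
        calls += 1
    return calls
```

Composefs pays none of this at start time: the equivalent of the whole loop is encoded in the manifest the kernel mounts directly.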
Considering overlayfs
One current shortcoming with overlayfs, Scrivano said, is that, unlike composefs, it is unable to provide fs-verity protection for the filesystem as a whole. Any changes that affect the overlay layer would bypass that protection. Larsson described an overlayfs configuration that could work (with some improvements to overlayfs), but was unenthusiastic about that option:
However, the main issue I have with the overlayfs approach is that it is sort of clumsy and over-complex. Basically, the composefs approach is laser focused on read-only images, whereas the overlayfs approach just chains together technologies that happen to work, but also do a lot of other stuff.
Among other things, he said, the extra complexity in the overlayfs solution leads to worse performance. The benchmark he used to show this was to create a filesystem using both approaches, then measure the time required to execute an "ls -lR" of the whole thing. In the cold-cache case, the overlayfs solution took about four times as long to run; the performance in the warm-cache case was more comparable, but composefs was still faster.
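As a rough idea of what such a benchmark exercises: an "ls -lR" boils down to one directory read per directory plus one lstat() per entry. This small sketch (not Larsson's actual benchmark) performs the same traversal, so it could be wrapped in a timer:

```python
# Not Larsson's benchmark, just the traversal that "ls -lR" performs:
# one directory read per directory and one lstat() per entry, which is
# exactly what a cold cache makes expensive.
import os

def walk_stat(root: str):
    dirs = files = 0
    for dirpath, dirnames, filenames in os.walk(root):
        dirs += 1
        for name in filenames:
            os.lstat(os.path.join(dirpath, name))  # per-entry metadata read
            files += 1
    return dirs, files
```

Timing walk_stat() once against dropped caches and once warm reproduces the shape of the comparison, if not its numbers.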
Goldstein strongly contested the characterization of the overlayfs solution; he also started an extended sub-thread on whether the "ls -lR" benchmark made sense. He added: "I see. composefs is really very optimized for ls -lR. Now only need to figure out if real users start a container and do ls -lR without reading many files is a real life use case."
Dave Chinner jumped in to defend this test:
Cold cache performance dominates the runtime of short lived containers as well as high density container hosts being run to their container level memory limits. `ls -lR` is just a microbenchmark that demonstrates how much better composefs cold cache behaviour is than the alternatives being proposed.
He added that he has used similar tests to benchmark filesystems for many years and has never had to justify it to anybody. Larsson, meanwhile, explained the emphasis that is being placed on this performance metric (or "key performance indicator" — KPI) this way:
My current work is in automotive, which wants to move to a containerized workload in the car. The primary KPI is cold boot performance, because there are legal requirements for the entire system to boot in 2 seconds. It is also quite typical to have shortlived containers in cloud workloads, and startup time there is very important. In fact, the last few months I've been primarily spending on optimizing container startup performance (as can be seen in the massive improvements to this in the upcoming podman 4.4).
Goldstein finally accepted the importance of this metric and suggested that overlayfs could be changed to provide better cold-cache performance as well. Larsson answered that, if overlayfs could be modified to address the performance gap, it might be the better solution; he also raised doubts as to whether the performance gap could really be closed and whether it made sense to add more complexity to overlayfs.
The conclusion from this part of the discussion was that some experimentation with overlayfs made sense to see whether it is a viable option or not. Overlayfs maintainer Miklos Szeredi has been mostly absent from the discussion, but did briefly indicate that some of the proposed changes might make sense.
The EROFS option
There was another option that came up a number of times in the discussion, though: enhance the EROFS filesystem to include the functionality provided by composefs. This filesystem, which is designed for embedded, read-only operation, already has fs-verity support. EROFS developer Gao Xiang has repeatedly said that the filesystem could be enhanced to implement the features provided by composefs; indeed, much of that functionality is already there as part of the Nydus mechanism. Scrivano has questioned this idea, though:
Sure composefs is quite simple and you could embed the composefs features in EROFS and let EROFS behave as composefs when provided a similar manifest file. But how is that any better than having a separate implementation that does just one thing well instead of merging different paradigms together?
Gao has suggested that the composefs developers will, sooner or later, want to add support for storing file data (rather than just the manifest metadata), at which point composefs will start to look more like an ordinary filesystem. At such a point, the argument for separating it from other filesystems would not be so strong.
A few performance issues in EROFS (for this use case) were identified in the course of the discussion, and various fixes have been implemented. Jingbo Xu has run a number of benchmarks to measure the results of patches to both EROFS and overlayfs, but none yet have shown that either of the other two options can outperform composefs. That work is still in an early state, though.
As might be imagined, this sprawling conversation did not come to any sort of consensus with regard to whether it makes sense to merge composefs or to, instead, put development effort into one of the alternatives. Chances are that no such conclusion will be reached for the next few months. This is just the sort of decision that the upcoming LSFMM/BPF summit was created to resolve; chances are that there will be an interesting discussion at that venue.
Rethinking splice()
The splice() system call is built on an appealing idea: connect two file descriptors together so that data can be moved from one to the other without passing through user space and, preferably, without being copied in the kernel. splice() has enabled some significant performance optimizations over the years, but it has also proved difficult to work with and occasionally surprising. A recent linux-kernel discussion showed how splice() can cause trouble, to the point that some developers now wonder if adding it was a good idea.
Stefan Metzmacher is a Samba developer who would like to use splice() to implement zero-copy I/O in the Samba server. He has run into a problem, though. If a file is being sent to a remote client over the network, splice() can be used to feed the file data into a socket; the network layer will read that data directly out of the page cache without needing to make a copy in the kernel — exactly the desired result. But if the file is written before network transmission is complete, the newly written data may be sent, even though that write happened after the splice() call was made, perhaps even in the same process. That can lead to unpleasant surprises (and unhappy Samba users) when the data received at the remote end is not what is expected.
The problem here is a bit more subtle than it might seem at a first glance. To begin with, it is not possible to splice a file directly into a network socket; splice() requires that at least one of the file descriptors given to it is a pipe. So the actual sequence of operations is to splice the file into a pipe, then to connect the pipe to the socket with a second splice() call. Neither splice() call knows when the data it passes through has reached its final destination; the network layer may still be working with the file data even after both splice() calls have completed. There is no easy way to know that the data has been transmitted and that it is safe to modify the file again.
In his initial email, Metzmacher asked whether it would be possible to prevent this problem by marking file-cache pages as copy-on-write when they are passed to splice(). Then, if the file were written while the transfer was underway, that transfer could continue to read from the older data while the write to the file proceeded independently. Linus Torvalds quickly rejected that idea, saying that the sharing of the buffers holding the data is "the whole point of splice". Making those pages copy-on-write would break sharing of data in general. He later added that a splice() call should be seen as a form of mmap(), with similar semantics.
He also said: "You can say 'I don't like splice()'. That's fine. I used to think splice was a really cool concept, but I kind of hate it these days. Not liking splice() makes a ton of sense." Like it or not, though, the current behavior of splice() cannot change, since that would break existing applications; even Torvalds's dislike cannot overcome that.
Samba developer Jeremy Allison suggested that the solution to Metzmacher's problem could be for Samba to only attempt zero-copy I/O when the client holds a lease on the file in question, thus ensuring that there should be no concurrent access. He later had to backtrack on that idea, though; since the Samba server cannot know when network transmission is complete, the possibility for surprises still exists even in the presence of a lease. Thus, he concluded, "splice() is unusable for Samba even in the leased file case".
Dave Chinner observed that this problem resembles those that have previously been solved in the filesystem layer. There are many cases, including RAID 5 or data compressed by the filesystem, where data to be written must be held stable for the duration of the operation; this is the whole stable-pages problem that was confronted almost twelve years ago. Perhaps a similar solution could be implemented here, he said, where attempts to write to pages currently being used in a splice() chain would simply block until the operation has completed.
Both Torvalds and Matthew Wilcox pointed out the flaw with this idea: the splice() operation can take an unbounded amount of time, so it could be used (accidentally or otherwise) to block access to a file indefinitely. That idea did not go far.
Andy Lutomirski argued that splice() is the wrong interface for what applications want to do; splice() has no way of usefully communicating status information back to the caller. Instead, he said, io_uring might be a better way to implement this functionality. It allows multiple operations to be queued efficiently and, crucially, it has the completion mechanism that can let user space know when a given buffer is no longer in use. Jens Axboe, the maintainer of io_uring, was initially unsure about this idea, but warmed to it after Lutomirski suggested that the problem could be simplified by taking the pipes out of the picture and allowing one non-pipe file descriptor to be connected directly to another. The pipes, Axboe said, "do get in the way" sometimes.
Axboe thought that a new "send file" io_uring operation could be a good solution to this problem; it could be designed from the beginning with asynchronous operation in mind and without the use of pipes. So that may be the solution that comes out of this discussion — though somebody would, naturally, actually have to implement it first.
There was some talk about whether splice() should be deprecated; Torvalds doesn't think the system call has much value:
The same way "everything is a pipeline of processes" is very much historical Unix and very useful for shell scripting, but isn't actually then normally very useful for larger problems, splice() really never lived up to that conceptual issue, and it's just really really nasty in practice. But we're stuck with it.
There is little point in discouraging use of splice(), though, if the kernel lacks a better alternative; Torvalds expressed doubt that the io_uring approach would turn out to be better in the end. The only way to find out is probably to try it and see how well it actually works. Until that happens, splice() will be the best that the kernel has to offer, its faults notwithstanding.
Some development statistics for 6.2
The 6.2 kernel was released on February 19, at the end of a ten-week development cycle. This time around, 15,536 non-merge changesets found their way into the mainline repository, making this cycle significantly more active than its predecessor. Read on for a look at the work that went into this kernel release.
The work in 6.2 was contributed by 2,088 developers, which just barely sets a new record; the previous record was the 2,086 developers who contributed to 5.19. Of those developers, 294 made their first contribution to the kernel in this cycle, a fairly typical number. The most active contributors to the 6.2 release were:
Most active 6.2 developers
By changesets:
    Uwe Kleine-König         571   3.7%
    Krzysztof Kozlowski      342   2.2%
    Johan Hovold             178   1.1%
    Andy Shevchenko          152   1.0%
    Thomas Gleixner          148   1.0%
    Ville Syrjälä            147   0.9%
    Yang Yingliang           141   0.9%
    Ben Skeggs               128   0.8%
    Christoph Hellwig        126   0.8%
    Dmitry Baryshkov         120   0.8%
    Namhyung Kim             117   0.8%
    Sean Christopherson      112   0.7%
    Colin Ian King           103   0.7%
    David Howells             99   0.6%
    Martin Kaiser             97   0.6%
    Ian Rogers                96   0.6%
    Josef Bacik               94   0.6%
    Hans de Goede             93   0.6%
    Dmitry Torokhov           80   0.5%
    Thomas Zimmermann         77   0.5%
By changed lines:
    Ian Rogers               149899  17.0%
    Ping-Ke Shih              31997   3.6%
    Ben Skeggs                23004   2.6%
    Steen Hegelund            17222   2.0%
    Arnd Bergmann             13042   1.5%
    Shayne Chen               12476   1.4%
    Jason Gunthorpe           11480   1.3%
    Krzysztof Kozlowski       11365   1.3%
    Eugen Hristev             10452   1.2%
    David Howells              9020   1.0%
    Dmitry Baryshkov           7981   0.9%
    Daniel Almeida             7654   0.9%
    Nick Terrell               6984   0.8%
    Carlos Bilbao              6817   0.8%
    Paul Kocialkowski          6487   0.7%
    Josef Bacik                6219   0.7%
    Johan Hovold               5726   0.6%
    Kumar Kartikeya Dwivedi    5615   0.6%
    Tianjia Zhang              5531   0.6%
    Horatiu Vultur             5488   0.6%
Uwe Kleine-König contributed the most changesets this time around; this work was dominated by the conversion of a vast number of drivers to a new I2C API. Krzysztof Kozlowski continues to work on updating and improving devicetree files. Johan Hovold worked mostly with a set of Qualcomm PHY drivers, Andy Shevchenko performed cleanups across the driver tree, and Thomas Gleixner worked extensively in the core-kernel and x86 subsystems.
Usually, when a developer lands at the top of the "lines changed" column, it is the result of adding more machine-generated amdgpu header files. This time, though, Ian Rogers got there by working with the perf tool and, in particular, updating a number of Intel PMU event definition files. Ping-Ke Shih contributed a number of improvements to the rtw89 wireless network adapter driver. Ben Skeggs worked, as always, on the Nouveau graphics driver, Steen Hegelund worked on the sparx5 network driver, and Arnd Bergmann deleted a number of unmaintained drivers.
The top testers and reviewers this time around were:
Test and review credits in 6.2
Tested-by
Developer | Credits | % |
---|---|---|
Philipp Hortmann | 154 | 11.2% |
Daniel Wheeler | 94 | 6.8% |
Mark Broadworth | 51 | 3.7% |
Gurucharan G | 39 | 2.8% |
Yang Lixiao | 33 | 2.4% |
Matthew Rosato | 27 | 2.0% |
Vincent Donnefort | 26 | 1.9% |
Yi Liu | 25 | 1.8% |
Nicolin Chen | 24 | 1.7% |
Steev Klimaszewski | 24 | 1.7% |
Yu He | 22 | 1.6% |
Daniel Golle | 20 | 1.5% |
Naveen N. Rao | 16 | 1.2% |
Arnaldo Carvalho de Melo | 16 | 1.2% |
Guenter Roeck | 15 | 1.1% |
Reviewed-by
Developer | Credits | % |
---|---|---|
Andy Shevchenko | 271 | 3.1% |
Krzysztof Kozlowski | 249 | 2.8% |
Rob Herring | 231 | 2.6% |
Konrad Dybcio | 196 | 2.2% |
Dmitry Baryshkov | 174 | 2.0% |
AngeloGioacchino Del Regno | 127 | 1.4% |
Lyude Paul | 125 | 1.4% |
Kevin Tian | 118 | 1.3% |
David Sterba | 114 | 1.3% |
Ranjani Sridharan | 108 | 1.2% |
Jason Gunthorpe | 98 | 1.1% |
Hans de Goede | 97 | 1.1% |
Jani Nikula | 93 | 1.1% |
Chaitanya Kulkarni | 89 | 1.0% |
Peter Ujfalusi | 82 | 0.9% |
Philipp Hortmann's testing was almost entirely limited to Realtek wireless driver changes. Daniel Wheeler continues to test graphics patches from within AMD; Mark Broadworth did the same. In the reviews column, Andy Shevchenko reviewed patches across the driver tree at a rate of nearly four per day. Krzysztof Kozlowski, Rob Herring, and Konrad Dybcio all reviewed devicetree patches at almost the same rate.
All told, 1,161 commits in 6.2 (7.4%) had Tested-by tags, while 6,735 commits (43.4%) had Reviewed-by tags. A quick look shows that the use of Reviewed-by tags has been steadily growing over the years:
Commits with Reviewed-by tags in each release
Release | Total | Tagged | % |
---|---|---|---|
v4.0 | 10,346 | 1,712 | 16.5% |
v4.1 | 11,916 | 1,666 | 14.0% |
v4.2 | 13,694 | 2,316 | 16.9% |
v4.3 | 12,274 | 2,174 | 17.7% |
v4.4 | 13,071 | 2,434 | 18.6% |
v4.5 | 12,080 | 2,355 | 19.5% |
v4.6 | 13,517 | 2,785 | 20.6% |
v4.7 | 12,283 | 2,749 | 22.4% |
v4.8 | 13,382 | 2,965 | 22.2% |
v4.9 | 16,214 | 4,073 | 25.1% |
v4.10 | 13,029 | 2,972 | 22.8% |
v4.11 | 12,724 | 3,126 | 24.6% |
v4.12 | 14,570 | 3,977 | 27.3% |
v4.13 | 13,006 | 3,272 | 25.2% |
v4.14 | 13,452 | 3,144 | 23.4% |
v4.15 | 14,866 | 4,970 | 33.4% |
v4.16 | 13,630 | 3,902 | 28.6% |
v4.17 | 13,541 | 3,978 | 29.4% |
v4.18 | 13,283 | 4,040 | 30.4% |
v4.19 | 14,043 | 4,171 | 29.7% |
v4.20 | 13,884 | 4,201 | 30.3% |
v5.0 | 12,808 | 4,045 | 31.6% |
v5.1 | 12,749 | 3,843 | 30.1% |
v5.2 | 14,309 | 4,974 | 34.8% |
v5.3 | 14,605 | 4,860 | 33.3% |
v5.4 | 14,619 | 5,184 | 35.5% |
v5.5 | 14,350 | 4,939 | 34.4% |
v5.6 | 12,665 | 4,184 | 33.0% |
v5.7 | 13,901 | 4,797 | 34.5% |
v5.8 | 16,306 | 5,477 | 33.6% |
v5.9 | 14,858 | 5,251 | 35.3% |
v5.10 | 16,175 | 5,352 | 33.1% |
v5.11 | 14,340 | 5,038 | 35.1% |
v5.12 | 13,015 | 4,690 | 36.0% |
v5.13 | 16,030 | 5,245 | 32.7% |
v5.14 | 14,735 | 5,228 | 35.5% |
v5.15 | 12,377 | 4,361 | 35.2% |
v5.16 | 14,190 | 5,182 | 36.5% |
v5.17 | 13,038 | 4,846 | 37.2% |
v5.18 | 14,954 | 6,017 | 40.2% |
v5.19 | 15,134 | 6,361 | 42.0% |
v6.0 | 15,402 | 6,044 | 39.2% |
v6.1 | 13,942 | 5,997 | 43.0% |
v6.2 | 15,440 | 6,735 | 43.4% |
Less than half of the non-merge commits in 6.2 contain Reviewed-by tags, but the percentage of such tags is still nearly triple what it was for the 4.0 release, almost eight years ago.
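Statistics like these can be reproduced directly from the Git history. As a rough sketch (the function name is illustrative; in practice the messages would come from a `git log` run over a kernel repository):

```python
def reviewed_by_share(commit_messages):
    """Return the fraction of commit messages carrying a Reviewed-by: tag."""
    tagged = sum(
        any(line.startswith("Reviewed-by:") for line in msg.splitlines())
        for msg in commit_messages
    )
    return tagged / len(commit_messages)

# In a real run, the messages would come from something like:
#   git log --no-merges --format='%B%x00' v6.1..v6.2
# split on the NUL separator. Here, two sample messages:
sample = [
    "btrfs: fix raid56 repair\n\nReviewed-by: A Developer <a@example.com>",
    "perf: update Intel event files",
]
print(reviewed_by_share(sample))  # 0.5
```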
A total of 235 employers supported work on 6.2, a fairly normal sort of number. The most active employers this time were:
Most active 6.2 employers
By changesets
Employer | Changesets | % |
---|---|---|
Intel | 1658 | 10.7% |
(Unknown) | 1125 | 7.2% |
 | 1119 | 7.2% |
Linaro | 1118 | 7.2% |
Red Hat | 1026 | 6.6% |
Huawei Technologies | 858 | 5.5% |
Pengutronix | 641 | 4.1% |
AMD | 616 | 4.0% |
(None) | 577 | 3.7% |
SUSE | 457 | 2.9% |
NVIDIA | 443 | 2.9% |
Meta | 429 | 2.8% |
(Consultant) | 323 | 2.1% |
IBM | 249 | 1.6% |
Arm | 240 | 1.5% |
NXP Semiconductors | 237 | 1.5% |
Linutronix | 221 | 1.4% |
Renesas Electronics | 210 | 1.4% |
Microchip Technology Inc. | 172 | 1.1% |
Oracle | 166 | 1.1% |
By lines changed
Employer | Lines | % |
---|---|---|
 | 182057 | 20.7% |
Linaro | 65578 | 7.4% |
Red Hat | 63332 | 7.2% |
Intel | 60882 | 6.9% |
(Unknown) | 41141 | 4.7% |
Realtek | 38961 | 4.4% |
Microchip Technology Inc. | 37217 | 4.2% |
NVIDIA | 33702 | 3.8% |
Meta | 32152 | 3.6% |
AMD | 28888 | 3.3% |
MediaTek | 22373 | 2.5% |
(None) | 22363 | 2.5% |
SUSE | 20743 | 2.4% |
Collabora | 13197 | 1.5% |
Renesas Electronics | 11889 | 1.3% |
Huawei Technologies | 11295 | 1.3% |
IBM | 10225 | 1.2% |
(Consultant) | 8988 | 1.0% |
Analog Devices | 8864 | 1.0% |
NXP Semiconductors | 8829 | 1.0% |
IBM, once one of the biggest contributors to Linux, continues to slide slowly down in this ranking (Red Hat, which is owned by IBM but said to be run independently, has also declined, but less so). But, in general, this table looks as it always does, without a lot of surprises.
That reflects the state of the kernel development process as a whole; it continues to produce kernel releases on a predictable schedule without many surprises. As of this writing, there are over 12,000 changesets waiting in linux-next for the 6.3 development cycle, so it looks like the pace will not be slowing down much in the near future. However 6.3 turns out, you'll find the results summarized here shortly thereafter.
Passwordless authentication with FIDO2—beyond just the web
FIDO2 is a standard for authenticating users without the need for passwords. While the technology has been introduced mainly to protect accounts on web sites, it's also useful for other purposes, such as logging into Linux systems. The same technology can even be used beyond authentication, for example to sign files or Git commits. A couple of talks at FOSDEM 2023 in Brussels presented the possibilities for Linux users.
The FIDO2 standard is a joint effort between the FIDO Alliance (FIDO stands for Fast Identity Online) and the World Wide Web Consortium (W3C) to develop standards for strong authentication. Users can securely authenticate themselves with a FIDO2 security key (a hardware token), which is more convenient, faster, and more secure than traditional password-based authentication. The security key can ask the user to touch a button or enter a PIN for authentication; alternatively, it can include a fingerprint reader or other means for biometric authentication. FIDO2 can be used as an extra factor added to a traditional password as part of multi-factor authentication or as the only means of authentication. In the latter case, this is called passwordless authentication. Note that a previous FIDO standard, FIDO U2F, was primarily designed for two-factor authentication.
The FIDO2 standard consists of two parts. Web Authentication (WebAuthn) is a W3C recommendation with broad browser support that describes an API allowing web sites to add FIDO2 authentication to their login pages. FIDO's Client to Authenticator Protocol (CTAP) complements WebAuthn by enabling an external authenticator, such as a security key or a mobile phone, to work with the browser. So in short: the browser talks WebAuthn to the server and CTAP to the authenticator device.
Both standards are open, and anyone can manufacture FIDO2 security keys. Various manufacturers have built such hardware tokens. Yubico has some YubiKey models that support FIDO2, as well as a dedicated FIDO2 security key. Feitian, Nitrokey, SoloKeys, and OnlyKey offer FIDO2 tokens too.
How FIDO2 works
FIDO2 is a challenge-response authentication system based on asymmetric cryptography. When a user registers with a web site, WebAuthn and CTAP work together to make the authenticator (usually a security key) create a new key pair. This is only done after the user proves possession of the authenticator, for example by pressing a button on the device, scanning a finger, or entering a PIN. The private key never leaves the device, while the public key is sent to the web site and associated with the user's account there.
When logging into the web site, the site sends a challenge and its origin (such as its domain) to the web browser using the WebAuthn API. The web browser then sends the challenge to the authenticator using CTAP. The user again proves possession of the authenticator and the device generates a response by signing the challenge with its private key. The response is returned to the web browser (using CTAP) and then to the web site (using WebAuthn). The web site verifies the response against the original challenge using the public key that was used to register the account.
The FIDO2 token can store multiple credentials, each of which consists of a credential ID, the private key, the user ID, and a relying party ID corresponding to the web site's domain. The web site stores the same user ID and credential ID, as well as the public key. The way FIDO2 works protects users against phishing: the web browser only accepts WebAuthn calls with a relying party ID that matches the web site's domain, and only over HTTPS connections. So when the authenticator signs the challenge, the browser already knows that it's talking to the right web site, because it has verified this using the web site's TLS certificate. As a result, the user doesn't have to check the domain manually.
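The registration and login flow can be modeled in a few lines. The sketch below is a toy under stated assumptions: the class and variable names are purely illustrative, and the third-party cryptography library stands in for the signing engine that a real authenticator implements in hardware.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class Authenticator:
    """Toy model of a FIDO2 token: one key pair per relying party."""
    def __init__(self):
        self.credentials = {}  # rp_id -> private key (never leaves the device)

    def register(self, rp_id):
        key = Ed25519PrivateKey.generate()
        self.credentials[rp_id] = key
        return key.public_key()  # only the public key goes to the web site

    def sign(self, rp_id, challenge):
        # Fails for an rp_id that was never registered (phishing protection:
        # the browser only forwards the rp_id of the verified origin).
        return self.credentials[rp_id].sign(challenge)

# Registration: the site stores the public key with the account.
token = Authenticator()
public_key = token.register("example.com")

# Login: the site sends a random challenge; the token signs it.
challenge = os.urandom(32)
signature = token.sign("example.com", challenge)
public_key.verify(signature, challenge)  # raises InvalidSignature on failure
print("login verified")
```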
Using FIDO2 beyond the web
Most of the documentation about FIDO2 covers its use on web sites, as if that were the only possibility. An example is the unofficial but helpful WebAuthn Guide made by Duo Security. However, the specifications can be used beyond the web. That was the topic of a talk by Joost van Dijk, a developer advocate at Yubico.
Yubico developed libfido2, a C library and corresponding command-line tools to communicate with a FIDO device (not only the ones made by Yubico) over USB or NFC. It supports the FIDO U2F and FIDO2 protocols. The project is licensed under the BSD 2-clause license and supports Linux, macOS, Windows, OpenBSD, and FreeBSD. Some external projects have built upon libfido2 to create bindings for .NET, Go, Perl, and Rust. Yubico also maintains a Python library, python-fido2, that is tested on Linux, macOS, and Windows.
One of the projects using libfido2 is pam-u2f, also developed by Yubico and included in the repositories of many Linux distributions. It integrates FIDO2 security keys into Pluggable Authentication Modules (PAM), which is the flexible and extensible authentication framework for Linux systems that allows multiple authentication methods to be used. Requiring a credential on a FIDO2 device for a specific purpose is as easy as registering a new credential, saving it in a configuration file, and then adding a reference to pam_u2f.so in the PAM configuration, referring to the file with the saved credential. As an example, Van Dijk showed how to enable two-factor authentication for sudo with a FIDO2 token as the second factor. The pam-u2f documentation shows some other examples.
New types of SSH keys
Another use case Van Dijk described is an SSH key that is backed by a FIDO2 authenticator. OpenSSH 8.2 (released in February 2020) introduced support for FIDO2 security keys, using the libfido2 library under the hood. The challenge-response mechanism works much as it does on the web, but this time the authenticator talks CTAP to the ssh client, and the ssh client talks the normal SSH protocol to the sshd server.
To make this work, OpenSSH introduced new public key types "ecdsa-sk" and "ed25519-sk", along with corresponding certificate types. If the user generates a new SSH key pair of one of those types with ssh-keygen, the private key is generated and stored inside the hardware token, together with the relying party ID "ssh:" and optionally a key handle. The authenticator returns the public key and the key handle to ssh-keygen. The program saves the public key in a file as usual, while the file that normally stores the private key now includes the key handle.
Authenticating with this type of key to the SSH server involves a challenge-response mechanism. The SSH server sends a challenge to the client, which passes it to the FIDO2 authenticator. The authenticator signs the server's challenge with the private key to create a digital signature, which is sent to the client and then to the server. The SSH server verifies this signature with the corresponding public key, which is known to be associated with the FIDO2 authenticator. The private key never leaves the authenticator during this process; even the ssh client has no access to it. A hardware-backed SSH key can be used like any other key type supported by OpenSSH, as long as the hardware token is attached when the key is used.
Van Dijk showed some examples of what's possible with a hardware-backed SSH key: authenticating to GitHub when cloning a Git repository over SSH, signing Git commits and tags and verifying their signatures, and signing and verifying files.
Unlocking LUKS2 volumes with a hardware token
CTAP also offers the hmac-secret extension, which is supported by most FIDO2 tokens. It is used to retrieve a secret from the authenticator in order to encrypt or decrypt data. To prevent offline attacks, part of the secret (the salt) is held by the client, while the other part is stored on the authenticator. The client hands its salt to the authenticator; the device combines the salt with its own part of the secret using HMAC-SHA-256, a hash-based message authentication code (HMAC), and returns the resulting key. Van Dijk showed how to use this key to encrypt data, after which it is safe to delete the key. To decrypt the data later, the client hands the salt back to the authenticator, which regenerates the key from the salt and its own secret.
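A minimal sketch of that key derivation, using Python's standard library (the names are illustrative; a real token performs the HMAC internally and never reveals its half of the secret):

```python
import hashlib
import hmac
import os

device_secret = os.urandom(32)  # stored on the authenticator only

def hmac_secret(salt: bytes) -> bytes:
    """What the token returns to the client: HMAC-SHA-256(device_secret, salt)."""
    return hmac.new(device_secret, salt, hashlib.sha256).digest()

salt = os.urandom(32)   # the client's half, kept off the token
key = hmac_secret(salt) # use as an encryption key, then discard

# Later: handing the same salt back regenerates the same key.
assert hmac_secret(salt) == key
print(len(key))  # 32
```

The 32-byte result is suitable as a symmetric key, for example for unlocking a LUKS2 volume as described below.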
One application of this hmac-secret extension that Van Dijk demonstrated is in Linux Unified Key Setup (LUKS) disk encryption. Systemd 248 introduced support for unlocking LUKS2 volumes with a FIDO2 security key. After enrolling a FIDO2 authenticator to a LUKS2 encrypted volume, the systemd-cryptsetup component waits for the FIDO2 token to be plugged in at boot, hands it the salt, gets back the key, and unlocks the volume with the key.
FIDO2 security keys instead of smart cards
With all this functionality in FIDO2 security keys, one has to wonder why traditional smart cards or hardware keys implementing OpenPGP aren't enough. Those have long been used to store private keys offline for encryption, authentication, signing, and verification. In Van Dijk's view, the biggest advantage of FIDO2 security keys is that they are cheaper and more user-friendly.
Open-source support is also improving a lot in many domains. For example, at FOSDEM Red Hat's Alexander Bokovoy gave two talks about his work on integrating FIDO2 in FreeIPA for passwordless authentication for centrally managed users. Under the hood, this also uses the libfido2 library. His colleague Iker Pedrosa has some instructions on his blog.
With Microsoft, Apple, and Google jumping on the FIDO2 bandwagon during the last few years, surely more and more affordable FIDO2 devices will come to market. With software support improving as well, it won't be long before users start replacing their passwords with cheap FIDO2 security keys.
Brief items
Kernel development
Kernel release status
The 6.2 kernel is out, released on February 19. Linus said:
Please do give 6.2 a testing. Maybe it's not a sexy LTS release like 6.1 ended up being, but all those regular pedestrian kernels want some test love too.
Headline features in this release include the ability to manage linked lists and other data structures in BPF programs, more additions to the kernel's Rust infrastructure, improvements in Btrfs RAID5/6 reliability, IPv6 protective load balancing, faster "Retbleed" mitigation with return stack buffer stuffing, control-flow integrity improvements with FineIBT, oops limits, and more.
See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 6.2 page for more information.
Stable updates: 6.1.13, 5.15.95, 5.10.169, 5.4.232, 4.19.273, and 4.14.306 were all released on February 22.
Distributions
No more Flatpak (by default) in Ubuntu Flavors
The Ubuntu Flavors offerings (Kubuntu and the like) have decided that the way to improve the user experience is to put more emphasis on the Snap package format.
Going forward, the Flatpak package as well as the packages to integrate Flatpak into the respective software center will no longer be installed by default in the next release due in April 2023, Lunar Lobster. Users who have used Flatpak will not be affected on upgrade, as flavors are including a special migration that takes this into account. Those who haven’t interacted with Flatpak will be presented with software from the Ubuntu repositories and the Snap Store.
Distributions quote of the week
Leap is dependent on the SLE [SUSE Linux Enterprise] codebase in order to exist. That SLE codebase is transforming into ALP, or to put a more dramatic spin on it, SLE as we know it is ending.

This is an unavoidable fact - it's decisions made by SUSE, for SUSE's business, to make SUSE money.

Therefore, Leap as we know it, is ending.

What comes next is not so set in stone.

— Richard Brown
Development
GDB 13.1 released
Version 13.1 of the GNU GDB debugger has been released. Changes include support for the LoongArch and CSKY architectures, a number of Python API improvements, support for zstd-compressed debug sections, and more.

An RFC for governance of the Rust project
The Rust community has been working to reform its governance model; that work is now being presented as a draft document describing how that model will work.
This RFC establishes a Leadership Council as the successor of the core team and the new governance structure through which Rust Project members collectively confer the authority to ensure successful operation of the Project. The Leadership Council delegates much of this authority to teams (which includes subteams, working groups, etc.) who autonomously make decisions concerning their purviews. However, the Council retains some decision-making authority, outlined and delimited by this RFC.
Systemd 253 released
Systemd 253 has been released. As always, the list of changes is extensive. Support for version-1 control groups and separate /usr systems is going away later this year. There is a new tool for working with unified kernel images, a number of new unit-file options have been added, and much more; click below for the full list.
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Calls for Presentations
CFP Deadlines: February 23, 2023 to April 24, 2023
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
February 28 | May 17 | Icinga Camp Berlin | Berlin, Germany |
March 1 | April 29 | 19. Augsburger Linux-Infotag | Augsburg, Germany |
March 1 | April 26 to April 28 | Linaro Connect | London, UK |
March 1 | May 10 to May 12 | Linux Security Summit | Vancouver, Canada |
March 1 | May 8 to May 10 | Storage, Filesystem, Memory-Management and BPF Summit | Vancouver, Canada |
March 5 | April 23 to April 26 | foss-north 2023 | Göteborg, Sweden |
March 20 | June 13 to June 15 | Beam Summit 2023 | New York City, US |
March 22 | May 5 | Ceph Days India 2023 | Bengaluru, India |
March 27 | July 26 to July 31 | GUADEC | Riga, Latvia |
March 30 | May 12 | PGConf.BE | Haasrode, Belgium |
March 30 | July 15 to July 21 | aKademy 2023 | Thessaloniki, Greece |
March 31 | June 27 | PostgreSQL Conference Germany | Essen, Germany |
March 31 | September 13 to September 14 | stackconf 2023 | Berlin, Germany |
March 31 | October 15 to October 17 | All Things Open 2023 | Raleigh, NC, US |
April 2 | June 14 to June 15 | KVM Forum 2023 | Brno, Czech Republic |
April 2 | September 12 to September 15 | RustConf | Albuquerque, New Mexico, US |
April 9 | May 26 to May 28 | openSUSE Conference 2023 | Nürnberg, Germany |
April 21 | September 7 to September 8 | PyCon Estonia | Tallinn, Estonia |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: February 23, 2023 to April 24, 2023
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
February 22 to February 24 | PGConf India | Bengaluru, India |
March 9 | Ceph Days Southern California co-located with SCALE | Pasadena, CA, US |
March 9 to March 12 | Southern California Linux Expo | Pasadena, CA, US |
March 11 to March 12 | Chemnitzer Linux Tage 2023 | Chemnitz, Germany |
March 13 to March 14 | FOSS Backstage | Berlin, Germany |
March 14 to March 16 | Embedded World 2023 | Munich, Germany |
March 14 to March 16 | Everything Open | Melbourne, Australia |
March 17 | BOB 2023 | Berlin, Germany |
March 18 to March 19 | LibrePlanet | Boston, US |
March 20 to March 24 | Mozilla Festival 2023 | Online |
March 21 | Nordic PGDay 2023 | Stockholm, Sweden |
March 23 | Open Source 101 | Charlotte, SC, US |
April 1 | Central PA Open Source Conference | Lancaster, PA, US |
April 5 to April 7 | Mercurial Conference Paris | Paris, France |
April 13 to April 15 | FOSS Asia Summit 2023 | Singapore, Asia |
April 14 to April 15 | Grazer Linuxtage 2023 | Graz, Austria |
April 16 to April 18 | Cephalocon Amsterdam 2023 co-located with KubeCon EU | Amsterdam, Netherlands |
April 17 to April 19 | Power management and scheduling summit | Ancona, Italy |
April 19 to April 27 | PyCon US -- 20th Anniversary Special | Salt Lake City, US |
April 21 to April 23 | Linux Application Summit 2023 | Brno, Czechia |
April 23 to April 26 | foss-north 2023 | Göteborg, Sweden |
If your event does not appear here, please tell us about it.
Security updates
Alert summary February 16, 2023 to February 22, 2023
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
CentOS | CESA-2023:0530 | C7 | libksba | 2023-02-20 |
CentOS | CESA-2023:0600 | C7 | thunderbird | 2023-02-20 |
CentOS | CESA-2023:0675 | C7 | tigervnc and xorg-x11-server | 2023-02-20 |
Debian | DLA-3330-1 | LTS | amanda | 2023-02-21 |
Debian | DLA-3332-1 | LTS | apr-util | 2023-02-21 |
Debian | DLA-3323-1 | LTS | c-ares | 2023-02-18 |
Debian | DLA-3328-1 | LTS | clamav | 2023-02-20 |
Debian | DLA-3319-1 | LTS | firefox-esr | 2023-02-16 |
Debian | DSA-5350-1 | stable | firefox-esr | 2023-02-15 |
Debian | DLA-3321-1 | LTS | gnutls28 | 2023-02-18 |
Debian | DLA-3322-1 | LTS | golang-github-opencontainers-selinux | 2023-02-18 |
Debian | DLA-3326-1 | LTS | isc-dhcp | 2023-02-20 |
Debian | DLA-3327-1 | LTS | nss | 2023-02-20 |
Debian | DSA-5353-1 | stable | nss | 2023-02-17 |
Debian | DLA-3325-1 | LTS | openssl | 2023-02-20 |
Debian | DLA-3329-1 | LTS | python-django | 2023-02-20 |
Debian | DSA-5354-1 | stable | snort | 2023-02-18 |
Debian | DSA-5356-1 | stable | sox | 2023-02-20 |
Debian | DLA-3324-1 | LTS | thunderbird | 2023-02-20 |
Debian | DSA-5355-1 | stable | thunderbird | 2023-02-18 |
Debian | DLA-3333-1 | LTS | tiff | 2023-02-21 |
Debian | DLA-3320-1 | LTS | webkit2gtk | 2023-02-17 |
Debian | DSA-5351-1 | stable | webkit2gtk | 2023-02-16 |
Debian | DSA-5352-1 | stable | wpewebkit | 2023-02-16 |
Fedora | FEDORA-2023-c3d65c8f7b | F37 | OpenImageIO | 2023-02-22 |
Fedora | FEDORA-2023-677d58bb20 | F36 | apptainer | 2023-02-22 |
Fedora | FEDORA-2023-01ff262091 | F37 | apptainer | 2023-02-22 |
Fedora | FEDORA-2023-d686b8d48f | F37 | clamav | 2023-02-19 |
Fedora | FEDORA-2023-e449235964 | F36 | community-mysql | 2023-02-16 |
Fedora | FEDORA-2023-d332f0b6a3 | F37 | community-mysql | 2023-02-16 |
Fedora | FEDORA-2023-ddf6575695 | F37 | curl | 2023-02-19 |
Fedora | FEDORA-2023-e1ffb79ddf | F37 | edk2 | 2023-02-16 |
Fedora | FEDORA-2023-578506ae4b | F36 | firefox | 2023-02-17 |
Fedora | FEDORA-2023-75b98848b2 | F37 | firefox | 2023-02-16 |
Fedora | FEDORA-2023-2b3acb6cfd | F36 | git | 2023-02-22 |
Fedora | FEDORA-2023-5b372318ff | F37 | git | 2023-02-16 |
Fedora | FEDORA-2023-cb63c0f615 | F37 | gssntlmssp | 2023-02-22 |
Fedora | FEDORA-2023-457955ce13 | F36 | kernel | 2023-02-21 |
Fedora | FEDORA-2023-b67c3bf65d | F37 | kernel | 2023-02-21 |
Fedora | FEDORA-2023-a5564c0a3f | F36 | openssl | 2023-02-22 |
Fedora | FEDORA-2023-c713d12577 | F36 | phpMyAdmin | 2023-02-17 |
Fedora | FEDORA-2023-179053442b | F37 | phpMyAdmin | 2023-02-17 |
Fedora | FEDORA-2023-766cc7ab0f | F36 | thunderbird | 2023-02-21 |
Fedora | FEDORA-2023-50429a3169 | F37 | thunderbird | 2023-02-19 |
Fedora | FEDORA-2023-3a9674404c | F36 | tpm2-tools | 2023-02-17 |
Fedora | FEDORA-2023-3a9674404c | F36 | tpm2-tss | 2023-02-17 |
Fedora | FEDORA-2023-93fb5b08eb | F36 | vim | 2023-02-18 |
Fedora | FEDORA-2023-efe0594c2b | F36 | webkit2gtk3 | 2023-02-22 |
Fedora | FEDORA-2023-2dc87954d9 | F37 | webkitgtk | 2023-02-18 |
Fedora | FEDORA-2023-c69a2a8f8b | F37 | xen | 2023-02-20 |
Fedora | FEDORA-2023-fb5022e741 | F36 | xorg-x11-server | 2023-02-22 |
Fedora | FEDORA-2023-1ebf4507df | F36 | xorg-x11-server-Xwayland | 2023-02-22 |
Mageia | MGASA-2023-0054 | 8 | curl | 2023-02-20 |
Mageia | MGASA-2023-0056 | 8 | firefox | 2023-02-20 |
Mageia | MGASA-2023-0053 | 8 | nodejs-qs | 2023-02-20 |
Mageia | MGASA-2023-0051 | 8 | qtbase5 | 2023-02-20 |
Mageia | MGASA-2023-0057 | 8 | thunderbird | 2023-02-20 |
Mageia | MGASA-2023-0052 | 8 | upx | 2023-02-20 |
Mageia | MGASA-2023-0055 | 8 | webkit2 | 2023-02-20 |
Oracle | ELSA-2023-0812 | OL7 | firefox | 2023-02-21 |
Oracle | ELSA-2023-0808 | OL8 | firefox | 2023-02-21 |
Oracle | ELSA-2023-0810 | OL9 | firefox | 2023-02-21 |
Oracle | ELSA-2023-0817 | OL7 | thunderbird | 2023-02-21 |
Oracle | ELSA-2023-0821 | OL8 | thunderbird | 2023-02-21 |
Oracle | ELSA-2023-0824 | OL9 | thunderbird | 2023-02-21 |
Red Hat | RHSA-2023:0859-01 | EL8 | Red Hat Virtualization Host 4.4.z SP 1 | 2023-02-21 |
Red Hat | RHSA-2023:0814-01 | EL8 | Red Hat build of Cryostat | 2023-02-20 |
Red Hat | RHSA-2023:0812-01 | EL7 | firefox | 2023-02-20 |
Red Hat | RHSA-2023:0808-01 | EL8 | firefox | 2023-02-20 |
Red Hat | RHSA-2023:0806-01 | EL8.1 | firefox | 2023-02-20 |
Red Hat | RHSA-2023:0811-01 | EL8.2 | firefox | 2023-02-20 |
Red Hat | RHSA-2023:0805-01 | EL8.4 | firefox | 2023-02-20 |
Red Hat | RHSA-2023:0807-01 | EL8.6 | firefox | 2023-02-20 |
Red Hat | RHSA-2023:0810-01 | EL9 | firefox | 2023-02-20 |
Red Hat | RHSA-2023:0809-01 | EL9.0 | firefox | 2023-02-20 |
Red Hat | RHSA-2023:0852-01 | EL8 | httpd:2.4 | 2023-02-21 |
Red Hat | RHSA-2023:0832-01 | EL8 | kernel | 2023-02-21 |
Red Hat | RHSA-2023:0856-01 | EL8.1 | kernel | 2023-02-21 |
Red Hat | RHSA-2023:0854-01 | EL8 | kernel-rt | 2023-02-21 |
Red Hat | RHSA-2023:0839-01 | EL8 | kpatch-patch | 2023-02-21 |
Red Hat | RHSA-2023:0858-01 | EL8.1 | kpatch-patch | 2023-02-21 |
Red Hat | RHSA-2023:0855-01 | EL8 | pcs | 2023-02-21 |
Red Hat | RHSA-2023:0857-01 | EL8.1 | pcs | 2023-02-21 |
Red Hat | RHSA-2023:0848-01 | EL8 | php:8.0 | 2023-02-21 |
Red Hat | RHSA-2023:0835-01 | EL8 | python-setuptools | 2023-02-21 |
Red Hat | RHSA-2023:0833-02 | EL8 | python3 | 2023-02-21 |
Red Hat | RHSA-2023:0838-01 | EL8 | samba | 2023-02-21 |
Red Hat | RHSA-2023:0837-01 | EL8 | systemd | 2023-02-21 |
Red Hat | RHSA-2023:0842-01 | EL8 | tar | 2023-02-21 |
Red Hat | RHSA-2023:0817-01 | EL7 | thunderbird | 2023-02-20 |
Red Hat | RHSA-2023:0821-01 | EL8 | thunderbird | 2023-02-20 |
Red Hat | RHSA-2023:0818-01 | EL8.1 | thunderbird | 2023-02-20 |
Red Hat | RHSA-2023:0819-01 | EL8.2 | thunderbird | 2023-02-20 |
Red Hat | RHSA-2023:0820-01 | EL8.4 | thunderbird | 2023-02-20 |
Red Hat | RHSA-2023:0822-01 | EL8.6 | thunderbird | 2023-02-20 |
Red Hat | RHSA-2023:0824-01 | EL9 | thunderbird | 2023-02-20 |
Red Hat | RHSA-2023:0823-01 | EL9.0 | thunderbird | 2023-02-20 |
Scientific Linux | SLSA-2023:0812-1 | SL7 | firefox | 2023-02-20 |
Scientific Linux | SLSA-2023:0817-1 | SL7 | thunderbird | 2023-02-20 |
Slackware | SSA:2023-046-01 | | curl | 2023-02-15 |
Slackware | SSA:2023-046-02 | | git | 2023-02-15 |
Slackware | SSA:2023-048-01 | | kernel | 2023-02-17 |
Slackware | SSA:2023-047-01 | | mozilla | 2023-02-16 |
SUSE | SUSE-SU-2023:0428-1 | MP4.3 SLE15 oS15.4 | ImageMagick | 2023-02-15 |
SUSE | SUSE-SU-2023:0421-1 | OS9 SLE12 | ImageMagick | 2023-02-15 |
SUSE | SUSE-SU-2023:0424-1 | SLE15 SES7 SES7.1 oS15.4 | ImageMagick | 2023-02-15 |
SUSE | SUSE-SU-2023:0447-1 | MP4.2 SLE15 SES7 SES7.1 | apache2-mod_security2 | 2023-02-17 |
SUSE | SUSE-SU-2023:0431-1 | MP4.3 SLE15 oS15.4 | apache2-mod_security2 | 2023-02-15 |
SUSE | SUSE-SU-2023:0423-1 | MP4.0 MP4.1 MP4.2 MP4.3 SLE15 SES6 SES7 SES7.1 oS15.4 | aws-efs-utils | 2023-02-15 |
SUSE | SUSE-SU-2023:0427-1 | MP4.2 MP4.3 SLE15 SES7.1 oS15.4 | bind | 2023-02-15 |
SUSE | SUSE-SU-2023:0470-1 | MP4.2 MP4.3 SLE15 SES7 SES7.1 oS15.4 | clamav | 2023-02-21 |
SUSE | SUSE-SU-2023:0453-1 | OS9 SLE12 | clamav | 2023-02-20 |
SUSE | SUSE-SU-2023:0471-1 | SLE12 | clamav | 2023-02-21 |
SUSE | SUSE-SU-2023:0429-1 | MP4.3 SLE15 SLE-m5.3 oS15.4 osM5.3 | curl | 2023-02-15 |
SUSE | SUSE-SU-2023:0425-1 | SLE12 | curl | 2023-02-15 |
SUSE | SUSE-SU-2023:0461-1 | MP4.3 SLE15 SES7 SES7.1 oS15.4 | firefox | 2023-02-20 |
SUSE | SUSE-SU-2023:0466-1 | SLE12 | firefox | 2023-02-21 |
SUSE | SUSE-SU-2023:0430-1 | MP4.2 MP4.3 SLE15 SES7.1 oS15.4 | git | 2023-02-15 |
SUSE | SUSE-SU-2023:0426-1 | OS8 OS9 SLE12 | git | 2023-02-15 |
SUSE | SUSE-SU-2023:0475-1 | MP4.3 SLE15 oS15.4 | gnutls | 2023-02-22 |
SUSE | openSUSE-SU-2023:0048-1 | osB15 | gssntlmssp | 2023-02-18 |
SUSE | SUSE-SU-2023:0436-1 | SLE12 | java-11-openjdk | 2023-02-16 |
SUSE | SUSE-SU-2023:0435-1 | MP4.3 SLE15 oS15.4 | java-17-openjdk | 2023-02-16 |
SUSE | SUSE-SU-2023:0437-1 | OS9 SLE12 | java-1_8_0-openjdk | 2023-02-16 |
SUSE | openSUSE-SU-2023:0054-1 | osB15 | jhead | 2023-02-20 |
SUSE | SUSE-SU-2023:0433-1 | MP4.3 SLE15 SLE-m5.3 oS15.4 osM5.3 | kernel | 2023-02-16 |
SUSE | SUSE-SU-2023:0056-2 | MP4.2 SLE15 SES6 SES7 SES7.1 | libksba | 2023-02-16 |
SUSE | SUSE-SU-2023:0443-1 | MP4.2 SLE15 SLE-m5.1 SLE-m5.2 SES7 SES7.1 osM5.2 | mozilla-nss | 2023-02-17 |
SUSE | SUSE-SU-2023:0434-1 | MP4.3 SLE15 SLE-m5.3 oS15.4 osM5.3 | mozilla-nss | 2023-02-16 |
SUSE | SUSE-SU-2023:0468-1 | OS9 SLE12 | mozilla-nss | 2023-02-21 |
SUSE | SUSE-SU-2023:0476-1 | SLE15 SES7 SES7.1 oS15.4 | php7 | 2023-02-22 |
SUSE | SUSE-SU-2023:0451-1 | SLE15 | postgresql-jdbc | 2023-02-20 |
SUSE | SUSE-SU-2023:0450-1 | SLE15 SES7 SES7.1 oS15.4 | postgresql12 | 2023-02-20 |
SUSE | SUSE-SU-2023:0460-1 | SLE15 | prometheus-ha_cluster_exporter | 2023-02-20 |
SUSE | SUSE-SU-2023:0465-1 | SLE15 oS15.4 | prometheus-ha_cluster_exporter | 2023-02-21 |
SUSE | openSUSE-SU-2023:0057-1 | osB15 | python-Django | 2023-02-21 |
SUSE | SUSE-SU-2023:0442-1 | OS8 OS9 | rubygem-actionpack-4_2 | 2023-02-17 |
SUSE | SUSE-SU-2023:0444-1 | SLE15 oS15.4 | rubygem-actionpack-5_1 | 2023-02-17 |
SUSE | SUSE-SU-2023:0463-1 | MP4.3 SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 oS15.4 osM5.2 osM5.3 | tar | 2023-02-20 |
SUSE | SUSE-SU-2023:0441-1 | SLE12 | tar | 2023-02-17 |
SUSE | openSUSE-SU-2023:0053-1 | osB15 | timescaledb | 2023-02-20 |
SUSE | SUSE-SU-2023:0455-1 | OS9 SLE12 | ucode-intel | 2023-02-20 |
SUSE | SUSE-SU-2023:0456-1 | SLE12 | ucode-intel | 2023-02-20 |
SUSE | SUSE-SU-2023:0454-1 | SLE15 | ucode-intel | 2023-02-20 |
Ubuntu | USN-5881-1 | 18.04 | chromium-browser | 2023-02-21 |
Ubuntu | USN-5880-1 | 18.04 20.04 | firefox | 2023-02-20 |
Ubuntu | USN-5873-1 | 18.04 20.04 22.04 22.10 | golang-golang-x-text, golang-x-text | 2023-02-16 |
Ubuntu | USN-5807-2 | 16.04 | libxpm | 2023-02-21 |
Ubuntu | USN-5876-1 | 20.04 22.04 | linux-aws, linux-aws-5.15, linux-azure-fde, linux-gcp, linux-gcp-5.15, linux-intel-iotg | 2023-02-15 |
Ubuntu | USN-5874-1 | 18.04 20.04 | linux-aws-5.4, linux-gcp, linux-gcp-5.4, linux-hwe-5.4, linux-ibm, linux-ibm-5.4, linux-oracle-5.4 | 2023-02-15 |
Ubuntu | USN-5878-1 | 22.10 | linux-azure | 2023-02-16 |
Ubuntu | USN-5875-1 | 20.04 | linux-gke | 2023-02-15 |
Ubuntu | USN-5877-1 | 20.04 | linux-gke-5.15 | 2023-02-15 |
Ubuntu | USN-5879-1 | 22.04 | linux-hwe-5.19 | 2023-02-16 |
Ubuntu | USN-5739-2 | 20.04 22.04 22.10 | mariadb-10.3, mariadb-10.6 | 2023-02-22 |
Ubuntu | USN-5872-1 | 14.04 16.04 | nss | 2023-02-15 |
Ubuntu | USN-5778-2 | 14.04 16.04 | xorg-server, xorg-server-hwe-16.04 | 2023-02-16 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet