LWN.net Weekly Edition for January 26, 2023
Welcome to the LWN.net Weekly Edition for January 26, 2023
This edition contains the following feature content:
- Python packaging, visions, and unification: the ongoing discussion on providing a better packaging story for the Python ecosystem.
- X clients and byte swapping: turning off a little-used X11 feature can reduce its attack surface, but not all users are happy with the idea.
- Kernel code on the chopping block: three kernel subsystems that may soon be on their way out — and one that came back.
- Hiding a process's executable from itself: a focused security feature to protect against container breakouts.
- Nolibc: a minimal C-library replacement shipped with the kernel: a detailed look at nolibc, which can be useful for the running of small workloads bundled with a kernel.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Python packaging, visions, and unification
The Python community is currently struggling with a longtime difficulty in its ecosystem: how to develop, package, distribute, and maintain libraries and applications. The current situation is sub-optimal in several dimensions due, at least in part, to the existence of multiple, non-interoperable mechanisms and tools to handle some of those needs. Last week, we had an overview of Python packaging as a prelude to starting to dig into the discussions. In this installment, we start to look at the kinds of problems that exist—and the barriers to solving them.
Our overview just scratched the surface of the Python packaging world, so we will pick up some of the other pieces as we go along. The recent discussions seem to largely stem from Brett Cannon's mid-November post to renominate himself to the steering council (SC) for the 2023 term; that thread also served to highlight the role of the Python Packaging Authority (PyPA) and its relationship to the Python core developers. Up until relatively recently, the PyPA was an informal organization with a membership that was not well-defined; it had an ad hoc style of governance. That changed in 2019 with the advent of PEP 609 ("Python Packaging Authority (PyPA) Governance"); the PEP formalized the governance of the PyPA. The current delegation to it is based on that PEP, but only covers new packaging PEPs, as Cannon described:
Strictly speaking, we delegate the ability to accept packaging-related PEPs to the PyPA following their own practices outlined in PEP 609 [...]. So if they choose to operate without our delegation by simply not using PEPs they could totally do that. Otherwise the only other thing I can think of that python-dev controls package-wise, now that distutils is out the door, is ensurepip.
The steering council was explicitly set up as an organization for governing
the Python core and the language's core developers; packaging falls outside
of that
scope, Cannon said. "Since
the SC looks after Python core and e.g. packaging is delegated to the PyPA
group, one could argue it falls outside our purview.
" Steve Dower likened
the arrangement to "a non-compete agreement ;)
". Cannon agreed,
but had another way to phrase it: "or 'please deal with this because we
don't want to' agreement depending your viewpoint of packaging ;)
".
Unified vision
Much of the discussion that stemmed from that exchange veered away from the
governance question and further into the packaging details, so Cannon split
those responses into a new thread in the Packaging category of the
forum. The new thread ("Wanting a singular packaging tool/vision") started with John Hagen's call
for a "unified tooling experience
". In his view, Python should
adopt a model
similar
to that of Rust's: "The kind of unified, cross platform experience the
Rust team has managed with rustup/cargo/rustc
that bootstraps so well would
be great.
" He further
described his vision later in the thread.
Hagen thinks it makes sense for the council to help the community
"rally around a central, official, out-of-the-box workflow
", which would
help reduce the duplication of effort that pervades the Python packaging
world. But Paul Moore did
not see how that helped matters; it would expand the council's
authority into the packaging realm, but that does not magically solve the
problems. The PyPA has been working on these issues for years:
Yes, the SC has an authority that the PyPA maybe doesn't (we've always been uncomfortable with the "A" standing for "Authority" :) ) but an "official vision" won't directly make anything happen. For a much smaller example, look at the stdlib venv module. That is the "official vision" on how to create virtual environments, but virtualenv still exists and thrives, conda has its own environment creation mechanism, etc.
The PyPA has been unable to create the "cohesive vision
" that Hagen
is asking for, though; Antoine Pitrou said:
On the contrary, the PyPA seems to openly go for an ecosystem of scattered [utilities] with partially overlapping goals (not to mention the diversity of configuration schemes that goes with it).
Bootstrapping a Python project always leads to looking around the Internet for clues as to "the idiomatic way", only to find out that there is none and that you have to make a half-informed choice between multiple possibilities (not to mention multiple documentation sources emphasizing different practices).
And I say that as a Python core developer. I have no idea how frustrating it might be for the occasional Python developer who was hoping to do the Right Thing without hassles.
Moore agreed:
"We've tried (multiple times - pipenv, poetry, conda, …) but nothing
ever pleased enough people to become 'the definitive solution'.
" He
was not at all sure that the council could successfully just step in and choose
a packaging mechanism to back. "Would that work? Do they even have that
type of
authority? I don't know.
"
In a reply to Hagen, H. Vetinari said that Python and Rust are different:
Python has a way, way, WAY harder context here than rust (where dependencies are mostly mono-lingual and can be built with the same build system, which conveniently also comes with a uniform compiler everywhere), since it exists as a glue language to just about everything. So without a comprehensive and cross-platform answer how to deal with non-python artefacts (building, distribution) that live in python projects - anything from C/C++/Fortran/Rust/Java/NodeJS/CUDA… - a cargo-like experience is very far away.
Conda?
Vetinari pointed to Poetry as a well-loved option because of its user experience (UX), but noted that it lacks a way to handle non-Python dependencies, as do all of the wheel-based package tools. There is an alternative in the form of conda, however:
Conda/mamba has the most complete (though certainly not perfect) answer to this set of problems, but doesn't get much recognition (especially on DPO [discuss.python.org]), for reasons that elude me - even though it's effectively been offered for that purpose.
There are a number of problems that people have with conda as the
"universal" solution, however. Moore complained
about the user experience: "[...] there's a non-trivial subset of users
who find the conda UX unpleasant to the point of not being willing to use
it
". He is one of those people, but he knows of "many others, in
various fields and with various levels of experience
" who feel similarly.
But Vetinari is
not concerned about the tool's user experience: "I think the UX can
be polished
". There are more important aspects to conda, because
"it has enough abstraction power to deal with all sorts of non-python
dependencies in a way that runs stably (i.e. no random ABI divergences and
crashes between packages)
". Cannon cautioned
about turning the discussion into "conda versus the PyPA approach". "As
someone who has to work with both worlds from a tooling perspective as
their job I can tell you no one has nailed the packaging problem
perfectly.
"
There are enough problems with conda's approach that some users "are not
willing to accept the trade-offs that using conda would involve
", Moore
said.
The question in his mind is whether conda would be made a part of the "unified
packaging vision" or not; if it is, "it needs to change, probably quite
significantly (to meet the needs of those people who currently don't use
it)
". If not, it can remain as a viable alternative option.
Pitrou wondered
what those needs were, beyond the user-experience issues, which are
"largely irrelevant
". Moore disagreed
that the user experience was irrelevant ("I know of people who were
learning Python who would have given up if they had to use conda
"), but
said that other concerns were more important. The "most glaring
example
" is that conda needs to manage the Python executable, so it
cannot work for those who want to install their own Python, built from the
source code at
python.org, say, or provided by a Linux distribution or the Windows
Store. He is not necessarily advocating for the unified vision:
Personally, I'm largely happy with the direction things are going in under the PyPA (on the understanding that progress is frustratingly, even glacially, slow :( ), and having conda be a separate ecosystem for people who prefer/need it.
Dower, who was the first to respond to Cannon's nomination message, clarified
that he was looking beyond just a "packaging vision" from the steering
council, "but a singular vision for growing, developing and supporting
the Python user base that includes packaging (and education, and
documentation, and outreach, etc.)
". He was hoping that the council
might provide such a thing but is realizing that is not likely; he does
note that the Anaconda
distribution, which is where conda comes from, is already doing much of
that:
Right now, honestly, Anaconda is doing it best, giving their users multiple tools, docs, guidance, etc. and actively developing new ways to use Python. Meanwhile, python-dev is looking like mere caretakers of a GitHub repository, and PyPA is trying to put out fires and reconcile massive divergence between ideas that became implementations because the discussions were too hard. (And before anyone takes offence, I am definitely putting myself in the "caretaker" camp. I have no affiliation with Anaconda, and just get to watch on from the outside while they do all the cool stuff.) I hope we can Discourse our way out of it into something with a bit more focused momentum, but it feels unlikely.
No one directly responded to Dower's rather disheartening outlook, but the
"discoursing" certainly continued.
Ofek Lev suggested
that Hatch "is well
positioned to provide this unified experience since it is already pretty
much there
", though there are two areas that need work. There needs to
be a way to distribute Python, Hatch, and its dependencies in a single
binary, but there also is a need to standardize the Python "lock file"
format.
A lock file in this context specifies the versions of dependencies required
for a particular project. Cannon described
some of
the history, starting with the rejected PEP 665 ("A file format
to list Python dependencies for reproducibility of an application") that he
co-authored. He is working on mousebender as
"an opinionated PoC [proof of concept] for wheels-only lock/pinned
dependency files
" with an eye toward deriving a lock-file standard from
that work.
But there are also people who want to be able to simply write Python scripts, with dependencies from the Python Package Index (PyPI) that can be installed with pip, but that are not necessarily going to be long-lived projects, Moore said. Imposing the overhead of creating a project with a lock file and so on is beyond what that user needs or wants to do. He pointed to a 2018 mailing list post from Nathaniel Smith outlining the lifecycle of a Python program, which described the use case he is concerned about:
I think Python tooling currently focuses mostly on the "Reusable Library" section of this, with some help for the rest of section 3. But we tend to ignore sections 1 ("Beginner") and 2 ("Sharing with others"). Those two early phases are where people have "a bunch of scripts in a Work directory" and where a formal project directory is more overhead than they want. And as a sidenote, I think that referring to stage 1 as "beginner" is misleading - I've been using Python for years and I still find that most of my work is in this category.[...] I'd like a "unified vision" to cover this use case as well, and not just the "create a Python project" workflow.
Some of what Moore is looking for can be handled by Hatch, Lev said, or by using Pyflow, as pointed out by Weverton Marques. Other tools were mentioned as well. But, of course, a big part of the Python packaging problem is this proliferation of tools.
So the conversation soon turned toward how pip and other PyPA tools could work better with the conda world—and vice versa. There are some difficult problems in trying to reduce or eliminate the rather large impedance mismatch between the two. But that is the subject of the next installment in what looks likely to be at least a couple more articles on figuring out Python packaging directions for the future.
X clients and byte swapping
While there are still systems with both byte orders, little-endian has largely "won" the battle at this point since the vast majority of today's systems store data with the least-significant byte first (at the lowest address). But when the X11 protocol was developed in the 1980s, there were lots of systems of each byte order, so the X protocol allowed either order and the server (display side) would swap the bytes to its byte order as needed. Over time, the code for swapping data in the messages, which was written in a more-trusting era, has bit-rotted so that it is now a largely untested attack surface that is nearly always unused. Peter Hutterer has been doing some work to stop using that code by default, both in upstream X.org code and in downstream Fedora.
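As a reminder of what that swapping involves, here is a minimal sketch (an illustration only, not the X server's code) of converting a 32-bit value between byte orders, which is what the swapped request handlers must do for every integer field received from an other-endian client:

#include <stdint.h>

/* Swap the byte order of a 32-bit protocol value. */
static uint32_t swap32(uint32_t v)
{
        return ((v & 0x000000ffU) << 24) |
               ((v & 0x0000ff00U) <<  8) |
               ((v & 0x00ff0000U) >>  8) |
               ((v & 0xff000000U) >> 24);
}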
A Fedora 38 change proposal to disable support for byte-swapped clients by default in the X server was posted in mid-December. It is owned by Hutterer, who proposed adopting the work he was doing for the X.org server into Fedora. At the time, it was unclear whether the upstream changes would land in time, so the Fedora proposal was contingent on that happening. It turns out that Hutterer merged the changes on January 5, so that would not be an impediment to Fedora being an early adopter of the feature.
The justification for making the change is that the byte-swapping code in
the X server (and Xwayland) is largely untested these days because of the
rarity of having different endian clients and servers. In fact, the
proposal claims that "a vanishingly small number of users run clients
and X servers on
different machines, let alone on different machines with different
endianess
". The proposal points to a 2014
X.org security advisory as an example of the kinds of problems that may
be lurking in the SProc*() (i.e. byte-swapped) versions of the
message-handling functions. The benefit:
This change removes a large potential attack surface that can be used by malicious clients to expose memory locations, crash the X server and/or do other entertaining things malicious programs like to do.
With that change in place, clients with a different byte order "will
simply fail, in a [manner] similar to failing to
authenticate against an X server
".
It should be noted that this only changes the default
behavior to reject clients with the other byte order; a configuration
option for the X.org server can be used to restore the existing behavior:
Section "ServerFlags" Option "AllowSwappedClients" "on" EndSection
The picture for Wayland users is a bit more complicated because the Wayland compositor starts the Xwayland server for X clients. Xwayland needs the +byteswappedclients option in order to support clients with the other byte order, so there needs to be a way to get the Wayland compositor to pass that option down. At the time of the proposal, that was not available for GNOME's mutter and other Wayland compositors, though bugs have been filed to add support. More recently, Hutterer said that the feature was added to mutter for GNOME 43; it can be permanently set as follows:
$ gsettings set org.gnome.mutter.wayland xwayland-allow-byte-swapped-clients true
In the Fedora devel mailing thread, Peter Boy was concerned that users would not be able to figure out what went wrong when they try to connect from, say, a big-endian s390 client to a little-endian server. The original error message, "Swapped clients prohibited", was deemed cryptic by Vit Ondruch and Björn Persson. Hutterer duly changed it to at least refer to "byte-swapped", as had been suggested, though David Malcolm and Boy would like to see even more detail than that.
Another concern was for those who are doing native development on s390
systems and are
using SSH with X forwarding from Fedora laptops to access those systems.
Niklas Schnelle said
that he is one of the maintainers of the s390 kernel, does much of his
work from Fedora, and frequently uses X clients from the s390 system. He
suspects that other developers who do not work on the kernel are in the
same boat. So he suggested a rethink because it will be "a significant
hassle
" for those people; beyond that, the s390 usage means that the
byte-swapping code is being tested more than the proposal would indicate,
he said.
Elizabeth K. Joseph
echoed
those concerns and suggested that more testing of the
byte-swapping code could perhaps help. Demi Marie Obenour agreed
that more testing would be good, starting with fuzz testing "X server's
byte-swapping and input validation routines
". But Stephen Smoogen
pointed
out that part of the problem may well be that the X developers do not have
access to big-endian systems to even be able to test with.
Neal Gompa asked
whether it made sense to simply force the X protocol to be little-endian
everywhere. Hutterer essentially ruled that
out, saying that it would take a lot of work on the client side to add
the byte-swapping code and
still require a huge amount of testing to ensure that everything is working
correctly. But there's more to it than that: "And it's not a matter of
'working' but 'safe against a malicious client
sending bad messages'. That's a completely different ballpark.
"
Gompa then asked
about Wayland: "Please tell me the Wayland protocol doesn't do
this?
" Hutterer said that Wayland
requires integers to be in the byte order of the compositor,
though there are some image formats that are defined as
little-endian-only; "the wayland protocol doesn't [do] this :)
".
The Fedora Engineering Steering Committee (FESCo) voted on the change proposal early in January. It adopted the proposal unanimously. While there was no mention of s390, there were further concerns about the error message expressed in the FESCo issue for the vote, so Hutterer changed it again (to "Prohibited client endianess, see the Xserver man page"). He also pointed out that the message may not be reaching users, however:
That error message is in general a bit of an issue. There's a facility for returning it but in the end it's just unceremoniously and unconditionally dumped to stderr by libxcb. There's no way for an app to hook into it.
When Hutterer landed the change in X.org, he announced it on his blog, which LWN posted to the daily page. That sparked a fairly lively comment thread, which contained several complaints about the change. It is clear that it will break some people's workflows—how many people it affects is not at all clear, though—at least until they can reconfigure their X server or Wayland compositor. But it is also a clear reduction in the exposed attack surface for the vast majority of users who will never need, or miss, that code. That seems like a reasonable tradeoff in the grand scheme.
Kernel code on the chopping block
Code that is added to the kernel can stay there for a long time; there is code in current kernels that has been present for over 30 years. Nothing is forever, though. The kernel development community is currently discussing the removal of two architectures and one filesystem, all of which seem to have mostly fallen out of use. But, as we will see, removal of code from the kernel is not easy and is subject to reconsideration even after it happens.
Not-so-SuperH
Hitachi's SuperH architecture dates back to the 1990s; it is a 32-bit processor that was intended for embedded applications. Support for SuperH was first added (in minimal form) to the kernel in the 2.3.16 release in September 1999. While it was popular for some years after its introduction, SuperH was eventually eclipsed by Arm. SuperH processors are still produced by Renesas, but they are not recommended for use in new designs.
Around 2015 there was an attempt to reinvigorate this architecture, inspired by the fact that the patents covering its design were expiring. The result was a design for a processor called the J-Core J2, which was intended to be built as entirely open hardware. J2 did not really take off, though, perhaps because interest in 32-bit processors was fading quickly in those times; the J-Core news page was last updated in 2016. This message from Rob Landley suggests that this project is not entirely dead. Still, despite the appeal of an entirely open processor, it does not appear that J-Core will be the salvation of the SuperH architecture.
More recently, Christoph Hellwig posted a patch series removing support for the SuperH architecture from the kernel, saying:
arch/sh has been a long drag because it supports a lot of SOCs, and most of them haven't even been converted to device tree infrastructure. These SOCs are generally obsolete as well, and all of the support has been barely maintained for almost 10 years, and not at all for more than 1 year.
John Paul Adrian Glaubitz objected
to this removal, noting that he is still maintaining the Debian SuperH
port. "It's a bit disappointing that people keep hammering on it. It
works fine for me.
" Geert Uytterhoeven added
that the problem with SuperH isn't really a lack of users or even
developers creating patches, it's the fact that there is no maintainer to
apply those patches.
A likely outcome to this discussion is that Glaubitz, along with Landley, will try to take over maintenance of the kernel's SuperH port. They will have their work cut out for them; among other things, SuperH needs a proper conversion to use device trees and the removal of support for many obsolete CPUs and boards. But, if the new maintainers can get SuperH back on track, its days in the kernel may not be numbered quite yet.
The last sinking of the Itanic
In the late 1990s, there was a lot of talk of Intel's new processor architecture, which was known by the codename "Merced" at the time. This CPU was going to be the future of computing; it would be so much better that it would quickly dethrone x86. But details about Merced were scarce, and it was not at all clear that there would be enough information available to create a Linux port for this architecture. There were widespread worries that, by denying access to information on Merced, Intel and Microsoft would sideline Linux just as it was beginning to gain some real commercial momentum.
Needless to say, that is not what happened. Intel, instead, became an early investor in Red Hat in 1998. When Merced, later branded "Itanium", hit the market, Linux had first-class support for it. The Itanium (ia64) architecture first showed up in the 2.3.43 development kernel release in early 2000. Working Itanium processors took somewhat longer and, by the time they were widely available, they had long since been surpassed by x86-64, initially developed by AMD. Itanium was never a successful product line and no Itanium CPUs are manufactured now.
Itanium support remains in the kernel, but it's not clear how much longer that will last. The most recent discussion about its future happened as a result of this patch from Mateusz Guzik, who had tracked down a locking performance problem that, seemingly, originated in a cpu_relax() call that is only needed by the Itanium architecture. That led Tony Luck, once a maintainer of that architecture in the kernel, to ask:
Is it time yet for:
$ git rm -r arch/ia64
Ard Biesheuvel concurred, noting that he is unable to do any sort of testing to ensure that EFI changes work on the Itanium architecture. On the other hand, Jessica Clarke and Glaubitz both pointed out that Debian has a working Itanium port; Glaubitz added that Gentoo also supports the architecture. Luck suggested that keeping these systems going was a wasted effort:
Are there people actually running production systems on ia64 that also update to v6.x kernels?
If so, why? Just scrap the machine and replace with almost anything else. You'll cover the cost of the new machine in short order with the savings on your power bill.
The answer was "hobbyists", who like keeping such systems running. Biesheuvel sympathized with the hobbyists, but asked:
However, the question is not how you or I choose to spend (or waste) their time. The question is whether it is reasonable *as a community* to insist that everyone who contributes a cross-architecture change also has to ensure that obsolete architectures such as ia64 or alpha are not left behind.
He pointed to the original topic of the thread as an example: locking was being slowed as the result of an Itanium-specific problem, so developers need to find a way to speed up the locking that doesn't introduce regressions on a processor that nobody has access to. Or perhaps not: Luck suggested how such a fix could be structured, but Linus Torvalds answered that it simply wasn't worth the effort.
Nobody has yet said that Itanium support can actually be removed, so it is not clear when that will happen. Sooner seems more likely than later, though. As developers stop even their minimal support for this architecture, it will fade into an increasingly unmaintainable state. Support for this unloved architecture has been in the kernel for 23 years; it's not likely to stay for many more.
JFS
Once upon a time, journaling was the hot new feature for filesystems, and Linux didn't have any filesystems that supported it; system administrators had to be far more familiar with the fsck tool than is normally required now. That hole was first filled by ReiserFS, which was merged for the 2.4.0.4 release in early February 2002. But it was quickly followed by JFS, which entered the kernel less than three weeks later. JFS came out of IBM and was the main filesystem for the AIX operating system; it, too, supported journaling.
JFS never became one of the top Linux filesystems, though it did pop up here and there; the MythTV project still suggests it in various places in its documentation. But JFS has not seen any real development for years and it's not clear that there are many users left. Hellwig, pointing out that JFS will slow down the folio conversion effort, asked whether it should just be removed.
This filesystem seems to have few defenders. Dave Kleikamp, who originally
added it in 2002 and who is still listed as its maintainer, responded:
"Obviously, I haven't put much effort into JFS in a long time and I
would not miss it if it were to be removed
". No patches doing the
removal have been posted but, in the absence of an impassioned defense of
JFS from somewhere, one can expect it to, at a minimum, be marked as
orphaned in the near future.
The zombie pktcdvd driver
Packet writing
is a protocol that can be used for the writing of recordable optical media
(younger readers should ask their parents about optical media; it used to
be important). The kernel
gained support for packet writing in late 2004. As of 2016, though, the
pktcdvd
driver was unmaintained and unloved; the regular block drivers had long
since gained the ability to drive all optical devices, and the
packet-writing driver, it seemed, was no longer needed. It was marked deprecated in
the 4.10 release with a promise that it would be removed "after a
release or two
".
A full 33 releases later, this driver was actually removed during the 6.2 merge window. That, however, led to a complaint from Pali Rohár, who said:
This 6.2-rc1 release is breaking support for CD-RW, DVD-RW, DVD+RW and partially also DVD-RAM medias because of pktcdvd driver removal from the kernel in this release. Driver is still used and userspace tools for it are part of the udftools project, which is still under active maintenance. More people already informed me about this "surprise".
Both Torvalds
and block subsystem maintainer Jens
Axboe questioned the need for this driver, but it became clear that it
still provides some functionality (specifically ensuring that all writes
are exactly 32KB in size) that projects like udftools need. So pktcdvd came back.
There was some talk of putting the necessary logic into the SCSI CD driver,
but, as Torvalds noted,
"nobody is going to be motivated to do any development in this area, and
the best we can do is probably to just keep it limping along
". If, at
some point, this driver fails to limp properly, developers may start
considering its removal again.
Hiding a process's executable from itself
Back in 2019, a high-profile container vulnerability led to the adoption of some complex workarounds and a frenzy of patching. The immediate problem was fixed, but the incident was severe enough that security-conscious developers have continued to look for ways to prevent similar vulnerabilities in the future. This patch set from Giuseppe Scrivano takes a rather simpler approach to the problem.
The 2019 incident, which came to be known as CVE-2019-5736, involved a sequence of steps that culminated in the overwriting of the runc container-runtime binary from within a container. That binary should not have even been visible within the container, much less writable, but such obstacles look like challenges to a determined attacker. In this case, the attack was able to gain access to this binary via /proc/self/exe, which always refers to the binary executable for the current process.
Specifically, the attack opens the runc process's /proc/self/exe file, creating a read-only file descriptor — inside the container — for the target binary, which lives outside that container. Once runc exits, the attacker is able to reopen that file descriptor for write access; that descriptor can subsequently be used to overwrite the runc binary. Since runc is run with privilege outside of the container runtime, this becomes a compromise of the host as a whole; see the above-linked article for details.
This vulnerability was closed by having runc copy its binary image into a memfd area and sealing it; control is then passed to that image before entering the container. Sealing prevents modifying the image, but even if that protection fails, the container is running from an independent copy of the binary that will never be used again, so overwriting it is no longer useful. It is a bit of an elaborate workaround, but it plugged the hole at the time.
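For illustration, the core of that workaround can be sketched as follows; this is a simplified example built on the standard memfd_create() and file-sealing APIs, not runc's actual code:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

/* Copy our own executable into an anonymous, sealed memfd; the caller
 * would then fexecve() the returned descriptor before entering the
 * container, so the on-disk binary is never referenced again. */
static int clone_self_exe(void)
{
        int src = open("/proc/self/exe", O_RDONLY | O_CLOEXEC);
        int copy = memfd_create("self-copy", MFD_CLOEXEC | MFD_ALLOW_SEALING);
        struct stat st;
        off_t off = 0;

        if (src < 0 || copy < 0 || fstat(src, &st) < 0)
                return -1;
        if (sendfile(copy, src, &off, st.st_size) != st.st_size)
                return -1;
        close(src);

        /* Seal the copy so that it can no longer be resized or written to. */
        if (fcntl(copy, F_ADD_SEALS,
                  F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_WRITE | F_SEAL_SEAL) < 0)
                return -1;

        return copy;
}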
Scrivano is proposing a different solution to the problem: simply disable access to /proc/self/exe as a way of blocking image-overwrite attacks of this type. Specifically, the patch set adds a new prctl() command called PR_SET_HIDE_SELF_EXE that can be used to disable access to /proc/self/exe. Once this option has been enabled, any attempt to open that file from within the owning process will fail with an ENOENT error — as if the file simply no longer existed at all. Enabling this behavior is a one-way operation; once it has been turned on, it cannot be disabled until the next execve() call, which resets the option to the disabled state.
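Assuming the interface described in the patch set, a program could request the new behavior with something like the following sketch. PR_SET_HIDE_SELF_EXE is not defined by current kernel headers, so the value used here is only a placeholder:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/prctl.h>

/* Placeholder definition: the real value comes from the patch set itself. */
#ifndef PR_SET_HIDE_SELF_EXE
#define PR_SET_HIDE_SELF_EXE 65
#endif

int main(void)
{
        /* Ask a (patched) kernel to hide /proc/self/exe from this process;
         * an unpatched kernel will fail this call with EINVAL. */
        if (prctl(PR_SET_HIDE_SELF_EXE, 1, 0, 0, 0) != 0)
                perror("prctl");

        /* With the option enabled, opening our own binary fails with ENOENT. */
        if (open("/proc/self/exe", O_RDONLY) < 0 && errno == ENOENT)
                printf("/proc/self/exe is now hidden\n");

        return 0;
}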
This behavior is necessarily opt-in; any program that wants to have its executable image protected from access in this way will have to request it specifically. The intent, though, is that this simple call will be able to replace the more complicated workarounds that are needed to prevent this sort of attack today. A prctl() is a small price to pay if it eliminates the need to create a new copy of the executable image every time a new container is launched.
This new option is thus a simple way of blocking this specific attack, but it leads to some related questions. Hiding the container-runtime binary seems like a less satisfying solution than ensuring that this binary cannot be modified regardless of whether it is visible within a container. It seems like it closes off one specific path to a compromise without addressing the underlying problem.
More to the point, perhaps, is the question of just how many operations the kernel developers would like to add to prevent access to specific resources that might possibly be misused. There is, conceivably, no end to the files (under /proc and beyond) that might be useful to an attacker who is determined to take over the system. Adding a new prctl() option — and the necessary kernel code to implement it — for every such file could lead to a mess in the long term. There comes a point where it might make more sense to use a security module to implement this sort of policy.
If the development community feels that way, though, it's not saying so — or much of anything else. The patch set has been posted three times and has not received any substantive comments any of those times. There will, presumably, need to be an eventual discussion that decides whether this type of mechanism is the best way of protecting systems against attacks using /proc/self/exe. For the moment, though, it would appear that this change is simply waiting for the wider community to take notice of it.
Nolibc: a minimal C-library replacement shipped with the kernel
The kernel project does not host much user-space code in its repository, but there are exceptions. One of those, currently found in the tools/include/nolibc directory, has only been present since the 5.1 release. The nolibc project aims to provide minimal C-library emulation for small, low-level workloads. Read on for an overview of nolibc, its history, and future direction written by its principal contributor.
The nolibc component actually made a discreet entry into the 5.0 kernel
as part of the RCU torture-test suite ("rcutorture"), via commit 66b6f755ad45
("rcutorture:
Import a copy of nolibc"). This happened after Paul McKenney asked:
"Does anyone do kernel-only deployments, for example, setting up an
embedded device having a Linux kernel and absolutely no userspace
whatsoever?
"
He went on:
The mkinitramfs approach results in about 40MB of initrd, and dracut about 10MB. Most of this is completely useless for rcutorture, which isn't interested in mounting filesystems, opening devices, and almost all of the other interesting things that mkinitramfs and dracut enable.
Those who know me will not be at all surprised to learn that I went overboard making the resulting initrd as small as possible. I started by throwing out everything not absolutely needed by the dash and sleep binaries, which got me down to about 2.5MB, 1.8MB of which was libc.
This description felt familiar to me, since I have been solving similar problems for a long time. The end result (so far) is nolibc — a minimal C library for times when a system is booted only to run a single tiny program.
A bit of history: when size matters
For 25 years, I have been building single-floppy-based emergency routers, firewalls, and traffic generators containing both a kernel and a root filesystem. I later moved on to credit-card-sized live CDs and, nowadays, embedding tiny interactive shells in all of my kernels to help better recover from boot issues.
All of these had in common a small program called preinit that is in charge of creating /dev entries, mounting /proc, optionally mounting a RAM-based filesystem, and loading some extra modules before executing init. The characteristic that all my kernels have in common is that all of their modules are packaged within the kernel's builtin initial ramfs, reserving the initial RAMdisk (initrd) for the root filesystem. When the kernel boots, it pivots the root and boot mount points to make the pre-packaged modules appear at their final location, making the root filesystem independent of the kernel version used. This is extremely convenient when working from flash or network boot, since you can easily swap kernels without ever touching the filesystem.
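A stripped-down preinit along those lines might look something like the following sketch; it is a simplified illustration rather than the actual program, with error handling, module loading, and the pivoting of the mount points all left out:

#include <sys/mount.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <unistd.h>

/* A tiny preinit: create a couple of device nodes, mount /proc, then
 * hand control over to the real init. */
int main(void)
{
        mknod("/dev/console", S_IFCHR | 0600, makedev(5, 1));
        mknod("/dev/null",    S_IFCHR | 0666, makedev(1, 3));
        mount("proc", "/proc", "proc", 0, NULL);

        execl("/sbin/init", "init", (char *)NULL);
        return 1;       /* only reached if the exec failed */
}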
Because it had to fit into 800KB (or smaller) kernels, the preinit code initially had to be minimal; one of the approaches consisted of strictly avoiding stdio or high-level C-library functions and reusing code and data as much as possible (such as merging string tails). This resulted in code that could be statically built and remain small (less than 1KB for the original floppy version) thanks to its small dependencies.
As it evolved, though, the resulting binary size was often dominated by the C-library initialization code. Switching to diet libc helped but, in 2010, it was not evolving anymore and still had some limitations. I replaced it with the slightly larger — but more active — uClibc, with a bunch of functions replaced by local alternatives to keep the size low (e.g. 640 bytes were saved on memmove() and strcpy()), and used this until 2016.
uClibc, in turn, started to show some more annoying limitations, which became a concern when trying to port code to architectures that it did not yet support (such as aarch64), and its maintenance was falling off. I started to consider using klibc, which also had the advantage of being developed and maintained by kernel developers. But, while klibc was much more modern, portable, and closer to my needs, it still had the same inconvenience of having to be built separately before being linked into the final executable.
It then started to appear obvious that, if the preinit code had so little dependency on the C library, it could make sense to just define the system calls directly and be done with it. Thus, in January 2017, some system-call definitions were moved to macros. With this new ability to build without relying on any C library at all, a natural-sounding name quickly emerged for this project: "nolibc".
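The principle can be illustrated with a simplified x86-64 version of such a macro; nolibc's real definitions cover all argument counts and every supported architecture, but the shape is the same:

#include <asm/unistd.h>      /* for the __NR_* system-call numbers */

/* A direct system-call wrapper for x86-64: load the call number and the
 * argument into the registers mandated by the ABI, issue the "syscall"
 * instruction, and hand back the raw return value (a negative errno
 * value on failure). */
#define my_syscall1(num, arg1)                                        \
({                                                                    \
        long _ret;                                                    \
        register long _num  asm("rax") = (num);                       \
        register long _arg1 asm("rdi") = (long)(arg1);                \
                                                                      \
        asm volatile ("syscall"                                       \
                      : "=a"(_ret)                                    \
                      : "r"(_arg1), "0"(_num)                         \
                      : "rcx", "r11", "memory", "cc");                \
        _ret;                                                         \
})

/* Example: a close() system call needs nothing more than this. */
static long sys_close(int fd)
{
        return my_syscall1(__NR_close, fd);
}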
The first focus was on being able to build existing code as seamlessly as possible, with either nolibc or a regular C library, because ifdefs in the code are painful to deal with. This includes the ability to bypass the "include" block when nolibc was already included on the command line, and not to depend on any external files that would require build-process updates. Because of this, it was decided that only macros and static definitions would be used, which imposes some limitations that we'll cover below.
One nice advantage for quick tests is that it became possible to include nolibc.h directly from the compiler's command line, and to rely on the NOLIBC definition coming from this file to avoid including the normal C-library headers with a construct like:
#ifndef NOLIBC
#include <stdint.h>
#include <stdio.h>
/* ... */
#endif
This allows a program to be built with a regular C library on architectures lacking nolibc support while making it easy to switch to nolibc when it is available. This is how rcutorture currently uses it.
Another point to note is that, while the resulting binary is always statically linked with nolibc (hence it is self-contained and doesn't require any shared library to be installed), it is still usually smaller than with glibc, with either static or dynamic linking:
$ size init-glibc-static init-glibc init-nolibc
   text    data     bss     dec     hex filename
 707520   22432   26848  756800   b8c40 init-glibc-static
  16579     848   19200   36627    8f13 init-glibc
  13398       0   23016   36414    8e3e init-nolibc
Given that this binary is present in all my kernels, my immediate next focus was supporting the various architectures that I was routinely using: i386, x86_64, armv7, aarch64, and mips. This was addressed by providing architecture-specific setup code for the platforms that I could test. It appeared that a few system calls differ between architectures (e.g. select() or poll() variants exist on some architectures), and even some structures can differ, like the struct stat passed to the stat() system call. It was thus decided that the required ifdefs would all be inside the nolibc code, with the occasional wrapper used to emulate generic calls, in order to preserve the ease of use.
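As an example of the kind of wrapper involved, a select() emulation for architectures that only provide pselect6() might look roughly like the sketch below (simplified; the actual nolibc code handles more cases, and my_syscall5()/my_syscall6() follow the same pattern as the macro shown earlier):

/* Emulate select() on top of pselect6() when __NR_select is absent. */
static long sys_select(int nfds, fd_set *rfds, fd_set *wfds,
                       fd_set *efds, struct timeval *timeout)
{
#ifdef __NR_select
        return my_syscall5(__NR_select, nfds, rfds, wfds, efds, timeout);
#else
        struct timespec t, *tp = NULL;

        if (timeout) {
                /* pselect6() wants a timespec rather than a timeval. */
                t.tv_sec  = timeout->tv_sec;
                t.tv_nsec = timeout->tv_usec * 1000;
                tp = &t;
        }
        /* The last argument is the signal mask, which we do not use. */
        return my_syscall6(__NR_pselect6, nfds, rfds, wfds, efds, tp, NULL);
#endif
}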
The painful errno
A last challenging point was the handling of errno. In the first attempt, the preinit loader used a negative system-call return value to carry an error-return code. This was convenient, but doing so ruins portability because POSIX-compliant programs have to retrieve the error code from the global errno variable, and only expect system calls to return -1 on error. (I personally think that this original design is a mistake that complicates everything but it's not going to change and we have to adapt to it).
The problem with errno is that it's expected to be a global variable, which implies that some code has to be linked between nolibc and the rest of the program. That would remove a significant part of the ease of use, so a trade-off was found: since the vast majority of programs using nolibc will consist of a single file, let's just declare errno static. It will be visible only from the files that include nolibc, and will even be eliminated by the compiler if it's never used.
The downside of this approach is that, if a program is made of multiple files, each of them will see a different errno. The need to support multi-file programs is rare with nolibc, and programs relying on a value of errno collected from another file are even less common, so this was considered as an acceptable trade-off. For programs that don't care and really need to be the smallest possible, it is possible to remove all assignments to errno by defining NOLIBC_IGNORE_ERRNO.
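The resulting arrangement can be sketched as follows; this is a simplified illustration of the approach, not nolibc's exact code:

/* Each file that includes nolibc gets its own, file-local errno (unless
 * NOLIBC_IGNORE_ERRNO is defined), and the user-visible wrappers turn
 * the kernel's "-errno" return values into the POSIX convention of
 * returning -1 and setting errno. */
#ifndef NOLIBC_IGNORE_ERRNO
static int errno;
#define SET_ERRNO(v) do { errno = (v); } while (0)
#else
#define SET_ERRNO(v) do { } while (0)
#endif

static int close(int fd)
{
        int ret = sys_close(fd);        /* raw syscall, returns -errno on error */

        if (ret < 0) {
                SET_ERRNO(-ret);
                ret = -1;
        }
        return ret;
}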
A proposal
Based on the elements above, it looked like nolibc was the perfect fit for rcutorture. I sent the proposal, showing what it could achieve: a version of the sleep program in a 664-byte binary. (Note that, since then, some versions of binutils have started to add a section called ".note.gnu.property", which stores information about the program's use of hardening extensions like shadow stacks, that pushes the code 4KB apart).
McKenney expressed interest in this approach, so we tried together to port his existing program to nolibc, which resulted in faster builds and much smaller images for his tests (which is particularly interesting in certain situations, like when they're booting from the network via TFTP for example). With this change, rcutorture automatically detects whether nolibc supports the architecture and uses it in that case; otherwise it falls back to the regular static build.
The last problem I was seeing is that, these days, I'm more than full-time on HAProxy and, while I continue to quickly glance over all linux-kernel messages that land in my inbox, it has become extremely difficult for me to reserve time on certain schedules, such as being ready for merge windows. I didn't want to become a bottleneck for the project, should it be merged into the kernel. McKenney offered to host it as part of his -rcu tree and to take care of merge windows; that sounded like a great deal for both of us and that initial work was quickly submitted for inclusion.
Contributions and improvements
Since the project was merged, there have been several discussions about how to improve it for other use cases, and how to better organize it. Ingo Molnar suggested pulling it out of the RCU subdirectory to make it easier to use for other projects. The code was duly moved under tools/.
Support for RISC-V was added by Pranith Kumar in 5.2; s390 support was posted by Sven Schnelle during the 6.2-rc period and was merged for a future version. Ammar Faizi found some interesting ABI issues and inconsistencies that were addressed in 5.17; that made Borislav Petkov dig into glibc and the x86 psABI (processor-specific ABI — basically the calling convention that makes sure that all programs built for a given platform are interoperable regardless of the compiler and libraries used). After this work, the psABI spec was updated to reflect what glibc actually does.
At this point, it can be said that the project has taken off, as it even received a tiny dynamic-memory allocator that is sufficient to run strdup() to keep a copy of a command-line argument, for example. Signal handling is also currently being discussed.
Splitting into multiple files
It appeared obvious that the single-file approach wasn't the most convenient for contributors, and that having to ifdef-out regular include files to help with portability was a bit cumbersome. One challenge was to preserve the convenient way of building by including nolibc.h directly from the source directory when the compiler provides its own kernel headers, and to provide an installable, architecture-specific layout. This problem was addressed in 5.19 by having this nolibc.h file continue to include all other ones; it also defines NOLIBC, which is used by older programs to avoid loading the standard system headers.
Now the file layout looks like this:
-+- nolibc.h          (includes all other ones)
 +- arch-$ARCH.h      (only in the source tree)
 +- arch.h            (wrapper or one of the above)
 +- ctype.h
 +- errno.h
 +- signal.h
 +- std.h
 +- stdio.h
 +- stdlib.h
 +- string.h
 +- sys.h
 +- time.h
 +- types.h
 +- unistd.h
The arch.h file in the source tree checks the target architecture and includes the corresponding file. In the installation tree, it is simply replaced by one of these files. In the source tree, it's only referenced by nolibc.h. This new approach was already confirmed to be easier to deal with, as the s390 patches touched few files. This clean patch set may serve as an example of how to bring support for a new architecture to nolibc. Basically all that is needed is to add a new arch-specific header file containing the assembly code, referencing it in the arch.h file, and possibly adjusting some of the existing system calls to take care of peculiarities of the new architecture. Then the kernel self tests and rcutorture should be updated (see below).
Two modes of operation
The kernel defines and uses various constants and structures that user space needs to know; these include system-call numbers, ioctl() numbers, structures like struct stat, and so on. This is known as the UAPI, for "user-space API", and it is exposed via kernel headers. These headers are needed for nolibc to be able to communicate properly with the kernel.
There are now two main ways to connect nolibc and the UAPI headers:
- The quick way, which works with a libc-enabled toolchain without requiring any installation. In this case, the UAPI headers found under asm/ and linux/ are provided by the toolchain (possibly the native one). This mode remains compatible with programs that are built directly from the source tree using "-include $TOPDIR/tools/include/nolibc/nolibc.h".
- The clean way that is compatible with bare-metal toolchains such as the ones found on kernel.org, but which requires an installation to gain access to the UAPI headers. This is done with "make headers_install". By combining this with the nolibc headers, we get a small, portable system root that is compatible with a bare-metal compiler for one architecture and enables building a simple source file into a working executable. This is the preferred mode of operation for the kernel self tests, as the target architecture is known and fixed.
Tests
The addition of new architectures showed the urgency of developing some self tests that could validate that the various system calls are properly implemented. One feature of system calls is that, for a single case of success, there are often multiple possible causes of failure, leading to different errno values. While it is not always easy to test them all, a reasonable framework was designed, making the addition of new tests, as much as possible, burden-free.
Tests are classified by categories — "syscall" (system calls) and "stdlib" (standard C library functions) only for now — and are simply numbered in each category. All the effort consists of providing sufficient macros to validate the most common cases. The code is not pretty for certain tests, but the goal is achieved, and the vast majority of them are easy to add.
When the test program is executed, it loops over the tests of the selected category and simply prints the test number, a symbolic name, and "OK" or "FAIL" for each of them depending on the outcome of the test. For example, here are two tests for chdir(), one expecting a success and the other a failure:
CASE_TEST(chdir_dot);  EXPECT_SYSZR(1, chdir("."));                 break;
CASE_TEST(chdir_blah); EXPECT_SYSER(1, chdir("/blah"), -1, ENOENT); break;
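A rough sketch of what an EXPECT_SYSER()-style check could look like is shown below; this is not the actual implementation, and the errors counter is assumed to be maintained by the surrounding test program:

/* Check that "expr" fails with the expected return value and errno; the
 * "cond" argument allows a test to be skipped. */
#define EXPECT_SYSER(cond, expr, expret, experr)                       \
        do {                                                           \
                if (!(cond)) {                                         \
                        printf("SKIPPED\n");                           \
                } else {                                               \
                        long _ret = (expr);                            \
                        if (_ret == (expret) && errno == (experr))     \
                                printf("OK\n");                        \
                        else {                                         \
                                printf("FAIL (ret=%ld errno=%d)\n",    \
                                       _ret, errno);                   \
                                errors++;                              \
                        }                                              \
                }                                                      \
        } while (0)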
Some tests may fail if certain configuration options are not enabled, or if the program is not run with sufficient permissions. This can be overcome by enumerating the tests to run or avoid via the NOLIBC_TEST environment variable. The goal here is to make it simple to adjust the tests to be run with a boot-loader command line. The output consists of OK/FAIL results for each test and a count of total errors.
The Makefile in tools/testing/selftests/nolibc takes care of running the tests, including installing nolibc for the current architecture, putting it into an initramfs, building a kernel equipped with it, and possibly starting the kernel under QEMU. All architecture-specific settings are set with variables; these include the defconfig name, kernel image name, QEMU command line, etc.
The various steps remain separated so that the makefile can be used from a script that would, for example, copy the kernel to a remote machine or to a TFTP server.
$ make -C tools/testing/selftests/nolibc
Supported targets under selftests/nolibc:
  all          call the "run" target below
  help         this help
  sysroot      create the nolibc sysroot here (uses $ARCH)
  nolibc-test  build the executable (uses $CC and $CROSS_COMPILE)
  initramfs    prepare the initramfs with nolibc-test
  defconfig    create a fresh new default config (uses $ARCH)
  kernel       (re)build the kernel with the initramfs (uses $ARCH)
  run          runs the kernel in QEMU after building it (uses $ARCH, $TEST)
  rerun        runs a previously prebuilt kernel in QEMU (uses $ARCH, $TEST)
  clean        clean the sysroot, initramfs, build and output files
  (...)
It is also possible to just create a binary for the local system or a specific architecture without building a kernel (useful when creating new tests).
One delicate aspect of the self tests is that, if they need to build a kernel, they have to be careful about passing all required options to that kernel. It looked desirable here to avoid confusion by using the same variable names as the kernel (ARCH, CROSS_COMPILE, etc.) in order to limit the risk of discrepancies between the kernel that is being built and the executable that is put inside.
As there is no easy way to call the tests from the main makefile, it is not possible to inherit common settings like kernel image name, defconfig, or build options. This was addressed by enumerating all required variables by architecture, and this appears to be the most maintainable approach.
While the tests were initially created in the hope of finding bugs in nolibc's system-call implementations (and a few were indeed found), they also proved useful to find bugs in its standard library functions (e.g. the memcmp() and fd_set implementations were wrong). It is possible that they might, one day, become useful to detect kernel regressions affecting system calls. It's obviously too early for this, as the tests are quite naive and more complete tests already exist in other projects like the Linux Test Project. A more likely situation would be for a new system call being developed with its test in parallel, and the test being used to debug the system call.
Current state, limitations, and future plans
The current state is that small programs can be written, built on the fly, and used. Threads are not implemented, which will limit some of the tests, unless the code uses the clone() system call and does not rely on thread-local storage or thread cancellation. Mark Brown managed to implement some tests for TPIDR2 on arm64 that use threads managed directly inside the test.
Many system calls, including the network-oriented calls and trivial ones like getuid() that nobody has needed, are still not implemented, and the string.h and stdlib.h functions are still limited, but it is expected that most kernel developers will be able to implement what they are missing when needed, so this is not really a problem.
The lack of data storage remains a growing concern, first for errno, then for the environ variable, and finally also for the auxiliary vector (AUXV, which contains some settings like the page size). While all of them could easily be recovered by the program from main(), such limitations were making it more difficult to implement new extensions, and a patch set addressing this was recently submitted.
McKenney and I had discussions about having a more interactive tool, that would allow running a wider variety of tests that could all be packaged together. This tool might include a tiny shell and possibly be able to follow a certain test sequence. But it's unclear yet if that should be part of a low-level self-testing project or if, instead, the focus should remain on making it easier to create new tests that could be orchestrated by other test suites.
A tandem team on the project
I don't know if similar "tandem" teams are common in other subsystems, but I must admit that the way the maintenance effort was split between McKenney and myself is efficient. My time on the nolibc project is dedicated to contribution reviews, bug fixes, and improvements — not much more than what it was before its inclusion into the kernel, thanks to McKenney taking care of everything related to inclusion in his -rcu tree, meeting deadlines, and sending pull requests. For contributors, there may be a perceived small increase in time between the moment a contribution is published and when it appears in a future kernel, if I have to adjust it before approving it, but it would be worse if I had to deal with it by myself.
It took some time and effort to adapt and write the testing infrastructure, but now it's mostly regular maintenance. It seems to me like we're significantly limiting the amount of processing overhead caused by the project on our respective sides, which allows it to be usable in the kernel at a low cost. For McKenney, the benefit is a reduced burden for his testing tools like rcutorture. For me, getting feedback, bug reports, fixes, and improvements for a project that is now more exposed than it used to be, is appreciated.
Brief items
Security
A security audit of Git
The Open Source Technology Improvement Fund has announced the completion of a security audit of the Git source.
For this portion of the research a total of 35 issues were discovered, including 2 critical severity findings and a high severity finding. Additionally, because of this research, a number of potentially catastrophic security bugs were discovered and resolved internally by the git security team.
See the full report for all the details.
Exploiting null-dereferences in the Linux kernel (Project Zero)
The Google Project Zero page shows how to compromise the kernel by using a NULL pointer to repeatedly force an oops and overflow a reference count.
Back when the kernel was able to access userland memory without restriction, and userland programs were still able to map the zero page, there were many easy techniques for exploiting null-deref bugs. However with the introduction of modern exploit mitigations such as SMEP and SMAP, as well as mmap_min_addr preventing unprivileged programs from mmap’ing low addresses, null-deref bugs are generally not considered a security issue in modern kernel versions. This blog post provides an exploit technique demonstrating that treating these bugs as universally innocuous often leads to faulty evaluations of their relevance to security.
This is the sort of vulnerability that the oops-limit patch is meant to block.
Security quote of the week
But most importantly: build security features as if they'll be used against you.— Matthew Garrett
Kernel development
Kernel release status
The current development kernel is 6.2-rc5, released on January 21, one day ahead of the normal schedule in order to avoid a conflict with Linus's wedding anniversary. He said:
Ok, so I thought we were back to normal after the winter holidays at rc4. Now, a week later, I think I was mistaken - we have fairly sizable rc5, so I suspect there was still pent up testing and fixes from people being off.

Anyway, I am expecting to do an rc8 this release regardless, just because we effectively had a lost week or two in the early rc's, so a sizable rc5 doesn't really worry me. I do hope we're done with the release candidates growing, though.
Stable updates: 6.1.8, 5.15.90, 5.10.165, 5.4.230, 4.19.271, and 4.14.304 were all released on January 24.
The return of the Linux Kernel Podcast
After a brief break of ... a dozen years or so ... Jon Masters has announced the return of his kernel podcast:
This time around, I’m not committing to any specific cadence – let’s call it “periodic” (every few weeks). In each episode, I will aim to broadly summarize the latest happenings in the “plumbing” of the Linux kernel, and occasionally related bits of userspace “plumbing” (glibc, systemd, etc.), as well as impactful toolchain changes that enable new features or rebaseline requirements.
Distributions
OpenSUSE Leap 15.3 has reached end of life
Users of the openSUSE Leap 15.3 distribution will want to be looking at moving on; support for that release has come to an end. "The currently maintained stable release is openSUSE Leap 15.4, which will be maintained until around end of 2023 (same lifetime as SLES 15 SP4 regular support)".
Development
A history of the FFmpeg project
Kostya Shishkov has just posted the concluding installment of an extensive history of the FFmpeg project:
See, unlike many people I don’t regard FFmpeg as something unique (in the sense that it’s a project only Fabrice Bellard could create). It was nice to have around and it helped immeasurably but without it something else would fill the niche. There were other people working on similar tasks after all (does anybody remember transcode? or gmerlin?). Hopefully you got an idea on how many talented unsung heroes had been working on FFmpeg and libav over the years.
The full set can be found on this page. (Thanks to Paul Wise).
Zawinski: mozilla.org's 25th anniversary
Jamie Zawinski reminds us that the 25th anniversary of the Netscape open-source announcement — a crucial moment in free-software history — has just passed.
On January 20th, 1998, Netscape laid off a lot of people. One of them would have been me, as my "department", such as it was, had been eliminated, but I ended up momentarily moving from "clienteng" over to the "website" division. For about 48 hours I thought that I might end up writing a webmail product or something.

That, uh, didn't happen.
That announcement was the opening topic on the second-ever LWN.net Weekly Edition as well.
Pandoc 3.0 released
Version 3.0 of the Pandoc document-conversion tool has been released; the list of new features is quite long, including "chunked" HTML output, support for complex figures, and much more.
WINE 8.0 released
Version 8.0 of the WINE Windows compatibility layer has been released. The headline feature appears to be the conversion to PE ("portable executable") modules:
After 4 years of work, the PE conversion is finally complete: all modules can be built in PE format. This is an important milestone on the road to supporting various features such as copy protection, 32-bit applications on 64-bit hosts, Windows debuggers, x86 applications on ARM, etc.
Other changes include WoW64 support (allowing 32-bit modules to call into 64-bit libraries), Print Processor support, improved Direct3D support, and more.
Development quote of the week
"31-bit" is not a typo - PNG defines a "PNG four byte integer", which is limited to the range 0 to 231-1, to defend against the existence of C programmers.— David Buchanan
Miscellaneous
A pair of Free Software Foundation governance changes
The Free Software Foundation has announced a bylaw change requiring a 66% vote by the FSF board for any new or revised copyright licenses. The FSF has also announced an expansion of its board of directors and a call for nominations from among its associate members.
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Calls for Presentations
CFP Deadlines: January 26, 2023 to March 27, 2023
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
February 1 | May 11 | NLUUG Spring Conference | Utrecht, The Netherlands |
February 2 | March 9 | Ceph Days Southern California co-located with SCALE | Pasadena, CA, US |
February 5 | May 10 to May 12 | Open Source Summit North America | Vancouver, Canada |
February 12 | June 28 to June 30 | Embedded Open Source Summit | Prague, Czech Republic |
February 19 | April 16 to April 18 | Cephalocon Amsterdam 2023 co-located with KubeCon EU | Amsterdam, Netherlands |
February 28 | May 17 | Icinga Camp Berlin | Berlin, Germany |
March 1 | April 29 | 19. Augsburger Linux-Infotag | Augsburg, Germany |
March 1 | April 26 to April 28 | Linaro Connect | London, UK |
March 1 | May 10 to May 12 | Linux Security Summit | Vancouver, Canada |
March 1 | May 8 to May 10 | Storage, Filesystem, Memory-Management and BPF Summit | Vancouver, Canada |
March 5 | April 23 to April 26 | foss-north 2023 | Göteborg, Sweden |
March 20 | June 13 to June 15 | Beam Summit 2023 | New York City, US |
March 22 | May 5 | Ceph Days India 2023 | Bengaluru, India |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: January 26, 2023 to March 27, 2023
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
February 4 to February 5 | Free and Open Source Developers’ European Meeting | Brussels, Belgium |
February 7 to February 8 | State of Open Con 2023 | London, UK |
February 21 | Ceph Days New York City | New York City, US |
February 22 to February 24 | PGConf India | Bengaluru, India |
March 9 | Ceph Days Southern California co-located with SCALE | Pasadena, CA, US |
March 9 to March 12 | Southern California Linux Expo | Pasadena, CA, US |
March 11 to March 12 | Chemnitzer Linux Tage 2023 | Chemnitz, Germany |
March 13 to March 14 | FOSS Backstage | Berlin, Germany |
March 14 to March 16 | Embedded World 2023 | Munich, Germany |
March 14 to March 16 | Everything Open | Melbourne, Australia |
March 17 | BOB 2023 | Berlin, Germany |
March 18 to March 19 | LibrePlanet | Boston, US |
March 20 to March 24 | Mozilla Festival 2023 | Online |
March 21 | Nordic PGDay 2023 | Stockholm, Sweden |
March 23 | Open Source 101 | Charlotte, SC, US |
If your event does not appear here, please tell us about it.
Security updates
Alert summary January 19, 2023 to January 25, 2023
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
Debian | DLA-3275-1 | LTS | firefox-esr | 2023-01-19 |
Debian | DSA-5322-1 | stable | firefox-esr | 2023-01-18 |
Debian | DSA-5324-1 | stable | kernel | 2023-01-23 |
Debian | DLA-3276-1 | LTS | lava | 2023-01-19 |
Debian | DLA-3280-1 | LTS | libde265 | 2023-01-24 |
Debian | DLA-3273-1 | LTS | libitext5-java | 2023-01-18 |
Debian | DSA-5323-1 | stable | libitext5-java | 2023-01-19 |
Debian | DSA-5326-1 | stable | nodejs | 2023-01-24 |
Debian | DLA-3277-1 | LTS | powerline-gitstatus | 2023-01-20 |
Debian | DSA-5325-1 | stable | spip | 2023-01-24 |
Debian | DLA-3272-1 | LTS | sudo | 2023-01-18 |
Debian | DSA-5321-1 | stable | sudo | 2023-01-18 |
Debian | DLA-3281-1 | LTS | swift | 2023-01-25 |
Debian | DSA-5327-1 | stable | swift | 2023-01-24 |
Debian | DLA-3278-1 | LTS | tiff | 2023-01-20 |
Debian | DLA-3279-1 | LTS | trafficserver | 2023-01-23 |
Debian | DLA-3274-1 | LTS | webkit2gtk | 2023-01-19 |
Fedora | FEDORA-2023-4d5f7e5cb0 | F36 | dotnet6.0 | 2023-01-22 |
Fedora | FEDORA-2023-f9368f7fea | F37 | dotnet6.0 | 2023-01-22 |
Fedora | FEDORA-2023-b1cc6d3454 | F36 | firefox | 2023-01-22 |
Fedora | FEDORA-2023-ee8bfe7d72 | F37 | firefox | 2023-01-19 |
Fedora | FEDORA-2023-746c4aacce | F36 | git | 2023-01-21 |
Fedora | FEDORA-2023-9718cc6113 | F37 | git | 2023-01-21 |
Fedora | FEDORA-2023-58eac2b872 | F36 | kernel | 2023-01-23 |
Fedora | FEDORA-2023-0597579983 | F37 | kernel | 2023-01-24 |
Fedora | FEDORA-2023-1bd07375a7 | F37 | libXpm | 2023-01-22 |
Fedora | FEDORA-2023-f81ad89b81 | F36 | nautilus | 2023-01-25 |
Fedora | FEDORA-2023-c8a60f6f80 | F36 | qemu | 2023-01-19 |
Fedora | FEDORA-2023-575fcaf4bf | F36 | rust | 2023-01-21 |
Fedora | FEDORA-2023-9078f609e6 | F37 | sudo | 2023-01-22 |
Fedora | FEDORA-2023-8d91390935 | F36 | upx | 2023-01-22 |
Fedora | FEDORA-2023-89fdc22ace | F37 | upx | 2023-01-22 |
Fedora | FEDORA-2023-18fd476362 | F36 | yarnpkg | 2023-01-21 |
Fedora | FEDORA-2023-ce8943223c | F37 | yarnpkg | 2023-01-21 |
Mageia | MGASA-2023-0016 | 8 | chromium-browser-stable | 2023-01-24 |
Mageia | MGASA-2023-0009 | 8 | docker | 2023-01-24 |
Mageia | MGASA-2023-0018 | 8 | firefox | 2023-01-24 |
Mageia | MGASA-2023-0023 | 8 | jpegoptim | 2023-01-24 |
Mageia | MGASA-2023-0007 | 8 | kernel | 2023-01-22 |
Mageia | MGASA-2023-0008 | 8 | kernel-linus | 2023-01-22 |
Mageia | MGASA-2023-0011 | 8 | nautilus | 2023-01-24 |
Mageia | MGASA-2023-0015 | 8 | net-snmp | 2023-01-24 |
Mageia | MGASA-2023-0022 | 8 | phoronix-test-suite | 2023-01-24 |
Mageia | MGASA-2023-0013 | 8 | php | 2023-01-24 |
Mageia | MGASA-2023-0014 | 8 | php-smarty | 2023-01-24 |
Mageia | MGASA-2023-0010 | 8 | samba | 2023-01-24 |
Mageia | MGASA-2023-0020 | 8 | sdl2 | 2023-01-24 |
Mageia | MGASA-2023-0025 | 8 | sudo | 2023-01-24 |
Mageia | MGASA-2023-0017 | 8 | tor | 2023-01-24 |
Mageia | MGASA-2023-0019 | 8 | viewvc | 2023-01-24 |
Mageia | MGASA-2023-0021 | 8 | vim | 2023-01-24 |
Mageia | MGASA-2023-0024 | 8 | virtualbox | 2023-01-24 |
Mageia | MGASA-2023-0012 | 8 | x11-server | 2023-01-24 |
Oracle | ELSA-2023-0340 | OL9 | bash | 2023-01-24 |
Oracle | ELSA-2023-0402 | OL7 | bind | 2023-01-24 |
Oracle | ELSA-2023-0333 | OL9 | curl | 2023-01-24 |
Oracle | ELSA-2023-0335 | OL9 | dbus | 2023-01-24 |
Oracle | ELSA-2023-0337 | OL9 | expat | 2023-01-24 |
Oracle | ELSA-2023-0296 | OL7 | firefox | 2023-01-24 |
Oracle | ELSA-2023-0285 | OL9 | firefox | 2023-01-24 |
Oracle | ELSA-2023-0328 | OL9 | go-toolset, golang | 2023-01-24 |
Oracle | ELSA-2023-0203 | OL7 | java-1.8.0-openjdk | 2023-01-24 |
Oracle | ELSA-2023-0195 | OL7 | java-11-openjdk | 2023-01-24 |
Oracle | ELSA-2023-0200 | OL8 | java-11-openjdk | 2023-01-19 |
Oracle | ELSA-2023-0202 | OL9 | java-11-openjdk | 2023-01-19 |
Oracle | ELSA-2023-0192 | OL8 | java-17-openjdk | 2023-01-19 |
Oracle | ELSA-2023-0194 | OL9 | java-17-openjdk | 2023-01-24 |
Oracle | ELSA-2023-0377 | OL7 | libXpm | 2023-01-24 |
Oracle | ELSA-2023-0383 | OL9 | libXpm | 2023-01-24 |
Oracle | ELSA-2023-0089 | OL8 | libreoffice | 2023-01-19 |
Oracle | ELSA-2023-0304 | OL9 | libreoffice | 2023-01-24 |
Oracle | ELSA-2023-0302 | OL9 | libtiff | 2023-01-24 |
Oracle | ELSA-2023-0338 | OL9 | libxml2 | 2023-01-24 |
Oracle | ELSA-2023-0321 | OL9 | nodejs, nodejs-nodemon | 2023-01-24 |
Oracle | ELSA-2023-0318 | OL9 | postgresql-jdbc | 2023-01-24 |
Oracle | ELSA-2023-12065 | OL7 | qemu | 2023-01-24 |
Oracle | ELSA-2023-12064 | OL8 | ruby:2.5 | 2023-01-24 |
Oracle | ELSA-2023-0339 | OL9 | sqlite | 2023-01-24 |
Oracle | ELSA-2023-0403 | OL7 | sssd | 2023-01-24 |
Oracle | ELSA-2023-0291 | OL7 | sudo | 2023-01-24 |
Oracle | ELSA-2023-0284 | OL8 | sudo | 2023-01-24 |
Oracle | ELSA-2023-0282 | OL9 | sudo | 2023-01-24 |
Oracle | ELSA-2023-0303 | OL9 | usbguard | 2023-01-24 |
Red Hat | RHSA-2023:0340-01 | EL9 | bash | 2023-01-23 |
Red Hat | RHSA-2023:0402-01 | EL7 | bind | 2023-01-24 |
Red Hat | RHSA-2023:0333-01 | EL9 | curl | 2023-01-23 |
Red Hat | RHSA-2023:0335-01 | EL9 | dbus | 2023-01-23 |
Red Hat | RHSA-2023:0337-01 | EL9 | expat | 2023-01-23 |
Red Hat | RHSA-2023:0296-01 | EL7 | firefox | 2023-01-23 |
Red Hat | RHSA-2023:0288-01 | EL8 | firefox | 2023-01-23 |
Red Hat | RHSA-2023:0290-01 | EL8.1 | firefox | 2023-01-23 |
Red Hat | RHSA-2023:0294-01 | EL8.2 | firefox | 2023-01-23 |
Red Hat | RHSA-2023:0295-01 | EL8.4 | firefox | 2023-01-23 |
Red Hat | RHSA-2023:0289-01 | EL8.6 | firefox | 2023-01-23 |
Red Hat | RHSA-2023:0285-01 | EL9 | firefox | 2023-01-23 |
Red Hat | RHSA-2023:0286-01 | EL9.0 | firefox | 2023-01-23 |
Red Hat | RHSA-2023:0328-01 | EL9 | go-toolset, golang | 2023-01-23 |
Red Hat | RHSA-2023:0445-01 | EL7 | go-toolset-1.18 | 2023-01-25 |
Red Hat | RHSA-2023:0446-01 | EL8 | go-toolset:rhel8 | 2023-01-25 |
Red Hat | RHSA-2023:0203-01 | EL7 | java-1.8.0-openjdk | 2023-01-24 |
Red Hat | RHSA-2023:0204-01 | EL8.1 | java-1.8.0-openjdk | 2023-01-23 |
Red Hat | RHSA-2023:0205-01 | EL8.2 | java-1.8.0-openjdk | 2023-01-23 |
Red Hat | RHSA-2023:0206-01 | EL8.4 | java-1.8.0-openjdk | 2023-01-23 |
Red Hat | RHSA-2023:0207-01 | EL8.6 | java-1.8.0-openjdk | 2023-01-23 |
Red Hat | RHSA-2023:0209-01 | EL9.0 | java-1.8.0-openjdk | 2023-01-23 |
Red Hat | RHSA-2023:0195-01 | EL7 | java-11-openjdk | 2023-01-23 |
Red Hat | RHSA-2023:0200-01 | EL8 | java-11-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0196-01 | EL8.1 | java-11-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0197-01 | EL8.2 | java-11-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0198-01 | EL8.4 | java-11-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0199-01 | EL8.6 | java-11-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0202-01 | EL9 | java-11-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0201-01 | EL9.0 | java-11-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0192-01 | EL8 | java-17-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0191-01 | EL8.4 | java-17-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0190-01 | EL8.6 | java-17-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0194-01 | EL9 | java-17-openjdk | 2023-01-23 |
Red Hat | RHSA-2023:0193-01 | EL9.0 | java-17-openjdk | 2023-01-18 |
Red Hat | RHSA-2023:0399-01 | EL7 | kernel | 2023-01-24 |
Red Hat | RHSA-2023:0395-01 | EL8.2 | kernel | 2023-01-24 |
Red Hat | RHSA-2023:0440-01 | EL8.6 | kernel | 2023-01-24 |
Red Hat | RHSA-2023:0334-01 | EL9 | kernel | 2023-01-23 |
Red Hat | RHSA-2023:0400-01 | EL7 | kernel-rt | 2023-01-24 |
Red Hat | RHSA-2023:0392-01 | EL8.2 | kernel-rt | 2023-01-24 |
Red Hat | RHSA-2023:0300-01 | EL9 | kernel-rt | 2023-01-23 |
Red Hat | RHSA-2023:0404-01 | EL7 | kpatch-patch | 2023-01-24 |
Red Hat | RHSA-2023:0396-01 | EL8.2 | kpatch-patch | 2023-01-24 |
Red Hat | RHSA-2023:0441-01 | EL8.6 | kpatch-patch | 2023-01-24 |
Red Hat | RHSA-2023:0348-01 | EL9 | kpatch-patch | 2023-01-23 |
Red Hat | RHSA-2023:0377-01 | EL7 | libXpm | 2023-01-23 |
Red Hat | RHSA-2023:0379-01 | EL8 | libXpm | 2023-01-23 |
Red Hat | RHSA-2023:0384-01 | EL8.1 | libXpm | 2023-01-23 |
Red Hat | RHSA-2023:0380-01 | EL8.2 | libXpm | 2023-01-23 |
Red Hat | RHSA-2023:0382-01 | EL8.4 | libXpm | 2023-01-23 |
Red Hat | RHSA-2023:0378-01 | EL8.6 | libXpm | 2023-01-23 |
Red Hat | RHSA-2023:0383-01 | EL9 | libXpm | 2023-01-23 |
Red Hat | RHSA-2023:0381-01 | EL9.0 | libXpm | 2023-01-23 |
Red Hat | RHSA-2023:0304-01 | EL9 | libreoffice | 2023-01-23 |
Red Hat | RHSA-2023:0343-01 | EL9 | libtasn1 | 2023-01-23 |
Red Hat | RHSA-2023:0302-01 | EL9 | libtiff | 2023-01-23 |
Red Hat | RHSA-2023:0338-01 | EL9 | libxml2 | 2023-01-23 |
Red Hat | RHSA-2023:0321-01 | EL9 | nodejs, nodejs-nodemon | 2023-01-23 |
Red Hat | RHSA-2023:0393-01 | EL8.2 | pcs | 2023-01-24 |
Red Hat | RHSA-2023:0427-01 | EL8.6 | pcs | 2023-01-24 |
Red Hat | RHSA-2023:0318-01 | EL9 | postgresql-jdbc | 2023-01-23 |
Red Hat | RHSA-2023:0339-01 | EL9 | sqlite | 2023-01-23 |
Red Hat | RHSA-2023:0403-01 | EL7 | sssd | 2023-01-24 |
Red Hat | RHSA-2023:0442-01 | EL8.1 | sssd | 2023-01-24 |
Red Hat | RHSA-2023:0397-01 | EL8.2 | sssd | 2023-01-24 |
Red Hat | RHSA-2023:0287-01 | EL6 | sudo | 2023-01-23 |
Red Hat | RHSA-2023:0291-01 | EL7 | sudo | 2023-01-23 |
Red Hat | RHSA-2023:0284-01 | EL8 | sudo | 2023-01-23 |
Red Hat | RHSA-2023:0280-01 | EL8.1 | sudo | 2023-01-23 |
Red Hat | RHSA-2023:0292-01 | EL8.2 | sudo | 2023-01-23 |
Red Hat | RHSA-2023:0293-01 | EL8.4 | sudo | 2023-01-23 |
Red Hat | RHSA-2023:0283-01 | EL8.6 | sudo | 2023-01-23 |
Red Hat | RHSA-2023:0282-01 | EL9 | sudo | 2023-01-23 |
Red Hat | RHSA-2023:0281-01 | EL9.0 | sudo | 2023-01-23 |
Red Hat | RHSA-2023:0336-01 | EL9 | systemd | 2023-01-23 |
Red Hat | RHSA-2023:0303-01 | EL9 | usbguard | 2023-01-23 |
Red Hat | RHSA-2023:0432-01 | EL8.6 | virt:rhel, virt-devel:rhel | 2023-01-24 |
Scientific Linux | SLSA-2023:0402-1 | SL7 | bind | 2023-01-24 |
Scientific Linux | SLSA-2023:0296-1 | SL7 | firefox | 2023-01-23 |
Scientific Linux | SLSA-2023:0203-1 | SL7 | java-1.8.0-openjdk | 2023-01-24 |
Scientific Linux | SLSA-2023:0195-1 | SL7 | java-11-openjdk | 2023-01-23 |
Scientific Linux | SLSA-2023:0399-1 | SL7 | kernel | 2023-01-24 |
Scientific Linux | SLSA-2023:0403-1 | SL7 | sssd | 2023-01-24 |
Scientific Linux | SLSA-2023:0291-1 | SL7 | sudo | 2023-01-23 |
Slackware | SSA:2023-020-01 | | mozilla | 2023-01-20 |
Slackware | SSA:2023-020-02 | | seamonkey | 2023-01-20 |
Slackware | SSA:2023-018-01 | | sudo | 2023-01-18 |
SUSE | openSUSE-SU-2023:0025-1 | SLE12 | cacti, cacti-spine | 2023-01-21 |
SUSE | openSUSE-SU-2023:0025-1 | SLE12 osB15 | cacti, cacti-spine | 2023-01-21 |
SUSE | SUSE-SU-2023:0113-1 | MP4.3 SLE15 SES7 SES7.1 oS15.4 | firefox | 2023-01-20 |
SUSE | SUSE-SU-2023:0111-1 | OS9 SLE12 | firefox | 2023-01-20 |
SUSE | SUSE-SU-2023:0112-1 | SLE15 SES6 | firefox | 2023-01-20 |
SUSE | SUSE-SU-2023:0124-1 | OS9 SLE12 | freeradius-server | 2023-01-23 |
SUSE | SUSE-SU-2023:0110-1 | MP4.2 MP4.3 SLE15 SES7.1 oS15.4 | git | 2023-01-20 |
SUSE | SUSE-SU-2023:0109-1 | OS8 OS9 SLE12 | git | 2023-01-20 |
SUSE | SUSE-SU-2023:0108-1 | SLE15 SES6 SES7 oS15.4 | git | 2023-01-20 |
SUSE | SUSE-SU-2023:0130-1 | MP4.2 SLE15 SLE-m5.1 SLE-m5.2 SES6 SES7 SES7.1 osM5.2 | mozilla-nss | 2023-01-24 |
SUSE | SUSE-SU-2023:0119-1 | MP4.3 SLE15 SLE-m5.3 oS15.4 osM5.3 | mozilla-nss | 2023-01-20 |
SUSE | SUSE-SU-2023:0118-1 | OS9 SLE12 | mozilla-nss | 2023-01-20 |
SUSE | SUSE-SU-2023:0103-1 | MP4.3 SLE15 oS15.4 | postgresql-jdbc | 2023-01-19 |
SUSE | SUSE-SU-2023:0104-1 | SLE12 | postgresql-jdbc | 2023-01-19 |
SUSE | openSUSE-SU-2023:0030-1 | osB15 | python-mechanize | 2023-01-23 |
SUSE | SUSE-SU-2023:0127-1 | MP4.0 MP4.1 MP4.2 MP4.3 SLE15 SES6 SES7 SES7.1 oS15.4 | rubygem-websocket-extensions | 2023-01-24 |
SUSE | SUSE-SU-2023:0133-1 | MP4.3 SLE15 SES7.1 oS15.4 | rust1.65 | 2023-01-24 |
SUSE | SUSE-SU-2023:0132-1 | MP4.3 SLE15 oS15.4 | rust1.66 | 2023-01-24 |
SUSE | SUSE-SU-2023:0126-1 | OS9 SLE12 | samba | 2023-01-24 |
SUSE | SUSE-SU-2023:0122-1 | SLE12 | samba | 2023-01-23 |
SUSE | SUSE-SU-2023:0115-1 | MP4.2 SLE15 SLE-m5.1 SLE-m5.2 SES7.1 osM5.2 | sudo | 2023-01-20 |
SUSE | SUSE-SU-2023:0114-1 | MP4.3 SLE15 SLE-m5.3 oS15.4 osM5.3 | sudo | 2023-01-20 |
SUSE | SUSE-SU-2023:0101-1 | OS9 SLE12 | sudo | 2023-01-19 |
SUSE | SUSE-SU-2023:0100-1 | SLE12 | sudo | 2023-01-19 |
SUSE | SUSE-SU-2023:0117-1 | SLE12 | sudo | 2023-01-20 |
SUSE | SUSE-SU-2023:0116-1 | SLE15 SES6 SES7 | sudo | 2023-01-20 |
SUSE | openSUSE-SU-2023:0027-1 | osB15 | tor | 2023-01-21 |
SUSE | openSUSE-SU-2023:0031-1 | osB15 | upx | 2023-01-23 |
Ubuntu | USN-5820-1 | 16.04 18.04 20.04 22.04 22.10 | exuberant-ctags | 2023-01-24 |
Ubuntu | USN-5816-1 | 18.04 20.04 | firefox | 2023-01-23 |
Ubuntu | USN-5810-2 | 18.04 20.04 | git | 2023-01-19 |
Ubuntu | USN-5819-1 | 20.04 22.04 22.10 | haproxy | 2023-01-23 |
Ubuntu | USN-5813-1 | 16.04 18.04 20.04 | linux-aws-5.4, linux-gkeop, linux-hwe-5.4, linux-oracle, linux-snapdragon | 2023-01-19 |
Ubuntu | USN-5814-1 | 20.04 22.04 22.10 | linux-azure, linux-gkeop, linux-intel-iotg, linux-lowlatency, linux-lowlatency-hwe-5.15, linux-oracle-5.15 | 2023-01-19 |
Ubuntu | USN-5815-1 | 20.04 | linux-bluefield | 2023-01-19 |
Ubuntu | USN-5823-1 | 18.04 20.04 22.04 22.10 | mysql-5.7, mysql-8.0 | 2023-01-24 |
Ubuntu | USN-5823-2 | 16.04 | mysql-5.7 | 2023-01-24 |
Ubuntu | USN-5825-1 | 14.04 16.04 18.04 20.04 22.04 22.10 | pam | 2023-01-25 |
Ubuntu | USN-5818-1 | 18.04 20.04 22.04 22.10 | php7.2, php7.4, php8.1 | 2023-01-23 |
Ubuntu | USN-5817-1 | 14.04 16.04 18.04 20.04 22.04 22.10 | python-setuptools, setuptools | 2023-01-23 |
Ubuntu | USN-5812-1 | 20.04 | python-urllib3 | 2023-01-19 |
Ubuntu | USN-5806-2 | 18.04 22.04 22.10 | ruby2.5, ruby3.0 | 2023-01-23 |
Ubuntu | USN-5822-1 | 20.04 22.04 22.10 | samba | 2023-01-24 |
Ubuntu | USN-5811-2 | 16.04 | sudo | 2023-01-18 |
Ubuntu | USN-5811-1 | 18.04 20.04 22.04 22.10 | sudo | 2023-01-18 |
Ubuntu | USN-5821-1 | 14.04 18.04 20.04 22.04 22.10 | wheel | 2023-01-24 |
Kernel patches of interest
Kernel releases
Architecture-specific
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet