LWN.net Weekly Edition for August 17, 2023
Welcome to the LWN.net Weekly Edition for August 17, 2023
This edition contains the following feature content:
- Kernel security reporting for distributions: kernel developers discuss what should be done, if anything, to facilitate the reporting of security vulnerabilities.
- A per-interpreter GIL: another approach to increased concurrency in Python programs.
- An ioctl() call to detect memory writes: a proposed mechanism to catch game cheaters that may have other applications as well.
- Following up on file-position locking: fixing the locking problem was not quite as simple as first thought.
- A new futex API: for years people have complained about the futex API; here is an attempt to do something about it.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Kernel security reporting for distributions
The call for topics for the Linux Kernel Maintainers Summit went out on August 15; one proposed topic has generated some interesting discussion about security-bug reporting for the kernel. A recent patch to the kernel's documentation about how to report security bugs recommends avoiding posting to the linux-distros mailing list because its goals and rules do not mesh well with kernel security practices. That led Jiri Kosina to suggest a discussion on security reporting, especially with regard to Linux distributions.
The linux-distros mailing list is a closed list for reporting security bugs that affect Linux systems; as might be guessed, the participants are representatives of various distributions. It has some fairly stringent requirements regarding the maximum embargo period (14 days) after a bug is reported before it must be publicly disclosed; it also places requirements on the reporter to post the full details to the oss-security mailing list once the embargo has run its course. These policies have clashed with kernel bug reporting along the way. Examples include a 2018 embargo that went awry, an even longer embargo botch in 2021, and a 2022 discussion of the problems.
The core of the documentation patch from Greg Kroah-Hartman changes a suggestion that some bugs might need to be coordinated with linux-distros to a strongly worded admonition against doing so:
The kernel security team strongly recommends that reporters of potential security issues NEVER contact the "linux-distros" mailing list until AFTER discussing it with the kernel security team. Do not Cc: both lists at once. You may contact the linux-distros mailing list after a fix has been agreed on and you fully understand the requirements that doing so will impose on you and the kernel community. The different lists have different goals and the linux-distros rules do not contribute to actually fixing any potential security problems.
Kosina said that he generally agreed with the change, but wanted to discuss how to report kernel security bugs to the distributions in some other fashion. In part, he was looking at the problem from the perspective of a distribution maintainer for SUSE:
With my distro hat on, I really want the kernel security bugs to be *eventually* processed through linux-distros@ somehow, for one sole reason: it means that our distro security people will be aware of it, track it internally, and keep an eye on the fix being ported to all of our codestreams. This is exactly how this is done for all other packages. I would be curious to hear about this from other distros, but I sort of expect that they would agree.
If this process doesn't happen, many kernel security (with CVE potentially eventually assigned retroactively) fixes will go by unnoticed, as distro kernel people will never be explicitly made aware of them, and distros will be missing many of the patches. Which I don't think is a desired end effect.
Kosina noted that the advice from Kroah-Hartman is that the distributions should simply follow one of the stable trees, but that is unworkable for various reasons. One concrete problem that SUSE has is tracking whether a given CVE has been fixed in its kernel; even though he believes that the CVE system is seriously flawed, tracking CVEs and ensuring the fixes get into SUSE kernels is critical, especially for certain kinds of customers. The linux-distros list provided that tracking in many cases "and that is lost if the fix goes only through security@kernel.org".
Vegard Nossum agreed that the topic needed discussion and pointed to his earlier efforts to overhaul the security-reporting documentation. It would make sense to discuss the issue at the summit in part because the recent documentation change came about due to complaints from Linus Torvalds on the closed lists, he said. "I therefore think that Linus himself needs to be involved in this discussion and that his arguments need to be made in public instead of just paraphrased by me." In part, Torvalds was unhappy with requirements that proof-of-concept code exploiting a vulnerability has to be published if the code was posted to linux-distros.
distros@kernel.org?
Nossum suggested having a closed list for distribution representatives that was run by the kernel security team. The list's policies and membership could be set by the team and the list could be used to post patches for security fixes once they are ready for the distributions to incorporate them. Kroah-Hartman was quick to throw cold water on that idea, however.
A list that pre-announces security problems will either be "deemed illegal in some countries" or be required to admit all "major" Linux users, which would include various government agencies across the globe, he said. For that reason, the Linux Foundation is not going to be willing to host such a list and he suspects it would be hard to find an organization willing to do so; "it's amazing that linux-distros has been able to survive for as long as it has".
Steven Rostedt wondered if there were ways to solve Kosina's problem without running afoul of concerns over pre-announcements. He noted that the distribution kernels are the ones that are in the hands of users, so it would be best to ensure that the maintainers of those kernels have access to the information about security flaws. But Kroah-Hartman sees the landscape rather differently:
The huge majority of Linux use in the world is Android, everything else is a rounding error. Android does now have projects where security bugs reported to them are required for the reporter to submit the problem to security@k.o as well. We fix the issue, get it pushed out, and the reporter gets the credit. [...] After Android, Debian is by far the largest Linux user, and the Debian kernel developers do an awesome job of tracking the stable kernel releases which have been documented to fix 99% of known security issues _BEFORE_ they are known (data produced by Google security team for 2 years straight).
After that, the use of Linux tapers off to "roll your own kernel.org releases" (still a huge number of absolute boxes thanks to many government instances and corporate clouds) to the various "enterprise" distros, down to the other community distros (Fedora/openSUSE/Arch/etc.)
He lamented the length of time it takes for Android to actually get these fixes in the hands of its users (4-6 months), but praised the fact that Android follows the stable releases. He believes that governments are moving toward regulations that will reduce those delays in any case. The distributions that are not following the stable kernels are generally those for enterprise Linux:
It's that corporate sponsored "middle tier" (which is still quite small overall) that likes complaining about this stuff because they don't want to take the stable kernel updates for various (in my opinion stupid) reasons. So who would such a "distros@" list help? Only that middle-tier, small group of very well financed companies that want to go their own way with their kernel releases because they want to.
Kosina called that an oversimplification, noting that Android is huge in terms of users, but that the customers for the enterprise distributions "directly contribute back to kernel development, by allowing companies like Red Hat, SUSE, Canonical, ... to put kernel engineers on their payroll".
Rostedt agreed, noting that the piecemeal impact of vulnerabilities in Android kernels is rather different than that of such flaws for enterprise kernels: "All it takes is to go after one server, and you have access to thousands of users and their private data."
Kroah-Hartman walked back his "rounding error" comment to a certain extent, but wanted to focus on the "false sense of security" that comes from the enterprise distributions picking and choosing fixes to apply to their kernels. The Linux community finds and fixes every bug that it can; it provides those kernels to the world but they do not reach all users because of the decisions made by certain companies. Meanwhile: "those 'policy decisions of companies' are now known by governments to be incorrect, and soon will be made illegal in many countries, so we are on the right side here."
Working with linux-distros?
The linux-distros (and oss-security) mailing lists are administered and moderated by Alexander "Solar Designer" Peslyak, who was alerted to the discussion by Nossum. Peslyak posted a lengthy response, suggesting that interested people could perhaps meet at Open Source Summit Europe since he would not be able to be present at the Maintainers Summit. He said that email discussion would work, perhaps better in fact, "but meeting in person is a gesture that might help establish an atmosphere of trust and assurance of good intent".
He pointed out that the new wording in the documentation does not preclude posting to linux-distros, but says that it should happen only in consultation with the kernel security team. He does not know if that would actually happen, but: "Maybe some friendly dialogue and agreeing on things could affect that." There seem to be two problem areas: the linux-distros disclosure deadline of 14 days even if there is no fix yet and the requirement that any proof-of-concept posted to the list be made public within seven days of the vulnerability disclosure. He agreed that both are problems and suggested that a discussion on oss-security might lead to "satisfactory solutions/exceptions".
It seems clear that Kroah-Hartman, at least, does not see a real problem to be solved. As he pointed out, though, the security team is "run as a 'collective' by those doing the work there, not by fiat". So perhaps others will be inclined to see what can be worked out in order to keep linux-distros in the loop, though it seems like an uphill battle.
As Peslyak noted, another summit topic post about quality standards for embargoed fixes is also worth a look. It would seem that some interesting discussion topics are already lining up for the summit; look for coverage of whichever topics get chosen, in November, shortly after the event.
A per-interpreter GIL
"Subinterpreters", which are separate Python interpreters running in the same process that can be created using the C API, have been a part of Python since the previous century (version 1.5 in 1997), but they are largely unknown and unused. Eric Snow has been on something of a quest, since 2015 or so, to bring better multicore processing to Python by way of subinterpreters (or "multiple interpreters"). He has made it part of the way there, with the adoption of a separate global interpreter lock (GIL) for each subinterpreter, which was added for Python 3.12. Back in April, Snow gave a talk (YouTube video) at PyCon about multiple interpreters, their status, and his plans for the feature in the future.
We have looked in on the subinterpreter work a few times along the way; beyond the article when Snow started his quest, he presented a status update at the 2018 Python Language Summit and there were some further discussions back in 2020. On April 7, 2023, two weeks before Snow gave his talk, the steering council accepted PEP 684 ("A Per-Interpreter GIL") for inclusion into Python 3.12.
Background
Snow began by defining "two key concepts: the GIL and multiple interpreters". The GIL is used by the Python runtime to "protect a lot of important global state". On the positive side, the GIL helps keep the Python implementation simpler than it would be otherwise, which is important for a project where most of its contributors are volunteers, he said. On the negative side, the GIL hurts the performance of threaded, CPU-bound Python code. The GIL is not perfect, but it is not quite as big a problem for most people as it is made out to be; that is not particularly obvious, however, so Python "ends up getting a lot of bad press" about the GIL.
To understand "subinterpreters", there is a need to know what he means by "interpreter". There are multiple meanings for it, depending on context, such as the python executable, the full runtime including the standard library and the user's code, or the bytecode virtual machine (VM). For his talk, he would be using it to mean the Python VM and the run-time state to support it; it may be helpful to think of it as the "interpreter state", but it is often called a "subinterpreter" as well, he said.
In practice, subinterpreters are "like a cross between using threads and processes"; Nick Coghlan once described them as "threads with opt-in sharing", Snow said. He continued: "In a way, using multiple interpreters takes all the painful parts out of using threads." CPython has long supported multiple interpreter states in a single process, running at the same time in different threads, but it is not a well-known feature.
Attendees may or may not have noticed, but there has been "quite a bit of negativity for a long time around the GIL and Python's support for multiple cores" in the tech world, he said. Back in 2014, he talked himself into doing something, he was not sure what, to change that narrative. Quoting from his 2015 python-ideas post on multiple interpreters with their own GILs, he said, "I knew it would have to make Python's multicore support 'obvious, unmistakable, and undeniable'". Multiple interpreters was exactly what was needed to do so, he believes, but he did not realize "it would turn into a project that would consume a lot of my spare time over the next eight-and-a-half years".
If separate subinterpreters no longer shared a single GIL, that would mean that each thread in a process could run an interpreter with its own GIL. The threads would no longer block each other in the Python interpreter. Each thread could truly run in parallel with the others on multiple cores in the system.
CPython was written with the idea that every interpreter's execution was isolated and independent from any other's. In theory, that meant that the interpreter state would be isolated; "in practice, that was far from the truth". Since the subinterpreter feature was so little-known, thousands of global variables holding run-time state had been added to CPython by the time he got started on the project. The shared GIL protected all of that state; in order to stop sharing the GIL, the interpreter would "have to mostly stop sharing any global state".
Plan
So he set out on a plan to isolate interpreters from each other; this amounted to thousands of small, relatively uncomplicated tasks to isolate the state. The best part was that the isolation work was valuable even if it turned out that not sharing the GIL was too difficult to do—or did not provide the multicore benefits that he expected. That outcome was, he said, a real possibility when he started out, but he was successful in following the plan fully. It remains to be seen, though, what impact it has on the multicore parallelism story for Python.
"We'll have a per-interpreter GIL in Python 3.12", Snow said, to applause. It has been a long project for him, with several pauses, lots of collaboration, some obstacles, and "occasional burnout"; it required eight PEPs, three of which he authored. Most of the work has been on his own time, though he is grateful to his employer Microsoft, which has given him 20% of his time to work on open source for the last five years—and full time on open source starting earlier in 2023. In addition, many members of the Python community, Victor Stinner in particular, have provided help, support, encouragement, and more.
All of that work would be for naught if no one takes advantage of it, he said. So he wanted to ensure that he provided attendees with enough information to do so. Before that, however, he wanted to show "why you should care".
There are four different mechanisms for Python concurrency: threads, multiple processes, distributed processing, and async. David Beazley gave an excellent talk about those mechanisms at PyCon 2015, Snow said; it had a live demo ("of course") that built up a simple network server that computed requested Fibonacci numbers using several of the concurrency techniques.
Snow adapted the threaded version of Beazley's server in order to add multiple interpreters into the mix. He ran those programs on his laptop and said that he was excited to show the results, which can be seen in his slides as well as in the video. For long-running requests (fib(30)), the numbers for multiple interpreters are slightly better than those for multiple processes. Unlike with threads, though, a server running with multiple interpreters or processes does not have its performance degrade when adding a second or third client. Multiple interpreters took 0.9 seconds for one, two, or three clients; because of the GIL, threads took 0.9, 1.8, and 2.5s for those three scenarios (processes were 0.9, 1.0, and 1.1s).
Looking at the requests-per-second (rps) numbers for short-running requests (fib(1)) on multiple interpreters shows only a modest drop-off when adding clients, 13,100 for one down to 11,800 for three. The threaded version dropped off more quickly (13,000 to 8,100). In addition, requesting a single fib(40) caused the threads to go to 100 rps, while multiple interpreters was still maintaining 11,500 rps. "A long-running request does not starve the short-running ones; it's like magic", Snow said.
He put up a slide with both tables of results; "isn't that beautiful?", he asked, to another round of applause. He put up some graphs of the results, which is "exactly what I had hoped for all these years". While he expected results to be similar to what he got, "actually seeing them just blows my mind".
For those who want to try this out for themselves, he pointed to a branch that he was maintaining; at this point, though, the first beta of Python 3.12, released in May a month after his talk, contains the per-interpreter GIL work. Since then, several other releases have been made, including the rc1 release on August 6. There is no tutorial on using the feature available yet, but that should be coming, he said. He also could not resist displaying the numbers one more time, to chuckles and laughter.
Currently, multiple interpreters can only be created and used from the C API; there is no Python interface to the feature. In 2017, he proposed adding a standard library module to expose multiple interpreters in the language in order to fix that lack. PEP 554 ("Multiple Interpreters in the Stdlib") would start to provide access to multiple interpreters from Python code. The Python API would be basic at first; for example, the only way to run code would be to pass it as a string, as with exec().
PEP 554 outlines many additional features that could be added to the interpreters module once the basic features are in. The implementation of the PEP was completed a few years ago, he said, though there were some minor tweaks needed along the way. Because it was strongly tied to the per-interpreter GIL feature in people's minds, PEP 554 was not considered for adoption independently. When it looked likely that the per-interpreter GIL work would land in Python 3.12, the final discussion on PEP 554 did not complete in time to have that PEP considered for the release. So there will be a per-interpreter GIL soon, but no easy way to access multiple interpreters from Python code until 3.13 in 2024.
He has released an interpreters module on PyPI that can be used to access the feature in the meantime. He said that others are welcome to create their own; it is pretty straightforward to do so as there are only three C API calls that need to be used.
Examples
He then presented some examples of using multiple interpreters in Python code; for the most part, the examples would follow the API proposed in PEP 554. Creating an interpreter is pretty straightforward:
    interp = interpreters.create()

That call actually does quite a bit of work; it creates the new interpreter state, populates the builtins and sys modules, imports "a bunch of other modules", and initializes the interpreter's __main__ module. Interpreter creation is not super cheap, Snow said, but in practice it should not matter to most users. On the debug build he was using for testing, it took around 35ms to create an interpreter; destroying one took around 5ms.
He was curious about how many interpreters he could create. On his laptop with 9GB of RAM, he ran out of memory after creating around 5,000 interpreters. That suggests that each takes roughly 1.8MB. He is pretty sure that can be improved.
He showed four ways to execute the canonical "hello world" program in the current Python: the REPL, the command line (i.e. python -c), from a file, and via exec() on a string argument. With each of those, the interpreter gets the source, compiles it, then uses the bytecode interpreter to execute it using the __dict__ of the __main__ module as the execution environment. The exact same thing happens with:
    script = '''
    print('Hello world!')
    '''
    interp.run(script)

The difference is that the execution is done in the execution environment of interp; it uses the __main__.__dict__ from there instead of the one in the interpreter where the interp.run() call is being made.
Starting a new interpreter in its own thread is not difficult either, he said. The usual way to start a thread looks something like:
    def task():
        print('Hello world!')

    t = Thread(target = task)
    t.start()

    # alternatively
    t = Thread(target = exec, args = (script,))
    t.start()

The second version (which uses script from above) can be modified to run the interpreter in its own thread by simply switching the target:
    t = Thread(target = interp.run, args = (script,))
    t.start()

It may make sense to create the interpreter in the thread, thus in task(), for some applications.
Meanwhile, an interpreter can be used multiple times, just by calling its run() method. Importantly, the execution environment does not get reset between runs, so the state of the interpreter is preserved:
    >>> interp.run('answer = 42')
    >>> interp.run('print(answer)')
    42

One consequence is that it will be possible to create interpreters ahead of time, pre-populating them by importing needed modules, thus paying any import price early. Interpreter pools could be created, with interpreters that would get assigned to work as it becomes available.
Communication
Multiple interpreters are a powerful feature, he said, "but that power is limited if we don't have an easy way to send data to an interpreter or to get data back". Because of the isolation between these interpreters, you cannot simply send and receive objects between them—"at least for now". Data can be serialized and deserialized for communication. One simple way to do that is by using an f-string for the string to be passed:
    data1, data2 = 42, 'spam'
    interp.run(f'''call_something({data1})
    call_something_else({data2!r})
    ''')
Reading and writing data from an interpreter could be done using os.pipe(); the interpreters are all in the same process, so the file descriptor numbers can simply be passed as integers. "However, we can certainly be more efficient than using pipes." The pickle module could be used as an alternative to just passing bytes around; it can serialize and deserialize most objects, but is, again, not all that efficient.
At one point, PEP 554 proposed adding a mechanism to efficiently pass objects between interpreters, which is based on the idea of communicating sequential processes (CSP). That mechanism was called "channels" after its counterpart in the Go language. He implemented it "and it works"; he thinks it makes for an elegant solution to the problem, even if his implementation was lacking in some ways.
His point was not that channels should necessarily be added to the standard library, but that it is not too hard to implement ways to send data efficiently rather than using pipes or pickle. The current state is that sending messages or objects to other interpreters is not all that efficient, which is not something that will change for Python 3.12. He is hopeful that an efficient and easy-to-use solution will be adopted in the next year for inclusion into Python 3.13. In the meantime, the gap may be filled by PyPI packages; "that depends on a lot of you".
Beyond just passing data back and forth, there is also value in sharing objects directly; this would be particularly useful for applications that use large data structures, such as massive arrays or in-memory databases. He is hopeful that the community will help fill that gap and that there will be some sharing-safe data types available for 3.13. In theory, large, read-only arrays should be safe to share between interpreters due to the design of the buffer protocol; he prototyped support for that at one point and it seemed to work, though he did not do a lot of testing.
The difficult part of any kind of concurrency is "how to safely share data and to do so efficiently". In a free-threaded interpreter, all data in the process is potentially shared, which means that all of it must be protected from simultaneous access. In contrast, any data sharing must be explicit for isolated interpreters, so the protection only needs to be applied to those objects or types. The narrow focus of explicit sharing "helps reduce potential development and maintenance costs for everyone ... and that's a critical distinction".
He thinks that multiple interpreters are now ready to take their place among the Python concurrency options. The feature can be used to implement the "relatively ancient and still relatively popular idea of isolated threads of execution that pass data back and forth explicitly". It is, effectively, the actor model using CSP; two languages that natively support this kind of concurrency are Go and Erlang. He looks forward to seeing what the community will do with multiple interpreters in the coming years.
Plans
Snow ended his talk with a look at his plans, which start with getting PEP 554 adopted for Python 3.13. It just provides the foundation, though, so there will be more work needed after that. Some work on more efficient ways to pass data is needed, as is work on ways for sharing data between interpreters. Another area he would like to work on is the ability to pass callables, instead of only strings, to the run() method. There has not been much work done on improving the performance of creating and destroying interpreters, which is another thing he plans to look into.
He was asked about global state in extension modules. Snow said that extension modules that want to support multiple interpreters need to implement PEP 489 ("Multi-phase extension module initialization"); there is some excellent documentation describing how to do so, he said. Part of the work to switch to multi-phase initialization will move any global state to module-specific state so that the extension can operate with multiple interpreters.
Snow gave his talk when the status of PEP 703 ("Making the Global Interpreter Lock Optional in CPython") was not yet known. It will provide a free-threaded interpreter, though possibly at the cost of some single-threaded performance. PEP 703, which also requires extensions to implement multi-phase initialization for obvious reasons, was not reintroduced until May; it was not until the end of July that the steering council announced its intent to approve the PEP after several lengthy threads along the way.
Now, the two features will coexist, eventually—or at least that is the plan. The existence of a free-threaded Python may dampen some of the enthusiasm around multiple interpreters, though the explicit-sharing model definitely does have its appeal. In any case, the Python concurrency picture has gotten rather larger over the past few months; it will be interesting to see where it all goes.
An ioctl() call to detect memory writes
It is the kernel's business to know when a process's memory has been written to; among other things, this knowledge is needed to determine which pages can be immediately reclaimed or to properly write dirty pages to backing store. Sometimes, though, user space also needs access to this information in a reliable and fast manner. This patch series from Muhammad Usama Anjum adds a new ioctl() call for this purpose; using it requires repurposing an existing system call in an unusual way, though.
The driving purpose for this feature, it seems, is to enable an efficient emulation of the Windows GetWriteWatch() system call, which is evidently useful for game developers who want to defend against certain kinds of cheating. A game player who is able to access (and modify) a game's memory can enhance that game's functionality in ways that are not welcomed by the developers — or by other players. Using GetWriteWatch(), the game is able to detect when crucial data structures have been modified by an external actor, put up the modern equivalent of a "Tilt" indicator, and bring the gaming session to a halt.
Linux actually provides this functionality now by way of the pagemap file in /proc. The current dirty state of a range of pages can be read from this file, and writing the associated clear_refs file will reset the dirty state (useful, for example, after the game itself has written to the memory of interest). Accessing this file from user space is slow, though, which runs counter to the needs of most games. The new ioctl() call is meant to implement this feature more efficiently. The Checkpoint/Restore In Userspace (CRIU) project would also be able to make use of a more efficient mechanism to detect writes; in this case, the purpose is to identify pages that have been modified after the checkpoint process has begun.
Soft dirty deemed insufficient
The kernel's "soft dirty" mechanism, which provides the pagemap file, would seem to be the appropriate base on which to build a feature like this. All that should be needed is a more efficient mechanism to query the data and to reset the soft-dirty information for a specific range of pages. According to the cover letter, though, this approach ended up not working well. There are various other operations, such as virtual memory area merging or mprotect() calls, that can cause pages to be reported as dirty even though they have not been written to. That, in turn, can lead to a game concluding, incorrectly, that its memory has been tampered with.
That can lead to undesirable results. One rarely sees the level of anger that can be reached by a game player who has been told, at the crux point of a quest, that cheating has been detected and the game is over.
Fixing this false-positive problem, evidently, is not an option, so the decision was made to work around it instead via an unexpected path. The userfaultfd() system call allows a process to take charge of its own page-fault handling for a given range of memory. The patch set adds a new operation (UFFD_FEATURE_WP_ASYNC) to userfaultfd() that changes how write-protect faults are handled. Rather than pass such faults to user space, the kernel will simply restore write permission to the page in question and allow the faulting process to continue.
Thus, userfaultfd(), which was designed to allow the handling of faults in user space, is now being used to modify the handling of faults directly within the kernel with no user-space involvement. This approach does have some advantages for the use case in question, though: it allows specific ranges of memory to be targeted, and the use of write protection to trap write operations provides for more reliable reporting of writes to memory. To see which pages have been written, it is sufficient to query the write-protect status; if the page has been made writable, it has been written to.
The ioctl() interface
With that piece in place, it is possible to create an interface to query the results. That is done by opening the pagemap file for the process in question, then issuing the new PAGEMAP_SCAN ioctl() call. This call takes a relatively complex structure as an argument:
struct pm_scan_arg {
    __u64 size;
    __u64 flags;
    __u64 start;
    __u64 end;
    __u64 walk_end;
    __u64 vec;
    __u64 vec_len;
    __u64 max_pages;
    __u64 category_inverted;
    __u64 category_mask;
    __u64 category_anyof_mask;
    __u64 return_mask;
};
The size argument contains the size of the structure itself; it is there to enable the backward-compatible addition of more fields later. There are two flags values that will be described below. The address range to be queried is specified by start and end; the walk_end field will be updated by the kernel to indicate where the page scan actually ended. vec points to an array holding vec_len structures (described below) to be filled in with the desired information.
The final four fields describe the information the caller is looking for. There are six "categories" of pages that can be reported on:
- PAGE_IS_WPALLOWED: the page protections allow writing.
- PAGE_IS_WRITTEN: the page has been written.
- PAGE_IS_FILE: the page is backed by a file.
- PAGE_IS_PRESENT: the page is present in RAM.
- PAGE_IS_SWAPPED: the (anonymous) page has been written to swap.
- PAGE_IS_PFNZERO: the page-table entry points to the zero page.
Every page will belong to some combination of categories; a calling program will want to learn about the pages in the range of interest that are (or are not) in a subset of those categories. The usage of the masks is described, to a point, in this patch, though it makes for difficult reading. The code itself does the following to determine whether a given page, described by categories, is interesting:
categories ^= p->arg.category_inverted;
if ((categories & p->arg.category_mask) != p->arg.category_mask)
    return false;
if (p->arg.category_anyof_mask &&
    !(categories & p->arg.category_anyof_mask))
    return false;
return true;
Or, in English: first, the category_inverted mask is used to flip the sense of any of the selected categories; this is a way of selecting for pages that do not have a given category set. Then, the result must have all of the categories described by category_mask set and, if category_anyof_mask is not zero, at least one of those categories must be set as well. If all of those tests succeed, the page is of interest; otherwise it will be skipped.
After the scan, those pages are reported back to user space in vec, which is an array of these structures:
struct page_region {
    __u64 start;
    __u64 end;
    __u64 categories;
};
The return_mask field of the request is used to collapse the pages of interest down to regions with the same categories set; for each such region, the return structure describes the address range covered and the actual categories set.
Finally, to return to the flags argument, there are two possibilities. If PM_SCAN_WP_MATCHING is set, the call will write-protect all of the selected pages after noting their status; this is meant to allow checking for pages that have been written and resetting the status for the next check. If PM_SCAN_CHECK_WPASYNC is set, the whole operation will be aborted if the memory region has not been set up with userfaultfd() as described above.
Checking for cheaters
To achieve the initial goal of determining whether pages in a given range have been modified, the first step is to invoke userfaultfd() to set up the write-protect handling. Then, occasionally, the application can invoke this new ioctl() call with both flags described above set, and with both category_mask and return_mask set to PAGE_IS_WRITTEN. If no pages have been written, no results will be returned; otherwise the returned structures will point to where the modifications have taken place. Meanwhile, the pages that were written to will have their write-protect status reset, ready for the next scan.
This series is in an impressive 28th revision as of this writing. It was initially proposed in mid-2022 as a new system call named process_memwatch() before eventually becoming an ioctl() call instead. There is clearly some motivation to get this feature merged, but the level of interest in the memory-management community is not entirely clear. The work has not landed in linux-next, and recent posts have not generated a lot of comments. There do appear to be use cases for the feature, though, so a decision will need to be made at some point.
Following up on file-position locking
LWN recently covered a discussion on file-position locking that demonstrated the hazards that can result from unexpected concurrency. It turns out that this discussion had not yet fully run its course. Since that article was written, additional changes intended to address a performance regression evolved into a core virtual filesystem (VFS) layer API change to carry out some much-delayed housecleaning.

At the end of the previous article, a change had been merged into the mainline to unconditionally take the file-position lock (which ensures that only one thread is manipulating the current file read/write position at any given time). The article noted that the performance impact of this change had not been measured. That changed on August 3, when Mateusz Guzik reported that there was indeed a performance change — specifically, a 5% regression on a test he had run. VFS layer maintainer Christian Brauner initially discounted the report, but also said that the problem, if it truly existed, could be mitigated by only taking the position lock for directories (and not for regular files). The original locking problem had only affected directory reads, so the fix is only needed there as well.
Linus Torvalds was quick to respond to the report, though, posting a patch that implemented the directory-only behavior; it worked by doing the locking explicitly in the implementation of the directory-related system calls. Brauner pointed out that this approach missed the case of calling lseek() on a directory; "that's the heinous part". Since the whole purpose of lseek() is to manipulate the file position, that was indeed a bit of an oversight.
Torvalds acknowledged the omission, but additionally realized that there are some filesystems that support read() and write() on directories as well. "They may be broken, but people actually did do things like that historically, maybe there's a reason adfs and ceph allow it". Those cases, too, would need to be addressed for a complete fix. To create that fix, he put together a new patch that changes __fdget_pos() to simply test whether the object in question is a directory, and to always take the lock in that case. Brauner was still unconvinced about the need for the patch, but agreed that Torvalds's approach would work; that patch was duly applied to the mainline for 6.5-rc5.
After that happened, Brauner suggested a slightly different approach. __fdget_pos() takes a file structure pointer as an argument; to determine whether that file corresponds to an open directory, the code must traverse the f_inode pointer to get to the associated inode structure. A simpler test, he said, might be to test whether the given file provides either of the iterate() or iterate_shared() operations, which are only provided for directories.
Once upon a time, the kernel's file_operations structure included a member called readdir() that, as might be expected, would be provided by filesystems to support reading directory contents. The 3.11 release in 2013 saw that member renamed to iterate(), with a different API. This function had exclusive access to the inode, which would be write-locked before the call was made. That limited performance for filesystems that were able to support multiple, concurrent read operations on the same directory. To speed things up, a new iterate_shared() variant was added for the 4.7 release in 2016. This version is called with a shared (read) lock on the inode, so multiple calls can be running concurrently.
At the time, the documentation was updated with the admonition: "Old method is only used if the new one is absent; eventually it will be removed. Switch while you still can; the old one won't stay." Few people who are familiar with the kernel development process will be shocked to learn that, in fact, iterate() did stay, and that it is still present in current kernels, seven years later.
Torvalds, having been "shamed" by this reminder that iterate() still exists, decided to finish the job. With a new patch set, he eliminated the iterate() function. For the filesystems that were still using it (ceph, coda, exfat, jfs, ntfs, ocfs2, overlayfs, procfs, and vboxfs), he created a new wrapper that can be provided as iterate_shared(); it drops the read lock, then acquires a full write lock before calling into the filesystem. The exclusive locking, in other words, has been pushed down a level and only appears in the few remaining places where it is needed.
Torvalds had intended to apply this change during the 6.6 merge window; it is, after all, rather late in the 6.5 cycle for core changes of this nature. One of the advantages of being Linus Torvalds, though, is that you can break the rules when it seems appropriate; he took that approach in this case and merged the change into the mainline, also for the 6.5-rc5 release. It was quickly followed by a patch from Brauner changing the position-locking test to simply look for the existence of an iterate_shared() function.
Naturally, none of that work updated the associated documentation, giving the tireless documentation maintainer something to do. Otherwise, though, perhaps this little story has finally come to a real conclusion, with code that is more correct and a little bit of longstanding technical debt removed. Of course, it is once again true that no performance results have been posted, so the possibility of a third installment cannot be entirely ruled out.
A new futex API
The Linux fast user-space mutex ("futex") subsystem debuted with the 2.6.0 kernel; it provides a mechanism that can be used to implement user-space locking. Since futexes avoid calling into the kernel whenever possible, they can indeed be fast, especially in the uncontended case. The API used to access futexes has never been seen as one of Linux's strongest points, though, so there has long been a desire to improve it. This patch series from Peter Zijlstra shows what the future of futexes may look like.

A futex is a 32-bit value stored in user-space memory that is, presumably, shared between at least two threads or processes. When used as a lock, a futex can be acquired with a single compare-and-swap instruction, without kernel involvement. The kernel comes into the picture, though, in the contended case, where a thread must block until a futex becomes available. Waiting for a futex and waking threads that are waiting are some of the features provided by the futex() system call.
futex() is a multiplexed system call, meaning that it performs a number of unrelated operations depending on its arguments. Over the years, as futex functionality has grown, this system call has become complex and unwieldy, to put it mildly. It is difficult to use, and difficult to extend further. This API has, among other things, slowed the addition of features that developers would like to see.
For some years, there has been talk of splitting the futex API into a set of more focused system calls; LWN covered one such discussion in 2020. Thus far, though, actual progress in this direction has been limited to the addition of futex_waitv() to the 5.16 release in early 2022; work on futexes seemingly stalled after that. With the current patch set, Zijlstra appears to be trying to restart this project and bring a better futex API to the mainline.
This patch series includes three new system calls:
int futex_wait(void *addr, unsigned long val, unsigned long mask,
               unsigned int flags, struct __kernel_timespec *timeout,
               clockid_t clockid);
int futex_wake(void *addr, unsigned long mask, int nr,
               unsigned int flags);
int futex_requeue(struct futex_waitv *waiters, unsigned int flags,
                  int nr_wake, int nr_requeue);
For the most part, so far, these system calls are just new interfaces to existing functionality. futex_wait() is the same as futex(FUTEX_WAIT_BITSET); it will cause the calling thread to wait on the futex stored at addr, assuming that futex contains val at the time of the call. The mask will be stored in the kernel's context for this thread. futex_wake() mirrors the FUTEX_WAKE_BITSET operation, using mask to identify which waiter(s) to wake. futex_requeue() is another interface to FUTEX_CMP_REQUEUE, which can wake some waiters and requeue others onto a different futex.
There is also some new functionality that is made available via the new API, generally using the flags argument. One of the new flags is FUTEX2_NUMA, which is intended to improve performance on NUMA systems. User space is in control of the placement of its futexes and can, thus, ensure that they live on a NUMA node that is close to the threads that are using it. But the kernel maintains its own data structures for futexes that are being waited on; poor placement of those structures can slow everything down. This problem was mentioned here in 2016, but still lacks a solution in mainline kernels.
As noted above, a futex is a 32-bit value. When the FUTEX2_NUMA flag is provided, though, the futex(es) referred to are, instead, interpreted as:
struct futex_numa_32 {
    u32 val;
    u32 node;
};
(Though this structure does not actually appear in this form in the kernel source).
The val field is the same old 32-bit quantity, while node is the number of the NUMA node on which the kernel should allocate its own data structures. The node value can also be set to all-ones (~0), in which case the current NUMA node will be used, and the node value will be updated accordingly by the kernel. The patch adding this structure admonishes: "If userspace corrupts the node value between WAIT and WAKE, the futex will not be found and no wakeup will happen".
The FUTEX2_NUMA flag thus gives user space some control over the placement of associated memory within the kernel. If this flag does not appear, these allocations will be spread across all of the nodes of the system (as is done with futexes in current kernels).
One other change in this series is reflected by another set of new flags. There has long been a desire to get away from the 32-bit requirement for futexes. Often, only a single bit is used; a smaller size would reduce waste and (more importantly) allow more futexes to be crammed into a single cache line. To accommodate this need, the new system calls require a flags value indicating the size of the futex(es) to be operated on; it can be one of FUTEX2_SIZE_U8, FUTEX2_SIZE_U16, or FUTEX2_SIZE_U32. There is also a FUTEX2_SIZE_U64 flag defined, but 64-bit futexes are not implemented in the current patch set.
When FUTEX2_NUMA is used, the node number has the same size as the futex value itself. Thus, a futex operation specifying FUTEX2_NUMA|FUTEX2_SIZE_U8 will provide an eight-bit node number, which could be modeled as:
struct futex_numa_8 {
    u8 val;
    u8 node;
};
The end result of this work is a set of incremental improvements that cleans up the futex API and provides some functionality that developers have been asking for. Review comments have mostly been focused on relatively minor details, suggesting that there may not be many obstacles to getting these changes merged into the mainline. Of course, the job does not end with these patches; there is still a lot of functionality provided by futex(), including priority inheritance, that is not available with the new API. But this work should make the common cases easier for developers to work with and might even, someday, lead to support for futexes in the GNU C library.
Brief items
Security
Security quotes of the week
I don't want to pretend that all privacy questions can be resolved with simple, bright-line rules. It's not clear who "owns" many classes of private data – does your mother own the fact that she gave birth to you, or do you? What if you disagree about such a disclosure – say, if you want to identify your mother as an abusive parent and she objects?— Cory Doctorow
Trolls, bots, and sybils distort online discourse and compromise the security of networked platforms. User identity is central to the vectors of attack and manipulation employed in these contexts. However it has long seemed that, try as it might, the security community has been unable to stem the rising tide of such problems. We posit the Ghost Trilemma, that there are three key properties of identity—sentience, location, and uniqueness—that cannot be simultaneously verified in a fully-decentralized setting.— Sulagna Mukherjee, Paul Schmitt, Srivatsan Ravi, and Barath Raghavan in a paper abstract
If the hack was by a major government, the odds are really low that it has resecured its systems—unless it burned the network to the ground and rebuilt it from scratch (which seems unlikely).— Bruce Schneier on an attack on the UK Electoral Commission
Kernel development
Kernel release status
The current development kernel is 6.5-rc6, released on August 13. Linus said:
So apart from the regularly scheduled hardware mitigation patches, everything looks fairly normal. And I guess the hw mitigation is to be considered normal too, apart from the inevitable fixup patches it then causes because the embargo keeps us from testing it widely and keeps it from all our public automation. Sigh.
Stable updates: 6.4.10, 6.1.45, 5.15.126, 5.10.190, 5.4.253, 4.19.291, and 4.14.322 were released on August 11, followed by 6.4.11, 6.1.46, 5.15.127, 5.10.191, 5.4.254, 4.19.292, and 4.14.323 on August 16.
Maintainers Summit call for topics
The 2023 Maintainers Summit will be held on November 16 in Richmond, VA, immediately after the Linux Plumbers Conference.
As in previous years, the Maintainers Summit is invite-only, where the primary focus will be process issues around Linux Kernel Development. It will be limited to 30 invitees and a handful of sponsored attendees.
The call for topics has just gone out, with the first invitations to be sent within a couple of weeks or so.
Nuta: Exploring the internals of Linux v0.01
For those who find the 6.x kernel intimidating, Seiya Nuta has written a look at the 0.01 kernel, which reflects a simpler time.
By the way, there's an interesting comment about the scheduler:
 * 'schedule()' is the scheduler function. This is GOOD CODE! There
 * probably won't be any reason to change this, as it should work well
 * in all circumstances (ie gives IO-bound processes good response etc).

Yes it's indeed good code. Unfortunately (or fortunately), this prophecy is false. Linux became one of most practical and performant kernel which has introduced many new scheduling improvements and algorithms over the years, like Completely Fair Scheduler (CFS).
Quotes of the week
Being maintainer feels like a punishment, and that cannot stand. We need help. People see the kinds of interpersonal interactions going on here and decide to pursue any other career path. I know so, some have told me themselves.

You know what's really sad? Most of my friends work for small companies, nonprofits, and local governments. They report the same problems with overwork, pervasive fear and anger, and struggle to understand and adapt to new ideas that I observe here. They see the direct connection between their org's lack of revenue and the under resourcedness.

They /don't/ understand why the hell the same happens to me and my workplace proximity associates, when we all work for companies that each clear hundreds of billions of dollars in revenue per year.— Darrick Wong
Together we were able to solve the "impossible" problem of Android kernel security. Now that that windmill is semi-successfully conquered despite many who thought it never could be, why can't we help other users of our product to do the same? If not, we run the risk of having Linux be blamed for bad security, when in reality, it's the policy of companies not taking advantage of what we are providing them, a nuance that Linux users will never really understand, nor should they have to.— Greg Kroah-Hartman
Distributions
Debian turns 30
On August 16, 1993, Ian Murdock announced a new distribution to the comp.os.linux.development Usenet newsgroup:
This is just to announce the imminent completion of a brand-new Linux release, which I'm calling the Debian Linux Release. This is a release that I have put together basically from scratch; in other words, I didn't simply make some changes to SLS and call it a new release. I was inspired to put together this release after running SLS and generally being dissatisfied with much of it, and after much altering of SLS I decided that it would be easier to start from scratch. The base system is now virtually complete (though I'm still looking around to make sure that I grabbed the most recent sources for everything), and I'd like to get some feedback before I add the "fancy" stuff.
After 30 years, Debian is still going strong.
Debian adds LoongArch support
The Debian project has added the LoongArch architecture to its ports collection.
After an initial manual bootstrap of roughly 200 packages, two buildds are now building packages for the newly added "loong64" port with the help of qemu-user. After enough packages have been built for the port to be self-hosting, we're planning to replace these two buildds with real hardware hosted at Loongson.
Devuan 5.0.0 released
Version 5.0 ("Daedalus") of the Debian-based Devuan distribution has been released. "This is the result of many months of painstaking work by the Team and detailed testing by the wider Devuan community." The announcement lists a couple of new features but mostly defers to the Debian 12 ("bookworm") release notes.
The Open Enterprise Linux Association
The Open Enterprise Linux Association has announced its existence. It is a collaboration between CIQ (Rocky Linux), Oracle, and SUSE to provide an RHEL-compatible distribution.
Starting later this year, OpenELA will provide sources necessary for downstreams compatible with RHEL to exist, with initial focus on RHEL versions EL8, EL9 and possibly EL7. The project is committed to ensuring the continued availability of OpenELA sources to the community indefinitely.

OpenELA’s core tenets, reflecting the spirit of the project, include full compliance with this existing standard, swift updates and secure fixes, transparency, community, and ensuring the resource remains free and redistributable for all.
Development
OpenSSH 9.4 released
OpenSSH 9.4 has been released. Changes this time include the ability to forward Unix-domain sockets, a tags mechanism for more flexible configuration, and more.
Miscellaneous
HashiCorp's license change
Readers have been pointing us to HashiCorp's announcement that it is moving to its own "Business Source License" for some of its (formerly) open-source products. Like other companies (example) that have taken this path, HashiCorp is removing the freedom to use its products commercially in ways that it sees as competitive. This is, in a real sense, an old and tiresome story.

The lessons to be drawn from this change are old as well. One is to beware of depending on any platform, free or proprietary, that is controlled by a single company. It is a rare company that will not try to take advantage of that control at some point.
The other is to beware of contributor license agreements. HashiCorp's agreement used to read that it existed "to ensure that our projects remain licensed under Free and Open Source licenses"; the current version doesn't say that anymore. But both versions give HashiCorp the right to play exactly this kind of game with any code contributed by outsiders. Developers who were contributing to a free-software project will now have their code used in a rather more proprietary setting. When a company is given the right to take somebody else's code proprietary, many of them will eventually make use of that right.
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Calls for Presentations
CFP Deadlines: August 17, 2023 to October 16, 2023
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
August 18 | October 5–6 | PyConZA | Durban, South Africa |
August 19 | September 2–3 | Open Source Conference Albania 2023 | Tirana, Albania |
August 20 | September 25–26 | GStreamer Conference 2023 | A Coruña, Spain |
August 27 | October 30–November 3 | Netdev 0x17 | Vancouver, Canada |
August 31 | September 22–24 | VideoLAN Dev Days 2023 | Dublin, Ireland |
September 1 | December 12–15 | PostgreSQL Conference Europe | Prague, Czechia |
September 1 | December 1–3 | GNOME.Asia 2023 | Kathmandu, Nepal |
September 10 | December 5–6 | Open Source Summit Japan | Tokyo, Japan |
September 15 | December 2–3 | EmacsConf 2023 | Online |
September 17 | November 4–5 | OpenFest 2023 | Sofia, Bulgaria |
September 21 | February 27–29 | 22nd USENIX Conference on File and Storage Technologies | Santa Clara, CA, US |
September 28 | November 28 | NLUUG Fall Conference | Utrecht, The Netherlands |
October 2 | October 6–8 | Qubes OS Summit | Berlin, Germany |
October 10 | October 28 | China Linux Kernel conference 2023 | Shenzhen, China |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: August 17, 2023 to October 16, 2023
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
September 2–3 | Open Source Conference Albania 2023 | Tirana, Albania |
September 6–7 | WasmCon | Bellevue, Washington, US |
September 7–8 | PyCon Estonia | Tallinn, Estonia |
September 8–9 | OLF Conference 2023 | Columbus, OH, US |
September 10–17 | DebConf 23 | Kochi, India |
September 12–15 | RustConf | Albuquerque, New Mexico, US |
September 13–14 | stackconf 2023 | Berlin, Germany |
September 13–14 | All Systems Go! 2023 | Berlin, Germany |
September 13 | eBPF Summit 2023 | online |
September 14–16 | Kieler Open Source und Linux Tage | Kiel, Germany |
September 17–18 | Tracing Summit | Bilbao, Spain |
September 19–21 | Open Source Summit, Europe | Bilbao, Spain |
September 20–21 | Linux Security Summit Europe | Bilbao, Spain |
September 21–23 | LibreOffice Conference | Bucharest, Romania |
September 22–24 | GNU Tools Cauldron | Cambridge, UK |
September 22–24 | Jesień Linuksowa 2023 | Gliwice, Poland |
September 22–24 | VideoLAN Dev Days 2023 | Dublin, Ireland |
September 25–26 | GStreamer Conference 2023 | A Coruña, Spain |
September 25–27 | Kernel Recipes 2023 | Paris, France |
September 26–29 | Alpine Linux Persistence and Storage Summit | Lizumerhuette, Austria |
September 27 | GNU Project 40th Anniversary celebration | Biel, Switzerland |
October 1 | Hackday celebrating forty years of GNU at the Free Software Foundation | Boston, US |
October 3–5 | PGConf NYC | New York, US |
October 5–6 | PyConZA | Durban, South Africa |
October 6–8 | LibreOffice Conf Asia x UbuCon Asia 2023 | Surakarta, Indonesia |
October 6–8 | Qubes OS Summit | Berlin, Germany |
October 7–8 | LibreOffice - Ubuntu Conference Asia 2023 | Surakarta, Indonesia |
October 15–17 | All Things Open 2023 | Raleigh, NC, US |
If your event does not appear here, please tell us about it.
Security updates
Alert summary August 10, 2023 to August 16, 2023
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
Debian | DLA-3529-1 | LTS | datatables.js | 2023-08-15 |
Debian | DLA-3523-1 | LTS | firefox-esr | 2023-08-09 |
Debian | DSA-5476-1 | stable | gst-plugins-ugly1.0 | 2023-08-12 |
Debian | DSA-5474-1 | stable | intel-microcode | 2023-08-11 |
Debian | DLA-3524-1 | LTS | kernel | 2023-08-10 |
Debian | DSA-5475-1 | stable | kernel | 2023-08-11 |
Debian | DLA-3526-1 | LTS | libreoffice | 2023-08-13 |
Debian | DLA-3525-1 | LTS | linux-5.10 | 2023-08-11 |
Debian | DLA-3426-3 | LTS | netatalk | 2023-08-13 |
Debian | DLA-3530-1 | LTS | openssl | 2023-08-16 |
Debian | DLA-3495-2 | LTS | php-dompdf | 2023-08-11 |
Debian | DLA-3528-1 | LTS | poppler | 2023-08-14 |
Debian | DSA-5477-1 | stable | samba | 2023-08-14 |
Debian | DLA-3527-1 | LTS | sox | 2023-08-13 |
Fedora | FEDORA-2023-99870af9f0 | F37 | OpenImageIO | 2023-08-11 |
Fedora | FEDORA-2023-ad5fee9a64 | F38 | OpenImageIO | 2023-08-11 |
Fedora | FEDORA-2023-95d73a5f50 | F38 | chromium | 2023-08-10 |
Fedora | FEDORA-2023-ea7128b5ce | F38 | chromium | 2023-08-12 |
Fedora | FEDORA-2023-d0ef677e6f | F37 | ghostscript | 2023-08-16 |
Fedora | FEDORA-2023-cba4a3a00f | F38 | ghostscript | 2023-08-12 |
Fedora | FEDORA-2023-ac752f8c37 | F37 | java-1.8.0-openjdk-portable | 2023-08-14 |
Fedora | FEDORA-2023-89bad07f9d | F38 | java-1.8.0-openjdk-portable | 2023-08-14 |
Fedora | FEDORA-2023-e643a71e0f | F37 | java-11-openjdk | 2023-08-16 |
Fedora | FEDORA-2023-30c8205a73 | F38 | java-11-openjdk | 2023-08-12 |
Fedora | FEDORA-2023-243a20ce18 | F37 | java-11-openjdk-portable | 2023-08-12 |
Fedora | FEDORA-2023-9a3ecf4fcf | F38 | java-11-openjdk-portable | 2023-08-12 |
Fedora | FEDORA-2023-d1d4839202 | F37 | java-17-openjdk-portable | 2023-08-12 |
Fedora | FEDORA-2023-b55ba9ed7a | F38 | java-17-openjdk-portable | 2023-08-12 |
Fedora | FEDORA-2023-ba53233477 | F37 | java-latest-openjdk | 2023-08-16 |
Fedora | FEDORA-2023-469d0d1a18 | F38 | java-latest-openjdk | 2023-08-16 |
Fedora | FEDORA-2023-020d609edb | F37 | java-latest-openjdk-portable | 2023-08-14 |
Fedora | FEDORA-2023-b7f6f0f77e | F38 | java-latest-openjdk-portable | 2023-08-14 |
Fedora | FEDORA-2023-638681260a | F37 | kernel | 2023-08-10 |
Fedora | FEDORA-2023-d9509be489 | F37 | kernel | 2023-08-14 |
Fedora | FEDORA-2023-ddfd3073b3 | F38 | kernel | 2023-08-10 |
Fedora | FEDORA-2023-ee241dcf80 | F38 | kernel | 2023-08-14 |
Fedora | FEDORA-2023-ca086f015c | F38 | krb5 | 2023-08-10 |
Fedora | FEDORA-2023-d15f5a186a | F38 | linux-firmware | 2023-08-11 |
Fedora | FEDORA-2023-755b8bb6db | F38 | linux-firmware | 2023-08-12 |
Fedora | FEDORA-2023-e1482687dd | F38 | microcode_ctl | 2023-08-16 |
Fedora | FEDORA-2023-b88b72e3e1 | F38 | mingw-python-certifi | 2023-08-12 |
Fedora | FEDORA-2023-9fa8f29bb7 | F37 | ntpsec | 2023-08-12 |
Fedora | FEDORA-2023-26cbce3854 | F38 | ntpsec | 2023-08-12 |
Fedora | FEDORA-2023-c68f2227e6 | F37 | php | 2023-08-11 |
Fedora | FEDORA-2023-984c26961f | F38 | php | 2023-08-12 |
Fedora | FEDORA-2023-6f2c7aa713 | F38 | rust | 2023-08-10 |
Fedora | FEDORA-2023-fff31650c8 | F38 | xen | 2023-08-16 |
Oracle | ELSA-2023-4059 | OL8 | .NET 6.0 | 2023-08-10 |
Oracle | ELSA-2023-4060 | OL9 | .NET 6.0 | 2023-08-10 |
Oracle | ELSA-2023-4058 | OL8 | .NET 7.0 | 2023-08-10 |
Oracle | ELSA-2023-4057 | OL9 | .NET 7.0 | 2023-08-10 |
Oracle | ELSA-2023-4327 | OL9 | 15 | 2023-08-10 |
Oracle | ELSA-2023-4330 | OL9 | 18 | 2023-08-10 |
Oracle | ELSA-2023-12579 | OL8 | aardvark-dns | 2023-08-10 |
Oracle | ELSA-2023-4152 | OL7 | bind | 2023-08-10 |
Oracle | ELSA-2023-4102 | OL8 | bind | 2023-08-10 |
Oracle | ELSA-2023-4099 | OL9 | bind | 2023-08-10 |
Oracle | ELSA-2023-4100 | OL8 | bind9.16 | 2023-08-10 |
Oracle | ELSA-2023-12578 | OL8 | buildah | 2023-08-10 |
Oracle | ELSA-2023-4411 | OL9 | cjose | 2023-08-10 |
Oracle | ELSA-2023-4523 | OL8 | curl | 2023-08-10 |
Oracle | ELSA-2023-4354 | OL9 | curl | 2023-08-10 |
Oracle | ELSA-2023-4498 | OL8 | dbus | 2023-08-10 |
Oracle | ELSA-2023-4569 | OL9 | dbus | 2023-08-10 |
Oracle | ELSA-2023-3481 | OL7 | emacs | 2023-08-10 |
Oracle | ELSA-2023-4079 | OL7 | firefox | 2023-08-10 |
Oracle | ELSA-2023-4461 | OL7 | firefox | 2023-08-10 |
Oracle | ELSA-2023-4076 | OL8 | firefox | 2023-08-10 |
Oracle | ELSA-2023-4468 | OL8 | firefox | 2023-08-10 |
Oracle | ELSA-2023-4071 | OL9 | firefox | 2023-08-10 |
Oracle | ELSA-2023-4462 | OL9 | firefox | 2023-08-10 |
Oracle | ELSA-2023-3923 | OL9 | go-toolset and golang | 2023-08-10 |
Oracle | ELSA-2023-3922 | OL8 | go-toolset:ol8 | 2023-08-10 |
Oracle | ELSA-2023-4030 | OL9 | grafana | 2023-08-10 |
Oracle | ELSA-2023-4326 | OL7 | iperf3 | 2023-08-10 |
Oracle | ELSA-2023-4166 | OL7 | java-1.8.0-openjdk | 2023-08-10 |
Oracle | ELSA-2023-4176 | OL8 | java-1.8.0-openjdk | 2023-08-10 |
Oracle | ELSA-2023-4178 | OL9 | java-1.8.0-openjdk | 2023-08-10 |
Oracle | ELSA-2023-4233 | OL7 | java-11-openjdk | 2023-08-10 |
Oracle | ELSA-2023-4175 | OL8 | java-11-openjdk | 2023-08-10 |
Oracle | ELSA-2023-4158 | OL9 | java-11-openjdk | 2023-08-10 |
Oracle | ELSA-2023-4159 | OL8 | java-17-openjdk | 2023-08-10 |
Oracle | ELSA-2023-4177 | OL9 | java-17-openjdk | 2023-08-10 |
Oracle | ELSA-2023-4151 | OL7 | kernel | 2023-08-10 |
Oracle | ELSA-2023-12588 | OL8 | kernel | 2023-08-10 |
Oracle | ELSA-2023-3847 | OL8 | kernel | 2023-08-10 |
Oracle | ELSA-2023-4517 | OL8 | kernel | 2023-08-10 |
Oracle | ELSA-2023-3723 | OL9 | kernel | 2023-08-10 |
Oracle | ELSA-2023-4377 | OL9 | kernel | 2023-08-10 |
Oracle | ELSA-2023-4524 | OL8 | libcap | 2023-08-10 |
Oracle | ELSA-2023-4347 | OL9 | libeconf | 2023-08-10 |
Oracle | ELSA-2023-3839 | OL8 | libssh | 2023-08-10 |
Oracle | ELSA-2023-3827 | OL8 | libtiff | 2023-08-10 |
Oracle | ELSA-2023-4529 | OL8 | libxml2 | 2023-08-10 |
Oracle | ELSA-2023-4349 | OL9 | libxml2 | 2023-08-10 |
Oracle | ELSA-2023-12654 | OL7 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12657 | OL7 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12689 | OL7 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12690 | OL7 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12712 | OL7 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12713 | OL7 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12655 | OL8 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12691 | OL8 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12714 | OL8 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12656 | OL9 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12715 | OL9 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-12692 | OL9 | linux-firmware | 2023-08-10 |
Oracle | ELSA-2023-4418 | OL8 | mod_auth_openidc:2.3 | 2023-08-10 |
Oracle | ELSA-2023-4331 | OL9 | nodejs | 2023-08-10 |
Oracle | ELSA-2023-4034 | OL8 | nodejs:16 | 2023-08-10 |
Oracle | ELSA-2023-4537 | OL8 | nodejs:16 | 2023-08-10 |
Oracle | ELSA-2023-4035 | OL8 | nodejs:18 | 2023-08-10 |
Oracle | ELSA-2023-4536 | OL8 | nodejs:18 | 2023-08-10 |
Oracle | ELSA-2023-3944 | OL7 | open-vm-tools | 2023-08-10 |
Oracle | ELSA-2023-3949 | OL8 | open-vm-tools | 2023-08-10 |
Oracle | ELSA-2023-3948 | OL9 | open-vm-tools | 2023-08-10 |
Oracle | ELSA-2023-4382 | OL7 | openssh | 2023-08-10 |
Oracle | ELSA-2023-4419 | OL8 | openssh | 2023-08-10 |
Oracle | ELSA-2023-4412 | OL9 | openssh | 2023-08-10 |
Oracle | ELSA-2023-4535 | OL8 | postgresql:12 | 2023-08-10 |
Oracle | ELSA-2023-4527 | OL8 | postgresql:13 | 2023-08-10 |
Oracle | ELSA-2023-12710 | OL8 | python-flask | 2023-08-10 |
Oracle | ELSA-2023-4350 | OL9 | python-requests | 2023-08-10 |
Oracle | ELSA-2023-12709 | OL8 | python-werkzeug | 2023-08-10 |
Oracle | ELSA-2023-3780 | OL8 | python27:2.7 | 2023-08-10 |
Oracle | ELSA-2023-3556 | OL7 | python3 | 2023-08-10 |
Oracle | ELSA-2023-3781 | OL8 | python38:3.8 and python38-devel:3.8 | 2023-08-10 |
Oracle | ELSA-2023-3811 | OL8 | python39:3.9 and python39-devel:3.9 | 2023-08-10 |
Oracle | ELSA-2023-3821 | OL8 | ruby:2.7 | 2023-08-10 |
Oracle | ELSA-2023-4328 | OL8 | samba | 2023-08-10 |
Oracle | ELSA-2023-4325 | OL9 | samba | 2023-08-10 |
Oracle | ELSA-2023-3840 | OL8 | sqlite | 2023-08-10 |
Oracle | ELSA-2023-3837 | OL8 | systemd | 2023-08-10 |
Oracle | ELSA-2023-4062 | OL7 | thunderbird | 2023-08-10 |
Oracle | ELSA-2023-4495 | OL7 | thunderbird | 2023-08-10 |
Oracle | ELSA-2023-4063 | OL8 | thunderbird | 2023-08-10 |
Oracle | ELSA-2023-4497 | OL8 | thunderbird | 2023-08-10 |
Oracle | ELSA-2023-4064 | OL9 | thunderbird | 2023-08-10 |
Oracle | ELSA-2023-4499 | OL9 | thunderbird | 2023-08-10 |
Oracle | ELSA-2023-3822 | OL8 | virt:ol and virt-devel:rhel | 2023-08-10 |
Oracle | ELSA-2023-4202 | OL8 | webkit2gtk3 | 2023-08-10 |
Oracle | ELSA-2023-4201 | OL9 | webkit2gtk3 | 2023-08-10 |
Red Hat | RHSA-2023:4645-01 | EL8 | .NET 6.0 | 2023-08-14 |
Red Hat | RHSA-2023:4640-01 | EL8.6 | .NET 6.0 | 2023-08-14 |
Red Hat | RHSA-2023:4644-01 | EL9 | .NET 6.0 | 2023-08-14 |
Red Hat | RHSA-2023:4639-01 | EL9.0 | .NET 6.0 | 2023-08-14 |
Red Hat | RHSA-2023:4643-01 | EL8 | .NET 7.0 | 2023-08-14 |
Red Hat | RHSA-2023:4642-01 | EL9 | .NET 7.0 | 2023-08-14 |
Red Hat | RHSA-2023:4655-01 | EL8 | redhat-ds:11 | 2023-08-15 |
Red Hat | RHSA-2023:4641-01 | EL7 | rh-dotnet60-dotnet | 2023-08-14 |
Red Hat | RHSA-2023:4634-01 | EL9 | rust | 2023-08-14 |
Red Hat | RHSA-2023:4651-01 | EL7 | rust-toolset-1.66-rust | 2023-08-15 |
Red Hat | RHSA-2023:4635-01 | EL8 | rust-toolset:rhel8 | 2023-08-14 |
SUSE | SUSE-SU-2023:3264-1 | MP4.3 SLE15 SES7.1 | container-suseconnect | 2023-08-10 |
SUSE | SUSE-SU-2023:3307-1 | SLE12 | docker | 2023-08-14 |
SUSE | SUSE-SU-2023:3263-1 | MP4.3 SLE15 SES7.1 oS15.4 oS15.5 | go1.19 | 2023-08-10 |
SUSE | SUSE-SU-2023:3267-1 | MP4.2 SLE15 SES7.1 | gstreamer-plugins-bad | 2023-08-10 |
SUSE | SUSE-SU-2023:3265-1 | MP4.2 SLE15 SLE-m5.2 SES7.1 oS15.4 | gstreamer-plugins-base | 2023-08-10 |
SUSE | SUSE-SU-2023:3266-1 | MP4.2 SLE15 SES7.1 oS15.4 | gstreamer-plugins-good | 2023-08-10 |
SUSE | SUSE-SU-2023:3287-1 | MP4.2 MP4.3 SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 SES7.1 oS15.4 oS15.5 | java-11-openjdk | 2023-08-11 |
SUSE | SUSE-SU-2023:3305-1 | SLE15 SLE-m5.5 oS15.4 oS15.5 | java-1_8_0-openj9 | 2023-08-14 |
SUSE | SUSE-SU-2023:3332-1 | SLE15 SLE-m5.5 oS15.4 oS15.5 | java-1_8_0-openj9 | 2023-08-16 |
SUSE | SUSE-SU-2023:3313-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 osM5.3 osM5.4 | kernel | 2023-08-14 |
SUSE | SUSE-SU-2023:3333-1 | SLE11 | kernel | 2023-08-16 |
SUSE | SUSE-SU-2023:3309-1 | SLE12 | kernel | 2023-08-14 |
SUSE | SUSE-SU-2023:3329-1 | SLE12 | kernel | 2023-08-16 |
SUSE | SUSE-SU-2023:3324-1 | SLE12 | kernel | 2023-08-16 |
SUSE | SUSE-SU-2023:3318-1 | SLE15 SLE-m5.3 SLE-m5.4 oS15.4 osM5.3 osM5.4 | kernel | 2023-08-15 |
SUSE | SUSE-SU-2023:3302-1 | SLE15 SLE-m5.5 oS15.5 | kernel | 2023-08-14 |
SUSE | SUSE-SU-2023:3311-1 | SLE15 SLE-m5.5 oS15.5 | kernel | 2023-08-14 |
SUSE | SUSE-SU-2023:3262-1 | SLE12 | kernel-firmware | 2023-08-10 |
SUSE | SUSE-SU-2023:3298-1 | SLE15 oS15.5 | kernel-firmware | 2023-08-11 |
SUSE | SUSE-SU-2023:3325-1 | SLE15 oS15.5 | krb5 | 2023-08-16 |
SUSE | SUSE-SU-2023:3260-1 | MP4.3 SLE15 oS15.4 | kubernetes1.24 | 2023-08-10 |
SUSE | SUSE-SU-2023:3301-1 | MP4.2 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 oS15.4 oS15.5 osM5.3 osM5.4 | libyajl | 2023-08-14 |
SUSE | SUSE-SU-2023:3306-1 | SLE12 | nodejs14 | 2023-08-14 |
SUSE | SUSE-SU-2023:3308-1 | SLE12 | openssl-1_0_0 | 2023-08-14 |
SUSE | SUSE-SU-2023:3291-1 | MP4.2 SLE-m5.1 SLE-m5.2 | openssl-1_1 | 2023-08-11 |
SUSE | openSUSE-SU-2023:0219-1 | osB15 | opensuse-welcome | 2023-08-14 |
SUSE | SUSE-SU-2023:3327-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 oS15.5 osM5.3 osM5.4 | pcre2 | 2023-08-16 |
SUSE | SUSE-SU-2023:3328-1 | SLE12 | pcre2 | 2023-08-16 |
SUSE | openSUSE-SU-2023:0223-1 | osB15 | perl-HTTP-Tiny | 2023-08-15 |
SUSE | openSUSE-SU-2023:0222-1 | osB15 | perl-HTTP-Tiny | 2023-08-15 |
SUSE | SUSE-SU-2023:3292-1 | MP4.2 MP4.3 SLE15 oS15.4 | poppler | 2023-08-11 |
SUSE | SUSE-SU-2023:3303-1 | SLE12 | poppler | 2023-08-14 |
SUSE | SUSE-SU-2023:3272-1 | MP4.3 SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 oS15.5 | python-scipy | 2023-08-11 |
SUSE | SUSE-SU-2023:3290-1 | MP4.3 SLE15 oS15.4 | qatengine | 2023-08-11 |
SUSE | SUSE-SU-2023:3289-1 | SLE12 | ucode-intel | 2023-08-11 |
SUSE | SUSE-SU-2023:3268-1 | SLE12 | util-linux | 2023-08-10 |
SUSE | SUSE-SU-2023:2640-1 | MP4.2 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 SLE-m5.3 SLE-m5.4 SES7 SES7.1 oS15.4 osM5.3 osM5.4 | vim | 2023-08-10 |
SUSE | SUSE-SU-2023:3300-1 | SLE15 | webkit2gtk3 | 2023-08-14 |
Ubuntu | USN-6278-2 | 22.04 | dotnet6, dotnet7 | 2023-08-10 |
Ubuntu | USN-6287-1 | 16.04 18.04 20.04 | golang-yaml.v2 | 2023-08-14 |
Ubuntu | USN-6243-2 | 18.04 | graphite-web | 2023-08-09 |
Ubuntu | USN-6291-1 | 16.04 | gstreamer1.0 | 2023-08-16 |
Ubuntu | USN-6286-1 | 16.04 18.04 20.04 22.04 23.04 | intel-microcode | 2023-08-14 |
Ubuntu | USN-6284-1 | 18.04 20.04 | linux, linux-aws, linux-aws-5.4, linux-gcp, linux-gcp-5.4, linux-gkeop, linux-iot, linux-kvm, linux-oracle, linux-oracle-5.4, linux-raspi, linux-raspi-5.4 | 2023-08-11 |
Ubuntu | USN-6283-1 | 23.04 | linux, linux-aws, linux-azure, linux-gcp, linux-ibm, linux-kvm, linux-lowlatency, linux-oracle, linux-raspi | 2023-08-11 |
Ubuntu | USN-6285-1 | 22.04 | linux-oem-6.1 | 2023-08-11 |
Ubuntu | USN-6288-1 | 20.04 22.04 23.04 | mysql-8.0 | 2023-08-15 |
Ubuntu | USN-6277-2 | 22.04 | php-dompdf | 2023-08-10 |
Ubuntu | USN-4897-2 | 14.04 | pygments | 2023-08-14 |
Ubuntu | USN-6280-1 | 16.04 18.04 20.04 22.04 | pypdf2 | 2023-08-14 |
Ubuntu | USN-6290-1 | 14.04 16.04 18.04 20.04 22.04 23.04 | tiff | 2023-08-16 |
Ubuntu | USN-6281-1 | 16.04 18.04 20.04 | velocity | 2023-08-10 |
Ubuntu | USN-6282-1 | 16.04 18.04 20.04 | velocity-tools | 2023-08-10 |
Ubuntu | USN-6289-1 | 22.04 23.04 | webkit2gtk | 2023-08-15 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet