LWN.net Weekly Edition for August 25, 2022
Welcome to the LWN.net Weekly Edition for August 25, 2022
This edition contains the following feature content:
- From late-bound arguments to deferred computation, part 2: a proposal for deferred computation support in Python raises a lot of difficult questions and, seemingly, fails to address the initial use case.
- The growing image-processor unpleasantness: Linux support for camera devices is getting worse and solutions will be slow in coming.
- The ABI status of ELF hash tables: a GNU C Library build change breaks some users; is this an incompatible ABI change or not?
- LRU-list manipulation with DAMON: a 6.0 feature giving administrators more control over low-level memory management.
- The container orchestrator landscape: a survey of free container-orchestration systems.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
From late-bound arguments to deferred computation, part 2
Discussion on PEP 671 ("Syntax for late-bound function argument defaults") has been going on—in fits and starts—since it was introduced last October. The idea is to provide a way to specify the default for a function argument that is evaluated in the scope of the function call, which will allow more concise, and visible, defaults. But there has been a persistent complaint that what the language needs is a more-general deferred computation feature; late-bound defaults would simply fall out as one specific user of the feature. The arrival of a proposal for deferred computation did not really accomplish that goal, however.
Last week, we looked at the discussion of late-bound argument defaults after it got rekindled in June. There are some developers who are interested in having a way to specify argument defaults that are evaluated at run time in the context of the call. For example:
def fn(a, max=len(a)): ...

That function definition looks sensible at first glance, but len(a) is evaluated at function-definition time, so it does not refer to the argument a at all. PEP 671, authored by Chris Angelico, would add a new way to specify defaults that are evaluated using the context at the beginning of the function call (using "=>"), so the following would do what the example above looks like it is doing:
def fn(a, max=>len(a)): ...
But throughout all of the discussions, deferred computation was repeatedly raised as a better way forward; it was mostly championed by Steven D'Aprano and David Mertz. The details of what that feature might look like were not ever described until fairly late in the game, which was annoying to several of the participants in the threads. Things got more heated than is usual in discussions on the python-ideas mailing list, but that heat finally dissipated after a deferred-computation proposal was made.
A proto-PEP
As Brendan Barnwell and Paul Moore noted, a counter-proposal, such as one for deferred computation, is not required in order to oppose PEP 671. But it is difficult—frustrating—to try to defend a proposal when the "opposition" is so amorphous; that frustration can clearly be seen in the discussions. On June 21, Mertz posted the first version of his proto-PEP for "Generalized deferred computation". It would use the soft keyword later to identify expressions that should only be evaluated when a value is actually required for a non-later expression:
a = 5
b = later a + 2      # b is a deferred object
c = later a * b + b  # c is a deferred object
print(c)             # c and b are evaluated

The result is 42, as expected. The PEP abstract notes that a deferred object can be thought of as a "thunk" or likened to an existing Python construct: "To a first approximation, a deferred object is a zero-argument lambda function that calls itself when referenced."
The proto-PEP, which has never progressed to the point of getting a non-placeholder number, mentions the delayed() interface from the Dask Python parallel-computation library as something of a model for the feature. However, a Dask delayed object is only evaluated when its compute() method is called, while Mertz's version would automatically evaluate deferred objects when their value is used in a non-deferred context.
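For comparison, Dask's model is explicit rather than automatic: nothing runs until compute() is called. A minimal sketch (the add() function and the values are just illustrations; the dask package is required):

from dask import delayed

def add(x, y):
    return x + y

a = delayed(add)(1, 2)    # builds a task graph; nothing is computed yet
b = delayed(add)(a, 10)   # deferred objects can be composed
print(b.compute())        # only now is the graph evaluated; prints 13

Mertz's proposal would instead evaluate b as soon as it appeared in a non-deferred context, such as the print() call.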
The proposal also talked about having a way to determine whether an object was deferred without evaluating it. He suggested that type() would not evaluate a deferred object or, perhaps, an isdeferred() function would be added for that purpose. That puzzled Barnwell, who noted that it violates a basic Python principle: arguments to functions are evaluated "before the function gets any say in the matter". Making type() or isdeferred() do something different would be strange.
Mertz agreed that was a problematic piece, but he would like to find a way to query an object without evaluating it. Carl Meyer suggested that PEP 690 ("Lazy Imports"), which he co-authored, might provide some help on that; in general, even though Mertz's proposal was born out of the PEP 671 discussion, Meyer thought it was more closely aligned with the lazy-imports feature. But Angelico pointed out that there is a fundamental question that the deferred-computation proposal did not answer:
What are the scoping rules for deferred objects? Do they access names where they are evaluated, or where they are defined? Consider:

def f(x):
    spam = 1
    print(x)

def g():
    spam = 2
    f(later spam)

g()

Does this print 1 or 2? This is a fundamental and crucial point, and must be settled early. You cannot defer this. :)
Angelico is demonstrating the two different scopes here. If later spam is evaluated in the scope where it was defined (inside g()), spam will be 2; if it is instead evaluated in the scope where its value is used (inside f()), it will be 1.
Meyer agreed that specifying the scoping was crucial and thought it obvious that deferred objects would use the names from the scope where they were defined. Mertz initially disagreed, but eventually came around to that view as well. What that means, though, is that the feature cannot support late-bound arguments, as Angelico had been saying for some time. The example given in the proto-PEP is:
def func(items=[], n=later len(items)):
    n                        # Evaluate the Deferred and re-bind the name n
    items.append("Hello")
    print(n)

func([1, 2, 3])              # prints: 3

Angelico pointed out that the lookup of items in that example would be done in the surrounding scope:
That's why PEP 671 has the distinction that the late-bound default is scoped within the function body, not its definition. Otherwise, this doesn't work.
No one has replied to Angelico, so it would seem that using the generalized mechanism for late-bound arguments is a non-starter. That does not necessarily mean that the deferred computation feature should be abandoned, but it would seem that whatever barrier it imposed for PEP 671 has disappeared.
More problems
Meanwhile, there are other problem spots for deferred computation, including further difficulties with the late-bound-arguments use case. Martin Di Paola noted that Dask, PySpark, the Django object-relational mapping (ORM) layer for databases, and other similar tools do much more under the covers than simply evaluate the deferred object. In each case, the framework can manipulate the abstract syntax tree (AST) to optimize the evaluation in some fashion. Moore was concerned that only having the automatic-evaluation mechanism would preclude these other use cases (and those that may arise) from using the proposed mechanism:
My concern is that we're unlikely to be able to justify *two* forms of "deferred expression" construct in Python, and your proposal, by requiring transparent evaluation on reference, would preclude any processing (such as optimisation, name injection, or other forms of AST manipulation) of the expression before evaluation.I suspect that you consider evaluation-on-reference as an important feature of your proposal, but could you consider explicit evaluation as an alternative? Or at the very least address in the PEP the fact that this would close the door on future explicit evaluation models?
Another problem arose from a familiar source; Angelico pointed out another late-bound-argument use case that fails. Stephen J. Turnbull reframed the problem and suggested a possible way forward, though things got ever more tangled.
def foo(cookiejar=defer []):

foo() produces a late bound empty list that will be used again the next time foo() is invoked. Now, we could modify the defer syntax in function parameter default values to produce a deferred deferred object [...]
He guessed that might lead Angelico to reply with something akin to the "puke emoji", but instead Angelico poked holes in that idea as well:
def foo(cookiejar=defer []):
    cookiejar.append(1)   # new list here
    cookiejar.append(2)   # another new list???

So the only way around it would be to make the defer keyword somehow magical when used in a function signature, which kinda defeats the whole point about being able to reuse another mechanic to achieve this.
It was around that point that Mertz seemed to throw in the towel on the idea, at least in terms of supporting late-bound defaults using it. Overall, it would seem that a lot more thought and work will need to go into a deferred-computation proposal before it can really even get off the ground. It is a popular feature from other languages, especially functional languages, but there may not be a real fit for it in the Python language. As mentioned, the technique is already used in a variety of ways in the ecosystem—without any change to the language syntax.
In the two months since that flurry of discussion, things have been quiet. The barriers to pushing PEP 671 forward would seem to have lessened some, at minimum, though that may still not be enough to get it across the finish line. As noted, there are no huge wins for adding this syntax; there are already only slightly more verbose ways to manually create late-bound defaults.
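The usual manual idiom relies on a sentinel default that is checked inside the function body; a small sketch of the pattern (the function and sentinel names here are purely illustrative):

_MISSING = object()   # a unique sentinel; None also works if None is never a valid value

def fn(a, max=_MISSING):
    # Emulate a late-bound default: compute it at call time, in the
    # function's own scope, only if the caller did not supply a value.
    if max is _MISSING:
        max = len(a)
    return max

print(fn([1, 2, 3]))      # 3
print(fn([1, 2, 3], 10))  # 10

PEP 671 would let the second line of the signature do this work directly, but, as the example shows, the cost of doing without it is only a couple of extra lines.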
But back in October, Guido van Rossum was pleased to see work happening on fixing that particular language wart; he is no longer the final arbiter, of course, but he still has plenty of influence within the core Python development community. In order to proceed, if Angelico is still wanting to do so, he will need a core developer to sponsor the PEP, since, surprisingly, he is not one. Even if the PEP were to fail, it might be worth pushing it a bit further if a sponsor can be found; that way, the next time the idea gets raised, much of the discussion can be short-circuited by referring to the PEP—in theory anyway.
The growing image-processor unpleasantness
There was a time when care had to be taken when buying hardware if the goal was to run Linux on it. The situation has improved considerably in recent decades, and unsupported hardware is more the exception than the rule. That has, for many years, been especially true of Intel hardware; that company has made a point of ensuring that its offerings work with Linux. So it is a bit surprising that the IPU6 image processor shipped with Alder Lake CPUs lacks support in Linux, and is unlikely to get it anytime soon. The problem highlighted here goes beyond just Intel, though.

The IPU6, like most image processors, exists to accept a data stream from a camera sensor and turn it into a useful video stream. These processors can take on a lot of tasks, including rotation, cropping, zooming, color-space conversion, white-balance correction, noise removal, focus management, and more. They are complex devices; the kernel community has responded by creating some equally complex APIs, including Video4Linux2 (V4L2) and media controller, to allow user space to manage them. As long as a device comes with a suitable driver, the kernel can make a camera device available to user space which, with care, can work with it without needing to know the details of the hardware.
As Paul Menzel recently pointed out on the linux-kernel mailing list, there is no such driver for the IPU6, so a mainline Linux kernel cannot drive it. As a result, the kernel lacks support for MIPI cameras on some current laptops, including some versions of the Thinkpad X1 Carbon and Dell XPS 13, which are relatively popular with Linux users (cameras using other interfaces, such as USB UVC, are generally supported). To get around this problem, Dell ships a closed-source, user-space driver in the Ubuntu build it offers on the XPS 13. Lenovo, instead, is not selling the affected systems with Linux preloaded at all at this point.
Laurent Pinchart provided more details on this situation. IPU6 support on Ubuntu is based on a kernel driver that provides a V4L2 interface, but which interfaces with a proprietary user-space driver to actually get the work done. As Ubuntu's Dell page notes, this solution is not without its costs: "CPU performance is impacted by a daemon doing buffer copying for v4l2 loopback device" and the camera can only provide 720p resolution. Pinchart went on to say that the IPU6 will not be the only problematic device out there:
Given the direction the industry is taking, this situation will become increasingly common in the future. With the notable exception of Raspberry Pi who is leading the way in open-source camera support, no SoC vendor is willing today to open their imaging algorithms.
Improving this situation will require work on a couple of fronts. On the user-space side, Pinchart pointed out that the libcamera project was created for the explicit purpose of supporting complex devices like the IPU6; this project was covered here in 2019. Among other things, libcamera was designed to allow the plugging-in of proprietary image-processing code while maximizing free-software choice. It currently supports the earlier IPU3 processor either with or without the proprietary plugin, though not all functionality is available without it.
Providing a similar level of support for the IPU6 should be possible, but it will take a fair amount of work. It also is going to need a proper kernel driver for the IPU6, which could be a problem. Evidently, the complexity of this device is such that the V4L2 API cannot support it, so a new API will be required. A candidate API exists in a preliminary form; it is called CAM (or KCAM) and, according to Sergey Senozhatsky, the plan is to get that merged into the kernel, then to add an IPU6 driver that implements this API. Pinchart responded that this merging will not happen quickly because the API, as proposed, is not seen favorably in the kernel community.
The V4L2 API has required extensive development and evolution over many years to reach its current point; it will not be replaced overnight. That is doubly true if the proposed alternative has been developed in isolation and never been through any sort of public discussion, which is the case here. Senozhatsky posted a pointer to a repository containing a CAM implementation, but that code is only enlightening in the vaguest terms. There is no documentation and no drivers actually implementing this API. It is highly abstracted; as Sakari Ailus put it: "I wouldn't have guessed this is an API for cameras if I hadn't been told so".
Creating a new camera API will be, as Ailus described it, "a major and risky endeavour"; it will require a long conversation, and it's not clear that CAM has moved the clock forward by much. According to Senozhatsky, the CAM code will not be submitted upstream for "several months". He later suggested that this plan could be accelerated, but the fact remains that the community has not even begun the process of designing an API that is suitable for the next generation of image processors.
The bottom line is that purchasers of Linux systems are going to have to be more careful for some time; many otherwise nice systems will not work fully without a proprietary camera driver and, as a result, will lack support in many distributions. Making this hardware work at all will involve losing CPU performance to the current workaround. Working all of this out is likely to take years and, even then, it's not clear that there will be viable free-software solutions available. It is a significant step backward for Intel, for Linux, and for the industry as a whole.
Postscript: we were asked to add the following statement by the KCAM developers, who feel that the portrayal of the project in this article was not entirely fair:
Google has been working closely with the community members and hardware vendors since 2018. This includes organising two big workshops (in 2018 and 2020). The development has been done in public and has been presented in public forums to welcome as many interested parties as possible.
We are looking forward to completing the development of the initial proposal and posting it on the mailing list, to get the opinion of the community. ChromeOS believes in “Upstream first” for our kernel work and will continue to work in that direction.
The ABI status of ELF hash tables
It is fair to say that some projects are rather more concerned about preserving ABI compatibility than others; the GNU C Library (glibc) project stands out even among those that put a lot of effort into preserving interface stability. So it may be a bit surprising that a recent glibc change is being blamed for breaking a number of applications, most of which are proprietary games. There is, it seems, a class of glibc changes that can break applications, but which are not deemed to be ABI changes.

When the dynamic linker starts a program, it must resolve all of the symbol references into shared libraries (including glibc). That can involve looking up thousands of symbols in long lists. Since this process must complete before an application can actually start running, it needs to happen quickly. Nobody likes a long delay between starting nethack and facing off against that first kobold, after all. So it is not surprising that some effort has gone into optimizing symbol lookup.
When the ELF file for a shared object is created by the linker, one of the sections stored therein contains a hash table for the symbols in that file. This hash table can be used to speed the lookup process and get the application underway. For many years, the System V standard for the format of this table has been DT_HASH; that format is supported by the toolchains on Linux. In 2006, though, the DT_GNU_HASH format was added as well; it includes a number of improvements intended to get nethack players into their dungeons even more quickly, including a better hash algorithm and a Bloom filter to short-circuit the search for missing symbols. This format is not well documented, but this 2017 blog post gives an introduction.
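As a rough illustration of the algorithmic difference only (the bucket, chain, and Bloom-filter layouts of the two section formats are omitted), here are Python transcriptions of the two hash functions:

def sysv_elf_hash(name: bytes) -> int:
    # The hash function specified by the System V ABI for DT_HASH tables.
    h = 0
    for c in name:
        h = (h << 4) + c
        g = h & 0xf0000000
        if g:
            h ^= g >> 24
        h &= ~g
    return h & 0xffffffff

def gnu_hash(name: bytes) -> int:
    # The simpler, faster function used by DT_GNU_HASH sections.
    h = 5381
    for c in name:
        h = (h * 33 + c) & 0xffffffff
    return h

print(hex(sysv_elf_hash(b"printf")), hex(gnu_hash(b"printf")))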
Since the hash table lives in its own ELF section, there is nothing preventing an ELF file from having more than one of them. Linkers on Linux systems can be told to create one format or the other — or to create both, each in its own section. Until recently, glibc has been built (by default) with a linker option explicitly requesting that both formats be created. That changed, though, with the glibc 2.36 release at the beginning of August; it contained a simple patch from Florian Weimer causing only the DT_GNU_HASH format to be generated.
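Curious users can check which tables a given shared object actually carries; a small sketch that shells out to readelf from binutils (the libc path shown is just an example and will vary by distribution):

import subprocess

def hash_styles(path):
    # Look for HASH and GNU_HASH entries in the ELF dynamic section.
    out = subprocess.run(["readelf", "-d", path],
                         capture_output=True, text=True, check=True).stdout
    return {style for style in ("HASH", "GNU_HASH") if f"({style})" in out}

# Example path; adjust for your system.
print(hash_styles("/usr/lib/x86_64-linux-gnu/libc.so.6"))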
At this point, glibc 2.36 has been installed onto a number of systems running the faster-moving Linux distributions, and few people have noticed the change; the DT_HASH format has not been used for anything on those systems in many years, and the only consequence of its removal is regaining a small amount of disk space. Game players, though, were not so lucky; naturally, the problem relates to a piece of proprietary software that cannot be easily changed.
The Easy Anti-Cheat (EAC) system is a proprietary tool from EPIC intended to prevent game players from cheating by way of modifications to the game executable itself. As one might expect, the actual heuristics used are not documented anywhere and the code is secret, but one of the techniques involved appears to be looking at the symbols in the game executable and ensuring that they match the expected values. To do this, EAC goes rooting through the DT_HASH tables; if an expected table isn't present, EAC proudly proclaims that it has caught a cheater and does not allow the game to run.
Gamers, it seems, find this behavior disappointing. Arkadiusz Hiler, for example, blogged:
I think this whole situation shows why creating native games for Linux is challenging. It’s hard to blame developers for targeting Windows and relying on Wine + friends. It’s just much more stable and much less likely to break and stay broken.
Hiler reported the problem in the glibc bug tracker; the discussion later moved to the project's mailing list as well. The report stated that the change "breaks SysV ABI compatibility", suggesting that it should be reverted. The responses from the project were not entirely sympathetic to that cause, though. Weimer answered that "Any tool that performs symbol lookups needs to support DT_GNU_HASH these days". Adhemerval Zanella said: "I am not sure this characterizes as an ABI break since the symbol lookup information would be indeed provided (albeit in a different format)".
Carlos O'Donell agreed that this change was not an ABI break:
Software that is an ELF consumer on Linux has had 16 years to be updated to handle the switch from DT_HASH to DT_GNU_HASH (OS-specific).

While I'm sympathetic to application developers and their backwards compatibility requirements, this specific case is about an ELF consumer and such a consumer needs to track upstream Linux ELF developments.
He went on to say that characteristics of the generated ELF file are not part of the glibc ABI, even if changes there break applications, and suggested that the bug should be closed as "won't fix". He also requested that the EAC developers provide reasons for why DT_HASH should be retained by default. As of this writing, the bug remains open.
Blaming the EAC developers for not keeping up with Linux ELF hash-table formats might not be entirely fair. The DT_HASH format is mandated by the System V ABI specification, the DT_GNU_HASH format is undocumented, and there has been no deprecation campaign to get users to move on. Chances are those developers are as surprised as anybody and haven't just been ignoring the "switch to DT_GNU_HASH" entry languishing in their issue tracker for the last decade or so. Regardless of blame, though, something needs to be done to solve this problem and save gamers from the prospect of having to get some actual work done.
If EAC were free software, of course, chances are there would already be a patch circulating to deal with the problem. As it is, only its owner can deal with this problem directly. Meanwhile, though, there is another workaround available: distributors can easily patch the glibc build to restore the DT_HASH section and make the problem go away for now. Doing that and giving EAC (along with a few other programs) some time to move to DT_GNU_HASH seems like the best solution from just about any point of view.
LRU-list manipulation with DAMON
The DAMON subsystem, which entered the kernel during the 5.15 release cycle, uses various heuristics to determine which pages of memory are in active use. Since the beginning, the intent has been to use this information to influence memory management. The 6.0 kernel contains another step in this direction, giving DAMON the ability to actively reorder pages on the kernel's least-recently-used (LRU) lists.

The kernel's memory-management developers would like nothing better than the ability to know which pages of memory will be needed in the near future; the kernel could then make sure that those pages were resident in RAM. Unfortunately, current hardware is unable to provide that information, so the memory-management code must make guesses instead. Usually, the best guess is that pages that have been used in the recent past are likely to be used again soon, while those that have gone untouched for some time are probably not needed.
LRU lists
This approach works well, but there is still a problem: there are limits to how closely the kernel can track the usage of each page. Making a note of every access would slow the system to a crawl, so something else must be done. The LRU lists are one way in which the memory-management subsystem tries to track usage efficiently: pages believed to be in active use are kept on the "active" list, while pages that appear to be idle go onto the "inactive" list, where they become candidates for reclaim.
Occasionally, the kernel will pull a set of pages off the tail of the active list and place them, instead, at the head of the inactive list. When this happens, the pages are "deactivated", meaning that they are marked in the page tables as "not present". Should some process try to access such a page, a soft page fault will result; the kernel will then observe that the page is still in use and move it back to the active list. Pages that remain on the inactive list, instead, will find their way to the tail, where they will be reclaimed when the kernel needs memory for other uses.
The LRU lists are, thus, a key part of the mechanism that decides which pages stay in RAM and which are reclaimed. Despite their name, though, these lists are at best a rough approximation of which pages have been least (or most) recently used. The description might be better given as "least recently noticed to be used" instead. If there were a better mechanism for understanding which pages are truly in heavy use, it should be possible to use that information to improve on the current LRU lists.
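As a mental model only (this is a toy sketch, not kernel code), the active/inactive mechanism just described might look something like this:

from collections import deque

class Page:
    def __init__(self, pfn):
        self.pfn = pfn
        self.present = True      # mapped in the page tables

class TwoListLRU:
    # A toy model of the active and inactive LRU lists described above.
    def __init__(self):
        self.active = deque()    # head: most recently noticed use
        self.inactive = deque()  # tail: next reclaim candidates

    def deactivate(self, count):
        # Move pages from the active tail to the inactive head,
        # marking them "not present" so that a later access faults.
        for _ in range(min(count, len(self.active))):
            page = self.active.pop()
            page.present = False
            self.inactive.appendleft(page)

    def soft_fault(self, page):
        # The page was touched after all: put it back on the active list.
        self.inactive.remove(page)
        page.present = True
        self.active.appendleft(page)

    def reclaim(self):
        # Take a victim from the inactive tail when memory is needed.
        return self.inactive.pop() if self.inactive else None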
Reordering the lists
DAMON ("Data Access MONitor") is meant to be that mechanism. Through some clever algorithms (described in this article), DAMON tries to create a clearer picture of actual memory usage while, at the same time, limiting its own CPU usage. DAMON is designed to be efficient enough to use on production systems while being accurate enough to improve memory-management decisions.
The 5.16 kernel saw the addition of DAMOS ("DAMON operation schemes"), a rule-based mechanism that can cause actions to be taken whenever specific criteria are met. For example, DAMOS could be configured to pass a region that has not been accessed in the last N seconds to the equivalent of an madvise(MADV_COLD) call, as sketched below. Various other options are available; they are all described in detail in Documentation/admin-guide/mm/damon/usage.rst.
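For reference, this is roughly what that hint looks like when issued from user space. This is a hedged ctypes sketch, not DAMON code; the MADV_COLD value (20) is assumed from the asm-generic UAPI headers (Linux 5.4 and later) and may differ on some architectures:

import ctypes, mmap

libc = ctypes.CDLL(None, use_errno=True)
MADV_COLD = 20   # assumed asm-generic value; check <linux/mman.h> on your system

length = 16 * mmap.PAGESIZE
buf = mmap.mmap(-1, length)    # an anonymous, private mapping
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

# Hint that this range is unlikely to be needed soon; the kernel may
# deactivate the pages in response.
if libc.madvise(ctypes.c_void_p(addr), ctypes.c_size_t(length), MADV_COLD) != 0:
    raise OSError(ctypes.get_errno(), "madvise(MADV_COLD) failed")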
The work merged for 6.0 adds two new operations to DAMOS: lru_prio and lru_deprio. The first will cause the indicated pages to be moved to the head of the active list, making them the last pages that the kernel will try to deactivate or reclaim; the second, instead, will deactivate the given pages, causing them to be moved to the inactive lists. With this change, in other words, DAMOS is reaching deep into the memory-management subsystem, using its (hopefully) superior information to make the ordering of the LRU lists closer to actual usage. This sorting could be especially useful if the system comes under sudden memory pressure and has to start reclaiming memory quickly.
Author SeongJae Park calls this mechanism "proactive LRU-list sorting" or PLRUS. When properly tuned, he claimed in the patch series cover letter, this mechanism can yield some nice results: "In short, PLRUS achieves 10% memory PSI (some) reduction, 14% major page faults reduction, and 3.74% speedup under memory pressure". The term "PSI (some)" here refers to the pressure-stall information produced by the kernel, which is a measure of how much processes are being delayed waiting for memory.
The "when properly tuned" caveat is important, though; DAMOS has a complex set of parameters to describe action thresholds and to limit how much CPU time is used by DAMOS itself. Adjusting those parameters can result in significant changes to how the core memory-management subsystem goes about its work. DAMOS offers a lot of flexibility to a full-time administrator who understands how memory management works and who is able to accurately measure the effects of changes. It also makes it easy to completely wreck a system's performance.
To aid administrators who do not have the time or skills to come up with an optimal DAMOS tuning for their workload, Park also added a new kernel module called damon_lru_sort. It uses DAMOS to perform proactive LRU-list sorting under a set of "conservative" parameters that are meant to safely improve performance while minimizing overhead. This module will make using the LRU-list sorting feature easier, but it still has a significant set of tuning knobs; the documentation describes them all.
This mechanism is aimed at a similar problem to that addressed by the multi-generational LRU work, which currently seems on track to be merged in 6.1. The multi-generational LRU, too, tries to create a more accurate picture of which pages are in active use so that better page-replacement decisions can be made. There are a number of open questions about how the movement of pages between the generations should be handled; there is talk of allowing the loading of BPF programs to control those decisions, but DAMOS might be able to help as well. The integration between the two mechanisms does not currently exist, but could be a good thing to add.
The advent of this type of ability to tweak memory management is, obviously, a sign that better performance is always desirable. It is also, perhaps, an indication that creating a memory-management subsystem that performs optimally for all workloads is beyond our current capabilities. Kernel developers tend to prefer not to add new configuration knobs on the theory that the kernel should be able to configure itself. Here, though, new knobs are being added in large numbers. Some problems are, it seems, still too hard for the kernel to solve without help.
The container orchestrator landscape
Docker and other container engines can greatly simplify many aspects of deploying a server-side application, but numerous applications consist of more than one container. Managing a group of containers only gets harder as additional applications and services are deployed; this has led to the development of a class of tools called container orchestrators. The best-known of these by far is Kubernetes; the history of container orchestration can be divided into what came before it and what came after.
The convenience offered by containers comes with some trade-offs; someone who adheres strictly to Docker's idea that each service should have its own container will end up running a large number of them. Even a simple web interface to a database might require running separate containers for the database server and the application; it might also include a separate container for a web server to handle serving static files, a proxy server to terminate SSL/TLS connections, a key-value store to serve as a cache, or even a second application container to handle background jobs and scheduled tasks.
An administrator who is responsible for several such applications will quickly find themselves wishing for a tool to make their job easier; this is where container orchestrators step in. A container orchestrator is a tool that can manage a group of multiple containers as a single unit. Instead of operating on a single server, orchestrators allow combining multiple servers into a cluster, and automatically distribute container workloads among the cluster nodes.
Docker Compose and Swarm
Docker Compose is not quite an orchestrator, but it was Docker's first attempt to create a tool to make it easier to manage applications that are made out of several containers. It consumes a YAML-formatted file, which is almost always named docker-compose.yml. Compose reads this file and uses the Docker API to create the resources that it declares; Compose also adds labels to all of the resources, so that they can be managed as a group after they are created. In effect, it is an alternative to the Docker command-line interface (CLI) that operates on groups of containers. Three types of resources can be defined in a Compose file:
- services contains declarations of containers to be launched. Each entry in services is equivalent to a docker run command.
- networks declares networks that can be attached to the containers defined in the Compose file. Each entry in networks is equivalent to a docker network create command.
- volumes defines named volumes that can be attached to the containers. In Docker parlance, a volume is persistent storage that is mounted into the container. Named volumes are managed by the Docker daemon. Each entry in volumes is equivalent to a docker volume create command.
Networks and volumes can be directly connected to networks and filesystems on the host that Docker is running on, or they can be provided by a plugin. Network plugins allow things like connecting containers to VPNs; a volume plugin might allow storing a volume on an NFS server or an object storage service.
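To make that mapping concrete, here is a hedged sketch using the Docker SDK for Python that creates by hand roughly what a small Compose file would declare; the image names, port mapping, and mount path are arbitrary examples, not anything prescribed by Compose:

import docker

client = docker.from_env()

# Roughly what entries under "networks:" and "volumes:" declare.
client.networks.create("app_net")
client.volumes.create("db_data")

# Roughly what two entries under "services:" declare; each one is the
# equivalent of a docker run command.
client.containers.run(
    "postgres:15", name="db", detach=True, network="app_net",
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"db_data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}})
client.containers.run(
    "nginx:1.25", name="web", detach=True, network="app_net",
    ports={"80/tcp": 8080})

Compose does essentially this on the user's behalf, and labels the resulting resources so that they can later be stopped or removed as a group.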
Compose provides a much more convenient way to manage an application that consists of multiple containers, but, at least in its original incarnation, it only worked with a single host; all of the containers that it created were run on the same machine. To extend its reach across multiple hosts, Docker introduced Swarm mode in 2016. This is actually the second product from Docker to bear the name "Swarm" — a product from 2014 implemented a completely different approach to running containers across multiple hosts, but it is no longer maintained. It was replaced by SwarmKit, which provides the underpinnings of the current version of Docker Swarm.
Swarm mode is included in Docker; no additional software is required. Creating a cluster is a simple matter of running docker swarm init on an initial node, and then docker swarm join on each additional node to be added. Swarm clusters contain two types of nodes. Manager nodes provide an API to launch containers on the cluster, and communicate with each other using a protocol based on the Raft Consensus Algorithm in order to synchronize the state of the cluster across all managers. Worker nodes do the actual work of running containers. It is unclear how large these clusters can be; Docker's documentation says that a cluster should have no more than 7 manager nodes but does not specify a limit on the number of worker nodes. Bridging container networks across nodes is built-in, but sharing storage between nodes is not; third-party volume plugins need to be used to provide shared persistent storage across nodes.
Services are deployed on a swarm using Compose files. Swarm extended the Compose format by adding a deploy key to each service that specifies how many instances of the service should be running and which nodes they should run on. Unfortunately, this led to a divergence between Compose and Swarm, which caused some confusion because options like CPU and memory quotas needed to be specified in different ways depending on which tool was being used. During this period of divergence, a file intended for Swarm was referred to as a "stack file" instead of a Compose file in an attempt to disambiguate the two; thankfully, these differences appear to have been smoothed over in the current versions of Swarm and Compose, and any references to a stack file being distinct from a Compose file seem to have largely been scoured from the Internet. The Compose format now has an open specification and its own GitHub organization providing reference implementations.
There is some level of uncertainty about the future of Swarm. It once formed the backbone of a service called Docker Cloud, but the service was suddenly shut down in 2018. It was also touted as a key feature of Docker's Enterprise Edition, but that product has since been sold to another company and is now marketed as Mirantis Kubernetes Engine. Meanwhile, recent versions of Compose have gained the ability to deploy containers to services hosted by Amazon and Microsoft. There has been no deprecation announcement, but there also hasn't been any announcement of any other type in recent memory; searching for the word "Swarm" on Docker's website only turns up passing mentions.
Kubernetes
Kubernetes (sometimes known as k8s) is a project inspired by an internal Google tool called Borg. Kubernetes manages resources and coordinates running workloads on clusters of up to thousands of nodes; it dominates container orchestration like Google dominates search. Google wanted to collaborate with Docker on Kubernetes development in 2014, but Docker decided to go its own way with Swarm. Instead, Kubernetes grew up under the auspices of the Cloud Native Computing Foundation (CNCF). By 2017, Kubernetes had grown so popular that Docker announced that it would be integrated into Docker's own product.
Aside from its popularity, Kubernetes is primarily known for its complexity. Setting up a new cluster by hand is an involved task, which requires the administrator to select and configure several third-party components in addition to Kubernetes itself. Much like the Linux kernel needs to be combined with additional software to make a complete operating system, Kubernetes is only an orchestrator and needs to be combined with additional software to make a complete cluster. It needs a container engine to run its containers; it also needs plugins for networking and persistent volumes.
Kubernetes distributions exist to fill this gap. Like a Linux distribution, a Kubernetes distribution bundles Kubernetes with an installer and a curated selection of third-party components. Different distributions exist to fill different niches; seemingly every tech company of a certain size has its own distribution and/or hosted offering to cater to enterprises. The minikube project offers an easier on-ramp for developers looking for a local environment to experiment with. Unlike their Linux counterparts, Kubernetes distributions are certified for conformance by the CNCF; each distribution must implement the same baseline of functionality in order to obtain the certification, which allows them to use the "Certified Kubernetes" badge.
A Kubernetes cluster contains several software components. Every node in the cluster runs an agent called the kubelet to maintain membership in the cluster and accept work from it, a container engine, and kube-proxy to enable network communication with containers running on other nodes.
The components that maintain the state of the cluster and make decisions about resource allocations are collectively referred to as the control plane — these include a distributed key-value store called etcd, a scheduler that assigns work to cluster nodes, and one or more controller processes that react to changes in the state of the cluster and trigger any actions needed to make the actual state match the desired state. Users and cluster nodes interact with the control plane through the Kubernetes API server. To effect changes, users set the desired state of the cluster through the API server, while the kubelet reports the actual state of each cluster node to the controller processes.
Kubernetes runs containers inside an abstraction called a pod, which can contain one or more containers, although running containers for more than one service in a pod is discouraged. Instead, a pod will generally have a single main container that provides a service, and possibly one or more "sidecar" containers that collect metrics or logs from the service running in the main container. All of the containers in a pod will be scheduled together on the same machine, and will share a network namespace — containers running within the same pod can communicate with each other over the loopback interface. Each pod receives its own unique IP address within the cluster. Containers running in different pods can communicate with each other using their cluster IP addresses.
A pod specifies a set of containers to run, but the definition of a pod says nothing about where to run those containers, or how long to run them for — without this information, Kubernetes will start the containers somewhere on the cluster, but will not restart them when they exit, and may abruptly terminate them if the control plane decides the resources they are using are needed by another workload. For this reason, pods are rarely used alone; instead, the definition of a pod is usually wrapped in a Deployment object, which is used to define a persistent service. As with Compose and Swarm, the objects managed by Kubernetes are declared in YAML; for Kubernetes, the YAML declarations are submitted to the cluster using the kubectl tool.
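As an illustration only (using the official Kubernetes Python client rather than a YAML file; the names, image, and replica count are arbitrary examples), a minimal Deployment wrapping a single-container pod might be created like this:

from kubernetes import client, config

config.load_kube_config()   # use the local kubeconfig, as kubectl does
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web",
                                               image="nginx:1.25")]))))

apps.create_namespaced_deployment(namespace="default", body=deployment)

The equivalent YAML, submitted with kubectl apply, is the more common workflow; the controller processes then work to keep two replicas of the pod running.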
In addition to pods and Deployments, Kubernetes can manage many other types of objects, like load balancers and authorization policies. The list of supported APIs is continually evolving, and will vary depending on which version of Kubernetes and which distribution a cluster is running. Custom resources can be used to add APIs to a cluster to manage additional types of objects. KubeVirt adds APIs to enable Kubernetes to run virtual machines, for example. The complete list of APIs supported by a particular cluster can be discovered with the kubectl api-versions command.
Unlike Compose, each of these objects is declared in a separate YAML document, although multiple YAML documents can be inlined in the same file by separating them with "---", as seen in the Kubernetes documentation. A complex application might consist of many objects with their definitions spread across multiple files; keeping all of these definitions in sync with each other when maintaining such an application can be quite a chore. In order to make this easier, some Kubernetes administrators have turned to templating tools like Jsonnet.
Helm takes the templating approach a step further. Like Kubernetes, development of Helm takes place under the aegis of the CNCF; it is billed as "the package manager for Kubernetes". Helm generates YAML configurations for Kubernetes from a collection of templates and variable declarations called a chart. Its template language is distinct from the Jinja templates used by Ansible but looks fairly similar to them; people who are familiar with Ansible Roles will likely feel at home with Helm Charts.
Collections of Helm charts can be published in Helm repositories; Artifact Hub provides a large directory of public Helm repositories. Administrators can add these repositories to their Helm configuration and use the ready-made Helm charts to deploy prepackaged versions of popular applications to their cluster. Recent versions of Helm also support pushing and pulling charts to and from container registries, giving administrators the option to store charts in the same place that they store container images.
Kubernetes shows no signs of losing momentum any time soon. It is designed to manage any type of resource; this flexibility, as demonstrated by the KubeVirt virtual-machine controller, gives it the potential to remain relevant even if containerized workloads should eventually fall out of favor. Development proceeds at a healthy clip and new major releases come out regularly. Releases are supported for a year; there doesn't seem to be a long-term support version available. Upgrading a cluster is supported, but some prefer to bring up a new cluster and migrate their services over to it.
Nomad
Nomad is an orchestrator from HashiCorp that is marketed as a simpler alternative to Kubernetes. Nomad is an open source project, like Docker and Kubernetes. It consists of a single binary called nomad, which can be used to start a daemon called the agent and also serves as a CLI to communicate with an agent. Depending on how it is configured, the agent process can run in one of two modes. Agents running in server mode accept jobs and allocate cluster resources for them. Agents running in client mode contact the servers to receive jobs, run them, and report their status back to the servers. The agent can also run in development mode, where it takes on the role of both client and server to form a single-node cluster that can be used for testing purposes.
Creating a Nomad cluster can be quite simple. In Nomad's most basic mode of operation, the initial server agent must be started, and then additional nodes can be added to the cluster using the nomad server join command. HashiCorp also provides Consul, which is a general-purpose service mesh and discovery tool. While Nomad can be used standalone, it is probably at its best when used in combination with Consul. The Nomad agent can use Consul to automatically discover and join a cluster, and can also perform health checks, serve DNS records, and provide HTTPS proxies to services running on the cluster.
Nomad supports complex cluster topologies. Each cluster is divided into one or more "data centers". Like Swarm, server agents within a single data center communicate with each other using a protocol based on Raft; this protocol has tight latency requirements, but multiple data centers may be linked together using a gossip protocol that allows information to propagate through the cluster without each server having to maintain a direct connection to every other. Data centers linked together in this way can act as one cluster from a user's perspective. This architecture gives Nomad an advantage when scaled up to enormous clusters. Kubernetes officially supports up to 5,000 nodes and 300,000 containers, whereas Nomad's documentation cites examples of clusters containing over 10,000 nodes and 2,000,000 containers.
Like Kubernetes, Nomad doesn't include a container engine or runtime. It uses task drivers to run jobs. Task drivers that use Docker and Podman to run containers are included; community-supported drivers are available for other container engines. Also like Kubernetes, Nomad's ambitions are not limited to containers; there are also task drivers for other types of workloads, including a fork/exec driver that simply runs a command on the host, a QEMU driver for running virtual machines, and a Java driver for launching Java applications. Community-supported task drivers connect Nomad to other types of workloads.
Unlike Docker or Kubernetes, Nomad eschews YAML in favor of HashiCorp Configuration Language (HCL), which was originally created for Terraform, another HashiCorp project that provisions cloud resources. HCL is used across the HashiCorp product line, although it has limited adoption elsewhere. Documents written in HCL can easily be converted to JSON, but HCL aims to provide a syntax that is more finger-friendly than JSON and less error-prone than YAML.
HashiCorp's equivalent to Helm is called Nomad Pack. Like Helm, Nomad Pack processes a directory full of templates and variable declarations to generate job configurations. Nomad also has a community registry of pre-packaged applications, but the selection is much smaller than what is available for Helm at Artifact Hub.
Nomad does not have the same level of popularity as Kubernetes. Like Swarm, its development appears to be primarily driven by its creators; although it has been deployed by many large companies, HashiCorp is still very much the center of the community around Nomad. At this point, it seems unlikely the project has gained enough momentum to have a life independent from its corporate parent. Users can perhaps find assurance in the fact that HashiCorp is much more clearly committed to the development and promotion of Nomad than Docker is to Swarm.
Conclusion
Swarm, Kubernetes, and Nomad are not the only container orchestrators, but they are the three most viable. Apache Mesos can also be used to run containers, but it was nearly mothballed in 2021; DC/OS is based on Mesos, but much like Docker Enterprise Edition, the company that backed its development is now focused on Kubernetes. Most "other" container orchestration projects, like OpenShift and Rancher, are actually just enhanced (and certified) Kubernetes distributions, even if they don't have Kubernetes in their name.
Despite (or perhaps, because of) its complexity, Kubernetes currently enjoys the most popularity by far, but HashiCorp's successes with Nomad show that there is still room for alternatives. Some users remain loyal to the simplicity of Docker Swarm, but its future is uncertain. Other alternatives appear to be largely abandoned at this point. It would seem that the landscape has largely settled around these three players, but container orchestration is still a relatively immature area. Ten years ago, very little of this technology even existed, and things are still evolving quickly. There are likely many exciting new ideas and developments in container orchestration that are still to come.
[Special thanks to Guinevere Saenger for educating me with regard to some of the finer points of Kubernetes and providing some important corrections for this article.]
Brief items
Security
EFF: Code, Speech, and the Tornado Cash Mixer
The Electronic Frontier Foundation has announced that it is representing cryptography professor Matthew Green, who has chosen to republish the sanctioned Tornado Cash open-source code as a GitHub repository.

EFF’s most central concern about OFAC’s [US Office of Foreign Assets Control] actions arose because, after the SDN [Specially Designated Nationals] listing of “Tornado Cash,” GitHub took down the canonical repository of the Tornado Cash source code, along with the accounts of the primary developers, including all their code contributions. While GitHub has its own right to decide what goes on its platform, the disappearance of this source code from GitHub after the government action raised the specter of government action chilling the publication of this code.

In keeping with our longstanding defense of the right to publish code, we are representing Professor Matthew Green, who teaches computer science at the Johns Hopkins Information Security Institute, including applied cryptography and anonymous cryptocurrencies. Part of his work involves studying and improving privacy-enhancing technologies, and teaching his students about mixers like Tornado Cash. The disappearance of Tornado Cash’s repository from GitHub created a gap in the available information on mixer technology, so Professor Green made a fork of the code, and posted the replica so it would be available for study. The First Amendment protects both GitHub’s right to host that code, and Professor Green’s right to publish (here republish) it on GitHub so he and others can use it for teaching, for further study, and for development of the technology.
Security quotes of the week
Already, previous versions of the [USB] Rubber Ducky could carry out attacks like creating a fake Windows pop-up box to harvest a user's login credentials or causing Chrome to send all saved passwords to an attacker's webserver. But these attacks had to be carefully crafted for specific operating systems and software versions and lacked the flexibility to work across platforms.

The newest Rubber Ducky aims to overcome these limitations. It ships with a major upgrade to the DuckyScript programming language, which is used to create the commands that the Rubber Ducky will enter into a target machine. While previous versions were mostly limited to writing keystroke sequences, DuckyScript 3.0 is a feature-rich language, letting users write functions, store variables, and use logic flow controls (i.e., if this... then that).

That means, for example, the new Ducky can run a test to see if it's plugged into a Windows or Mac machine and conditionally execute code appropriate to each one or disable itself if it has been connected to the wrong target. It also can generate pseudorandom numbers and use them to add variable delay between keystrokes for a more human effect.

Perhaps most impressively, it can steal data from a target machine by encoding it in binary format and transmitting it through the signals meant to tell a keyboard when the CapsLock or NumLock LEDs should light up. With this method, an attacker could plug it in for a few seconds, tell someone, "Sorry, I guess that USB drive is broken," and take it back with all their passwords saved.

— Corin Faife on the USB Rubber Ducky at The Verge
"Greenluigi1" found within the firmware image the RSA public key used by the updater, and searched online for a portion of that key. The search results pointed to a common public key that shows up in online tutorials like "RSA Encryption & Decryption Example with OpenSSL in C."— Thomas Claburn at The RegisterThat tutorial and other projects implementing OpenSSL include within their source code that public key and the corresponding RSA private key.
This means Hyundai used a public-private key pair from a tutorial, and placed the public key in its code, allowing "greenluigi1" to track down the private key. Thus he was able to sign Hyundai's files and have them accepted by the updater.
Kernel development
Kernel release status
The current development kernel is 6.0-rc2, released on August 21. Linus said:
The most noticeable fix in here is likely the virtio reverts that fixed the problem people had with running tests on the google cloud VMs, which was the 'pending issue' that we had noticed just as the merge window was closing.
Stable updates: 5.19.3, 5.18.19, 5.15.62, and 5.10.137 were released on August 21. Note that the 5.18.x series ends with 5.18.19.
The 5.19.4, 5.15.63, 5.10.138, 5.4.211, 4.19.256, 4.14.291, and 4.9.326 stable updates are all in the review process; they are due on August 25.
Linux Foundation TAB election: call for nominees
The 2022 election for members of the Linux Foundation Technical Advisory Board (TAB) will be held during the Linux Plumbers Conference, September 12 to 14. The TAB represents the kernel-development community to the Linux Foundation (and beyond) and holds a seat on the Foundation's board of directors. The call for nominees for this year's election has gone out; the deadline for nominations is September 12.Serving on the TAB is an opportunity to help the community; interested members are encouraged to send in a nomination.
Development
Firefox 104 released
Version 104 of the Firefox browser has been released. The most interesting new feature, perhaps, is the ability to analyze a web site's power usage — but that feature is not available on Linux.

Julia 1.8 released
Version 1.8 of the Julia language has been released. Changes include typed globals, a new default thread scheduler, some new profiling tools, and more.

Krita 5.1.0 released
Version 5.1.0 of the Krita painting program is out. "Krita 5.1 comes with a ton of smaller improvements and technical polish. This release sees updates to usability across the board, improved file format handling, and a whole lot of changes to the selection and fill tools."
LibreOffice 7.4 Community released
The Document Foundation has announced the release of LibreOffice 7.4 Community, which is the community-supported version of the open-source office suite. Version 7.4 comes with new features for the suite as a whole (WebP and EMZ/WMZ support, ...), the Writer word-processor (better change tracking and hyphenation settings, ...), the Calc spreadsheet (16K columns, ...), and more. "Development is now focused on interoperability with Microsoft’s proprietary file formats, and many new features are targeted at users migrating from MS Office". More information can be found in the release notes.
The future of NGINX
This blog post on the NGINX corporate site describes the plans for this web server project in the coming year.
For the core NGINX Open Source software, we continue to add new features and functionality and to support more operating system platforms. Two critical capabilities for security and scalability of web applications and traffic, HTTP3 and QUIC, are coming in the next version we ship.
public-inbox 1.9.0 released
Version 1.9.0 of the public-inbox email archive manager has been released. Improvements include a POP3 server, a new multi-protocol "superserver", some search improvements, and performance improvements. (LWN looked at public-inbox in 2018).

Development quote of the week
But contrary to what people repeat off internet blogs, the X server is not seeing a lack of maintenance, manpower, or even new features: XInput 2.4, with support for trackpad gestures, was released approximately a year ago, which shows that it is evolving faster than it was during the heyday of X development in the mid-90s. X is also a stable and mature system, by nature of its much more centralized development methodology, meaning that it requires less manpower to keep working than Wayland, where every feature is preceded by two to three protocol extensions from different organizations, and constant changes in the display server are required to keep up with updates to unstable protocol extensions.— Po Lu
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
Development
Meeting minutes
Calls for Presentations
CFP Deadlines: August 25, 2022 to October 24, 2022
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
August 29 | September 9–11 | Qubes OS Summit | Berlin, Germany |
September 1 | October 15–16 | OpenFest 2022 | Sofia, Bulgaria |
September 1 | October 11–13 | Tracing Summit 2022 | London, UK |
September 7 | October 24–28 | Netdev 0x16 | Lisbon, Portugal |
September 28 | November 26–27 | UbuCon Asia 2022 | Seoul, South Korea |
September 30 | November 7–9 | Ubuntu Summit 2022 | Prague, Czech Republic |
September 30 | October 20–21 | Netfilter workshop | Seville, Spain |
October 15 | December 2–3 | OLF Conference 2022 | Columbus, OH, USA |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: August 25, 2022 to October 24, 2022
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
August 20–26 | Free and Open Source Software for Geospatial | Florence, Italy |
September 6–8 | Research Software Engineers' Conference 2022 | Newcastle, UK |
September 7 | Kangrejos Rust for Linux Workshop | Oviedo, Spain |
September 9–11 | Qubes OS Summit | Berlin, Germany |
September 10–11 | Clang-built Linux Meetup | Dublin, Ireland |
September 12–14 | Linux Plumbers Conference | Dublin, Ireland |
September 13–16 | Open Source Summit Europe | Dublin, Ireland |
September 14–15 | Git Merge 2022 | Chicago, USA |
September 15–16 | Linux Security Summit Europe | Dublin, Ireland |
September 15–18 | EuroBSDCon 2022 | Vienna, Austria |
September 16–18 | GNU Tools Cauldron | Prague, CZ |
September 16–18 | Ten Years of Guix | Paris, France |
September 19–21 | Open Source Firmware Conference | Gothenburg, Sweden |
September 21–22 | Open Mainframe Summit | Philadelphia, US |
September 21–22 | DevOpsDays Berlin 2022 | Berlin, Germany |
September 26 – October 2 | openSUSE Asia Summit 2022 | Online |
September 28 – October 1 | LibreOffice Conference 2022 | Milan, Italy |
October 1–7 | Akademy 2022 | Barcelona, Spain |
October 4–6 | X.Org Developers Conference | Minneapolis, MN, USA |
October 11–13 | Tracing Summit 2022 | London, UK |
October 13–14 | PyConZA 2022 | Durban, South Africa |
October 15–16 | OpenFest 2022 | Sofia, Bulgaria |
October 16–21 | DjangoCon US 2022 | San Diego, USA |
October 19–21 | ROSCon 2022 | Kyoto, Japan |
October 20–21 | Netfilter workshop | Seville, Spain |
If your event does not appear here, please tell us about it.
Security updates
Alert summary August 18, 2022 to August 24, 2022
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
Debian | DSA-5212-1 | stable | chromium | 2022-08-17 |
Debian | DLA-3074-1 | LTS | epiphany-browser | 2022-08-18 |
Debian | DLA-3076-1 | LTS | freecad | 2022-08-18 |
Debian | DLA-3079-1 | LTS | jetty9 | 2022-08-22 |
Debian | DLA-3078-1 | LTS | kicad | 2022-08-20 |
Debian | DSA-5214-1 | stable | kicad | 2022-08-21 |
Debian | DLA-3077-1 | LTS | ruby-tzinfo | 2022-08-18 |
Debian | DLA-3075-1 | LTS | schroot | 2022-08-18 |
Debian | DSA-5213-1 | stable | schroot | 2022-08-18 |
Fedora | FEDORA-2022-9178229cd7 | F35 | community-mysql | 2022-08-22 |
Fedora | FEDORA-2022-7197cef91f | F36 | community-mysql | 2022-08-22 |
Fedora | FEDORA-2022-3cbf2184bd | F35 | freeciv | 2022-08-18 |
Fedora | FEDORA-2022-b7d8dcefc5 | F35 | microcode_ctl | 2022-08-18 |
Fedora | FEDORA-2022-baf3c3b781 | F36 | qemu | 2022-08-18 |
Fedora | FEDORA-2022-25e4dbedf9 | F36 | rsync | 2022-08-18 |
Fedora | FEDORA-2022-9832c0c04b | F35 | trafficserver | 2022-08-20 |
Fedora | FEDORA-2022-23043f5a0b | F36 | trafficserver | 2022-08-20 |
Fedora | FEDORA-2022-6f5e420e52 | F35 | vim | 2022-08-23 |
Gentoo | 202208-35 | | chromium | 2022-08-21 |
Gentoo | 202208-33 | | gettext | 2022-08-21 |
Gentoo | 202208-34 | | tomcat | 2022-08-21 |
Gentoo | 202208-32 | | vim | 2022-08-21 |
Mageia | MGASA-2022-0289 | 8 | apache-mod_wsgi | 2022-08-20 |
Mageia | MGASA-2022-0288 | 8 | libtirpc | 2022-08-20 |
Mageia | MGASA-2022-0290 | 8 | libxml2 | 2022-08-20 |
Mageia | MGASA-2022-0285 | 8 | nvidia-current | 2022-08-18 |
Mageia | MGASA-2022-0286 | 8 | nvidia390 | 2022-08-18 |
Mageia | MGASA-2022-0292 | 8 | teeworlds | 2022-08-20 |
Mageia | MGASA-2022-0291 | 8 | wavpack | 2022-08-20 |
Mageia | MGASA-2022-0287 | 8 | webkit2 | 2022-08-20 |
Oracle | ELSA-2022-9714 | OL6 | httpd | 2022-08-17 |
Oracle | ELSA-2022-9727 | OL7 | kernel | 2022-08-22 |
Oracle | ELSA-2022-9728 | OL7 | kernel | 2022-08-22 |
Oracle | ELSA-2022-9726 | OL8 | kernel | 2022-08-22 |
Oracle | ELSA-2022-9727 | OL8 | kernel | 2022-08-22 |
Oracle | ELSA-2022-9726 | OL9 | kernel | 2022-08-22 |
Oracle | ELSA-2022-9730 | OL7 | kernel-container | 2022-08-22 |
Oracle | ELSA-2022-9729 | OL8 | kernel-container | 2022-08-22 |
Oracle | ELSA-2022-9730 | OL8 | kernel-container | 2022-08-22 |
Red Hat | RHSA-2022:6119-01 | EL7 | podman | 2022-08-22 |
Slackware | SSA:2022-232-01 | | vim | 2022-08-20 |
SUSE | SUSE-SU-2022:2831-1 | MP4.0 MP4.1 MP4.2 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 oS15.3 oS15.4 | aws-efs-utils, python-ansi2html, python-py, python-pytest-html, python-pytest-metadata, python-pytest-rerunfailures, python-coverage, python-iniconfig, python-unittest-mixins | 2022-08-17 |
SUSE | SUSE-SU-2022:2837-1 | SLE15 SES6 | bluez | 2022-08-18 |
SUSE | SUSE-SU-2022:2864-1 | SLE15 SES6 | bluez | 2022-08-22 |
SUSE | SUSE-SU-2022:2877-1 | MP4.3 SLE15 oS15.4 | cosign | 2022-08-23 |
SUSE | SUSE-SU-2022:2829-1 | SLE15 SES6 | curl | 2022-08-17 |
SUSE | SUSE-SU-2022:2880-1 | MP4.3 SLE15 oS15.4 | dpdk | 2022-08-23 |
SUSE | openSUSE-SU-2022:10096-1 | osB15 | freeciv | 2022-08-24 |
SUSE | SUSE-SU-2022:2876-1 | MP4.2 SLE15 oS15.3 | gfbgraph | 2022-08-23 |
SUSE | SUSE-SU-2022:2867-1 | SLE12 | gimp | 2022-08-22 |
SUSE | SUSE-SU-2022:2830-1 | SLE15 SES6 | gnutls | 2022-08-17 |
SUSE | SUSE-SU-2022:2856-1 | MP4.1 MP4.2 MP4.3 SLE15 SES6 SES7 oS15.3 oS15.4 | java-1_8_0-openjdk | 2022-08-19 |
SUSE | SUSE-SU-2022:2875-1 | MP4.2 SLE15 SLE-m5.1 SLE-m5.2 oS15.3 oS15.4 | kernel | 2022-08-23 |
SUSE | SUSE-SU-2022:2840-1 | SLE12 | kernel | 2022-08-18 |
SUSE | openSUSE-SU-2022:10095-1 | osB15 | nim | 2022-08-24 |
SUSE | SUSE-SU-2022:2855-1 | MP4.1 SLE15 SES6 SES7 oS15.3 oS15.4 | nodejs10 | 2022-08-19 |
SUSE | SUSE-SU-2022:2836-1 | SLE12 | ntfs-3g_ntfsprogs | 2022-08-17 |
SUSE | SUSE-SU-2022:2835-1 | SLE15 oS15.3 oS15.4 | ntfs-3g_ntfsprogs | 2022-08-17 |
SUSE | SUSE-SU-2022:2861-1 | SLE12 | open-iscsi | 2022-08-22 |
SUSE | SUSE-SU-2022:2871-1 | SLE12 | p11-kit | 2022-08-23 |
SUSE | SUSE-SU-2022:2874-1 | MP4.2 MP4.3 SLE15 oS15.3 oS15.4 | perl-HTTP-Daemon | 2022-08-23 |
SUSE | SUSE-SU-2022:2872-1 | SLE12 | perl-HTTP-Daemon | 2022-08-23 |
SUSE | SUSE-SU-2022:2839-1 | MP4.2 SLE15 SLE-m5.1 SLE-m5.2 SES7.1 oS15.3 | podman | 2022-08-18 |
SUSE | SUSE-SU-2022:2834-1 | MP4.3 SLE15 oS15.4 | podman | 2022-08-17 |
SUSE | SUSE-SU-2022:2841-1 | SLE15 | python-PyYAML | 2022-08-18 |
SUSE | SUSE-SU-2022:2878-1 | SLE15 | python-lxml | 2022-08-23 |
SUSE | openSUSE-SU-2022:10098-1 | osB15 | python-treq | 2022-08-24 |
SUSE | SUSE-SU-2022:2859-1 | OS9 SLE12 | rsync | 2022-08-19 |
SUSE | SUSE-SU-2022:2858-1 | SLE12 | rsync | 2022-08-19 |
SUSE | SUSE-SU-2022:2870-1 | MP4.0 MP4.1 MP4.2 MP4.3 SLE15 oS15.3 oS15.4 | rubygem-rails-html-sanitizer | 2022-08-23 |
SUSE | SUSE-SU-2022:2866-1 | MP4.2 MP4.3 SLE15 SLE-m5.1 SLE-m5.2 oS15.3 oS15.4 | systemd-presets-common-SUSE | 2022-08-22 |
SUSE | openSUSE-SU-2022:10094-1 | osB15 | trivy | 2022-08-20 |
SUSE | SUSE-SU-2022:2868-1 | MP4.2 SLE15 oS15.3 | u-boot | 2022-08-22 |
SUSE | SUSE-SU-2022:2869-1 | MP4.3 SLE15 oS15.4 | u-boot | 2022-08-22 |
SUSE | SUSE-SU-2022:2838-1 | OS9 SLE12 | ucode-intel | 2022-08-18 |
SUSE | SUSE-SU-2022:2842-1 | SLE12 | ucode-intel | 2022-08-18 |
SUSE | SUSE-SU-2022:2833-1 | SLE15 | ucode-intel | 2022-08-17 |
SUSE | SUSE-SU-2022:2832-1 | SLE15 SES6 | ucode-intel | 2022-08-17 |
SUSE | SUSE-SU-2022:2846-1 | OS9 SLE12 | zlib | 2022-08-18 |
SUSE | SUSE-SU-2022:2845-1 | SLE12 | zlib | 2022-08-18 |
SUSE | SUSE-SU-2022:2847-1 | SLE12 | zlib | 2022-08-18 |
Ubuntu | USN-5574-1 | 14.04 16.04 18.04 20.04 | exim4 | 2022-08-22 |
Ubuntu | USN-5575-2 | 14.04 16.04 | libxslt | 2022-08-22 |
Ubuntu | USN-5575-1 | 18.04 20.04 22.04 | libxslt | 2022-08-22 |
Ubuntu | USN-5572-1 | 16.04 | linux-aws | 2022-08-18 |
Ubuntu | USN-5577-1 | 20.04 | linux-oem-5.14 | 2022-08-23 |
Ubuntu | USN-5578-1 | 18.04 20.04 22.04 | open-vm-tools | 2022-08-24 |
Ubuntu | USN-5571-1 | 18.04 20.04 22.04 | postgresql-10, postgresql-12, postgresql-14 | 2022-08-18 |
Ubuntu | USN-5573-1 | 16.04 18.04 20.04 | rsync | 2022-08-18 |
Ubuntu | USN-5576-1 | 22.04 | twisted | 2022-08-24 |
Ubuntu | USN-5570-1 | 14.04 16.04 18.04 | zlib | 2022-08-17 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet