An Ubuntu kernel bug causes container crashes
Some system administrators running Ubuntu 20.04 had a rough time on June 8, when Ubuntu published kernel packages containing a particularly nasty bug introduced by an Ubuntu-specific patch to the kernel. The bug caused a kernel panic whenever a Docker container was started. Fixed packages were made available on June 10, but questions remain about what went wrong with the handling of the patch; in particular, it is surprising that kernel 5.13, which has been past its end-of-life for months, made it onto machines running Ubuntu 20.04, which is supposed to be a long-term support release.
Ubuntu's kernel release lifecycle
Unless it is following a rolling-release model, a Linux distribution project will often pick a kernel branch and stick with it for the lifetime of a distribution release. For example, a release that ships with a 5.4 kernel, as Ubuntu 20.04 did, might receive updates to later 5.4.x kernels, but is unlikely to be upgraded to 5.15 until the next major release of the distribution. For this reason, such projects often prefer or even require a branch that has been designated as a long-term maintenance branch by the stable kernel team. It's easier for a distribution maintainer to sleep at night knowing that the version of the software they are shipping is supported upstream.
Debian adheres to these rules when picking kernels for its releases; Ubuntu claims to do so as well, at least for its LTS (long-term support) releases. Those releases are made every two years and supported for five. They ship with a long-term stable kernel, and Ubuntu provides updates to it for the lifetime of the release.
Ubuntu also makes non-LTS releases at six-month intervals in between the LTS releases. In contrast to the LTS releases, these releases are only supported for about nine months, and are declared end-of-life (EOL) three months after a newer release is made available. Because of their relatively short shelf-life, Ubuntu does not restrict itself to long-term kernels for these releases. The most recent non-LTS release, Ubuntu 21.10, shipped with Linux 5.13, which is not a long-term branch. In fact, the 5.13 branch was declared EOL on September 18, 2021, almost a month before Ubuntu 21.10 was released on October 14.
Users who prioritize stability value the long window of support that comes with an LTS release, but five years is a relative eternity in the world of hardware, particularly in fast-moving areas like graphics. In order to support newer hardware, Ubuntu periodically publishes new hardware enablement (HWE) stacks for its LTS releases. These consist of packages backported from the latest (possibly non-LTS) release. The HWE stack includes updated kernel packages, and may also include updated Xorg and Mesa packages.
According to Ubuntu, the HWE stack is enabled by default for new desktop installs of Ubuntu, but needs to be explicitly chosen for server installs. This opt-in policy for servers also seems to apply only to users installing from the ISO image; the default Ubuntu 20.04 images on Amazon AWS, Azure, and Google Cloud all come with the HWE kernel pre-installed. Many system administrators (including me) choose the HWE stack for their servers as well, either out of a desire for features only available in newer kernels or out of a need for a kernel that works with their hardware.
When considered independently from each other, the decision to bypass long-term kernels for non-LTS releases and the decision to publish HWE kernels to extend the hardware support of LTS releases both seem reasonable. In combination, though, these two decisions can lead to a somewhat surprising situation: users running a "long-term support" distribution can end up running a version of Linux that is considered end-of-life by the kernel developers.
As of this writing, users running the HWE kernel on Ubuntu 20.04 will get a 5.13 kernel backported from 21.10. Ubuntu 22.04, which is the next LTS release, includes the 5.15 kernel, which is a long-term stable branch. This is currently available to 20.04 users under the name "hwe-20.04-edge". It will presumably replace the kernel from 21.10 as hwe-20.04 sometime before July 14, when Ubuntu 21.10 is itself EOL. For now, though, and for the past few months, anyone running the HWE kernel on 20.04 is running a kernel based on 5.13. Since the HWE kernel is the default kernel on all three major clouds, problems with it can affect a large slice of Ubuntu's users.
A tale of four filesystems
The HWE kernel allows using newer hardware and kernel features with Ubuntu LTS, but it seems that this may come with some cost to stability. The root cause of the kernel crash lies at the intersection of no fewer than four different filesystems, although none of them are filesystems in the traditional sense of something that writes data to persistent storage.
The first is overlayfs. As the name might suggest, overlayfs allows overlaying the files in one directory (the "upper" directory, in overlayfs parlance) on top of the files in another (the "lower" directory). This results in a mount point that contains all of the files in both the upper and the lower directories; if both directories contain a file with the same name, overlayfs presents the version present in the upper directory. Any changes made to an overlayfs mount are reflected in the upper directory. The functionality provided by overlayfs is particularly valuable to container runtimes such as Docker that store container images as a series of layers; overlayfs provides an efficient way of constructing a container's root directory from these layers. It has been a part of the kernel since version 3.18 in 2014.
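To make the mechanics concrete, here is a minimal sketch of assembling an overlay mount with the mount() system call, roughly what a container runtime does when it builds a container's root directory; the /lower, /upper, /work, and /merged paths are hypothetical, and a real runtime would add more option handling:

    /* Minimal overlayfs sketch; needs root (or CAP_SYS_ADMIN in a
     * suitable namespace) and pre-existing directories. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* workdir must be an empty directory on the same filesystem as
         * upperdir; overlayfs uses it internally for atomic operations. */
        const char *opts = "lowerdir=/lower,upperdir=/upper,workdir=/work";

        if (mount("overlay", "/merged", "overlay", 0, opts) != 0) {
            perror("mount");
            return 1;
        }
        /* /merged now presents the union; any writes land in /upper. */
        return 0;
    }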
The second filesystem involved is AUFS, which does everything that overlayfs does, and a lot more, but its implementation is significantly more complex. AUFS weighs in at about 35,000 lines of code, whereas overlayfs is about 12,000. AUFS was first submitted for inclusion into the kernel in 2008, but was never merged; since then, it has continued to be maintained out-of-tree. Ubuntu included AUFS in its kernels through version 20.10, but dropped it in 21.04.
The third filesystem is shiftfs, which was originally created by James Bottomley in 2018 to allow remapping the user and group IDs in a mounted filesystem. It has never been merged upstream, but it has been included in Ubuntu's tree since the 5.0 kernel series. Canonical's LXD project can use shiftfs to speed up the creation of unprivileged containers, where the root user inside the container is mapped to a user other than root outside of it; without shiftfs, filesystems would need to have their user and group IDs rewritten to be used in that way. It is unlikely that shiftfs will ever land in Linus Torvalds's tree, though, as its functionality is entirely duplicated by the ID-mapped mounts that were added to the kernel in version 5.12. LXD has since been updated to use ID-mapped mounts when available.
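For comparison, below is a rough sketch of creating an ID-mapped mount with the newer kernel API; it is illustrative rather than anything LXD actually ships. It assumes a 5.12-or-later kernel with matching headers, a process (PID 12345, purely hypothetical) whose user namespace carries the desired ID mapping, and hypothetical /srv/rootfs and /mnt/shifted paths:

    /* Sketch: ID-mapped mount via open_tree()/mount_setattr()/move_mount().
     * All paths and the PID are placeholders. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/mount.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* Pick up a detached copy of the tree to be ID-mapped. */
        int tree_fd = syscall(SYS_open_tree, AT_FDCWD, "/srv/rootfs",
                              OPEN_TREE_CLONE | OPEN_TREE_CLOEXEC);
        if (tree_fd < 0) { perror("open_tree"); return 1; }

        /* The mapping comes from an existing user namespace. */
        int userns_fd = open("/proc/12345/ns/user", O_RDONLY | O_CLOEXEC);
        if (userns_fd < 0) { perror("open userns"); return 1; }

        struct mount_attr attr = {
            .attr_set = MOUNT_ATTR_IDMAP,   /* shift IDs per the userns map */
            .userns_fd = userns_fd,
        };
        if (syscall(SYS_mount_setattr, tree_fd, "", AT_EMPTY_PATH,
                    &attr, sizeof(attr)) < 0) {
            perror("mount_setattr");
            return 1;
        }

        /* Attach the now ID-mapped tree at its destination. */
        if (syscall(SYS_move_mount, tree_fd, "", AT_FDCWD, "/mnt/shifted",
                    MOVE_MOUNT_F_EMPTY_PATH) < 0) {
            perror("move_mount");
            return 1;
        }
        return 0;
    }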
The fourth and final filesystem in our story is procfs. As is generally known, each process running on a Linux system has a corresponding directory in /proc. Among a great many other things, each of these directories contains a subdirectory named map_files, which has a collection of symbolic links. Each link corresponds to a range of addresses in the process's address space that has been mapped to a file; the name of each link indicates the range of addresses that are mapped, and the destination is the file that is mapped to that range. For example:
    $ ls -l /proc/$$/map_files/
    total 0
    lr-------- 1 jordan everybody 64 Jun 22 16:21 55e0cc120000-55e0cc14d000 -> /usr/bin/bash
    lr-------- 1 jordan everybody 64 Jun 22 16:21 55e0cc14d000-55e0cc1fe000 -> /usr/bin/bash
    ...
The most prominent user of the map_files subdirectory is perhaps the Checkpoint/Restore In Userspace (CRIU) tool, which allows for "checkpointing" a process by serializing its entire state to disk, and later "restoring" it by recreating the process from its serialized state.
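As a concrete (and much simplified, not CRIU's actual code) illustration, the sketch below walks a process's map_files directory and prints each mapping's address range and backing file; reading another user's entries requires appropriate privileges:

    /* List a process's file-backed mappings via /proc/PID/map_files. */
    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        char dirpath[PATH_MAX], linkpath[PATH_MAX + 256], target[PATH_MAX];

        if (argc < 2) {
            fprintf(stderr, "usage: %s PID\n", argv[0]);
            return 1;
        }
        snprintf(dirpath, sizeof(dirpath), "/proc/%s/map_files", argv[1]);

        DIR *dir = opendir(dirpath);
        if (!dir) { perror("opendir"); return 1; }

        struct dirent *ent;
        while ((ent = readdir(dir)) != NULL) {
            if (ent->d_name[0] == '.')
                continue;
            /* d_name is the "start-end" address range of the mapping. */
            snprintf(linkpath, sizeof(linkpath), "%s/%s", dirpath, ent->d_name);
            ssize_t n = readlink(linkpath, target, sizeof(target) - 1);
            if (n < 0)
                continue;
            target[n] = '\0';   /* readlink() does not NUL-terminate */
            printf("%s -> %s\n", ent->d_name, target);
        }
        closedir(dir);
        return 0;
    }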
What does the patch do?
The patch that caused the kernel panic when creating Docker containers was intended to correct a problem with using overlayfs and shiftfs together. If a process mapped a file from such a mount, the symbolic link in map_files would point to the original "unshifted" version of the file instead of the path inside the shiftfs mount. This broke checkpointing and restoring of Docker containers, because the symbolic links in map_files pointed to files on filesystems that weren't mounted inside the container.
This problem was discovered early in 2020 and fixed shortly after the release of Ubuntu 20.04. At the time, AUFS was included in Ubuntu's kernel. The developers of AUFS had faced similar challenges in differentiating between the real name of a file and its alias inside of an AUFS mount; to address them, the AUFS patch introduced an additional field, vm_prfile, in the kernel's vm_area_struct, which is populated with AUFS's name for the file. To fix the problem with overlayfs and shiftfs, Ubuntu's developers needed to keep track of a file's alias inside a synthetic mount and, since AUFS had already added vm_prfile for a similar purpose, they chose to reuse it rather than introduce another field. Knowing that their fix depended on AUFS being enabled, they also guarded it with an #ifdef block; if AUFS was not configured into the kernel, the patch became a no-op.
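The stand-alone program below is a reconstruction of that guard pattern for illustration only; it is not Ubuntu's patch, the helper name is invented, and CONFIG_AUFS_FS is assumed to be the relevant configuration symbol. It shows how, without AUFS, the fix compiles away to nothing:

    /* Sketch of an #ifdef-guarded fix; build with -DCONFIG_AUFS_FS to
     * enable it. Structures are drastically pared-down stand-ins. */
    #include <stdio.h>

    struct file { const char *path; };

    struct vm_area_struct {
        struct file *vm_file;       /* the file backing the mapping */
    #ifdef CONFIG_AUFS_FS
        struct file *vm_prfile;     /* the AUFS-introduced alias field */
    #endif
    };

    #ifdef CONFIG_AUFS_FS
    static void record_alias(struct vm_area_struct *vma, struct file *alias)
    {
        /* Remember the name the file goes by inside the synthetic mount,
         * so map_files can report the "shifted" path. */
        vma->vm_prfile = alias;
    }
    #else
    static void record_alias(struct vm_area_struct *vma, struct file *alias)
    {
        (void)vma; (void)alias;     /* AUFS disabled: the fix is a no-op */
    }
    #endif

    int main(void)
    {
        struct file shifted = { "/inside/the/shiftfs/mount" };
        struct vm_area_struct vma = { 0 };

        record_alias(&vma, &shifted);
    #ifdef CONFIG_AUFS_FS
        printf("alias recorded: %s\n", vma.vm_prfile->path);
    #else
        printf("built without AUFS; patch is a no-op\n");
    #endif
        return 0;
    }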
How things went wrong
When Ubuntu's developers ported the shiftfs-related patches from their 5.8 kernel branch to their 5.13 and 5.15 kernels, the patch that corrected the problem with map_files and shiftfs was left out because it depended on AUFS, which had been dropped from Ubuntu's kernel. When those kernels were backported to Ubuntu 20.04, where AUFS continues to be supported, the missing patch was noticed and applied to Ubuntu's 5.13 and 5.15 trees as well.
Unfortunately, the internals of overlayfs changed over time in a way that eventually made the patch incorrect. As a result, when a file on an overlayfs is mapped into memory, the function added by the patch attempts to release a reference to a struct file using fput(), but the structure has already been freed by an earlier fput() call; that causes the kernel to panic.
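The underlying failure is an ordinary reference-counting bug. The userspace analogy below, with invented names and the bug deliberately left in, shows the shape of the problem: the second release operates on memory that the first one already freed, which in the kernel manifests as a panic:

    /* Userspace analogy of a double-fput(): do not do this. */
    #include <stdio.h>
    #include <stdlib.h>

    struct fake_file {
        int refcount;
    };

    static void put_file(struct fake_file *f)
    {
        /* Mirrors fput(): drop a reference, free on the last one. */
        if (--f->refcount == 0)
            free(f);
    }

    int main(void)
    {
        struct fake_file *f = malloc(sizeof(*f));
        if (!f)
            return 1;
        f->refcount = 1;

        put_file(f);    /* legitimate release; refcount hits 0, f is freed */
        put_file(f);    /* extraneous release: reads and writes freed
                           memory, undefined behavior here and a panic
                           in the kernel */
        return 0;
    }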
On Ubuntu 21.10, where 5.13 is the default kernel, this didn't cause any problems: since AUFS is not enabled there, the #ifdef block prevented the code introduced by the patch from being compiled into the kernel. The problem appeared when 5.13 and 5.15 were rebuilt for Ubuntu 20.04. Since an HWE kernel needs to support all of the features supported by the kernel it replaces, AUFS was enabled in these builds, and the code containing the extraneous fput() was compiled in.
The problem was noticed in May, almost immediately after the patch was added back. However, it appears that 5.13 was overlooked: the patch was reverted in Ubuntu's 5.15 branch and replaced with a version that did not call fput(), but the incorrect version remained in the 5.13 branch and made it into the 5.13 HWE kernel.
According to the changelog, the problematic kernel package was built on June 3, although it may not have been published to Ubuntu's package repositories for some time afterward. The problem was reported on June 8. Until updated packages were made available on June 10, the only recourse available to affected users was to manually roll back to a previous kernel.
Conclusion
Maintaining an out-of-tree kernel patch for any length of time is an arduous task. While Linux has an iron-clad guarantee of user-space compatibility, it provides no assurances about the stability of internal kernel interfaces between versions. Things that do not get merged often fall quickly by the wayside, due to the sheer effort required to keep up with changes elsewhere in the kernel.
When Ubuntu ships out-of-tree patches with its LTS releases, it is signing its kernel developers up for the task of maintaining them for at least five years, often across multiple branches of the kernel simultaneously. Sometimes these bets pay off; Ubuntu included overlayfs in its kernel before it was merged, and now it is maintained upstream. On the other hand, even though Ubuntu dropped support for AUFS in 2021, because the distribution shipped it in 20.04, it is on the hook for supporting it until 2025. The latest LTS release, 22.04, still contains support for shiftfs; those patches will be hanging around in Ubuntu's tree until at least 2027. As the problem with this patch demonstrates, keeping these patches up to date is no simple task; changes in other parts of the kernel can and will cause problems that require careful attention.
Based on those timelines, it doesn't seem like things are set to get any easier for Ubuntu's kernel developers anytime soon. Indeed, things may actually be destined to become harder; as the kernel now provides equivalent functionality, interest in these out-of-tree alternatives is likely to wane, which will place the burden of maintenance even more squarely upon Ubuntu's shoulders. The bets that don't pay off turn into debt, with compound interest.
In the end, it appears that Ubuntu fell victim to at least some level of self-inflicted complexity. Ubuntu's developers quickly caught and fixed the problem, but only in one of the affected branches. Unfortunately, the branch that was missed is the one that was shipped to users.
Posted Jul 5, 2022 21:11 UTC (Tue) by snajpa (subscriber, #73467) [Link]
so, please accept my apologies, I'm gonna crawl into that corner over there and think long and hard about this :D
Posted Jul 5, 2022 21:33 UTC (Tue) by jake (editor, #205) [Link]
ah, yes, thanks for the correction ... i have adjusted the article text accordingly ...
jake
Posted Jul 5, 2022 23:41 UTC (Tue) by developer122 (guest, #152928) [Link] (11 responses)
Best case scenario, your tooling is tracing subsystem interactions and noting dependencies in case there's any change at all; worst case, it's trying to parse exactly what some code is doing or trying to do, which is halting-problem territory.
Posted Jul 6, 2022 2:38 UTC (Wed) by DSMan195276 (guest, #88396) [Link] (3 responses)
I think the more straightforward solution here is just testing after the patch is applied to verify the functionality still works. That's what we really care about anyway, staring at the code only gets you so far if you never actually try it. And also, don't maintain your own patches if you can avoid it :D
Posted Jul 6, 2022 17:39 UTC (Wed) by NYKevin (subscriber, #129325) [Link] (1 responses)
The other problem is that your graph is probably not acyclic, because most nontrivial programs contain loops. So not only does X break Y, Y also probably breaks X, meaning that you can't automatically order X and Y with respect to one another.
Posted Jul 6, 2022 4:10 UTC (Wed) by derobert (subscriber, #89569) [Link] (3 responses)
You can surely do a bunch of control & data flow analysis, but I suspect you'll get far too many false positives. After all, if it tells you to investigate half the patches, it's not that useful.
If this is really any "docker run", then that's an embarrassing testing failure. (Especially if it hits other container engines, like k8s).
Cloud images probably shouldn't be getting HWE kernels, at least not until a few months after desktop.
Posted Jul 6, 2022 7:41 UTC (Wed) by smurf (subscriber, #17840) [Link] (1 responses)
However, C isn't a particularly nice language to do that with; particularly when you start using these tools after the fact, you get a heap of false positives.
The real solution is to switch to a language with built-in object lifecycle guarantees. In this case, Rust.
Posted Jul 6, 2022 22:05 UTC (Wed) by bartoc (guest, #124262) [Link]
I have my doubts about Rust's ability to prevent this kind of thing, but stuff like that does help (that's why the kernel has lifetime-related cocci checks).
Posted Jul 6, 2022 15:16 UTC (Wed) by iabervon (subscriber, #722) [Link]
I could see there being relatively few patches flagged for "between 5.x.y and 5.x.y+1, lines your patch affects are part of data flow analysis that doesn't match". It seems like it would give a definite warning for the reference count of an object the patch passes to fput() necessarily being different in 5.13 as compared to 5.8. It wouldn't be able to tell which one is correct, or whether the code had two references and is just releasing them in the opposite order now, but it would be clear that there's some sort of conflict resolution needed.
Posted Jul 7, 2022 8:52 UTC (Thu) by bartoc (guest, #124262) [Link]
This ends up going a _little_ better than you would expect, but not much.
You can do cocci style semantic diff and merge, where the merge is based on the AST, not the text. But this means you need to parse everything perfectly and it's not that much better than a good text-based diff/merge algorithm. I suppose in a way this is what __attribute__((flatten)) or always_inline do.
Actually, it would be kinda neat to have a compiler plugin or extension that lets you inject code "somewhere else": you could write functions to be injected at specific locations. This would be merely "neat" for these kinds of functionality patches, but I think it could actually be a useful feature for injecting static tracepoints without cluttering up the code too much (or forgetting to add one someplace).
Come to think of it nim-lang has an experimental feature for this called "term rewriting macros", you can write something like (from the manual):

    template optMul{`*`(a, 2)}(a: int): int = a+a
Once this is brought into scope, the compiler will rewrite any further expressions like some_ident * 2 as some_ident + some_ident. You can see how this is basically a megaton-yield footgun. The matching language is kinda neat, but it's very sensitive to the exact structure of the AST, so it can miss stuff. (Nim has macros too, but those operate on a kind of alternate "stable" AST that converts to and from the "real" AST. That's harder for TR macros, because any change in the "real" AST is usually something TR macros might be interested in, or might be broken by if they don't understand it.)
Because of these issues, and because TR macros are not widely used and have some performance problems, they may be removed.
Oh, Mathematica's "Wolfram Language" is famously based on this concept, and there is a lot of literature out there about how amazing it is. I remain unconvinced, but it is unconventional and interesting for sure.
Oh, you can imagine that once you have such a facility and multiple users pop up trying to use it at once things get quite interesting quite fast.
Posted Jul 6, 2022 21:32 UTC (Wed) by wtarreau (subscriber, #51152) [Link] (8 responses)
What's irritating me however (and has for a while) is their insistence on using EOL kernels. They've made good progress on their LTS branches, but honestly, "upgrading" from a maintained kernel to an unmaintained one to get new drivers is really not acceptable. Having produced stable kernels myself, I can say it: **EOL kernels are completely bogus**, because the flow of patches that need to be backported is steady, but what happens when the kernel reaches EOL? They're not merged anymore. After one year, a non-LTS EOL kernel probably misses 2000 fixes, for as many bugs that are fixed in all maintained versions around it, but not that one. Some will corrupt data, cause random hangs, disconnect your WiFi during an audio conf, make your screen disappear after resume, leave phantom USB devices after some errors, let an intruder escalate privileges on your machine, etc. There are now huge efforts from the kernel community to provide a wide choice of high-quality LTS kernels, and there is absolutely zero (**ZERO**) excuse nowadays for any distro to ship a kernel that reaches EOL before the end of support of the distro (and even worse, before the release, as here). This bad practice is irresponsible and must stop!
Posted Jul 9, 2022 19:32 UTC (Sat) by jafd (subscriber, #129642) [Link] (3 responses)
> a non-LTS EOL kernel probably misses 2000 fixes, for as many bugs that are fixed in all maintained versions around it, but not that one.
What if on the systems running that kernel, none of the fixes touched modules actually used in them?
> Some will corrupt data, cause random hangs
Not experienced once for a year, let's say
> disconnect your WiFi during an audio conf, make your screen disappear after resume, leave phantom USB devices after some errors
Not happened once in the drivers actually used and on that specific hardware.
But what's more likely to happen is that a newer version, while bringing a minor fix to a module or a subsystem you need, will also bring a mighty regression in a driver or a subsystem your workflow absolutely depends upon. A couple articles ago someone commented about precisely this situation here on LWN [0].
That's why there exist users (think companies) which find a kernel that doesn't crap on their hardware 99.999% of the time, and pin it, and swear to never upgrade it ever. Have you thought they may have had enough of the Russian roulette?
Jumping from LTS to LTS can also be akin to jumping centuries in a time travel vehicle. So many changes, so many surprises, so much work to ensure it won't crap on something we absolutely need to work...
[0] https://lwn.net/Articles/889787/, you were in that thread too.
Posted Jul 10, 2022 9:55 UTC (Sun) by smurf (subscriber, #17840) [Link] (1 responses)
So set up a reasonable CI system. Surprise: you probably need that anyway.
Yes, that's somewhat more effort … but you only need to spend it once, not with every release.
Posted Jul 11, 2022 16:47 UTC (Mon) by wtarreau (subscriber, #51152) [Link]
But what you seem to be ignoring here is that the older the kernel, the harder it is to backport fixes, and the more likely they are to be wrong, particularly when taken out of the context of all the other fixes surrounding the original patch. When I was a stable maintainer, I used to receive many messages like "do not take this patch without this one" or "I'll provide you a different one for this version as it's not sufficient", etc. The risk of getting a fix wrong when applying it yourself to a tree without the author's approval is quite high. Thus in addition to missing tons of fixes, the few you get (the so-called "security fixes" that make vendors sell) are often bogus and are the ones that will take your system down.
Really, do not use EOL kernels.
Posted Jul 11, 2022 9:38 UTC (Mon) by ballombe (subscriber, #9523) [Link] (3 responses)

In particular, the current stable kernel needs to contain 2000 bugs so that when it will be EOLed, it will miss 2000 fixes. In particular, "some will corrupt data, cause random hangs, disconnect your WiFi during an audio conf, make your screen disappear after resume, leave phantom USB devices after some errors, let an intruder escalate privileges on your machine, etc."

This is not reassuring.
Posted Jul 11, 2022 16:54 UTC (Mon) by wtarreau (subscriber, #51152) [Link]
No but one thing is certain, it will not magically fix all those that are discovered daily and that affect it.
> In particular, the current stable kernel needs to contain 2000 bugs so that when it will be EOLed, it will miss 2000 fixes.
Maybe more maybe less, who knows.
> > In particular, "some will corrupt data, cause random hangs, disconnect your WiFi during an audio conf, make your screen disappear after resume, leave phantom USB devices after some errors, let an intruder escalate privileges on your machine, etc."
For sure the best way not to know about bugs is to use an EOL version that doesn't receive fixes.

> This is not reassuring.

But that's why there are LTS kernels for those who want to stick as long as possible to what works best for them. Some people only deploy a kernel on sensitive systems after one year, so that most of the recent regressions are out of the way. I personally deploy new LTS kernels on my laptop so that I can spot changes or bugs early, and have time to get them fixed before these kernels need to reach servers. That's reasonable.
Posted Jul 11, 2022 18:16 UTC (Mon) by farnz (subscriber, #17727) [Link] (1 responses)
The trouble is that stable kernels do contain bugs all over the shop, some of which are exploitable. So the question becomes not "are there bugs in my EOL kernel?", to which the answer is definitely "yes", but "are the bugs in my EOL kernel of concern to me, given that I do not know the scope and impact of the bugs in my kernel?", which is a much harder question to answer.
And it's made exponentially harder by regressions in newer kernels, which means that there's no good answer: do you take a newer kernel that fails to boot one time in 10 because your PCIe GPU is left in a bad state by firmware, or stick to the older kernel that has a remotely exploitable bug that you don't know about, one that lets an intruder escalate privileges on your machine?
Ideally, there would simply not be regressions in the kernel, so updating would always be the right thing to do. But that's not the world we live in; my experience is that I'm better off taking Linus's recent release, finding regressions and reporting them ASAP (so that the bug reports go to people who've been working in the right bits of the kernel recently, and bisect is often possible in reasonable time) than putting off updates for as long as possible and then reporting a huge number of regressions in one go, but other people will have had other experiences.
Posted Jul 10, 2022 14:02 UTC (Sun) by mgedmin (subscriber, #34497) [Link] (1 responses)
Ubuntu's non-LTS releases are supported for 9 months, and are declared EOL three months after a newer release is made available.
(This doesn't affect the rest of the article in any way, it's just that my inner pedant cannot leave inaccurate information alone.)