LWN.net Weekly Edition for March 7, 2019
Welcome to the LWN.net Weekly Edition for March 7, 2019
This edition contains the following feature content:
- Source-code access for the long haul: a service to help companies meet their source-availability obligations.
- A container-confinement breakout: details on a recent container vulnerability.
- The Thunderclap vulnerabilities: the kernel remains vulnerable to hostile hardware.
- Core scheduling: extending the CPU scheduler to provide stronger process isolation.
- A kernel unit-testing framework: a proposal to bring unit testing to the kernel.
- Two topics in user-space access: accessing user-space data from the kernel is tricky, as shown by two separate discussions.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Source-code access for the long haul
Corporations that get their feet wet in the sea of free software often find out that not only do they now have obligations to provide source code, but that people will actually try to access it and complain loudly if they can't get it. At the first Copyleft Conference, Alexios Zavras from Intel spoke alongside Stefano Zacchiroli from Software Heritage about how the two organizations are working together. Software Heritage's mission makes it ideally suited to host Intel's many source-code releases in a way that provides stable long-term repositories that Intel can then reference.
This year's FOSDEM was its 19th edition, and it's now a regular and much-loved part of the European free-software year. But for the first time, the Software Freedom Conservancy organized a one-day Copyleft Conference immediately following FOSDEM in Brussels; it is intended to allow a more in-depth exploration of copyleft issues than the Legal and Policy Issues devroom at FOSDEM can accommodate.
Zavras spoke first; he explained that Intel is a large company that deals with and releases a significant amount of free software. That software tends to fall into four categories. Three of them are: software that contains a released version of a well-known piece of free software (e.g. gcc-8.2), software that contains a weird development snapshot or revision of a well-known piece of free software (e.g. gcc rev 267928), and software that consists of a set of patches on top of a well-known piece of free software (e.g. gcc-8.2 + patches.tar.gz). If the well-known piece of free software is published under a copyleft license, Intel will have source distribution obligations as a result. The fourth category is software that is entirely developed inside Intel but that the company has chosen to release under a free license; if it is a copyleft license, then there are still obligations.
![Alexios Zavras](https://static.lwn.net/images/2019/copyleft-zavras-sm.jpg)
Intel also deals in software with a variety of copyleft licenses, all of which use slightly different language to describe what must be distributed and how. The GPLv2 refers to the "complete corresponding machine-readable source code", which must "accompany" the binary. GPLv3 has slightly different but functionally similar wording, as do the Mozilla Public License v2 and the Eclipse Public License v2. In practice, all require the distribution of the complete corresponding source code (CCS).
In an ideal world, said Zavras, Intel would have foolproof processes in place to collect, package, and provide the source code that each license required—in the way that the license requires. In practice, people change roles or leave the company, reorganizations happen, and things get forgotten. Through no ill-will or malice, a company will often end up failing to honor its copyleft obligations unless it makes it a point to honor them.
Intel does not want to fail in this, so it wondered if it might be possible to involve a third party in the fulfillment process. The GPL FAQ is clear that "if the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source", and the older GPLv2 FAQ is comparably clear on the point. So Intel needed to find a partner that could provide stable hosting for free source code in a manner that results in persistent URLs for linking to. Intel decided that Software Heritage was the perfect organization for this, and at this point in the talk Zavras introduced Zacchiroli, co-founder and current CTO of Software Heritage (SWH), to explain why this was so.
Software Heritage's mission is to collect, preserve, and share the source code of all software that is publicly available. It aims to cater to the needs of a number of different classes of users, including cultural heritage, industrial, education, and research users; Zacchiroli said that he would focus on the industrial users.
SWH has two ways of collecting source code for preservation. For one, it crawls the web much like a search engine; when it finds source repositories, it revisits them periodically to see if anything new has been added. It downloads everything it finds and stores it, but it deduplicates first, so each given source file or commit is stored only once. Regular sites for crawling include GitHub, GitLab, and, until it disappeared, Gitorious. The major Linux distributions are also crawled, as are major producers of free software such as GNU and Inria, the French national center for computer science and applied mathematics research. SWH has also recently started to crawl package-management sites such as PyPI and npm. All of this has resulted in about five billion source files from about one billion commits of some ninety million projects; storing these has produced about 200TB of compressed blobs and 6TB for the indexing database.
![Stefano Zacchiroli](https://static.lwn.net/images/2019/copyleft-zacchiroli-sm.jpg)
But in addition to crawler pull-based archiving, the project supports various forms of push-based archiving. A prioritized crawl of a new repository can be requested with the "Save code now" button at https://archive.softwareheritage.org/; for crawled content that's already in the archive, there's a ubiquitous "take a new snapshot" button to request a fresh crawl if it seems out of date. There is also a "direct deposit" service that permits the uploading of code. To avoid turning into a "warez dumpster", access to direct deposit is restricted to projects that have arranged to receive credentials to the depository, which is SWORD 2.0 compliant and offers a RESTful API for deposit and monitoring. It is the direct deposit service that Intel uses to upload such CCS as it is obliged, or chooses, to offer to the world.
Each deposit becomes an integral, permanent part of the archive. It is allocated a sui generis persistent identifier (PID), the format of which is defined here. Each PID can be mapped directly to a URL that points to that deposit in the archive. Consider the PID swh:1:rev:a86747d201ab8f8657d145df4376676d5e47cf9f, which points to the SWH "Hello World" example. It can be linked to as https://archive.softwareheritage.org/browse/swh:1:rev:a86747d201ab8f8657d145df4376676d5e47cf9f; files inside the deposit can be directly linked, such as the README file that explains the "Hello World" nature of this code.
In response to a later question about why SWH didn't just use the IETF standards-track hash-based name generator format "urn:sha1:", Zacchiroli said the project didn't want to tie itself to a particular hash algorithm indefinitely; that :1: is a version number, which allows migration to another algorithm when it seems desirable to do so. He made clear, however, that existing PIDs would continue to be supported indefinitely and would remain unaffected by such a development.
As has been said, source code is deduplicated within the archive. This doesn't preclude client-side deduplication by direct depositors: if you know that you're uploading a tarball that contains the entire source of gcc-8.2 or something else that's already been ingested into the archive, you can replace it in your upload with a pointer to the pre-ingested version. The downside of deduplication is that what was uploaded as a nice big tarball isn't stored that way; bulk download of large repositories such as kernel versions requires collecting together millions of objects to reconstruct the original tarball. This makes bulk download an asynchronous process; the Software Heritage Vault is the component that receives requests for bulk download, prepares tarballs, and notifies requesters when they're ready to be fetched.
So Intel can now gradually deploy source code which it either wants to or must publish, in such a way that each deployment produces a PID that uniquely identifies the deployed tree and that can be linked to using HTML. The task now is to integrate this with Intel's regular software production and delivery chain. Once that's underway, fine-tuning can be attempted: Zavras doesn't yet have an opinion on whether it's better to deduplicate client-side, or just send everything and let SWH sort it out, for example.
Zavras expressed a few desires directed at authors of future copyleft licenses: not to require the physical transmission of either CCS or the license text itself, and to be clear that a pointer to required material is an acceptable method of distribution. That said, copyleft serves a greater community than just the creators of free software; I feel that any such authors should examine these suggestions in the light of the entire community's needs.
In response to an attendee question, Zacchiroli confirmed that SWH is open to contact from organizations that are interested in helping to mirror the archive. In another answer, it was confirmed that SWH has nothing to say about the correctness or completeness of any CCS uploaded to it; that responsibility remains with those that have the license obligation to make it available. SWH simply commits to long-term availability of what's given to it. Someone asked whether SWH only archived the latest version of what it found, or whether it took all the intermediate Git commits; Zacchiroli said that for crawl-based archiving, it was the latter, but for direct-deposit it could only publish what was given to it. There was also a question about the practicality of client-side deduplication, to which Zacchiroli said that the main new piece of technology to have come out of this collaboration with Intel was a set of tools to assist in doing exactly that.
Long-term hosting of CCS archives can be onerous and difficult to get right in the real world. Although ultimate responsibility for CCS availability rests with the person or organization who has incurred the license obligation, it is perfectly reasonable to involve a third party with experience in CCS publication to assist in discharging these responsibilities. And if you or your organization decides to do that, Software Heritage is well set up to help.
[We would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Brussels for the Copyleft Conference.]
A container-confinement breakout
The recently announced container-confinement breakout for containers started with runc is interesting from a few different perspectives. For one, it affects more than just runc-based containers: privileged LXC-based containers (and likely others) are also affected, though the LXC-based variety is harder to compromise than the runc one. It also shows, once again, that privileged containers are difficult—perhaps impossible—to create in a secure manner. Beyond that, it exploits some Linux kernel interfaces in novel ways, and the fixes use a perhaps lesser-known system call that was added to Linux less than five years ago.
The runc tool implements the container runtime specification of the Open Container Initiative (OCI), so it is used by a number of different containerization solutions and orchestration systems, including Docker, Podman, Kubernetes, CRI-O, and containerd. The flaw, which uses the /proc/self/exe pseudo-file to gain control of the host operating system (and thus of anything else running on the host, including other containers), has been assigned CVE-2019-5736. It is a massive hole for containers that run with access to the host root user ID (i.e. UID 0), which, sadly, covers most of the containers being run today.
There are a number of sources of information on the flaw, starting with the announcement from runc maintainer Aleksa Sarai linked above. The discoverers, Adam Iwaniuk and Borys Popławski, put out a blog post about how they found the hole, including some false steps along the way. In addition, one of the LXC maintainers who worked with Sarai on the runc fix, Christian Brauner, described the problems with privileged containers and how CVE-2019-5736 applies to LXC containers. There is a proof of concept (PoC) attached to Sarai's announcement, along with another more detailed PoC he posted the following day after the discoverers' blog post.
As with many exploits, this one takes a rather convoluted path, but it can reliably compromise the host. It does this by either getting a user to create a container using a compromised container image or by having a user attach to a container (e.g. using "docker exec") that an attacker has had write access to. That begins the complicated dance.
The attacker sets up the entry point to the container (which is invoked when the container is created or accessed) to be a symbolic link to /proc/self/exe. That link will resolve to the runc binary in the host filesystem when it is run from, say, a /bin/bash that contains:
#!/proc/self/exe
The effect will be to re-execute the runc command inside the container, but using the binary from the host.
The attacker also needs to subvert one of the shared libraries used by runc inside the container; when runc gets re-executed inside the container, it will use the shared libraries from within it. In Sarai's PoC, he chose libseccomp, but other libraries used by runc could be chosen instead. The shared library is changed to add a new global constructor function that will be called after the library is loaded. This function will open /proc/self/exe for read (as it would get an ETXTBSY error if it tried to open it for write). That open() results in a file descriptor that can be accessed via /proc (e.g. /proc/self/fd/3).
The reason that runc-based containers are easier to exploit than LXC-based ones is that runc exits after it does its job, while lxc-attach waits around for its command to finish. That means the runc binary is not busy (in the ETXTBSY sense) for most of the time that the container runs. While this non-busy window is nearly always open for runc, it is much smaller for LXC: only after the container command completes and lxc-attach exits is there a small window that can be exploited for privileged LXC containers.
Once it has opened the file, the constructor then executes another binary (using execve()), which terminates the execution of runc. That allows this new binary to open /proc/self/fd/3 for write, since ETXTBSY is no longer returned after the execve(). Now the attacker has a file descriptor that can be used to write anything they want to the runc binary on the host. Game over.
The fix, as implemented by runc and LXC, is to use an anonymous file created by memfd_create(), copy the contents of the runc binary into it, and then seal the file to protect it from being changed. For kernels prior to the introduction of memfd_create() in 3.17, which was released back in October 2014, there is a fallback fix that uses a file created with O_TMPFILE. The use of memfd_create() led Florian Weimer to wonder, however, whether the approach was the right one.
But Sarai does not see a problem with the approach; sealing a file in memory ensures that the right binary is executed in all cases. Steve Grubb was concerned that the fix was "more of a workaround than a root cause fix", but Sarai defended it there as well.
He did note that his patch set providing a number of different VFS path-resolution restrictions "would help fix this and could help fix a wide variety of other container runtime issues that have been bothering me for a couple of years". Those patches are based on earlier work, including the AT_NO_JUMPS patches from 2017; that functionality has been proposed in various forms over the years but has never made it into the mainline—at least so far.
There are a couple of obvious exploit scenarios. The easiest is to have someone use a malicious container image—a practice that is not unknown in the container world. In fact, lots of container users grab images (or other artifacts) from the internet without vetting their contents. A compromise of a running privileged container (which is most of them) could also lead down this path. The attacker could set up the exploit on likely suspects (e.g. /bin/bash) and wait for the container owner to use docker exec—or do something to the container to draw attention to it. Once the host has been compromised by one of its running containers, all of the other running containers, and the operating system itself, are compromised.
All in all, it is a nasty little hole. Updating runc, LXC, and others affected would seem mandatory; one hopes it has already long been done. Finding better ways to further isolate containers going forward is needed, but the main tool, user namespaces, has been around for quite a while now. As Brauner noted in his blog post, privileged containers are simply never going to be truly secure.
The Thunderclap vulnerabilities
It should come as no surprise that plugging untrusted devices into a computer system can lead to a wide variety of bad outcomes—though often enough it works just fine. We have reported on a number of these kinds of vulnerabilities (e.g. BadUSB in 2014) along the way. So it will not shock readers to find out that another vulnerability of this type has been discovered, though it may not sit well that, even after years of vulnerable plug-in buses, there are still no solid protections against these rogue devices. This most recent entrant in the space targets the Thunderbolt interface; the vulnerabilities found have been dubbed "Thunderclap".
There are several different versions of Thunderbolt, either using Mini DisplayPort connectors (Thunderbolt 1 and 2) or USB Type-C (Thunderbolt 3). According to the long list of researchers behind Thunderclap, all of those are vulnerable to the problems they found. Beyond that, PCI Express (PCIe) peripherals are also able to exploit the Thunderclap vulnerabilities, though they are a bit less prone to hotplugging. Thunderclap is the subject of a paper [PDF] and web site. It is more than just a bunch of vulnerabilities, however, as there is a hardware and software research platform that they have developed and released. A high-level summary of the Thunderclap paper was posted to the Light Blue Touchpaper blog by Theo Markettos, one of the researchers, at the end of February.
At its core, Thunderclap exploits the ability of devices with direct memory access (DMA) to read system memory, including memory that is not at all related to the supposed function of the device. That memory could easily hold sensitive information, such as encryption keys or login credentials, that the user does not realize they are exposing. In addition, because USB Type-C connectors are also used for charging these days, many users may be unpleasantly surprised to learn that a "charger" can actually be a Thunderbolt device—with all of the access that implies; their "innocuous" charging port is really much more than that.
The primary hardware defense against rogue DMA devices is the I/O memory-management unit (IOMMU), but many operating systems do not even enable it. Windows, Linux, and FreeBSD do not enable the IOMMU for Thunderbolt, though Linux has added some support in 5.0. But even for macOS, which does enable the IOMMU, there are significant holes. The researchers created a fake Thunderbolt network card that had far more access than expected on the systems that they tested. Markettos described it this way (some of which found its way into last week's Security quotes of the week):
On MacOS and FreeBSD, our network card was able to start arbitrary programs as the system administrator, and on Linux it had access to sensitive kernel data structures. Additionally, on MacOS devices are not protected from one another, so a network card is allowed to read the display contents and keystrokes from a USB keyboard.
Worst of all, on Linux we could completely bypass the enabled IOMMU, simply by setting a few option fields in the messages that our malicious network card sent.
In theory, the IOMMU could provide the full protection needed, but the current implementations in today's operating systems are lacking.
Markettos likens the situation with the IOMMU to that of the system-call interface. The latter has undergone lots of testing and hardening over the years, but that process is just starting for the IOMMU.
Linux does have support for Thunderbolt 3 security levels (or access controls), though those are fairly coarse controls. The security levels affect whether certain types of devices can be used with the system, but do not restrict the access of authorized devices. There is a user-space daemon and a command-line program to work with the security levels, as well as GNOME integration. These security levels are also governed by the identification string provided by the device itself, so spoofing is certainly a possibility.
Protections against the Thunderclap vulnerabilities range from the drastic to more prosaic mechanisms:
You can also protect yourself by not leaving your computer unattended in public and not using public USB-C charging stations. Be wary of connecting an unknown device to the Thunderbolt port of your machine, even chargers and projectors that may seem harmless.
The advice about unattended computers is true for regular USB ports—and in general—as well. With each new hardware feature, it seems, the physical security of our computers and other devices becomes ever more important. In some sense, Thunderclap isn't really new, but is simply yet another reminder of the perils of hardware (in)security. While our software has its share of problems, of course, the hardware folks aren't making the security story any easier.
Core scheduling
Kernel developers are used to having to defend their work when posting it to the mailing lists, so when a longtime kernel developer describes their own work as "expensive and nasty", one tends to wonder what is going on. The patch set in question is core scheduling from Peter Zijlstra. It is intended to make simultaneous multithreading (SMT) usable on systems where cache-based side channels are a concern, but even its author is far from convinced that it should actually become part of the kernel.
SMT increases performance by turning one physical CPU into two virtual CPUs that share the hardware; while one is waiting for data from memory, the other can be executing. Sharing a processor this closely has led to security issues and concerns for years, and many security-conscious users disable SMT entirely. The disclosure of the L1 terminal fault vulnerability in 2018 did not improve the situation; for many, SMT simply isn't worth the risks it brings with it.
But performance matters too, so there is interest in finding ways to make SMT safe (or safer, at least) to use in environments with users who do not trust each other. The coscheduling patch set posted last September was one attempt to solve this problem, but it did not get far and has not been reposted. One obstacle to this patch set was almost certainly its complexity; it operated at every level of the scheduling domain hierarchy, and thus addressed more than just the SMT problem.
Zijlstra's patch set is focused on scheduling at the core level only, meaning that it is intended to address SMT concerns but not to control higher-level groups of physical processors as a unit. Conceptually, it is simple enough. On kernels where core scheduling is enabled, a core_cookie field is added to the task structure; it is an unsigned long value. These cookies are used to define the trust boundaries; two processes with the same cookie value trust each other and can be allowed to run simultaneously on the same core.
By default, all processes have a cookie value of zero, placing them all in the same trust group. Control groups are used to manage cookie values via a new tag field added to the CPU controller. By placing a group of processes into their own group and setting tag appropriately, the system administrator can ensure that this group will not share a core with any processes outside of the group.
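As a sketch of how an administrator might use this interface: the cpu.tag file name below is an assumption based on the patch set's description of a new tag field in the CPU controller, and the details could well change as the patches evolve.

```shell
# Create a CPU control group for one tenant's processes
# (cgroup-v1 layout, as used by the patch set).
mkdir /sys/fs/cgroup/cpu/tenant-a

# Tag the group; its members get a common cookie and will never
# share an SMT core with processes outside the group.
echo 1 > /sys/fs/cgroup/cpu/tenant-a/cpu.tag

# Move the tenant's processes into the group.
echo "$PID" > /sys/fs/cgroup/cpu/tenant-a/tasks
```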
Underneath, of course, there is some complexity involved in making all of this work. In current kernels, each CPU in an SMT core schedules independently of the others, but that cannot happen when core scheduling is enabled; scheduling decisions must now take into account what is happening elsewhere in the core. So when one CPU in a core calls into the scheduler, it must evaluate the situation for all CPUs in that core. The highest-priority process eligible to run on any CPU in that core is chosen; if it has a cookie value that excludes processes currently running in other CPUs, those processes must be kicked out to ensure that no unwanted sharing takes place. Other, lower-priority processes might replace these outcasts, but only if they have the right cookie value.
The CPU scheduler works hard to avoid moving processes between distant CPUs in an attempt to maximize cache effectiveness. Load balancing kicks in occasionally to even out the load on the system as a whole. The calculation changes a bit, though, when core scheduling is in use; moving a process is more likely to make sense if that process can run on a CPU that would otherwise sit idle, even if the moved process leaves a hot cache behind. Thus, if an SMT CPU is forced idle due to cookie exclusion, a new load balancing algorithm will look across other cores for a process with a matching cookie to move onto the idle CPU.
The patch set has seen a fair amount of discussion. Greg Kerr, representing Chrome OS, questioned the control-group interface. Making changes to control groups is a privileged operation, but he would like for unprivileged processes to be able to set their own cookies; to that end, he proposed an API based on prctl() calls. Zijlstra replied that the interface issues can be worked out later; first it's necessary to get everything working as desired.
Whether that can be done remains to be seen. As Linus Torvalds pointed out, performance is the key consideration here. Core scheduling only makes sense if it provides better throughput than simply running with SMT disabled, so the decision on whether to merge it depends heavily on benchmark results. There is not a lot of data available yet; it seems that perhaps it works better on certain types of virtualized loads (those that only rarely exit back to the hypervisor) and worse on others. Subhra Mazumdar also reported a performance regression on database loads.
Thus, even if core scheduling is eventually accepted, it seems unlikely to be something that everybody turns on. But it may yet be a tool that proves useful for some environments where there is a need to get the most out of the hardware, but where strong isolation between users is also needed. The process of finishing this work and figuring out if it justifies the costs seems likely to take a while in any case; this sort of major surgery to the CPU scheduler is not done lightly, even when its developer doesn't claim to "hate it with a passion". So security-conscious users seem likely to be without an alternative to disabling SMT for some time yet.
A kernel unit-testing framework
For much of its history, the kernel has had little in the way of formal testing infrastructure. It is not entirely an exaggeration to say that testing is what the kernel community kept users around for. Over the years, though, that situation has improved; internal features like kselftest and services like the 0day testing system have increased our test coverage considerably. The story is unlikely to end there, though; the next addition to the kernel's testing arsenal may be a unit-testing framework called KUnit.
The KUnit patches, currently in their fourth revision, have been developed by Brendan Higgins at Google. The intent is to enable the easy and rapid testing of kernel components in isolation — unit testing, in other words. That distinguishes KUnit from the kernel's kselftest framework in a couple of significant ways. Kselftest is intended to verify that a given feature works in a running kernel; the tests run in user space and exercise the kernel that the system booted. They thus can be thought of as a sort of end-to-end test, ensuring that specific parts of the entire system are behaving as expected. These tests are important to have, but they do not necessarily test specific kernel subsystems in isolation from all of the others, and they require actually booting the kernel to be tested.
KUnit, instead, is designed to run more focused tests, and they run inside the kernel itself. To make this easy to do in any setting, the framework makes use of user-mode Linux (UML) to actually run the tests. That may come as a surprise to those who think of UML as a dusty relic from before the kernel had proper virtualization support (its home page is hosted on SourceForge and offers a bleeding-edge 2.6.24 kernel for download), but UML has been maintained over the years. It makes a good platform for something like KUnit; tests can be run without rebooting the host system or needing to set up virtualization.
Using KUnit is a matter of writing a set of test cases to exercise the code in question and check the results. Each test case is a function with this signature:
void test_case(struct kunit *test);
A test case function that returns normally is deemed to have succeeded; a failure can be indicated by a call to KUNIT_FAIL():
void always_fails(struct kunit *test)
{
	KUNIT_FAIL(test, "I'm so bad I always fail");
}
One could thus write a test case with a bunch of if statements and KUNIT_FAIL() calls but, naturally, a set of helper macros exists to reduce the amount of boilerplate code required. For example:
KUNIT_EXPECT_EQ(test, v1, v2);
will test v1 and v2 for equality and complain loudly if the test fails. A module testing low-level string handling might feature calls like:
KUNIT_EXPECT_EQ(test, strcmp("foo", "foo"), 0);
KUNIT_EXPECT_NE(test, strcmp("foo", "bar"), 0);
/* ... */
As one would expect, there are a number of these macros for different kinds of tests; see this page for the full set. Note that a test case will continue after one of these "expectations" fails; that may not be desirable if a particular failure means that the remaining tests cannot be performed. For such cases, there is a mirror set of macros with ASSERT instead of EXPECT in their names; if an assertion fails, the rest of the test case (but not any other test cases) will be aborted.
Once a set of test cases is written, they should be gathered together into an array with code like:
static struct kunit_case test_cases[] = {
	KUNIT_CASE(my_case_1),
	KUNIT_CASE(my_case_2),
	{},
};
This array is then packaged into a kunit_module structure this way:
static struct kunit_module test_module = {
	.name = "My fabulous test module",
	.init = my_test_init,
	.exit = my_test_exit,
	.test_cases = test_cases,
};
module_test(test_module);
The init() and exit() functions can be provided if they are needed to set up and clean up after the test cases. The resulting source file implements a special type of kernel module; the next step is to add it to the Kconfig file:
config MY_TEST_MODULE
	bool "This is my test module and nobody else's"
	depends on KUNIT
and to add an appropriate line to the makefile as well. Then, running the kunit.py program that comes with KUnit will build a UML kernel and boot it to run any tests that are enabled in the current kernel configuration.
For more information, see the detailed documentation written by Higgins. There is also an extended example provided with the patch set in the form of the conversion of the existing device-tree unit tests to the KUnit framework. There have been some comments on the details of how test cases are written, but the code would appear to be getting closer to being ready for merging into the mainline. Then the kernel will have another tool in its testing toolbox. That is just the beginning, of course; then somebody has to actually write the tests to go with it.
Two topics in user-space access
Kernel code must often access data that is stored in user space. Most of the time, this access is uneventful, but it is not without its dangers and cannot be done without exercising due care. A couple of recent discussions have made it clear that this care is not always being taken, and that not all kernel developers fully understand how user-space access should be performed. The good news is that kernel developers are currently working on a set of changes to make user-space access safer in the future.
User-space and kernel addresses
The kernel provides a whole set of functions that allow kernel-space code to access user data. Naturally, these functions have to handle all of the possible things that might happen, including data that has been paged out to disk or addresses that don't point to any valid data at all. In the latter case, functions like copy_from_user() will return -EFAULT, which is usually then passed back to user space. The faulty application, which is certainly checking for error returns from system calls, can then do the right thing.
Unpleasant things can happen, though, if the address passed in from user space points to kernel data. If the kernel actually dereferences those addresses, it could allow an attacker to get at data that should be protected. The access_ok() function exists to prevent this from happening, but it can't work if kernel developers forget to call it before passing an address to low-level user-space access functions like __copy_from_user() (the higher-level functions call access_ok() internally). This particular omission has led to some severe vulnerabilities in the past.
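The pattern at issue looks something like this sketch (fetch_request() and struct my_user_req are invented for illustration, and it assumes the two-argument access_ok() form found in 5.0):

```c
/* Illustrative only: a caller of the low-level __copy_from_user()
 * must perform the access_ok() range check itself; forgetting that
 * check is the omission described above. The higher-level
 * copy_from_user() does both steps internally and is the safer
 * default. */
static int fetch_request(struct my_user_req *dst, const void __user *src)
{
	if (!access_ok(src, sizeof(*dst)))
		return -EFAULT;	/* address range is not in user space */
	if (__copy_from_user(dst, src, sizeof(*dst)))
		return -EFAULT;	/* fault while copying the data */
	return 0;
}
```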
This problem was, until recently, aggravated by the fact that, if an attacker tried to exploit a missing-access_ok() vulnerability using a kernel-space address that turned out to be invalid, the kernel would helpfully return -EFAULT. That would allow attackers to probe the kernel's address space at leisure until the target data structures had been found. Back in August 2018, Jann Horn added a check to catch this case and cause a kernel oops when it happens; attackers with access to a missing-access_ok() vulnerability were deprived of the ability to quietly dig around in kernel space, but there were some other, unexpected consequences as well.
As reported by Changbin Du, kernel probes ("kprobes") can be configured to access strings at any address — in either user or kernel space. The chances of such probes seeing invalid addresses are relatively high and, after Horn's patch, they would cause a kernel oops. Linus Torvalds pulled the suggested fix, but objected to the idea that a single function in kprobes (or anywhere else in the kernel) could accept both user-space and kernel addresses and manage to tell them apart.
On most architectures supported by Linux, it is usually relatively easy to distinguish user-space addresses from kernel-space addresses; that is because the two are confined to different parts of the overall address space. On 32-bit x86 systems, the default was for user space to own addresses below 0xc0000000, with the kernel owning everything above that point. Among other things, this layout improves performance by avoiding the need to change page tables when switching between user and kernel mode. But there is nothing that requires the address space to be laid out that way. A classic example is the "4G:4G" mode for x86, which gave the entire 32-bit address space to user space, then switched page tables on entry into the kernel so that the kernel, too, had the full address space.
When something like 4G:4G is in effect, the same address can be meaningful in both user and kernel space, but will point to different data. There is, at that point, no way to reliably distinguish the two types of addresses just by looking at them. There are other environments where the address spaces can overlap in this way, and defensive technologies like kernel page-table isolation are pushing even plain x86 systems in that direction. As a result, any attempt to handle both user-space and kernel addresses without knowing which they are is going to end in grief sooner or later. That explains why Torvalds became so unhappy at any attempt to do so.
The solution for kprobes will be to require accesses to specify whether they are meant for user space or kernel space. To that end, Masami Hiramatsu has been working on a patch set to add a new set of accessors for user-space data. Once those are added, and after some time has passed, it's likely that the current accessors will be changed to work with kernel-space data only.
Kprobes are not the only place where addresses have been mixed up in this way; it turns out that BPF programs will call bpf_probe_read() with either type of address and expect it to work. Changing that, Alexei Starovoitov said, could break existing user code. Torvalds responded, though, that: "It appears that people haven't understood that kernel and user addresses are distinct, and may have written programs that are fundamentally buggy". He would like to see such uses start to fail on the x86 architecture sometime soon so that users will fix their code before something more unpleasant happens.
The solution here will be similar to what is being done with kprobes. Two new functions (with names like bpf_user_read() and bpf_kernel_read()) will be introduced, and developers will be strongly encouraged to convert their code over to them. Eventually, bpf_probe_read() will go away entirely. But, as Torvalds noted, that will not be happening in the immediate future: "It's really a 'long-term we really need to fix this', where the only question is how soon 'long-term' is".
Keeping user space walled off
While the kernel must often access user space, unpleasant things can happen when the kernel does so accidentally. Many types of attacks depend on getting the kernel to read data (or execute code) that is located in user space and under the attacker's control. To prevent such things from happening, processor vendors have implemented features to prevent the kernel from accessing user-space pages from random places. Intel's supervisor-mode access prevention (SMAP) and Arm's privileged access never (PAN) mechanisms are examples of this type of feature; when this protection is available, the kernel tries to make use of it.
This protection must, of course, be removed whenever the kernel legitimately needs to get at user-space memory. For the most part, this is handled within the user-space access functions themselves, but there are cases where higher-level code may need to disable user-space access protection. If nothing else, the instructions to enable and disable protection are expensive, so code that performs a series of accesses can be sped up by just disabling protection once for the entire series. This is managed with calls to functions like user_access_begin() and user_access_end().
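The begin/end pattern can be sketched like this (zero_user_words() is invented for illustration, assuming the two-argument user_access_begin() form; unsafe_put_user() branches to the supplied label on a fault):

```c
/* Illustrative only: protection is dropped once for the whole loop
 * rather than being toggled for every word written. Nothing between
 * the begin and end calls may call other functions or schedule. */
static int zero_user_words(u32 __user *p, int n)
{
	int i;

	if (!user_access_begin(p, n * sizeof(u32)))
		return -EFAULT;
	for (i = 0; i < n; i++)
		unsafe_put_user(0, &p[i], efault);
	user_access_end();
	return 0;

efault:	/* a faulting unsafe_put_user() jumps here */
	user_access_end();
	return -EFAULT;
}
```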
The code that runs with user-space access protection disabled should be as short as possible. The more code that runs, the bigger the chance that it could contain an exploitable bug. But there is another hazard to be aware of: a call to schedule() could result in another process taking over the processor — with user-space access protection still disabled. Once that happens, there is no knowing when protection could be enabled again or how much buggy code might be executed in the meantime.
The desire to prevent this situation is why user_access_begin() comes with a special rule: users should call no other functions while user-space access prevention is disabled. But, as Peter Zijlstra noted, this rule is "currently unenforced and (therefore obviously) violated". That seems likely to change, though, as a result of his patch set enhancing the objtool utility with the ability to identify (and complain about) function calls in these sections of code. Functions known to be safe to call can be specially marked; the functions that perform user-space access are about the only obvious candidates for this annotation.
Both of these cases show that user-space access is trickier and less well understood than many developers expect. A couple of long-time kernel developers (at least) were surprised to learn that any particular address can be valid (but mapped differently) in both kernel and user space. It seems, though, that at least some of these problems can be addressed with better APIs and better tools.
Brief items
Security
Security quotes of the week
It's an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism -- as practiced by the Internet companies that are already spying on everyone -- matters. So does society's underlying security needs. There is a security benefit to giving access to law enforcement, even though it would inevitably and invariably also give that access to others. However, there is also a security benefit of having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?
The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals -- that occasionally become law -- that are technological disasters.
Kernel development
Kernel release status
The 5.0 kernel was released on March 3; lest anybody read too much into the 5.0 number, Linus Torvalds included the usual disclaimer in the announcement: "But I'd like to point out (yet again) that we don't do feature-based releases, and that '5.0' doesn't mean anything more than that the 4.x numbers started getting big enough that I ran out of fingers and toes."
Headline features from this release include the energy-aware scheduling patch set, a bunch of year-2038 work that comes close to completing the core-kernel transition, zero-copy networking for UDP traffic, the Adiantum encryption algorithm, the seccomp trap to user space mechanism, and, of course, lots of new drivers and fixes. See the KernelNewbies 5.0 page for lots of details.
Stable updates: 4.20.14, 4.19.27, 4.14.105, and 4.9.162 were released on March 6.
Quotes of the week
Distributions
Maru 0.6 released
The Maru distribution adds a full Linux desktop to Android devices; it was reviewed here in 2016. The 0.6 release is now available. Changes include a rebase onto LineageOS and Debian 9, and the ability to stream the desktop to a Chromecast device.
Distribution quote of the week
Development
Why CLAs aren't good for open source (Opensource.com)
Over at Opensource.com, Richard Fontana argues that contributor license agreements (CLAs) are not particularly useful or helpful for open-source projects. "Since CLAs continue to be a minority practice and originate from outside open source community culture, I believe that CLA proponents should bear the burden of explaining why they are necessary or beneficial relative to their costs. I suspect that most companies using CLAs are merely emulating peer company behavior without critical examination. CLAs have an understandable, if superficial, appeal to risk-averse lawyers who are predisposed to favor greater formality, paper, and process regardless of the business costs." He goes on to look at some of the arguments that CLA proponents make and gives his perspective on why they fall short.
Miscellaneous
Rosenzweig: The federation fallacy
Here's a lengthy piece from Alyssa Rosenzweig on preserving freedom despite the inevitable centralization of successful information services. "Indeed, it seems all networked systems tend towards centralisation as the natural consequence of growth. Some systems, both legitimate and illegitimate, are intentionally designed for centralisation. Other systems, like those in the Mastodon universe, are specifically designed to avoid centralisation, but even these succumb to the centralised black hole as their user bases grow towards the event horizon."
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
- CentOS Pulse Newsletter (March)
- Bits from the Debian Project Leader (February)
- DistroWatch Weekly (March 4)
- LineageOS Changelog 22 (March 1)
- Linux Mint Monthly News (February)
- Lunar Linux Weekly News (March 1)
- openSUSE Tumbleweed Review of the Week (March 1)
- Reproducible Builds Weekly Report (March 5)
- SparkyLinux News (February 28)
- Ubuntu Weekly Newsletter (March 2)
Development
- Emacs News (March 4)
- These Weeks in Firefox (March 5)
- What's cooking in git.git (March 6)
- Koha Community Newsletter (February)
- LibreOffice monthly recap (February 28)
- LLVM Weekly (March 4)
- LXC/LXD/LXCFS Weekly Status (March 5)
- OCaml Weekly News (March 5)
- Perl Weekly (March 4)
- Weekly changes in and around Perl 6 (March 4)
- PostgreSQL Weekly News (March 3)
- Python Steering Council update (February 26)
- Python Weekly Newsletter (February 28)
- Ruby Weekly News (February 28)
- This Week in Rust (March 5)
- Wikimedia Tech News (March 4)
Meeting minutes
- Fedora FESCO meeting minutes (March 4)
- GNOME Foundation Board Minutes (February 1)
- GNOME Foundation Board Minutes (February 5)
- GNOME Foundation Board Minutes (February 11)
- GNOME Foundation Board Minutes (February 25)
- openSUSE board meeting minutes (February 19)
Miscellaneous
- Free Software Supporter (March)
Calls for Presentations
CFP Deadlines: March 7, 2019 to May 6, 2019
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
March 10 | April 6 | Pi and More 11½ | Krefeld, Germany |
March 17 | June 4 – June 5 | UK OpenMP Users' Conference | Edinburgh, UK |
March 19 | October 13 – October 15 | All Things Open | Raleigh, NC, USA |
March 24 | July 17 – July 19 | Automotive Linux Summit | Tokyo, Japan |
March 24 | July 17 – July 19 | Open Source Summit | Tokyo, Japan |
March 25 | May 3 – May 4 | PyDays Vienna 2019 | Vienna, Austria |
April 1 | June 3 – June 4 | PyCon Israel 2019 | Ramat Gan, Israel |
April 2 | August 21 – August 23 | Open Source Summit North America | San Diego, CA, USA |
April 2 | August 21 – August 23 | Embedded Linux Conference NA | San Diego, CA, USA |
April 12 | July 9 – July 11 | Xen Project Developer and Design Summit | Chicago, IL, USA |
April 15 | August 26 – August 30 | FOSS4G 2019 | Bucharest, Romania |
April 15 | May 18 – May 19 | Open Source Conference Albania | Tirana, Albania |
April 21 | May 25 – May 26 | Mini-DebConf Marseille | Marseille, France |
April 24 | October 27 – October 30 | 27th ACM Symposium on Operating Systems Principles | Huntsville, Ontario, Canada |
April 25 | September 21 – September 23 | State of the Map | Heidelberg, Germany |
April 28 | September 2 – September 6 | EuroSciPy 2019 | Bilbao, Spain |
May 1 | August 2 – August 4 | Linux Developer Conference Brazil | São Paulo, Brazil |
May 1 | August 2 – August 3 | DEVCONF.in | Bengaluru, India |
May 1 | July 6 | Tübix 2019 | Tübingen, Germany |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Debian Bug Squashing in Portland
There will be a Bug Squashing Party in Portland, OR on March 10. "Help with debugging, patching, compiling, triaging, confirming or even, dare I say it, finding bugs... all appreciated!"
Events: March 7, 2019 to May 6, 2019
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
March 7 – March 8 | Hyperledger Bootcamp | Hong Kong |
March 7 – March 10 | SCALE 17x | Pasadena, CA, USA |
March 9 | DPDK Summit Bangalore 2019 | Bangalore, India |
March 10 | OpenEmbedded Summit | Pasadena, CA, USA |
March 12 – March 14 | Open Source Leadership Summit | Half Moon Bay, CA, USA |
March 14 | Icinga Camp Berlin | Berlin, Germany |
March 14 | pgDay Israel 2019 | Tel Aviv, Israel |
March 14 – March 17 | FOSSASIA | Singapore, Singapore |
March 19 – March 21 | PGConf APAC | Singapore, Singapore |
March 20 | Open Source Roundtable at Game Developers Conference | San Francisco, CA, USA |
March 20 – March 22 | Netdev 0x13 | Prague, Czech Republic |
March 21 | gRPC Conf | Sunnyvale, CA, USA |
March 23 | Kubernetes Day | Bengaluru, India |
March 23 – March 24 | LibrePlanet | Cambridge, MA, USA |
March 23 – March 26 | Linux Audio Conference | San Francisco, CA, USA |
March 29 – March 31 | curl up 2019 | Prague, Czech Republic |
April 1 – April 5 | SUSECON 2019 | Nashville, TN, USA |
April 1 – April 4 | ‹Programming› 2019 | Genova, Italy |
April 2 – April 4 | Cloud Foundry Summit | Philadelphia, PA, USA |
April 3 – April 5 | Open Networking Summit | San Jose, CA, USA |
April 5 – April 7 | Devuan Conference | Amsterdam, The Netherlands |
April 5 – April 6 | openSUSE Summit | Nashville, TN, USA |
April 6 | Pi and More 11½ | Krefeld, Germany |
April 7 – April 10 | FOSS North | Gothenburg, Sweden |
April 10 – April 12 | DjangoCon Europe | Copenhagen, Denmark |
April 13 | OpenCamp Bratislava | Bratislava, Slovakia |
April 13 – April 17 | ACM SIGPLAN/SIGOPS Conference on Virtual Execution Environments | Providence, RI, USA |
April 18 | Open Source 101 | Columbia, SC, USA |
April 26 – April 27 | Grazer Linuxtage | Graz, Austria |
April 26 – April 27 | KiCad Conference 2019 | Chicago, IL, USA |
April 28 – April 30 | Check_MK Conference #5 | Munich, Germany |
April 29 – May 1 | Open Infrastructure Summit | Denver, CO, USA |
April 30 – May 2 | Linux Storage, Filesystem & Memory Management Summit | San Juan, Puerto Rico |
May 1 – May 9 | PyCon 2019 | Cleveland, OH, USA |
May 2 – May 4 | Linuxwochen Österreich 2019 - Wien | Wien, Austria |
May 3 – May 4 | PyDays Vienna 2019 | Vienna, Austria |
May 4 – May 5 | Latch-Up 2019 | Portland, OR, USA |
If your event does not appear here, please tell us about it.
Security updates
Alert summary February 28, 2019 to March 6, 2019
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
Arch Linux | ASA-201903-1 | | chromium | 2019-03-03 |
Arch Linux | ASA-201903-5 | | file | 2019-03-04 |
Arch Linux | ASA-201903-3 | | gdm | 2019-03-04 |
Arch Linux | ASA-201903-6 | | lib32-openssl-1.0 | 2019-03-04 |
Arch Linux | ASA-201903-2 | | openssl-1.0 | 2019-03-04 |
Arch Linux | ASA-201903-4 | | pcre | 2019-03-04 |
CentOS | CESA-2019:0464 | C7 | java-1.7.0-openjdk | 2019-03-06 |
CentOS | CESA-2019:0436 | C7 | java-11-openjdk | 2019-03-06 |
Debian | DLA-1702-1 | LTS | advancecomp | 2019-03-02 |
Debian | DLA-1697-1 | LTS | bind9 | 2019-02-28 |
Debian | DLA-1696-1 | LTS | ceph | 2019-03-01 |
Debian | DLA-1698-1 | LTS | file | 2019-02-28 |
Debian | DLA-1693-1 | LTS | gpac | 2019-02-27 |
Debian | DSA-4399-1 | stable | ikiwiki | 2019-02-28 |
Debian | DLA-1703-1 | LTS | jackson-databind | 2019-03-04 |
Debian | DLA-1699-1 | LTS | ldb | 2019-03-01 |
Debian | DSA-4397-1 | stable | ldb | 2019-02-28 |
Debian | DSA-4402-1 | stable | mumble | 2019-03-05 |
Debian | DLA-1704-1 | LTS | nss | 2019-03-04 |
Debian | DSA-4387-2 | stable | openssh | 2019-03-02 |
Debian | DLA-1701-1 | LTS | openssl | 2019-03-01 |
Debian | DSA-4400-1 | stable | openssl1.0 | 2019-02-28 |
Debian | DSA-4398-1 | stable | php7.0 | 2019-02-28 |
Debian | DLA-1694-1 | LTS | qemu | 2019-02-28 |
Debian | DLA-1705-1 | LTS | sox | 2019-03-05 |
Debian | DLA-1695-1 | LTS | sox | 2019-02-28 |
Debian | DLA-1700-1 | LTS | uw-imap | 2019-03-01 |
Debian | DSA-4401-1 | stable | wordpress | 2019-03-01 |
Fedora | FEDORA-2019-6092f8c0dc | F28 | SDL | 2019-03-02 |
Fedora | FEDORA-2019-7d1a63acc8 | F29 | ansible | 2019-03-01 |
Fedora | FEDORA-2019-21b76d179e | F28 | community-mysql | 2019-03-01 |
Fedora | FEDORA-2019-2c2dfc65d1 | F28 | distcc | 2019-03-02 |
Fedora | FEDORA-2019-dfef0af227 | F29 | distcc | 2019-03-02 |
Fedora | FEDORA-2019-82df33e428 | F28 | drupal7 | 2019-03-06 |
Fedora | FEDORA-2019-0c1d62bf5b | F29 | drupal7 | 2019-03-06 |
Fedora | FEDORA-2019-ff4e1a73a5 | F28 | drupal7-link | 2019-03-06 |
Fedora | FEDORA-2019-55788aeb71 | F29 | drupal7-link | 2019-03-06 |
Fedora | FEDORA-2019-15f5147b27 | F29 | file | 2019-03-01 |
Fedora | FEDORA-2019-7ad9201e59 | F29 | firefox | 2019-03-06 |
Fedora | FEDORA-2019-a5f616808e | F28 | flatpak | 2019-02-28 |
Fedora | FEDORA-2019-e3b2885a25 | F29 | freerdp | 2019-03-02 |
Fedora | FEDORA-2019-caab5920f2 | F29 | gdm | 2019-03-02 |
Fedora | FEDORA-2019-e3b2885a25 | F29 | gnome-boxes | 2019-03-02 |
Fedora | FEDORA-2019-9a6906a128 | F29 | gpsd | 2019-03-06 |
Fedora | FEDORA-2019-541e91b477 | F28 | ignition | 2019-03-06 |
Fedora | FEDORA-2019-4debb6711a | F29 | ignition | 2019-03-06 |
Fedora | FEDORA-2019-7462acf8ba | F29 | kernel | 2019-03-01 |
Fedora | FEDORA-2019-7462acf8ba | F29 | kernel-headers | 2019-03-01 |
Fedora | FEDORA-2019-02e13cb1a8 | F28 | libexif | 2019-03-01 |
Fedora | FEDORA-2019-4fdf19459d | F28 | ming | 2019-03-06 |
Fedora | FEDORA-2019-e0d49261b9 | F29 | ming | 2019-03-06 |
Fedora | FEDORA-2019-f0add5eed0 | F28 | openocd | 2019-03-02 |
Fedora | FEDORA-2019-0a5e82cea8 | F29 | openocd | 2019-03-02 |
Fedora | FEDORA-2019-d248c5aa39 | F28 | php-Smarty | 2019-03-06 |
Fedora | FEDORA-2019-e595e8a7d7 | F29 | php-Smarty | 2019-03-06 |
Fedora | FEDORA-2019-009fdcfb60 | F28 | php-erusev-parsedown | 2019-03-06 |
Fedora | FEDORA-2019-b02e9bf467 | F29 | php-erusev-parsedown | 2019-03-06 |
Fedora | FEDORA-2019-e3b2885a25 | F29 | pidgin-sipe | 2019-03-02 |
Fedora | FEDORA-2019-ec55814c1c | F29 | python-django | 2019-03-01 |
Fedora | FEDORA-2019-e3b2885a25 | F29 | remmina | 2019-03-02 |
Fedora | FEDORA-2019-b3aec99d2c | F29 | xpdf | 2019-03-03 |
openSUSE | openSUSE-SU-2019:0294-1 | 15.0 | hiawatha | 2019-03-06 |
openSUSE | openSUSE-SU-2019:0274-1 | 42.3 | kernel | 2019-03-01 |
openSUSE | openSUSE-SU-2019:0275-1 | 42.3 | kernel-firmware | 2019-03-01 |
openSUSE | openSUSE-SU-2019:0265-1 | 15.0 | libqt5-qtbase | 2019-02-27 |
openSUSE | openSUSE-SU-2019:0276-1 | 42.3 | php5 | 2019-03-01 |
openSUSE | openSUSE-SU-2019:0291-1 | 42.3 | procps | 2019-03-04 |
openSUSE | openSUSE-SU-2019:0292-1 | 42.3 | python | 2019-03-06 |
openSUSE | openSUSE-SU-2019:0293-1 | 15.0 | supportutils | 2019-03-06 |
openSUSE | openSUSE-SU-2019:0268-1 | 42.3 | systemd | 2019-03-01 |
Oracle | ELSA-2019-0462 | OL6 | java-1.7.0-openjdk | 2019-03-05 |
Oracle | ELSA-2019-0464 | OL7 | java-1.7.0-openjdk | 2019-03-06 |
Oracle | ELSA-2019-0464 | OL7 | java-1.7.0-openjdk | 2019-03-05 |
Oracle | ELSA-2019-0435 | OL7 | java-1.8.0-openjdk | 2019-03-02 |
Oracle | ELSA-2019-0435 | OL7 | java-1.8.0-openjdk | 2019-03-03 |
Oracle | ELSA-2019-0436 | OL7 | java-11-openjdk | 2019-03-02 |
Oracle | ELSA-2019-0436 | OL7 | java-11-openjdk | 2019-03-03 |
Red Hat | RHSA-2019:0462-01 | EL6 | java-1.7.0-openjdk | 2019-03-05 |
Red Hat | RHSA-2019:0464-01 | EL7 | java-1.7.0-openjdk | 2019-03-05 |
Red Hat | RHSA-2019:0435-01 | EL7 | java-1.8.0-openjdk | 2019-02-28 |
Red Hat | RHSA-2019:0436-01 | EL7 | java-11-openjdk | 2019-02-28 |
Red Hat | RHSA-2019:0457-01 | EL7 | redhat-virtualization-host | 2019-03-05 |
Red Hat | RHSA-2019:0461-01 | EL7 | rhvm-appliance | 2019-03-05 |
Red Hat | RHSA-2019:0458-01 | EL7 | vdsm | 2019-03-05 |
Scientific Linux | SLSA-2019:0462-1 | SL6 | java-1.7.0-openjdk | 2019-03-05 |
Scientific Linux | SLSA-2019:0464-1 | SL7 | java-1.7.0-openjdk | 2019-03-05 |
Scientific Linux | SLSA-2019:0435-1 | SL7 | java-1.8.0-openjdk | 2019-02-28 |
Scientific Linux | SLSA-2019:0436-1 | SL7 | java-11-openjdk | 2019-02-28 |
Slackware | SSA:2019-060-01 | | infozip | 2019-03-01 |
Slackware | SSA:2019-062-01 | | python | 2019-03-03 |
SUSE | SUSE-SU-2019:0510-1 | SLE12 | bluez | 2019-02-28 |
SUSE | SUSE-SU-2019:0537-1 | | caasp-container-manifests, changelog-generator-data-sles12sp3-velum, kubernetes-salt, rubygem-aes_key_wrap, rubygem-json-jwt, sles12sp3-velum-image, velum | 2019-03-02 |
SUSE | SUSE-SU-2019:0539-1 | SLE15 | freerdp | 2019-03-04 |
SUSE | SUSE-SU-2019:0527-1 | SLE15 | gdm | 2019-03-01 |
SUSE | SUSE-SU-2019:0541-1 | SLE12 | kernel | 2019-03-04 |
SUSE | SUSE-SU-2019:0540-1 | SLE15 | obs-service-tar_scm | 2019-03-04 |
SUSE | SUSE-SU-2019:0512-1 | SLE12 | openssl-1_1 | 2019-02-28 |
SUSE | SUSE-SU-2019:0511-1 | OS7 SLE12 | webkit2gtk3 | 2019-02-28 |
Ubuntu | USN-3900-1 | 14.04 16.04 18.04 18.10 | libgd2 | 2019-02-28 |
Ubuntu | USN-3901-1 | 18.04 | linux, linux-aws, linux-gcp, linux-kvm, linux-oem, linux-oracle, linux-raspi2 | 2019-03-05 |
Ubuntu | USN-3901-2 | 14.04 16.04 | linux-hwe, linux-aws-hwe, linux-azure, linux-gcp, linux-oracle | 2019-03-05 |
Ubuntu | USN-3898-2 | 12.04 | nss | 2019-02-27 |
Ubuntu | USN-3898-1 | 14.04 16.04 18.04 18.10 | nss | 2019-02-27 |
Ubuntu | USN-3885-2 | 14.04 16.04 18.04 18.10 | openssh | 2019-03-04 |
Ubuntu | USN-3899-1 | 16.04 18.04 18.10 | openssl, openssl1.0 | 2019-02-27 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Security-related
Virtualization and containers
Page editor: Rebecca Sobol