LWN.net Weekly Edition for February 28, 2019

Welcome to the LWN.net Weekly Edition for February 28, 2019

This edition contains the following feature content:

  • GMP and assert(): what role should assertions play in library error handling?
  • Revisiting PEP 394: what should "python" mean on the command line?
  • Memory-mapped I/O without mysterious macros: hiding mmiowb() behind the spinlock code.
  • Containers as kernel objects — again: a revived proposal to give the kernel a first-class container abstraction.
  • Reimplementing printk(): a new ring buffer and emergency messages for kernel logging.
  • Development statistics for the 5.0 kernel: where the code in this release came from.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

GMP and assert()

By Jake Edge
February 27, 2019

A report of a potential security problem in the GNU Multiple Precision Arithmetic (GMP) library was met with a mixed reaction, from skepticism to responses verging on hostility, but the report ultimately raised a question worth pondering. What role should assertions (i.e. calls to the POSIX assert() macro) play in error handling? An assertion that fails leads to a process exit, which may not be what a developer calling into a library expects. Unexpected behavior is, of course, one step on a path that can lead to security holes.

Jeffrey Walton posted a message with "Asserts considered harmful (or GMP spills its sensitive information)" as its subject line to the oss-security mailing list right at the end of 2018. He noted that GMP uses assert() for error checking, which could lead to a number of undesirable effects. Software calling GMP might well be handling encryption keys or other sensitive information, which could be exposed via core dumps or other means (e.g. error-reporting services) due to the abort() call made in response to the failing assertion.

As the subject of the mail might indicate, the tone of the message was a tad combative and, perhaps, a bit condescending. That may help explain some of the reactions to it. Walton's message begins as follows:

The GMP library uses asserts to crash a program at runtime when presented with data it did not expect. The library also ignores user requests to remove asserts using Posix's -DNDEBUG. Posix asserts are a [debugging] aide intended for [development], and using them in production software ranges from questionable to insecure.

Many programs can safely use assert to crash a program at runtime. However, the [prerequisite] is, the program cannot handle sensitive information like user passwords, user keys or sensitive documents.

High integrity software, like GMP and Nettle, cannot safely use an assert to crash a program.

He goes on to describe what happens when an assertion fails and how that can lead to sensitive information leaking. He also includes an example that uses the Nettle cryptographic library (which calls into GMP) to trigger an assertion, crash, and core dump. It turns out that the assertion is actually caused by a bug in Nettle that has since been fixed. Walton built Nettle and GMP with -DNDEBUG, which is meant to disable assert(). As noted above, that did not disable assert() in GMP, but did work for Nettle. However, that tickled a bug in one of the Nettle test cases, which caused the GMP assertion to fail. Discovering that led to some dismissive responses, but the larger point still stands: should applications expect libraries to abort()?
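
For reference, here is a minimal sketch (not GMP code) of how the standard assert() macro behaves: built normally, a failed assertion prints a diagnostic and calls abort(), possibly producing a core dump; built with -DNDEBUG, the check is compiled out entirely. That is the behavior Walton expected, and did not get, from GMP's internal checks.

    /* demo.c: standard assert() behavior, a minimal sketch.
     * "cc demo.c && ./a.out"          -> assertion fails, abort(), possible core dump
     * "cc -DNDEBUG demo.c && ./a.out" -> check compiled out, execution continues
     */
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        int key_bits = 0;       /* stand-in for input the library "did not expect" */

        assert(key_bits > 0);   /* gone entirely when NDEBUG is defined */
        printf("continuing with %d-bit key\n", key_bits);
        return 0;
    }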

Most in the thread argued that the danger of trying to continue past a failed assertion could be worse than the information leak. Security-sensitive programs should already be ensuring that no core dumps are produced and that any abort()-reporting services are disabled. In addition, as Vincent Lefèvre pointed out, there are plenty of other ways for a program to crash (e.g. segmentation fault) that could lead to the same information leak. So programs handling sensitive information need to be prepared for that in any case.
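
That kind of preparation is not hard to arrange; as a minimal sketch (Linux-specific, since it uses prctl()), a program handling key material can turn off core dumps for itself before touching any secrets:

    #include <sys/prctl.h>
    #include <sys/resource.h>

    /* Disable core dumps so an unexpected abort() does not write secrets
     * to disk; error handling omitted for brevity. */
    static void disable_core_dumps(void)
    {
        struct rlimit rl = { .rlim_cur = 0, .rlim_max = 0 };

        setrlimit(RLIMIT_CORE, &rl);    /* no core file, even on SIGABRT */
        prctl(PR_SET_DUMPABLE, 0);      /* also opt out of ptrace and core dumping */
    }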

The crux of Walton's argument is that the assertions would be better handled by returning an error rather than crashing (or attempting to crash, since programs could be set up to handle the SIGABRT). That may be true but it would change the API and, perhaps, slow down the library, Lefèvre said.

Moreover, some asserts may come from the detection of an inconsistent state. In this case, it is better to abort. Otherwise letting the program continue may have worse consequences.

The developer of Nettle, Niels Möller, agreed: "Crashing in a controlled fashion may also be *more* secure than continuing execution with undefined results. Depending on circumstances, of course." He went on to say that Walton's perspective differed from his own:

I read the general statement "asserts considered harmful" as your personal [opinion], likely based on experience with very different development projects than I'm involved with.

In the final analysis, developers of programs handling sensitive information need to be more careful than other developers. One could argue that those developers should be familiar with the behavior of any libraries that they use, so that assertion failures are not unexpected. Beyond that, as GMP developer Torbjörn Granlund pointed out, the API of a library is determined by its developers; other projects are free to use the library—or not—at their discretion.

Comments (46 posted)

Revisiting PEP 394

By Jake Edge
February 27, 2019

With the uptake of Python 3 (and the imminent end of life for Python 2.7), there is a question of which version of Python a user should get when they type "python" at the command line or have it as part of a shebang ("#!") line in a script. Back in 2011, PEP 394 ("The 'python' Command on Unix-Like Systems") was created as an informational PEP that relayed the recommendations of the Python core developers to Linux distributions and others in a similar position about which version to point python to. Now, Petr Viktorin, one of the authors of the PEP, would like to revisit those recommendations, which is something that is suggested in the PEP itself.

Viktorin is concerned that the recommendations essentially require a distribution to include Python 2 if it wants to install a "python", in addition to explicit "python2" and/or "python3" commands. The current recommendations are that, if there is a python, it should point to the same place as python2, and that scripts which are compatible with both Python 2 and Python 3 may continue to use python on their shebang lines. That effectively means a distribution with shiny new Python 3 scripts (which also support Python 2) needs to ship Python 2, which is not what some distributions want.

The main issue is that some distributions are supported for ten or more years, but Python 2, in the form of Python 2.7, will reach its end of life on January 1, 2020—just ten months away. That leaves those distributions with some tough choices: they can break with PEP 394 by, for example, making the version invoked by python configurable, or they can ship scripts that would happily work on Python 2 with shebang lines that invoke Python 3 explicitly (e.g. #!/usr/bin/python3).

As Victor Stinner pointed out, this problem has been discussed before, including at the 2018 Python Language Summit. Both Viktorin and Stinner work for Red Hat, so their perspective is largely focused on Fedora and Red Hat Enterprise Linux (RHEL). Stinner also noted a pull request (PR) that Viktorin filed for PEP 394 that would clarify some of the language in it. In a comment on the PR, Viktorin said that where python points is configurable for the RHEL 8 beta:

I'm responsible for how this is handled in RHEL 8 beta, where /usr/bin/python is configurable (though configuring it is discouraged). I don't recommend that in the PEP – I don't think it needs to cover distros that need to lock in the behavior of /usr/bin/python for a decade.

The eventual intent, at least according to a comment by Guido van Rossum on an earlier PR, is that python won't actually point anywhere because it won't exist; users will need to explicitly choose either python2 or python3. That is part of what Viktorin's PR is aiming for as well. He wants to make two changes to the PEP: allow installing a "python" (or, an "unversioned Python", as he calls it) to be optional for a distribution and to recommend that scripts supporting both Python 2 and Python 3 use python3 in their shebang lines. That last is a bit counterintuitive, since it means that those scripts will really only run under Python 3, regardless of their ability to run under either version. It is, Viktorin said, "the least bad option".

Some would like to see the unversioned Python point to Python 3, either now or soon, however. Antoine Pitrou pointed out that users of the conda package management tool for Python have been using unversioned Python as Python 3 for several years. Stinner said that he advocates unversioned Python be the same as the latest Python version. There is a problem with installing Python without providing an unversioned binary or link, he said: the documentation refers to python in lots of places.

Barry Warsaw agreed that it was desirable to have the unversioned Python be the latest Python 3 version, though it may be too soon for the PEP to recommend that. But both Stinner and Warsaw also brought up another potential solution: the Python Launcher for Unix that Brett Cannon is working on. It is modeled on the "py" launcher for Windows, which was specified in PEP 397 ("Python launcher for Windows"). py finds the "right" Python interpreter to invoke based on its configuration, command-line arguments, and any shebang line.

Cannon's version of the launcher is written in Rust (and is available as a crate). Since there is no legacy of using it on Unix systems, distributions would be able to point it at the version that makes the most sense for them. Users and administrators could still change the default behavior as they wished as well. Cannon said that py is still missing some basic functionality, but he is working on rectifying that. He intends to keep working on it even if others do not find it all that useful:

In my experience after using 'py' on Windows I consistently miss it on UNIX now, so to me there is enough of a benefit that I will continue to chip away at the project until it's done regardless of whether anyone else uses it. :)

Unlike nearly everyone else commenting in the thread, Gregory P. Smith did not think py is a solution to the problem. In his view, any distribution that does not install some unversioned Python, when the language is used for its core functionality (e.g. various administrative scripts), has "done something undesirable for the world at large". That is because it makes it impossible to write a shebang line that invokes the latest version of Python installed and works across all of the distributions. There are "hacks" that can be done to work around that problem and those should be adopted, he said. As he forecast, though, that wasn't a wildly popular approach.

As he did at the language summit, Matthias Klose represented the Debian and Ubuntu Python packagers in the thread. He described the plans for unversioned Python; for Debian, python is never planned to point to Python 3, while Ubuntu has not made a final decision, but it currently does not install an unversioned Python. Debian will continue to have python point to Python 2 until it no longer ships Python 2, then remove it entirely. Klose is trying to ensure that, for upcoming distribution releases starting in 2020 or 2021, Python scripts in both Debian and Ubuntu packages use explicit shebang lines.

Klose does not think there is any way to solve the problem that Smith is complaining about: "there is *no* way to have a sane way to have what you want". Smith more or less concurred:

We've created a world where #! lines cannot be used to invoke an intentionally compatible script across a wide variety of platforms over time. But our decision to do that was the decision to have an incompatible release in the first place.

Too late now. :)

Nick Coghlan agreed with Smith that not having an unversioned Python "does indeed break the world (search for Fedora 28 and Ubuntu 16.04 Ansible Python issues for more)". But he thinks it is time to consider changing the stance of the PEP. The only thing that is stopping Fedora from doing what it thinks is the right thing (pointing the unversioned Python at Python 3) is the PEP:

For RHEL 8, the resolution was "Well, we'll ignore the upstream PEP, then" and make the setting configurable anyway, but Fedora tries to work more closely with upstream than that - if we think upstream are giving people bad or outdated advice, then we'll aim to get the advice changed rather than ignoring it.

In this case, the advice is outdated: there have been a couple of years of releases with /usr/bin/python missing, so it's time to move to the "/usr/bin/python3" side of source compatibility, and go back to having "just run python" be the way you start a modern Python interpreter, even when you're using the system Python on a Linux distro.

He pointed out that Viktorin is not asking to change the recommendation to follow Fedora's lead, "just for the PEP to acknowledge it as a reasonable choice for distros to make given the looming Python 2 End of Life". Warsaw thought that made sense:

PEP 394 shouldn't *prevent* distros from doing what they believe is in the best interest of their users. While we do want consistency in the user experience across Linux distros (and more broadly, across all supported platforms), I think we also have to acknowledge that we're still in a time of transition (maybe more so right now), so we should find ways to allow for experimentation within that context. I'm not sure that I agree with all the proposed changes to PEP 394, but those are the guidelines I think I'll re-evaluate the PR by.

Both Coghlan and Warsaw, along with Cannon, Van Rossum, and Carol Willing, are members of the new steering council, which means they will be participating in determining the outcome of the changes to this PEP. More importantly, perhaps, both are also co-authors of the PEP (with Viktorin and Kerrick Staley), so their opinions clearly matter. So far, Van Rossum has not commented in the thread or on the PR. It has been almost a year since he was opposed to the last effort to change the PEP. A lot of water has gone under the bridge since then, so he might be more inclined to accept these changes (or a subset of them)—or simply to defer to the other members of the council.

Comments (52 posted)

Memory-mapped I/O without mysterious macros

By Jonathan Corbet
February 26, 2019
Concurrency is hard even when the hardware's behavior is entirely deterministic; it gets harder in situations where operations can be reordered in seemingly random ways. In these cases, developers tend to reach for barriers as a way of enforcing ordering, but explicit barriers are tricky to use and are often not the best way to think about the problem. It is thus common to see explicit barriers removed as code matures. That now seems to be happening with an especially obscure type of barrier used with memory-mapped I/O (MMIO) operations.

The core idea behind MMIO is that a peripheral device makes a set of registers available on the system's memory bus. The kernel can map those registers into its address space, then control the device by reading and writing those registers. I/O buses have often taken liberties with ordering when it comes to delivering operations to peripherals; that leads to rituals like performing an unneeded read from a PCI device's register space to force previous writes to be posted. But in some cases, the hardware can take things further and reorder operations arriving from different CPUs, even if the code performing those operations is strictly ordered. That, of course, can lead to a variety of amusing mixups.

Fortunately, kernel developers tend not to be amused by such things, so they take steps to ensure that this type of reordering does not happen. Back in 2004, Jesse Barnes introduced a special sort of memory barrier operation called mmiowb(); its job was to ensure that all MMIO writes initiated prior to the barrier would be posted to the device before any writes initiated after the barrier. mmiowb() was duly adopted by developers whose code needs to run on the affected hardware; there are now several hundred call sites in the kernel.

Explicit barrier operations can work, but they have their pitfalls. Developers must remember to insert barrier operations anywhere that reordering could cause things to go astray. That tempts developers to sprinkle them throughout the code without necessarily thinking about exactly what those barriers are protecting against. In normal use, barriers need to come in groups of two or more, one in each place where a race might happen, but there is no indication of where any given barrier's siblings have been placed in the code, making it harder to understand what is going on. Code relying on explicit barriers, as a result, can be subject to rare failures that are nearly impossible to reproduce or diagnose.

Will Deacon would like to improve this situation. He recently posted a patch set wherein mmiowb() is said to stand for "Mysterious Macro Intended to Obscure Weird Behaviors"; his objective is to remove this macro, or at least hide it from the view of most developers. The core idea is one that comes up often in software development: developers should not be counted on to get complex concurrency issues right, especially in situations where the computer can do it for them.

MMIO registers must be protected from concurrent accesses by multiple CPUs in the system; if that hasn't been done, there is nothing that barriers can do to stave off disaster. In the kernel, the implication is that code performing MMIO must be holding a spinlock that will prevent other processors from getting in the way. Spinlocks have already been defined as barriers when it comes to access to system RAM, freeing most kernel code from having to worry about memory-ordering issues in spinlock-protected code. Deacon's plan is to extend this definition to MMIO ordering on systems that need it.

The patch set creates a per-CPU array of structures like:

    struct mmiowb_state {
	u16	nesting_count;
	u16	mmiowb_pending;
    };

Then, three sets of hooks are placed in the low-level spinlock and MMIO code. The functions that acquire spinlocks will call this function after the acquisition succeeds:

    static inline void mmiowb_spin_lock(void)
    {
	if (__this_cpu_inc_return(__mmiowb_state.nesting_count) == 1)
	    __this_cpu_write(__mmiowb_state.mmiowb_pending, 0);
    }

This function increments nesting_count, which is essentially keeping track of the number of spinlocks currently held by each CPU. When the first lock is acquired (when nesting_count is incremented to one), the mmiowb_pending flag is set to zero, indicating that no MMIO write operations have (yet) been performed in this critical section.

While I/O memory looks like memory, and it can be tempting to access it by simply dereferencing a pointer, that does not always work on every architecture, so kernel developers use helper functions instead. Deacon's patch set adds a call to mmiowb_set_pending() to the helpers that perform write operations; it simply sets the mmiowb_pending flag to one, indicating that MMIO write operations have been performed since the last time it was cleared.

Finally, operations that release a spinlock will call:

    static inline void mmiowb_spin_unlock(void)
    {
	if (__this_cpu_xchg(__mmiowb_state.mmiowb_pending, 0))
	    mmiowb();
	__this_cpu_dec_return(__mmiowb_state.nesting_count);
    }

Here, mmiowb_pending is set back to zero and simultaneously tested; if its previous value was non-zero, mmiowb() is called. Then the nesting count is decremented.

With these changes in place, there is no longer any need for driver-level code to make explicit calls to mmiowb(). That will, instead, happen automatically whenever MMIO operations have been performed inside a spinlock-protected critical section. That frees driver authors from the need to think about whether MMIO barriers are needed in any given situation. It also ensures that code will do the right thing, even if it is written by a developer who tests on machines with stricter MMIO-ordering guarantees and who has never even heard of mmiowb().
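
For driver authors, the practical difference looks something like the following sketch; the device structure, register, and command value are made up for illustration, and only the presence or absence of the explicit barrier changes:

    #include <linux/io.h>
    #include <linux/spinlock.h>

    struct mydev {                      /* hypothetical device */
        spinlock_t lock;
        void __iomem *regs;
    };

    #define MYDEV_CTRL  0x00            /* hypothetical control register */
    #define MYDEV_GO    0x01            /* hypothetical "start" command */

    /* Before: the author had to remember the barrier on affected systems. */
    static void mydev_kick_old(struct mydev *dev)
    {
        spin_lock(&dev->lock);
        writel(MYDEV_GO, dev->regs + MYDEV_CTRL);
        mmiowb();                       /* order the MMIO write before the unlock */
        spin_unlock(&dev->lock);
    }

    /* After: the unlock path issues the barrier automatically when needed. */
    static void mydev_kick_new(struct mydev *dev)
    {
        spin_lock(&dev->lock);
        writel(MYDEV_GO, dev->regs + MYDEV_CTRL);
        spin_unlock(&dev->lock);
    }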

It is thus unsurprising that nobody has spoken out against these changes, even though the patch set modifies 178 files. Linus Torvalds said "I love removing mmiowb()"; he did, however, have some comments on how to make the implementation a bit more efficient. A revision of the patch set can thus be expected; after that, chances are that mmiowb() calls (and MMIO-barrier-related weird behavior) in driver code will soon be a thing of the past.

Comments (6 posted)

Containers as kernel objects — again

By Jonathan Corbet
February 22, 2019
Linus Torvalds once famously said that there is no design behind the Linux kernel. That may be true, but there are still some guiding principles behind the evolution of the kernel; one of those, to date, has been that the kernel does not recognize "containers" as objects in their own right. Instead, the kernel provides the necessary low-level features, such as namespaces and control groups, to allow user space to create its own container abstraction. This refusal to dictate the nature of containers has led to a diverse variety of container models and a lot of experimentation. But that doesn't stop those who would still like to see the kernel recognize containers as first-class kernel-supported objects.

One of those people is David Howells, who has posted a patch set designed to stir up this debate. It starts by defining a container as a kernel object that has a set of namespaces, a root directory, and a set of processes running inside it, one of which is deemed to be the "init" process. Creating one of these containers would be done by calling one of a set of new system calls:

    int container_create(const char *name, unsigned int flags);

The provided name can appear in a new /proc/containers file, but does not otherwise seem to be used much. The flags, instead, control which namespaces should be inherited from the creator and which should be created for the container as part of this call. Other flags include CONTAINER_KILL_ON_CLOSE to cause the container to be killed if the returned file descriptor is closed, and CONTAINER_NEW_EMPTY_FS_NS to cause the container to be created with a new mount namespace containing no filesystems at all.

The return value is a file descriptor used to manipulate the container, as described below. A poll() call on this descriptor will indicate POLLHUP should the last process in the container die.

On creation, though, the container contains no processes; that is changed with a call to:

    pid_t fork_into_container(int container_fd);

This call is like fork(), with the exception that the new child process will be running inside the indicated container as its init process. There can be only one init process, so fork_into_container() will only work once for any given container.
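
Put together, the creation sequence might look like the following sketch. The two system calls are the ones proposed in the patch set and have no C-library wrappers, so a real program would have to invoke them via syscall(); the flag, the shell being executed, and the rest of the scaffolding are purely illustrative.

    #include <err.h>
    #include <poll.h>
    #include <unistd.h>

    /* Prototypes for the proposed system calls (no glibc wrappers exist). */
    extern int container_create(const char *name, unsigned int flags);
    extern pid_t fork_into_container(int container_fd);

    static void run_demo_container(void)
    {
        /* CONTAINER_KILL_ON_CLOSE comes from the proposed UAPI header. */
        int cfd = container_create("demo", CONTAINER_KILL_ON_CLOSE);
        if (cfd < 0)
            err(1, "container_create");

        pid_t init = fork_into_container(cfd);
        if (init == 0) {
            /* Child: this is now the container's "init" process. */
            execl("/bin/sh", "sh", (char *)NULL);
            _exit(127);
        }

        /* Parent: poll() reports POLLHUP once the last process has died. */
        struct pollfd pfd = { .fd = cfd, .events = POLLHUP };
        poll(&pfd, 1, -1);
        close(cfd);
    }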

A socket can be created inside a container (from the outside) by calling:

    int container_socket(int container_fd, int domain, int type, int protocol);

It is also possible to mount filesystems inside the container from the outside using Howells's proposed new filesystem mounting API, which has also been updated recently. A call to fsopen() would create a superblock as usual; the fsconfig() call could then be used to indicate that the superblock should exist inside the container object. This allows a container's filesystem tree to be constructed from outside, which is undoubtedly a useful feature; it eliminates the need to perform privileged mounting operations from inside the container itself. The mount() and (proposed) move_mount() calls could also be used to move a mounted filesystem into a container.

The "*at()" series of system calls (such as openat()) allows the provision of a file descriptor to indicate where the search for a given pathname should start. Howells's patch set extends this functionality by allowing a file descriptor representing a container to be passed into these calls; the result would be to start the search at the container's root directory.

Finally, there is also a mechanism by which key-management "upcalls" (wherein the kernel calls out to user space to request that a cryptographic key be provided) can be intercepted by the container creator. That, along with the addition of a separate keyring to each container, gives the creator control over which keys are seen (and can be used) by processes inside the container. The intended use case here is to allow containers to make use of keys to authenticate access to remote filesystems.

If all of this has a bit of a familiar ring to it, that should not be surprising; Howells proposed much of this functionality back in 2017. He ran into a fair amount of opposition at the time to the idea of adding a container concept to the kernel, and this work disappeared from view. Now that it has returned, many of the same objections are being raised. James Bottomley, for example, said:

I thought we got agreement years ago that containers don't exist in Linux as a single entity: they're currently a collection of cgroups and namespaces some of which may and some of which may not be local to the entity the orchestration system thinks of as a "container".

Howells, however, is unimpressed by this complaint: "I wasn't party to that agreement and don't feel particularly bound by it". In a world where there is no overall design behind the kernel such a position may be tenable, but it's not, on its own, a particularly compelling argument for why the status quo should be changed in such a significant way. It is also not the best way to win friends, which could be helpful; as things stand, some of the rejections of this work have been less than entirely amicable.

One could perhaps make an argument that the lack of a proper container object was necessary in the early days, when the community was still trying to figure out how containers should work in general. Now that we have several years of experience and a set of emerging container-related standards, perhaps the time has come to codify some of what has been understood into kernel features that make containers as a whole easier to deal with. This argument has not yet been made, though, with the result that the status quo has a high likelihood of winning out here. We may yet get a formal container abstraction in the kernel, but this version of this patch set seems unlikely to be it.

Comments (19 posted)

Reimplementing printk()

By Jake Edge
February 26, 2019

The venerable printk() function has been part of Linux since the very beginning, though it has undergone a fair number of changes along the way. Now, John Ogness is proposing to fundamentally rework printk() in order to get rid of a handful of issues that currently plague it. The proposed code does this by adding yet another ring-buffer implementation to the kernel; this one is aimed at making printk() work better from hard-to-handle contexts. For a task that seems conceptually simple—printing messages to the console—printk() is actually a rather complex beast; that won't change if these patches are merged, though many of the problems with the current implementation will be removed.

In the cover letter of his RFC patch set, Ogness lays out seven problems that he sees with the current printk() implementation. The buffer used by printk() is protected by a raw spinlock, which restricts the contexts from which the buffer can be accessed. Calling printk() from a non-maskable interrupt (NMI) or a recursive context, where something called from printk() also calls printk(), currently means that the logging of the message is deferred, which could cause the message to be lost. Printing to slow consoles can result in large latencies; a printk() call may end up taking an unbounded amount of time while other deferred messages are printed ahead of the one the caller actually wanted to print.

Two other problems are identified by Ogness. Timestamps on the messages are added when a message is added to the buffer but, due to deferrals, that can happen well after the printk() call was made. While that behavior has the side effect of nicely sorting the messages in terms of time, it is "neither accurate nor reliable". In addition, because printk() tries to satisfy all of its users, it ends up compromising too much:

Loglevel INFO is handled the same as ERR. There seems to be an endless effort to get printk to show _all_ messages as quickly as possible in case of a panic (i.e. printing from any context), but at the same time try not to have printk be too intrusive for the callers. These are conflicting requirements that lead to a printk implementation that does a sub-optimal job of satisfying both sides.

In order to fix those problems, he is proposing the addition of a kernel-internal printk() ring buffer that allows for multiple lockless readers; writers use a per-CPU synchronization mechanism that works in any context. This ring-buffer implementation was inspired by a suggestion from Peter Zijlstra, Ogness said. The actual writing of the messages is moved to a dedicated kernel thread, which is preemptible, so the latencies can be bounded. The new ring buffer will be allocated in the initialized data segment of the kernel, so it will be available early in the boot process, even before the memory-management subsystem is available. Timestamps will be generated early in the printk() function.

Beyond that, a new kind of "emergency message" is defined. Those messages will appear on certain consoles immediately, without waiting for any other printk() messages that are queued up for the kernel thread. In order to participate, consoles will need to implement the new write_atomic() operation. The patch set includes an implementation of write_atomic() for the 8250 UART driver. There is a new kernel configuration parameter, CONFIG_LOGLEVEL_EMERGENCY, which sets the lowest log-level value for emergency messages; it defaults to LOG_WARNING. That value can also be set by the "emergency_loglevel=" kernel command-line parameter at boot time.
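
What a console has to supply, in essence, is a way to push characters out synchronously from any context. A rough sketch for a hypothetical memory-mapped UART might look like the following; the callback's prototype is assumed to mirror the existing console write() operation, and the register layout is invented, so consult the 8250 patch for the real thing.

    #include <linux/console.h>
    #include <linux/io.h>

    #define MYUART_STATUS    0x04    /* hypothetical status register */
    #define MYUART_TX        0x08    /* hypothetical transmit register */
    #define MYUART_TX_EMPTY  0x01    /* hypothetical "transmitter empty" bit */

    static void myuart_write_atomic(struct console *con, const char *s,
                                    unsigned int count)
    {
        void __iomem *regs = myuart_regs(con);    /* hypothetical helper */

        while (count--) {
            /* Busy-wait until the transmitter can take another byte... */
            while (!(readl(regs + MYUART_STATUS) & MYUART_TX_EMPTY))
                cpu_relax();
            /* ...then push the character out immediately. */
            writel(*s++, regs + MYUART_TX);
        }
    }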

So instead of trying to ensure that all messages go out as quickly as possible, the proposal would effectively partition printk() messages into two buckets—at least for consoles that implement write_atomic(). Regular messages will be written out by the kernel thread, which gets scheduled and preempted normally, so it may not be flushing all of the messages right away. The messages at the emergency level and above will go out immediately to an emergency console. As he noted in the thread, there are options if losing regular messages in a crash becomes a problem:

As long as all critical messages are print directly and immediately to an emergency console, why is it is problem if the informational messages to consoles are sometimes delayed or lost? And if those informational messages _are_ so important, there are things the user can do. For example, create a realtime userspace task to read /dev/kmsg.

There are some downsides and open issues in the proposal, which Ogness also lists. The output from printk() has changed somewhat, which may have unforeseen consequences. A CPU ID will be printed as part of emergency messages to help disambiguate multiple simultaneous messages; those messages are separated from regular printk() messages using a newline character, though there may be an option added to make them stand out more. In addition:

Be aware that printk output is no longer time-sorted. Actually, it never was, but now you see the real timestamps. This seems strange at first.

More details on the ring buffer and its API, along with some early performance numbers, can be seen in a patch adding a file to the Documentation directory.

The reaction to the patch set has been positive overall; there have been the usual questions and comments, of course. printk() has been the subject of two relatively recent Kernel Summit sessions (in 2016 and 2017); two of those proposing printk() changes at the summits, Sergey Senozhatsky and Petr Mladek, have also commented on the patches. Mladek had suggestions for improvements to the ring-buffer code, as did Linus Torvalds. There have been no major complaints about the overall goal and plan, however, so it would seem that we could see these changes go into the mainline in a development cycle or two.

Comments (20 posted)

Development statistics for the 5.0 kernel

By Jonathan Corbet
February 21, 2019
The announcement of the 5.0-rc7 kernel prepatch on February 17 signaled the imminent release of the final 5.0 kernel and the end of this development cycle. 5.0, as it turns out, brought in fewer changesets than its immediate predecessors, but it was still a busy cycle with a lot of developers participating. Read on for an overview of where the work came from in this release cycle.

As of this writing, 12,517 non-merge changesets have been pulled into the mainline repository for the 5.0 release. This is low compared to the kernels that came before:

    Cycle    Changesets
    4.15     14,866
    4.16     13,630
    4.17     13,541
    4.18     13,283
    4.19     14,043
    4.20     13,884
    5.0      12,517 (so far)

One has to go back to 4.7, released in July 2016, to find a development cycle that brought in fewer changesets than 5.0. The number of developers contributing to 5.0 was 1,712, roughly equivalent to previous cycles; 276 of those developers made their first kernel contribution in this development cycle.

The most active developers were:

Most active 5.0 developers

By changesets
    Christoph Hellwig             213    1.7%
    Masahiro Yamada               135    1.1%
    Colin Ian King                135    1.1%
    Jens Axboe                    112    0.9%
    Arnaldo Carvalho de Melo      112    0.9%
    Yangtao Li                    106    0.8%
    Yue Haibing                   100    0.8%
    Kuninori Morimoto              95    0.8%
    Andy Shevchenko                94    0.8%
    Rob Herring                    92    0.7%
    Maxime Ripard                  91    0.7%
    Boris Brezillon                89    0.7%
    Jakub Kicinski                 83    0.7%
    Michael Straube                83    0.7%
    Thierry Reding                 82    0.7%
    Ville Syrjälä                  82    0.7%
    Geert Uytterhoeven             80    0.6%
    Linus Walleij                  80    0.6%
    Paul E. McKenney               78    0.6%
    Gustavo A. R. Silva            77    0.6%

By changed lines
    Olof Johansson              41834    6.0%
    Kan Liang                   31458    4.5%
    Yong Zhi                    22799    3.3%
    Aaro Koskinen               20462    3.0%
    Firoz Khan                  15981    2.3%
    Jens Axboe                  13009    1.9%
    Tony Lindgren               12237    1.8%
    Boris Brezillon             11422    1.7%
    Sean Christopherson         10614    1.5%
    Dong Aisheng                 7998    1.2%
    Eric Biggers                 7476    1.1%
    Manivannan Sadhasivam        6724    1.0%
    Christoph Hellwig            6199    0.9%
    Federico Vaga                5877    0.8%
    Jordan Crouse                5772    0.8%
    Kuninori Morimoto            5255    0.8%
    Florian Westphal             5120    0.7%
    Mauro Carvalho Chehab        5097    0.7%
    Lorenzo Bianconi             4941    0.7%
    Sagi Grimberg                4827    0.7%

Christoph Hellwig was the most prolific contributor of changesets this time around; he did a lot of work in the block subsystem and the DMA API. Masahiro Yamada's work was mostly focused on improvements to the kernel's build system, Colin Ian King continues to make spelling and coding-style fixes throughout the tree, Jens Axboe converted a lot of block drivers to the multiqueue API (along with many other block-layer changes), and Arnaldo Carvalho de Melo worked extensively on the perf utility. Of the top twenty developers with regard to changesets, only one got there through work on the staging tree — a significant change from years past.

Switching to the "lines changed" column: Olof Johansson only contributed seven changesets to 5.0, but one of them was removing the old and unmaintained eicon ISDN driver. Other top contributors in the "lines changed" column include Kan Liang for adding some JSON metrics to the perf utility, Yong Zhi for the Intel IPU3 driver, Aaro Koskinen for work on MIPS OCTEON support, and Firoz Khan, who reworked how the system-call tables are generated for most architectures.

A total of 226 employers supported work on 5.0, which is a typical number. The most active of those were:

Most active 5.0 employers

By changesets
    Intel                        1360   10.9%
    (None)                        937    7.5%
    (Unknown)                     867    6.9%
    Red Hat                       859    6.9%
    Linaro                        552    4.4%
    Google                        487    3.9%
    Mellanox                      481    3.8%
    SUSE                          418    3.3%
    AMD                           415    3.3%
    Renesas Electronics           394    3.1%
    IBM                           347    2.8%
    Huawei Technologies           320    2.6%
    (Consultant)                  311    2.5%
    Facebook                      268    2.1%
    Bootlin                       261    2.1%
    NXP Semiconductors            247    2.0%
    ARM                           226    1.8%
    Oracle                        204    1.6%
    Canonical                     171    1.4%
    Code Aurora Forum             148    1.2%

By lines changed
    Intel                      116158   16.8%
    Facebook                    66816    9.7%
    Linaro                      40368    5.8%
    Red Hat                     33041    4.8%
    (None)                      32191    4.7%
    (Unknown)                   26858    3.9%
    Mellanox                    26487    3.8%
    Google                      24099    3.5%
    Nokia                       20600    3.0%
    Bootlin                     19019    2.7%
    AMD                         17137    2.5%
    NXP Semiconductors          16716    2.4%
    SUSE                        14546    2.1%
    Renesas Electronics         14103    2.0%
    IBM                         13068    1.9%
    Atomide                     12237    1.8%
    Huawei Technologies         10952    1.6%
    Code Aurora Forum           10566    1.5%
    (Consultant)                 9353    1.4%
    ARM                          7034    1.0%

The kernel development community relies heavily on its testers and reviewers. The testing and review picture for 5.0 looks like this:

Test and review credits in 5.0

Tested-by
    Andrew Bowers                       50    6.4%
    Ming Lei                            37    4.7%
    Jarkko Sakkinen                     20    2.6%
    Arnaldo Carvalho de Melo            18    2.3%
    Janusz Krzysztofik                  17    2.2%
    Alan Tull                           17    2.2%
    Tony Luck                           16    2.1%
    Aaron Brown                         15    1.9%
    Jesper Dangaard Brouer              15    1.9%
    Heiko Stuebner                      14    1.8%
    Marek Szyprowski                    13    1.7%
    Corentin Labbe                      13    1.7%
    Adam Ford                           12    1.5%
    Wolfram Sang                        11    1.4%
    Tom Zanussi                         11    1.4%
    Steve Longerbeam                    11    1.4%
    Ravulapati Vishnu vardhan Rao       11    1.4%
    David Ahern                         10    1.3%
    Jarkko Nikula                       10    1.3%
    Ondrej Jirman                       10    1.3%

Reviewed-by
    Rob Herring                        186    3.8%
    Ville Syrjälä                      125    2.5%
    Simon Horman                       108    2.2%
    Geert Uytterhoeven                  92    1.9%
    Hannes Reinecke                     90    1.8%
    Christoph Hellwig                   83    1.7%
    Alex Deucher                        72    1.5%
    David Sterba                        69    1.4%
    Andrew Morton                       60    1.2%
    Omar Sandoval                       60    1.2%
    John Hurley                         58    1.2%
    Rodrigo Vivi                        57    1.2%
    Chris Wilson                        57    1.2%
    Sagi Grimberg                       56    1.1%
    Petr Machata                        56    1.1%
    Daniel Vetter                       52    1.1%
    Christian König                     51    1.0%
    Chao Yu                             48    1.0%
    Andy Shevchenko                     44    0.9%
    Nikolay Borisov                     41    0.8%

The kernel's repository can tell us who the patches came from, but it is silent on the question of where they came from. Some insights, though, can be had by looking at the time zone stored in the commit time for each patch. For 5.0, the result looks like this:

Originating time zone for 5.0 patches
    Offset     Changesets    Notes
    -8:00           1,676    US west coast
    -7:00             622    US mountain
    -6:00             361    US central
    -5:00             939    US east coast
    -4:00             295
    -3:00             158    Brazil
    -2:00             105
     0:00           1,611    UK
    +1:00           2,812    Western Europe
    +2:00           1,457    Eastern Europe
    +3:00             447    Finland, Russia
    +5:30             513    India
    +8:00             952    China
    +9:00             302    Japan, Korea
    +10:00             99    Australia
    +11:00            140    Australia

A few time zones with fewer than ten changesets have been omitted from the above table. The association of time zones with countries is, of course, approximate. Daylight saving time can throw things off, as can developers whose systems are not set to their local time. If nothing else, the number of patches with times in UTC is probably higher than the number that actually came from countries in that time zone. There are still a few conclusions that can be drawn, though: it seems clear that an awful lot of kernel work still happens at or just east of the Prime Meridian, for example.

More than anything else, though, this table highlights something we already knew: the Linux kernel community is truly global in scope. Patches come in at a high rate from all over the world and are integrated in a (usually) smooth manner. In this sense, the 5.0 kernel is just like the many that came before it; it's business as usual in the kernel community.

Comments (16 posted)

Page editor: Jonathan Corbet

Brief items

Security

Security quotes of the week

The findings show that in every docker image we scanned, we found vulnerable versions of system libraries. The official Node.js image ships 580 vulnerable system libraries, followed by the others each of which ship at least 30 publicly known vulnerabilities.
Liran Tal summarizes some of the findings in a Snyk security report [PDF]

ETSI backed down, and the next revision of their weakened variant will be called "ETS" instead. Instead of thinking of this as "Enterprise Transport Security," which the creators say the acronym stands for, you should think of it as "Extra Terrible Security."

Internet security as a whole is greatly improved by forward secrecy. It's indefensible to make it worse in the name of protecting a few banks from having to update their legacy decrypt systems. Decryption makes networks less secure, and anyone who tells you differently is selling something (probably a decryption middlebox). Don't use ETS, don't implement it, and don't standardize it.

Jacob Hoffman-Andrews in the Electronic Frontier Foundation (EFF) blog

We built a fake network card that is capable of interacting with the operating system in the same way as a real one, including announcing itself correctly, causing drivers to attach, and sending and receiving network packets. To do this, we extracted a software model of an Intel E1000 from the QEMU full-system emulator and ran it on an FPGA. Because this is a software model, we can easily add malicious behaviour to find and exploit vulnerabilities.

We found the attack surface available to a network card was much richer and more nuanced than was previously thought. By examining the memory it was given access to while sending and receiving packets, our device was able to read traffic from networks that it wasn't supposed to. This included VPN plaintext and traffic from Unix domain sockets that should never leave the machine.

Theo Markettos

Comments (none posted)

Kernel development

Kernel release status

The current development kernel is 5.0-rc8, released instead of the expected 5.0-final on February 24. Linus said: "This may be totally unnecessary, but we actually had more patches come in this last week than we had for rc7, which just didn't make me feel the warm and fuzzies. And while none of the patches looked all that scary, some of them were to pretty core files, so it wasn't all just random rare drivers (although those kinds also existed)."

Stable updates: 4.20.12, 4.19.25, 4.14.103, 4.9.160, 4.4.176, and 3.18.136 were released on February 23, followed by 4.20.13, 4.19.26, 4.14.104, and 4.9.161 on February 27.

Comments (none posted)

Quotes of the week

eBPF is and will be a tool for the masses to customize and control their systems however they want, and with whatever policies they see fit. It is a technology that has truly relinquished control from system software engineers over to the users, where it belongs.
David Miller

I am relieved to know that when my mail client embeds HTML tags into raw text, it will only be the second most annoying thing I've done on e-mail.
Greg Kerr

Comments (none posted)

Distributions

Distribution quote of the week

The effort required to properly participate in Discourse conversations vs regular mailing lists was higher, and high enough that the majority of developers just simply stopped communicating outside of IRC. This made asynchronous communication very difficult. Moreover, the total communication between developers/contributors and users went down over the past couple of years because of this.
Neal Gompa (OpenMandriva gives up on Discourse)

Comments (none posted)

Development

GCC 8.3 Released

Version 8.3 of the GNU Compiler Collection has been released. "GCC 8.3 is a bug-fix release from the GCC 8 branch containing important fixes for regressions and serious bugs in GCC 8.2 with more than 153 bugs fixed since the previous release."

Full Story (comments: none)

Git v2.21.0

Git v2.21.0 has been released. "It is comprised of 500 non-merge commits since v2.20.0, contributed by 74 people, 20 of which are new faces." The release notes are included in the announcement.

Full Story (comments: none)

Go 1.12 released

Version 1.12 of the Go language has been released. "Some of the highlights include opt-in support for TLS 1.3, improved modules support (in preparation for being the default in Go 1.13), support for windows/arm, and improved macOS & iOS forwards compatibility". See the release notes for details.

Comments (2 posted)

Development quote of the week

make debug, again, … yay, the test is failing, we have a reproducer! Quick, to the gdb-mobile (it's like the bat-mobile, but less cool, and with a GNU instead of a bat)!
Julien Voisin (Thanks to Paul Wise)

Comments (none posted)

Miscellaneous

The Linux Foundation Launches ELISA Project Enabling Linux In Safety-Critical Systems

The Linux Foundation has announced the formation of the Enabling Linux in Safety Applications (ELISA) project to create tools and processes for companies to use to build and certify safety-critical Linux applications. "Building off the work being done by SIL2LinuxMP project and Real-Time Linux project, ELISA will make it easier for companies to build safety-critical systems such as robotic devices, medical devices, smart factories, transportation systems and autonomous driving using Linux. Founding members of ELISA include Arm, BMW Car IT GmbH, KUKA, Linutronix, and Toyota. To be trusted, safety-critical systems must meet functional safety objectives for the overall safety of the system, including how it responds to actions such as user errors, hardware failures, and environmental changes. Companies must demonstrate that their software meets strict demands for reliability, quality assurance, risk management, development process, and documentation. Because there is no clear method for certifying Linux, it can be difficult for a company to demonstrate that their Linux-based system meets these safety objectives."

Comments (2 posted)

Page editor: Jake Edge

Announcements

Newsletters

Distributions and system administration

Development

Meeting minutes

Calls for Presentations

CFP Deadlines: February 28, 2019 to April 29, 2019

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

    Deadline       Event Dates                   Event                                                 Location
    February 28    May 16                        Open Source Camp | #3 Ansible                         Berlin, Germany
    February 28    June 4 - June 6               sambaXP 2019                                          Goettingen, Germany
    March 4        June 14 - June 15             Hong Kong Open Source Conference 2019                 Hong Kong, Hong Kong
    March 10       April 6                       Pi and More 11½                                       Krefeld, Germany
    March 17       June 4 - June 5               UK OpenMP Users' Conference                           Edinburgh, UK
    March 19       October 13 - October 15       All Things Open                                       Raleigh, NC, USA
    March 24       July 17 - July 19             Automotive Linux Summit                               Tokyo, Japan
    March 24       July 17 - July 19             Open Source Summit                                    Tokyo, Japan
    March 25       May 3 - May 4                 PyDays Vienna 2019                                    Vienna, Austria
    April 1        June 3 - June 4               PyCon Israel 2019                                     Ramat Gan, Israel
    April 2        August 21 - August 23         Open Source Summit North America                      San Diego, CA, USA
    April 2        August 21 - August 23         Embedded Linux Conference NA                          San Diego, CA, USA
    April 12       July 9 - July 11              Xen Project Developer and Design Summit               Chicago, IL, USA
    April 15       August 26 - August 30         FOSS4G 2019                                           Bucharest, Romania
    April 15       May 18 - May 19               Open Source Conference Albania                        Trana, Albania
    April 21       May 25 - May 26               Mini-DebConf Marseille                                Marseille, France
    April 24       October 27 - October 30       27th ACM Symposium on Operating Systems Principles    Huntsville, Ontario, Canada
    April 25       September 21 - September 23   State of the Map                                      Heidelberg, Germany
    April 28       September 2 - September 6     EuroSciPy 2019                                        Bilbao, Spain

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

LibrePlanet 2019: Coming to Cambridge, MA

LibrePlanet will take place March 23-24 in Cambridge, MA. This announcement covers registration, some presentations, and a call for volunteers and sponsors.

Comments (none posted)

2019 Linux Plumbers Conference planning is underway

LPC will be held September 9-11 in Lisbon, Portugal; colocated with the Kernel Summit. A call for proposals should be announced soon.

Full Story (comments: none)

Events: February 28, 2019 to April 29, 2019

The following event listing is taken from the LWN.net Calendar.

    Date(s)                      Event                                                                    Location
    February 26 - February 28    16th USENIX Symposium on Networked Systems Design and Implementation    Boston, MA, USA
    March 5 - March 6            Automotive Grade Linux Member Meeting                                    Tokyo, Japan
    March 7 - March 8            Hyperledger Bootcamp                                                     Hong Kong
    March 7 - March 10           SCALE 17x                                                                Pasadena, CA, USA
    March 9                      DPDK Summit Bangalore 2019                                               Bangalore, India
    March 10                     OpenEmbedded Summit                                                      Pasadena, CA, USA
    March 12 - March 14          Open Source Leadership Summit                                            Half Moon Bay, CA, USA
    March 14                     pgDay Israel 2019                                                        Tel Aviv, Israel
    March 14 - March 17          FOSSASIA                                                                 Singapore, Singapore
    March 14                     Icinga Camp Berlin                                                       Berlin, Germany
    March 19 - March 21          PGConf APAC                                                              Singapore, Singapore
    March 20                     Open Source Roundtable at Game Developers Conference                     San Francisco, CA, USA
    March 20 - March 22          Netdev 0x13                                                              Prague, Czech Republic
    March 21                     gRPC Conf                                                                Sunnyvale, CA, USA
    March 23 - March 26          Linux Audio Conference                                                   San Francisco, CA, USA
    March 23                     Kubernetes Day                                                           Bengaluru, India
    March 23 - March 24          LibrePlanet                                                              Cambridge, MA, USA
    March 29 - March 31          curl up 2019                                                             Prague, Czech Republic
    April 1 - April 4            ‹Programming› 2019                                                       Genova, Italy
    April 1 - April 5            SUSECON 2019                                                             Nashville, TN, USA
    April 2 - April 4            Cloud Foundry Summit                                                     Philadelphia, PA, USA
    April 3 - April 5            Open Networking Summit                                                   San Jose, CA, USA
    April 5 - April 6            openSUSE Summit                                                          Nashville, TN, USA
    April 5 - April 7            Devuan Conference                                                        Amsterdam, The Netherlands
    April 6                      Pi and More 11½                                                          Krefeld, Germany
    April 7 - April 10           FOSS North                                                               Gothenburg, Sweden
    April 10 - April 12          DjangoCon Europe                                                         Copenhagen, Denmark
    April 13                     OpenCamp Bratislava                                                      Bratislava, Slovakia
    April 13 - April 17          ACM SIGPLAN/SIGOPS Conference on Virtual Execution Environments          Providence, RI, USA
    April 18                     Open Source 101                                                          Columbia, SC, USA
    April 26 - April 27          Grazer Linuxtage                                                         Graz, Austria
    April 26 - April 27          KiCad Conference 2019                                                    Chicago, IL, USA
    April 28 - April 30          Check_MK Conference #5                                                   Munich, Germany

If your event does not appear here, please tell us about it.

Security updates

Alert summary February 21, 2019 to February 27, 2019

Dist. ID Release Package Date
Arch Linux ASA-201902-25 bind 2019-02-26
Arch Linux ASA-201902-27 elasticsearch 2019-02-26
Arch Linux ASA-201902-26 kibana 2019-02-26
Arch Linux ASA-201902-28 logstash 2019-02-26
Arch Linux ASA-201902-22 msmtp 2019-02-25
Arch Linux ASA-201902-21 python-mysql-connector 2019-02-25
Arch Linux ASA-201902-24 systemd 2019-02-26
Arch Linux ASA-201902-23 thunderbird 2019-02-26
CentOS CESA-2019:0373 C6 firefox 2019-02-20
CentOS CESA-2019:0374 C7 firefox 2019-02-20
CentOS CESA-2019:0375 C7 flatpak 2019-02-20
CentOS CESA-2019:0416 C6 java-1.8.0-openjdk 2019-02-26
CentOS CESA-2019:0415 C6 kernel 2019-02-26
CentOS CESA-2019:0420 C6 polkit 2019-02-26
CentOS CESA-2019:0368 C7 systemd 2019-02-20
Debian DSA-4395-2 stable chromium 2019-02-27
Debian DLA-1689-1 LTS elfutils 2019-02-25
Debian DLA-1691-1 LTS exiv2 2019-02-26
Debian DLA-1686-1 LTS freedink-dfarc 2019-02-24
Debian DLA-1690-1 LTS liblivemedia 2019-02-26
Debian DLA-1692-1 LTS phpmyadmin 2019-02-27
Debian DSA-4377-3 stable rssh 2019-02-22
Debian DLA-1687-1 LTS sox 2019-02-24
Debian DLA-1688-1 LTS waagent 2019-02-25
Fedora FEDORA-2019-7a554204c1 F29 SDL 2019-02-26
Fedora FEDORA-2019-1fccede810 F29 createrepo_c 2019-02-21
Fedora FEDORA-2019-1fccede810 F29 dnf 2019-02-21
Fedora FEDORA-2019-1fccede810 F29 dnf-plugins-core 2019-02-21
Fedora FEDORA-2019-1fccede810 F29 dnf-plugins-extras 2019-02-21
Fedora FEDORA-2019-f455ef79b8 F28 docker 2019-02-21
Fedora FEDORA-2019-4dc1e39b34 F29 docker-latest 2019-02-23
Fedora FEDORA-2019-3f9a71578d F28 java-1.8.0-openjdk 2019-02-25
Fedora FEDORA-2019-362387a66d F28 java-1.8.0-openjdk-aarch32 2019-02-27
Fedora FEDORA-2019-b084fa3ea5 F29 java-1.8.0-openjdk-aarch32 2019-02-27
Fedora FEDORA-2019-7bdeed7fc5 F29 kernel 2019-02-26
Fedora FEDORA-2019-16de0047d4 F28 kernel-headers 2019-02-26
Fedora FEDORA-2019-7bdeed7fc5 F29 kernel-headers 2019-02-26
Fedora FEDORA-2019-16de0047d4 F28 kernel-tools 2019-02-26
Fedora FEDORA-2019-7bdeed7fc5 F29 kernel-tools 2019-02-26
Fedora FEDORA-2019-50ca715929 F29 koji 2019-02-25
Fedora FEDORA-2019-1fccede810 F29 libcomps 2019-02-21
Fedora FEDORA-2019-1fccede810 F29 libdnf 2019-02-21
Fedora FEDORA-2019-3d38ab031e F28 mgetty 2019-02-27
Fedora FEDORA-2019-da586db907 F29 mgetty 2019-02-27
Fedora FEDORA-2019-4e72b179e4 F29 pagure 2019-02-24
Fedora FEDORA-2019-387e017332 F29 poppler 2019-02-25
Fedora FEDORA-2019-963ea958f9 F28 runc 2019-02-21
Fedora FEDORA-2019-afade40f3d F28 spice 2019-02-23
Mageia MGASA-2019-0096 6 giflib 2019-02-20
Mageia MGASA-2019-0091 6 irssi 2019-02-20
Mageia MGASA-2019-0097 6 kernel 2019-02-21
Mageia MGASA-2019-0098 6 kernel-linus 2019-02-21
Mageia MGASA-2019-0095 6 libexif 2019-02-20
Mageia MGASA-2019-0102 6 libreoffice 2019-02-22
Mageia MGASA-2019-0101 6 libtiff 2019-02-22
Mageia MGASA-2019-0092 6 poppler 2019-02-20
Mageia MGASA-2019-0100 6 spice 2019-02-22
Mageia MGASA-2019-0099 6 spice-gtk 2019-02-22
Mageia MGASA-2019-0094 6 tcpreplay 2019-02-20
Mageia MGASA-2019-0093 6 zziplib 2019-02-20
openSUSE openSUSE-SU-2019:0235-1 GraphicsMagick 2019-02-22
openSUSE openSUSE-SU-2019:0238-1 ansible 2019-02-23
openSUSE openSUSE-SU-2019:0232-1 15.0 build 2019-02-22
openSUSE openSUSE-SU-2019:0252-1 15.0 docker-runc 2019-02-27
openSUSE openSUSE-SU-2019:0243-1 15.0 dovecot23 2019-02-26
openSUSE openSUSE-SU-2019:0248-1 15.0 firefox 2019-02-26
openSUSE openSUSE-SU-2019:0261-1 15.0 gvfs 2019-02-27
openSUSE openSUSE-SU-2019:0247-1 kauth 2019-02-26
openSUSE openSUSE-SU-2019:0242-1 15.0 42.3 kauth 2019-02-26
openSUSE openSUSE-SU-2019:0242-1 15.0 42.3 kauth 2019-02-26
openSUSE openSUSE-SU-2019:0237-1 mosquitto 2019-02-23
openSUSE openSUSE-SU-2019:0233-1 15.0 mosquitto 2019-02-22
openSUSE openSUSE-SU-2019:0234-1 42.3 nodejs6 2019-02-22
openSUSE openSUSE-SU-2019:0240-1 pspp, spread-sheet-widget 2019-02-25
openSUSE openSUSE-SU-2019:0244-1 11.4 python-Jinja2 2019-02-26
openSUSE openSUSE-SU-2019:0245-1 15.0 python-numpy 2019-02-26
openSUSE openSUSE-SU-2019:0239-1 python-python-gnupg 2019-02-23
openSUSE openSUSE-SU-2019:0254-1 15.0 qemu 2019-02-27
openSUSE openSUSE-SU-2019:0255-1 15.0 systemd 2019-02-27
openSUSE openSUSE-SU-2019:0249-1 thunderbird 2019-02-26
openSUSE openSUSE-SU-2019:0251-1 15.0 thunderbird 2019-02-27
openSUSE openSUSE-SU-2019:0250-1 42.3 thunderbird 2019-02-26
Oracle ELSA-2019-0416 OL6 java-1.8.0-openjdk 2019-02-26
Oracle ELSA-2019-0415 OL6 kernel 2019-02-26
Oracle ELSA-2019-0420 OL6 polkit 2019-02-26
Red Hat RHSA-2019:0396-01 EL6 chromium-browser 2019-02-25
Red Hat RHSA-2019:0374-01 EL7 firefox 2019-02-19
Red Hat RHSA-2019:0375-01 EL7 flatpak 2019-02-19
Red Hat RHSA-2019:0416-01 EL6 java-1.8.0-openjdk 2019-02-26
Red Hat RHSA-2019:0415-01 EL6 kernel 2019-02-26
Red Hat RHSA-2019:0420-01 EL6 polkit 2019-02-26
Red Hat RHSA-2019:0368-01 EL7 systemd 2019-02-19
Scientific Linux SLSA-2019:0373-1 SL6 firefox 2019-02-21
Scientific Linux SLSA-2019:0374-1 SL7 firefox 2019-02-21
Scientific Linux SLSA-2019:0375-1 SL7 flatpak 2019-02-21
Scientific Linux SLSA-2019:0416-1 SL6 java-1.8.0-openjdk 2019-02-26
Scientific Linux SLSA-2019:0415-1 SL6 kernel 2019-02-26
Scientific Linux SLSA-2019:0420-1 SL6 polkit 2019-02-26
Scientific Linux SLSA-2019:0368-1 SL7 systemd 2019-02-21
Slackware SSA:2019-054-01 file 2019-02-23
Slackware SSA:2019-057-01 openssl 2019-02-26
SUSE SUSE-SU-2019:0505-1 SLE15 amavisd-new 2019-02-27
SUSE SUSE-SU-2019:0498-1 SLE12 apache2 2019-02-26
SUSE SUSE-SU-2019:0504-1 SLE15 apache2 2019-02-27
SUSE SUSE-SU-2019:0499-1 SLE12 ceph 2019-02-26
SUSE SUSE-SU-2019:0495-1 SLE15 containerd, docker, docker-runc, golang-github-docker-libnetwork, runc 2019-02-26
SUSE SUSE-SU-2019:0470-1 SLE12 kernel 2019-02-22
SUSE SUSE-SU-2019:0466-1 OS7 SLE12 kernel-firmware 2019-02-22
SUSE SUSE-SU-2019:0496-1 SLE15 openssh 2019-02-26
SUSE SUSE-SU-2019:0449-1 SLE12 php5 2019-02-20
SUSE SUSE-SU-2019:0450-1 OS7 SLE12 procps 2019-02-20
SUSE SUSE-SU-2019:0483-1 OS7 python-Django 2019-02-25
SUSE SUSE-SU-2019:0482-1 OS7 SLE12 python 2019-02-25
SUSE SUSE-SU-2019:0481-1 OS7 python-amqp, python-oslo.messaging, python-ovs, python-paramiko, python-psql2mysql 2019-02-25
SUSE SUSE-SU-2019:0489-1 OS7 SLE12 qemu 2019-02-26
SUSE SUSE-SU-2019:0471-1 SLE12 qemu 2019-02-22
SUSE SUSE-SU-2019:0457-1 SLE12 qemu 2019-02-21
SUSE SUSE-SU-2019:0480-1 SLE15 supportutils 2019-02-25
SUSE SUSE-SU-2018:3033-2 OS7 SLE12 texlive 2019-02-21
SUSE SUSE-SU-2019:0469-1 SLE15 thunderbird 2019-02-22
SUSE SUSE-SU-2019:0497-1 SLE15 webkit2gtk3 2019-02-26
Ubuntu USN-3893-2 12.04 bind9 2019-02-25
Ubuntu USN-3893-1 14.04 16.04 18.04 18.10 bind9 2019-02-22
Ubuntu USN-3896-1 14.04 16.04 18.04 18.10 firefox 2019-02-26
Ubuntu USN-3866-2 14.04 16.04 18.04 18.10 ghostscript 2019-02-21
Ubuntu USN-3866-3 14.04 16.04 18.04 18.10 ghostscript 2019-02-26
Ubuntu USN-3894-1 14.04 16.04 gnome-keyring 2019-02-26
Ubuntu USN-3895-1 14.04 16.04 18.04 18.10 ldb 2019-02-26
Ubuntu USN-3897-1 14.04 16.04 18.04 18.10 thunderbird 2019-02-26
Full Story (comments: none)

Kernel patches of interest

Kernel releases

Linus Torvalds Linux 5.0-rc8 Feb 24
Greg KH Linux 4.20.13 Feb 27
Greg KH Linux 4.20.12 Feb 23
Greg KH Linux 4.19.26 Feb 27
Greg KH Linux 4.19.25 Feb 23
Sebastian Andrzej Siewior v4.19.25-rt16 Feb 26
Sebastian Andrzej Siewior v4.19.23-rt14 Feb 21
Greg KH Linux 4.14.104 Feb 27
Greg KH Linux 4.14.103 Feb 23
Greg KH Linux 4.9.161 Feb 27
Greg KH Linux 4.9.160 Feb 23
Greg KH Linux 4.4.176 Feb 23
Greg KH Linux 3.18.136 Feb 23

Architecture-specific

Build system

Peter Zijlstra objtool: UACCESS validation Feb 25

Core kernel

Alexei Starovoitov bpf: per program stats Feb 22

Development tools

Device drivers

Device-driver infrastructure

Documentation

Memory management

Networking

Security-related

Miscellaneous

Page editor: Rebecca Sobol


Copyright © 2019, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds