LWN.net Weekly Edition for September 20, 2018
Welcome to the LWN.net Weekly Edition for September 20, 2018
This edition contains the following feature content:
- Project Treble: Google's attempt to reduce Android fragmentation may be working, but that success did not come cheaply.
- Code, conflict, and conduct: a time of change in the kernel development community.
- Resource control at Facebook: kernel developers at Facebook have made a number of changes to facilitate better resource usage.
- Compiling kernel UAPI headers with C++: making the kernel's user-space API friendly for C++ code.
- Fedora reawakens the hibernation debate: a change in suspend behavior has users concerned and highlights some differences over how this feature should be controlled.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Project Treble
Android's Project Treble is meant as a way to reduce the fragmentation in the Android ecosystem. It also makes porting Android 8 ("Oreo"—the first version to mandate Treble) more difficult, according to Fedor Tcymbal. He described the project and what it means for silicon and device vendors in a talk at Open Source Summit North America 2018 in Vancouver, Canada.
Tcymbal works for Mera, which is a software services company. It has worked with various system-on-chip (SoC) and device vendors to get Android running on their hardware. The company has done so for several Android versions along the way, but Android 8 "felt very different" because of Treble. It was "much more difficult to upgrade" to Oreo than to previous Android versions.
He put up several quotes from Google employees describing Treble (slides [PDF]). Those boil down to an acknowledgment that the changes made for Treble were extensive—and expensive. More than 300 developers worked for more than a year on Treble. It makes one wonder what could justify that amount of effort, Tcymbal said.
Google presents Project Treble as a re-architecture of Android to make it easier for vendors to update their devices to new versions of the mobile operating system. The Android Open Source Project (AOSP) architecture is layered, with apps at the top; the application framework, Binder, and Android system services in the middle; and hardware abstraction layers (HALs) and the Linux kernel at the bottom. Google provides the middle layer, while vendors provide the HALs and kernel.
He then outlined the normal pre-Treble process for an Android dessert release. When the release is made, it only runs on the reference platform (e.g. Pixel). Silicon vendors then get the kernel running for their SoC, turn it into a board support package (BSP), and hand that off to device makers to do all of the customizations for the actual device. The software is then given to the phone carriers, which get it out to the end users. This last step is not really required, but is common in North America, he said.
Each of those steps takes some time, which generally adds up to 6-12 months. In addition, those steps only happen if each participant is interested in making it happen. If the SoC or device is no longer being sold, those vendors may not do their part, so no upgrade is available to users. "Google does not like this", Tcymbal said.
In July, nearly a year after its release, Oreo was only on 12% of the devices in the field. That means that all of the new features, security upgrades, and so on, really only get to people after a few years when they buy a new device. In fact, the uptake for Oreo is roughly the same as the percentage of devices that remain on Jelly Bean and KitKat, which were released 5 or 6 years ago. That is different from the iOS world, where devices get updated right away. So Google's effort in creating new versions is largely wasted, he said; that is why the company spent so much effort on Treble.
HALs and APIs
When a new version of Android is released, all of the old apps continue to work because the old APIs are still supported. Upgrading to a new version of Android took so much time because Google did not recognize that it needed to do the same thing several layers down, Tcymbal said. The strict line separating Android system services from the HALs did not exist prior to Treble; creating it is the purpose of the project.
Prior to Treble, the HAL interfaces were specified as a bunch of C header files. Each new version of Android meant there were new interfaces and header files. So each HAL needed to be updated to the new interfaces and rebuilt for the new version of Android. That's the reason that vendors had to spend so much time updating.
With Treble, there is now a HAL interface definition language (HIDL) instead of the C header files. In addition, Google has several transitional steps that vendors can use to move toward the new architecture. Legacy HALs can still be called in the same process (pass-through), rather than over Binder, which is the way of the future. Google has created a wrapper for legacy HALs to make it easier to use those with Treble. The eventual destination that Google envisions is for the legacy HALs to be replaced with a vendor implementation of hardware services. Mera did not take that step, but continued using the legacy HALs; Tcymbal thinks that is likely what other vendors will do as well.
Treble introduces a vendor interface object that is checked before an over-the-air (OTA) update is performed. The device sends a manifest and compatibility matrix to the OTA server, which checks to see if the new version can be run on the device. The object has information about HAL versions and interfaces, kernel versions and configuration options, SELinux policy versions, and the Android verified boot version. The information provided by the device can also be used to ensure that a device will never upgrade to a new Android version. Tcymbal said he does not understand why Google allowed that, because device vendors may have an incentive to block upgrades in order to sell newer devices.
There is now a "golden" /system image that gets installed on a device and it must run without modification. There is a new /vendor partition for the vendor-specific HALs. That all makes sense, he said, but it means the device must be repartitioned in order to upgrade to Android 8. However, some vendors are unwilling to allow that upgrade because they do not want to risk breaking users' working phones.
The compatibility test suite (CTS) for AOSP is meant to check that the Android framework is working as expected. Treble adds a vendor test suite (VTS) that extends the testing a few layers down into the system. It is meant to check whether a HAL is actually providing the interface it claims to provide. Vendors that want to certify their devices (in order to get access to Google Play services) need to pass the VTS. Passing that suite means that the /system golden image will run on the device.
Problems
Treble has mandated a lot of new things for device and SoC vendors, he said, which has caused problems for some of those vendors. Normally, it would take a small team two months or so to upgrade to a new version of Android. Treble took his team six months, he said; in general, most vendors found it took about three times as long as usual.
Another problem is with Binder, he said. The idea is to have Binder mediate the hardware-specific handling, but Binder "is kind of slow" because it uses interprocess communication (IPC). That is too slow for things like graphics, so there will always be a need for pass-through HALs, which is an exception to the architectural vision.
A big change like Treble can only lead to new bugs. He has heard that the Treble changes were around one million lines of code; with a change of that size, the Android developers "probably broke something". They may well have introduced new security vulnerabilities as well.
His last concern about Treble was that it might lead to more uniformity in the device space. Having to conform to various Google mandates might make it more difficult for device makers to introduce new features. Google says it wants all kinds of different devices, but Treble is a step toward standardization that could reduce that diversity.
Success?
Android 9 ("Pie") was introduced recently, but only adds a few final touches to Treble—most of the work was done for Oreo. It does give us a chance to see how well Treble is working out, he said. There are lots of devices upgrading from Oreo to Pie, which is a great result. In addition, there were 12 devices supported by the Pie beta release, where normally it would only be supported on a Pixel reference platform (or possibly one other on top of that).
While it is not directly related to Treble, Tcymbal found it interesting that the mandatory API target for new apps has been raised to API level 26 (which is what Android 8.0 provides). New apps cannot get on the Play store if they do not conform to that. This shows the overall direction that Google is headed: Treble pushes the device makers to conform, now the app makers are also being pushed. Without Treble, that would not have been possible, he said.
In summary, Treble is a bug fix for the worst kind of bug—an architectural bug. Google did not think through the implications of upgrading to newer versions and so it had to pay the price. Device and SoC vendors also paid for the mistake. Treble has clearly succeeded from a technical perspective, but it is less clear if it has succeeded from a business perspective. The idea was to extend the life of devices to four or five years, so it is too early to tell if that will be the case. We will have to wait and see.
[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to attend Open Source Summit in Vancouver.]
Code, conflict, and conduct
A couple of surprising things happened in the kernel community on September 16: Linus Torvalds announced that he was taking a break from kernel development to focus on improving his own behavior, and the longstanding "code of conflict" was replaced with a code of conduct based on the Contributor Covenant. Those two things did not quite come packaged as a set, but they are clearly not unrelated. It is a time of change for the kernel project; there will be challenges to overcome but, in the end, less may change than many expect or fear.
Some time off
Torvalds has always been known as a harsh critic; what perhaps fewer people have seen is that he has always been willing to be just as harsh toward himself when (in his opinion) the situation warranted it. He has seemingly come to the conclusion that some self-criticism is indeed called for now.
Torvalds went on to say that he will be taking a break from kernel work at least through the end of the 4.19 development cycle — which should come in around four weeks. Greg Kroah-Hartman will be handling pull requests during this time; he has already done a number of pulls as of this writing.
Torvalds, meanwhile, will "get some assistance on how to understand people's emotions and respond appropriately". He expressed a strong wish to continue leading the kernel project once things settle, and said that he was "looking forward" to meeting with the maintainer community at the Maintainers Summit at the end of October.
As for what brought this moment about, we may never know for sure. He talks about being confronted by members of the community, but people have been telling him for years that some things should change. If somebody did get through to him this time, they did it privately. In any case, we seem to have arrived at, as Torvalds put it, one of "the inflection points where development flow and behavior [change]". Sometimes, perhaps, it really is just a matter of waiting for the right moment when change becomes possible.
The response to this announcement on the net as a whole has, as one might imagine, covered a wide spectrum. Some people like Torvalds just as he is and do not welcome any change, while others are quick to say that he is beyond redemption and probably cannot change. Most people, though, seem to recognize an individual confronting some internal pain and are waiting hopefully to see what comes next. We wish him the best and look forward to his return.
A new code of conduct
Critics of the kernel community have spent years calling for the establishment of a proper code of conduct. The adoption of a "code of conflict" in 2015 did little to mollify those critics, many of whom saw it as, at best, an empty gesture that would change little. Judging three years later whether they were right is harder than one might think. There is little in the way of hard evidence that the code of conflict brought about any changes in behavior. On the other hand, the kernel community continues to grow, and most of the approximately 4,000 people who contribute each year have a fine experience. The web sites that specialize in publicizing inflammatory messages found in the mailing lists have had to dig harder than in the past. Perhaps the code of conflict helped to moderate behavior a bit, or perhaps we are just seeing the slow, long-term trend toward professionalism that has been evident in the community for at least two decades.
Attentive readers will note that my name appears as one of the signoffs on the patch adding the new code of conduct; they might wonder why I chose to do so despite my beliefs that (1) the situation is not as bad as many like to portray it, and (2) things are getting better anyway. Over the last weekend, I was informed that there was a window of opportunity to change the code and given the chance to comment on the new one. How or why this window came to be is still not entirely clear; I did not know about Torvalds's plans until I read the announcement along with everybody else. I saw the new code as a way of encouraging the community's slow drift toward greater civility. It was not the code I would have written, but I agree with the principles expressed there and believe that it can be adopted and used in the pragmatic manner in which the community approaches most problems.
One of the strongest criticisms against the old code of conflict is that it did not enumerate the types of behavior that are unwelcome. The new one cannot be criticized on that account. Such laundry lists of misbehavior can leave a bad taste in the mouth; that can be especially true if, like me, you are an older, run-of-the-mill white male; it is easy to look at a list like that and say "everybody is protected except me". It can look, rightly or wrongly, like a threatening change pushed by people with a hostile agenda.
But the purpose of such a code is not to threaten anybody; indeed, it is the opposite. It is a statement that we all have the right to be who we are and participate in the community as equals without having to contend with bullying, abuse, and worse. Posting a patch should not be a frightening experience. The code of conduct is a way of saying that, if we drive away contributors with a hostile environment, we don't just impose a personal cost on those contributors; we also make ourselves weaker as a community. Not even the kernel community, which still attracts at least 200 first-time contributors in each development cycle, is so rich that it can afford to lose talent in that way.
I can also understand a hostility to rules. I grew up in a small Wyoming town that had a sign reading — literally — "Population: 35". There is little need for formal rules in such a place; there are no strangers and everybody knows that they have to get along for many years into the future. Every such town has some eccentric and difficult personalities; people learn how to cope with them.
Cities do not run the same way, though; cities need rules. There are too many people coming from too many different backgrounds to just get along without some sort of framework. The kernel community is the free-software equivalent of a city at this point. It has grown hugely, and is divided into a number of neighborhoods, some of which are rather friendlier than others. Many bad experiences reported by developers are associated with crossing into a new neighborhood and unwittingly violating one of the norms in place there. There is a place for some city-wide rules on how we deal with each other so that such bad experiences become much rarer.
The early O'Reilly books about Linux, including mine, used a wild-west cover theme for a reason. But the kernel community is not the wild frontier that it was back then; it has become a far more disciplined, professional, and even welcoming place. In the end, my hope in signing off on the new code of conduct was that it would drive home the point that we are no longer the wild west, that it would encourage further progress toward civility and professionalism, and that it would help the kernel community continue to scale into the future. Those of us who work on the kernel tend to be in it for the long term; we want the community ten years from now to be better than what we have now, because we expect to still be a part of it.
Hazards
One need not look far to find predictions of doom in many flavors. Some, seemingly including the author of the Contributor Covenant, think that the kernel community may be a lost cause. There is, among some people, a sort of hostility toward the kernel community that brings a special toxicity of its own; there is little to be done about that. On the other hand, plenty of people — generally not those with a lot of contributions to their credit — claim that the adoption of a code of conduct will force developers out and be the beginning of the end of the kernel project. Now that the "social justice warriors" have taken over, the real developers will flee and it will all collapse into a heap.
That outcome seems unlikely, but there will indeed be challenges in adopting this code in such a large community. The nature of kernel development requires relatively high levels of pushback on code submissions, many of which will not be merged in anything resembling their current form. It attracts passionate developers with strong opinions who do not always understand the need to create a kernel that works for a huge and diverse set of users. It attracts developers who are trying to solve problems above their technical ability — perhaps at the direction of their employers. Kernel developers care deeply about the kernel itself and are unwilling to stand by idly if they see changes that threaten to make the kernel worse. And the kernel, perhaps more than any other project, is surrounded by a vocal community of onlookers with opinions on every move the community makes. These are all the makings of a relatively high level of conflict.
The community as a whole will have to find a way to implement the code that handles the community's inherent conflicts without collateral damage. It is thus unhelpful that it was adopted so abruptly, without a community-wide discussion; that will not sit well with some developers. It would have almost certainly been better to go more slowly, and involve at least the invitees to the Maintainers Summit; on the other hand, any public discussion would have had a high probability of being an unpleasant experience for everybody involved.
It seems almost certain that some people will try to test the boundaries of this code and its enforcement mechanisms in an attempt to show which effects it will (or will not) have. Neither the boundaries nor the consequences for crossing them are particularly well specified at this point. Some parts of the code do not entirely seem to fit; it's not clear that the kernel project can even have an "official social media account" or an "appointed representative", for example. Those who end up enforcing this code will have to handle complaints with great care, protecting members of the community while fending off those who might, as some seem to fear, attempt to use the code to further an unrelated agenda.
The written code was changed with a single patch; implementing that code may be a rather longer and harder process. Like the kernel's out-of-memory killer, the code of conduct might be seen as a collection of heuristics that never quite converge on what is really wanted.
That said, the chances are that what will emerge from the dust will be something that looks like the same kernel community that has achieved so much in the last 27 years. Only hopefully it will be a bit friendlier, a bit more diverse, and much stronger. This could indeed be one of those inflection points mentioned by Torvalds in his announcement; the kernel has always emerged from those episodes stronger than ever. It is hard to see any reasons why this time should be different.
Resource control at Facebook
Facebook runs a lot of programs and it tries to pack as many as it can onto each machine. That means running close to—and sometimes beyond—the resource limits on any given machine. How the system reacts when, for example, memory is exhausted, makes a big difference in Facebook getting its work done. Tejun Heo came to 2018 Open Source Summit North America to describe the resource control work that has been done by the team he works on at Facebook.
He began by presenting a graph (which can be seen in his slides [PDF]) that showed two different systems reacting to the same type of workload. In each case, the systems were handling typical Facebook requests at the same time a stress-producing process on the system was allocating memory at the rate of 10MB per second. One of those systems, running the standard Facebook kernel, would eventually encounter the out-of-memory (OOM) killer, which resulted in 40 minutes or so of downtime. The other system, running the code that Heo was there to talk about, had some relatively minor periods of handling fewer requests (from nearly 700 requests per second to 400-500) over that same period.
His team set out with a goal: "work-conserving full-OS resource isolation across applications". That's all a bit of a word salad, so he unpacked some of the perhaps unfamiliar terms. "Work conserving" means that machines stay busy; they do not go idle if there is work to do. "Full OS" refers to making the isolation transparent to the rest of the system. For example, virtual machines (VMs) can allocate various resources for isolating a process, but they lose integration with the underlying operating system. Memory that is devoted to a VM cannot be used by other VMs even when it is not being used. In a container environment, though, the underlying OS is integrated with the isolated workloads; keeping that integration is what "full OS" is intended to convey.
When the team set out on the project, it didn't seem that difficult, Heo said. There is a price that any machine in the Facebook cluster must pay—called the "fbtax"—which covers all of the infrastructure required to participate. That includes monitoring and other control programs that run on each node. If those auxiliary, but necessary, processes have memory leaks or other excessive resource consumption problems, they can take down a system for nearly an hour, as shown in the graph. The new "fbtax2" project set out to try to ensure systems could still respond to the real work that needs to be done even in the face of other misbehavior.
Challenges
As might be guessed, there were some challenges. For memory, the .high and .max values used by the memory controller are not work conserving. Because the machines at Facebook are fully utilized, creating artificial limits on memory just led to reduced performance and more fragile systems. In addition, the kernel's OOM killer does not protect the health of the workload and cannot detect if a workload is thrashing; it only detects whether or not the kernel can make progress. So if the workload is thrashing, but the kernel is not directly affected, the OOM killer will not run.
There is also "no good I/O controller to use", he said. The .max value for the I/O controller is, once again, not work conserving. I/O bandwidth is more oversubscribed than memory is for Facebook. The settings for I/O are based on bytes per second and I/O operations per second (IOPS) and it is difficult to find a middle ground when using those. What the Completely Fair Queuing (CFQ) I/O scheduler implements is not good for SSDs, or even hard disks, Heo said. The I/O controller for control groups version 2 (cgroups v2) does better than its predecessor, but it still does not account for all I/O that a container generates. In particular, filesystem metadata and swap I/O are not properly accounted for.
Priority inversions can occur when low-priority cgroups get throttled but generate I/O from filesystem operations or from swapping. For example, the ext4 filesystem only journals metadata, not the actual data, which can create a hard dependency across all writes in the system. The mmap_sem semaphore is another way that priority inversions can be caused. Even something as simple as ps needs to acquire mmap_sem in order to read /proc/PID/cmdline. A low-priority cgroup can hold that semaphore for a long time while doing I/O; if a higher priority cgroup wants to do something that requires the semaphore, it will be blocked.
Solutions
He then moved on to the solutions that the team has found. Instead of .max/.high values for controlling memory, .min and .low values were added. These provide more work conservation because there are no artificial limits; if a cgroup needs more memory and memory is available, it gets more memory. These settings are more forgiving as well; they can be a bit wrong and the system will function reasonably well. There is also work being done to provide proportional memory pressure to cgroups, he said.
It is difficult to tell whether a process is slow because of some inherent limitation in the program or whether it is waiting for some resource; the team realized it needed some visibility into that. Johannes Weiner has been working on the "pressure stall information" (PSI) metric for the last two years. It can help determine that "if I had more of this resource, I might have been able to run this percentage faster". It looks at memory, I/O, and CPU resource usage for the system and for individual cgroups to derive information that helps in "determining what's going on in the system".
PSI is used for allocating resources to cgroups, but is also used by oomd, which is the user-space OOM killer that has been developed by the team. Oomd looks at the PSI values to check the health of the system; if those values are too bad, it will remediate the problem before the kernel OOM killer gets involved.
The configuration of oomd can be workload-dependent; if the web server is being slowed down more than 10%, that is a big problem, Heo said. On the other hand, if Chef or YUM are running 40% slower, "we don't really care". Oomd can act in the first case and not in the second because it provides a way to specify context-specific actions. There are still some priority inversions that can occur and oomd can also help ameliorate those.
The I/O latency controller is something that Josef Bacik has been working on for the past year, Heo said. One of the key problems for an I/O controller is that there is no metric for how much I/O bandwidth is available. As an alternative to the existing I/O controller, you can try to guarantee the latency of I/O completions, which is what the latency controller does. One workload might need 30ms completions, while another needs 20ms; it requires some experimentation to determine those numbers.
The latency controller is "a lot more work conserving" and can be used for both SSDs and hard disks. It supports "do first, pay later" for metadata and swap I/O operations, which allows I/O operations caused by lower priority cgroups to run at a higher priority to help avoid priority inversions. Once those I/Os complete, they will be charged to the lower priority cgroup, which will reduce or delay its ability to do I/O in the future. Also, the latency controller works with the multiqueue block layer, "which is the only thing we use anymore", he said.
Facebook has switched to Btrfs as a way to fix the priority inversions. That is easy to do, he said, since the team has several core Btrfs developers (Bacik, Chris Mason, and Omar Sandoval) on it. Adding aborts for page-cache readahead solved most of the problems they were experiencing with mmap_sem contention, for example. The "do first, pay later" for swap I/O helped with priority inversion problems as well.
Results
Heo went on to describe the infrastructure at Facebook on the path toward looking at the results of his team's work. Facebook has hundreds of thousands of machines with Btrfs as their root filesystem at this point and a seven-digit number of Btrfs containers, Heo said. The change to Btrfs happened over just a few weeks' time once his team told the infrastructure team what was wanted; "it was breathtaking to see". He noted that SUSE developers also did a lot of the upstream Btrfs work.
Without swap enabled, "all anonymous memory becomes memlocked", he said. Enabling swap allows memory pressure to build up gradually, so Facebook enables swap for all cgroups except the ones used for the main workload.
He then went through the cgroups that are used on the systems and their settings; all of these cgroups are managed as systemd "slices". The "hostcritical" slice has processes like oomd, sshd, systemd-journald, and rsyslog in it. It has mem.min=352M and io.latency=50ms (for a hard-disk-based machine). The "workload" cgroup has mem.low=17G and the same latency setting, while the "system" slice (which contains everything else) has no memory setting and io.latency=75ms.
Oomd is then configured to kill a memory hog in the system cgroup under certain conditions. If the workload slice is under moderate memory pressure and the system slice is under high memory pressure, or the system slice suffers from prolonged high memory pressure, a memory hog will be killed. Similarly, an I/O hog in the system cgroup will be killed if the workload is under moderate I/O pressure and the system cgroup is under high I/O pressure. In addition, a swap hog will be killed if swap is running out.
He then showed a number of graphs that demonstrated better handling of various types of stress with fbtax2 versus the existing code. Adding memory leaks of 10MB/s, 50MB/s, and 100MB/s would effectively crash the existing systems, but the fbtax2 systems were able to continue on, sometimes with sizable dips in their performance, but still servicing requests.
When I/O stress was applied, the existing code would continue running, but take huge dips in performance, while the fbtax2 code took fewer and smaller dips. Those dips were still fairly large, though, which may be caused by trying to control too much, Heo said; that can make the system slice too unstable. It is important "to give the kernel some slack" so that it can balance things. All of the graphs can be seen in his slides.
So Facebook now has working full-OS resource isolation, Heo said; fbtax2 is rolling out throughout its infrastructure. Much of the code is already upstream; what remains should be there in the next two development cycles. There are still some things to do, including better batch workload handling and thread-pool management. While latency control is a really effective tool, it is not that great for multiple workloads with different needs. The team is looking at proportional I/O control to see if that can help in those scenarios. He closed by pointing attendees at the Facebook Linux projects page for more information on these and other projects.
[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to attend Open Source Summit in Vancouver.]
Compiling kernel UAPI headers with C++
Linux kernel developers tend to take a dim view of the C++ language; it is seen, rightly or wrongly, as combining the worst (from a system-programming point of view) features of higher-level languages with the worst aspects of C. So it takes a relatively brave person to dare to discuss that language on the kernel mailing lists. David Howells must certainly be one of those; he not only brought up the subject, but is working to make the kernel's user-space API (UAPI) header files compatible with C++.
If somebody were to ask why this goal is desirable, they would not be the first to do so. The question has not actually gotten a complete answer, but some possible motivations come to mind. The most obvious one is that some developers might actually want to write programs in C++ that need access to the kernel's API; there is no accounting for taste, after all. For most system calls, the details of the real kernel API (as opposed to the POSIX-like API exposed by the C library) tend to be hidden, but there are exceptions; the most widespread of those is almost certainly the ioctl() system call. There is a large set of structures used with ioctl(); their definitions are a big part of the kernel's UAPI. If a C++ compiler cannot compile those definitions, then those ioctl() calls cannot be invoked from C++.
C++ got its start as a sort of superset of C, so most C code could, in the early days, be compiled with a C++ compiler. The two languages have diverged over the years, though, making it easier to write C code that can no longer be compiled in that way. A look at the changes in Howells's patch set gives some good examples of where things can go wrong.
One common stumbling point is the use of identifiers that C++ has claimed as keywords. The drm_i810_dma structure, for example, contains a member called virtual, while struct virtio_net_ctrl_hdr has a member called class. Given the frequent use of members called private in the kernel, it is surprising that only one (struct keyctl_dh_params) seems to have made it into the UAPI. The C++ compiler gets rather grumpy when it encounters those keywords used as identifiers, so something needs to change if the UAPI headers are to be acceptable to it.
One developer suggested that, for example, C++ developers could be asked to compile their programs with a command-line option like -Dclass=_class to sidestep the problem. It turns out, though, that this approach, while it is indeed effective at getting the structure in question to compile under C++, would also rewrite every other occurrence of class in the program, including the program's own C++ class declarations. So a different approach is necessary. The solution that was chosen is to change the definition of the structure to look like this:
    struct virtio_net_ctrl_hdr {
    	union {
    #ifndef __cplusplus
    		__u8 class;
    #endif
    		__u8 _class;
    	};
    	__u8 cmd;
    };
The addition of the anonymous union allows the old (C++ keyword) name to be used in C code, while also allowing the addition of a new name that can be used under either language. Changing the structures in this way was not universally popular, but there do not appear to be a lot of good alternatives, given that breaking code written in C is not acceptable.
There are various other problems to be solved; for example, ending a structure with an array of unspecified length (a flexible array member) is not allowed in standard C++. So a definition like that of the rather tersely named struct bkey:
    struct bkey {
    	__u64 high;
    	__u64 low;
    	__u64 ptr[];
    };
must be changed by giving ptr an explicit dimension of zero. Other problems turn out to be simply bugs; some structures were defined using kernel-specific types that are not available in user space, for example. In at least one case, the structure involved should never have been exposed to user space to begin with and had never been used in communications with the kernel. Cleaning such things up makes sense even if one does not care about the larger goal of C++ compatibility.
The final step in the patch series is the addition of a script that will feed (almost) all of the UAPI header files to g++ as part of the kernel's build process. The output of this compilation is discarded, but it serves a useful purpose; any developer who breaks the ability to compile those files under C++ will get some immediate feedback to that effect. At least, they will if they have g++ installed; otherwise the test is skipped to avoid breaking the kernel build as a whole. Should this series be merged, kernel developers will not necessarily like C++ any more than they do now, but they will at least be more friendly toward C++ developers trying to use their exported API headers.
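The idea behind such a check can be sketched as a small shell function. This is a hypothetical illustration; the kernel's actual script differs in its details:

```shell
# Hypothetical sketch of a C++ compile check over exported UAPI headers.
check_uapi_cpp() {
    dir=$1
    # Skip the test entirely when g++ is not installed, so the
    # kernel build as a whole is not broken.
    if ! command -v g++ >/dev/null 2>&1; then
        echo "g++ not found, skipping C++ UAPI check"
        return 0
    fi
    status=0
    for hdr in "$dir"/*.h; do
        [ -e "$hdr" ] || continue
        # -fsyntax-only checks that the header compiles as C++;
        # any compiler output is discarded.
        if ! g++ -x c++ -fsyntax-only "$hdr" 2>/dev/null; then
            echo "C++ compile failed: $hdr"
            status=1
        fi
    done
    return $status
}
```

A developer who broke C++ compatibility of a header would then see a "C++ compile failed" line for that file during the build, which is the immediate feedback the patch series aims for.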
Fedora reawakens the hibernation debate
Behavioral changes can make desktop users grumpy; that is doubly true for changes that arrive without notice and possibly risk data loss. Such a situation recently arose in the Fedora 29 development branch in the form of a new "suspend-then-hibernate" feature. This feature will almost certainly be turned off before Fedora 29 reaches an official release, but the discussion and finger-pointing it inspired reveal some significant differences of opinion about how this kind of change should be managed.
Unexpected hibernation
Fedora tester Kamil Paral recently noticed a change in laptop power-management behavior in the bleeding-edge Fedora repository. A laptop that had been suspended in the evening would turn out to be hibernated the following morning. "Suspended", in this context, means that the system as a whole has been powered down, but power to memory remains active so that its contents are preserved. Such a system can be quickly restored to an operational state by powering up the rest of the hardware; all applications that were running before will still be there. "Hibernated", instead, means that the contents of main memory have been written to persistent storage so that the entire system can be powered down. Restoring the system requires reading the hibernation image back into memory from storage.
The new suspend-then-hibernate feature attempts to provide the best of both worlds by suspending the system initially, then automatically hibernating three hours after suspension. The motivation for this feature is clear: a suspended system continues to drain its battery (albeit slowly), while a hibernated system does not. A hibernated system left without power for weeks can be expected to resume properly; a suspended system will typically lose everything after a handful of days. This behavior has some clear advantages, but it also raises a number of concerns, especially as an unannounced change with no easy way for the user to change it.
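On the systemd side, the delay before hibernation is configurable. A sketch, assuming the HibernateDelaySec= option documented in systemd-sleep(5):

```ini
# /etc/systemd/sleep.conf
[Sleep]
# How long to stay suspended before hibernating when
# suspend-then-hibernate is requested; 180 minutes matches the
# three-hour behavior described above.
HibernateDelaySec=180min
```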
One reason why many users may dislike this change was expressed by Adam Williamson. A suspended system is almost immediately ready for work upon resume; a hibernated system, instead, may require a significant amount of time to read the image back into memory, followed by a period of relatively sluggish behavior as the kernel's page cache is repopulated. For users who are not concerned about the extra power used by suspend — because the system is left plugged in or because they know they will be resuming it before the battery gets low — this extra resume latency can be bothersome while providing no real advantages.
That, on its own, would be a good argument for giving users control over whether a system uses suspend-then-hibernate or simply suspends. But there is a more compelling reason as well: this feature depends on hibernation working correctly, and hibernation is not one of the best-supported features in the Linux kernel.
Hibernation, initially called "software suspend" or "suspend to disk" in kernel circles, was once a hot topic in kernel development — back in 2004. A lot of effort went into making it work, and heated battles were fought over which of two competing implementations should be in the mainline. That discussion slowed considerably around ten years ago, though, when modern suspend-to-RAM functionality became reliable. For many of us, hibernation was only something we used because suspending was not available; once the latter worked, many people never looked back.
As a result, work on hibernation dropped off, to the point that almost nothing has happened in that area in the last decade and few people test it. Some kernel features can survive years of neglect and still work well; hibernation is not one of them. It is a low-level feature that can be defeated by quirks in almost any part of the system, and there are a lot of quirky systems out there. Hibernation is the sort of feature that has to be made to work separately on every new machine. So, while hibernation works on many well-behaved machines, it is rather less reliable on many others. The only way to know for sure is to try it and see if the system resumes reliably over time. Even if hibernation generally works on a particular system, the use of other features, such as UEFI secure boot or encrypted disks, is likely to break it.
What now?
The Linux community is full of users who are happy to experiment with this kind of feature and know how to avoid losing data if something explodes. Silently hibernating all systems after three hours of suspension, though, will widely expand the group performing such experiments, bringing in users who are rather less adventurous and who are not prepared for things to fail when they open their laptops in the morning. It is, in other words, the kind of feature that seems likely to swell the ranks of former desktop Linux users.
Most of the participants in the discussion were fairly quick to reach the conclusion that suspend-then-hibernate is not an appropriate feature to silently add to every user's system with no option to turn it off. Some, such as Matthias Clasen, disagreed, though, saying: "If it works, why should we make it configurable? I don't think anybody wants their battery drained". But from there opinions diverged somewhat.
The addition of this feature to Fedora came about in two steps. The first was addition of a suspend-to-hibernate command (later renamed suspend-then-hibernate) to systemd. The GNOME developers noticed this feature, and added a patch to automatically use it, instead of ordinary suspend, when it is available. Since GNOME chose to start using this feature, and since GNOME provides the control interface that users see, it seems natural to think that GNOME's interface should provide control over whether suspend-then-hibernate is used. But, it appears, the GNOME developers disagree with that idea.
In particular, two GNOME developers, Bastien Nocera and Clasen, argued that if this particular systemd feature does not work reliably, it should be disabled in systemd rather than in GNOME. Neither seems to see any other reason why users might want to disable its use or have any control over it in general. Nocera seems to have a longer-term grudge against Fedora's handling of power management that isn't helping here. Matthew Miller, meanwhile, has argued for some sort of easy user control that "ideally does not involve the command line", which would suggest that something needs to be made available in the GNOME interface somewhere if the feature is not to be disabled altogether.
As of this writing, what will happen next is not entirely clear. The opposition from the GNOME developers makes it relatively unlikely that control over suspend-then-hibernate will be provided at that level. In the short term, that may not matter, as few people who have looked at the situation seem to think that this feature is ready to inflict on users. So the right answer for Fedora 29 may be to just turn it off entirely.
For the longer term, a few different approaches have been suggested. Lennart Poettering posted a list of changes that, he thinks, might make the feature safe to turn on. Chris Murphy, instead, has proposed that GNOME adopt an API allowing applications to save and restore their state; he pointed out that systems like Android do not use hibernation, and suggested that GNOME-based systems should take a similar approach. That, however, would require a lot of work and is clearly not a short-term solution. Williamson suggested looking at "hybrid sleep" (where the system writes the hibernation image immediately, then suspends) instead. Hybrid sleep would retain the quick resume of suspend, using the hibernation image only as a last resort should the battery run out. Which of these ideas, if any, will gain traction is unclear at this point.
One of Fedora's goals is to be the first with useful new features, but this particular one appears to not be ready even for many early adopters. Indeed, unless priorities change and some serious effort goes into developing and testing hibernation, it may never truly be safe for the user community as a whole. But there is clearly an appetite for better desktop power management, so it is good that developers are starting to explore this space again.
Brief items
Security
Security quotes of the week
So long as there are a) problems that b) intersect with the Internet, there will always be calls to break the Internet to solve them.
We suffered a crushing setback today, but it doesn't change the mission. To fight, and fight, and fight, to keep the Internet open and free and fair, to preserve it as a place where we can organise to fight the other fights that matter, about inequality and antitrust, race and gender, speech and democratic legitimacy.
If this vote had gone the other way, we'd still be fighting today. And tomorrow. And the day after.
Kernel development
Kernel release status
The current development kernel is 4.19-rc4, released on September 16. Linus said: "Nothing particularly odd stands out on the technical side in the kernel updates for last week - rc4 looks fairly average in size for this stage in the release cycle, and all the other statistics look pretty normal too."
The 4.19-rc4 announcement also carried the news that Linus will be taking a break from kernel development to work on personal issues; Greg Kroah-Hartman will see 4.19 through to a conclusion.
Stable updates: 4.18.8, 4.14.70, 4.9.127, and 4.4.156 were released on September 15.
Quote of the week
Distributions
The first /e/ beta is available
/e/ is Gaël Duval's project to build a privacy-oriented smartphone distribution; the first beta is now available with support for a number of devices. "At our current point of development, we have an '/e/' ROM in Beta stage: forked from LineageOS 14.1, it can be installed on several devices (read the list). The number of supported devices will grow over time, depending on more build servers and more contributors who can maintain or port to specific devices (contributors welcome). The ROM includes microG configured by default with Mozilla NLP so users can have geolocation functionality even when GPS signal is not available."
Distribution quote of the week
Development
HHVM ending support for PHP
The HHVM project has announced that the Hack language and PHP will truly be going separate ways. The HHVM v3.30 release, due by the end of the year, will be the last to support code written in PHP. "Ultimately, we recommend that projects either migrate entirely to the Hack language, or entirely to PHP7 and the PHP runtime." HHVM was first announced in 2011 as a compiler for the PHP language.
LLVM 7.0.0 released
Version 7.0.0 of the LLVM compiler suite is out. "It is the result of the community's work over the past six months, including: function multiversioning in Clang with the 'target' attribute for ELF-based x86/x86_64 targets, improved PCH support in clang-cl, preliminary DWARF v5 support, basic support for OpenMP 4.5 offloading to NVPTX, OpenCL C++ support, MSan, X-Ray and libFuzzer support for FreeBSD, early UBSan, X-Ray and libFuzzer support for OpenBSD, UBSan checks for implicit conversions, many long-tail compatibility issues fixed in lld which is now production ready for ELF, COFF and MinGW, new tools llvm-exegesis, llvm-mca and diagtool". The list of new features is long; see the overall release notes, the Clang release notes, the Clang tools release notes, and the LLD linker release notes for more information.
PostgreSQL adopts a code of conduct
The PostgreSQL community has, after an extended discussion, announced the adoption of a code of conduct "which is intended to ensure that PostgreSQL remains an open and enjoyable project for anyone to join and participate in".
Versity announces next generation open source archiving filesystem
Versity Software has announced that it has released ScoutFS under GPLv2. "ScoutFS is the first GPL archiving file system ever released, creating an inherently safer and more user friendly option for storing archival data where accessibility over very large time scales, and the removal of vendor specific risk is a key consideration."
Apache SpamAssassin 3.4.2 released
SpamAssassin 3.4.2 is out, the first release from this spam-filtering project since 3.4.1 came out in April 2015. It fixes some remotely exploitable security issues, so SpamAssassin users probably want to update in the near future. "The exploit has been seen in the wild but not believed to have been purposefully part of a Denial of Service attempt. We are concerned that there may be attempts to abuse the vulnerability in the future. Therefore, we strongly recommend all users of these versions upgrade to Apache SpamAssassin 3.4.2 as soon as possible."
Development quote of the week
This is a quotidian story that normally wouldn't be worth writing about except in light of the argument discussed in yesterday's post that young developers shouldn't waste their time learning ancient editors like Vim and Emacs. Just stick with that flashy new editor that's easy to learn and has a glitzy UI. Except then you find it isn't up to the job and you have to revert to one of those dusty old editors that folks were telling you weren't worth your time.
Miscellaneous
The (awesome) economics of open source (Opensource.com)
Over at Opensource.com, Red Hat's Michael Tiemann looks at open source from the perspective of the economic theories of Ronald Coase, who won the 1991 Nobel Prize for Economics. Those theories help explain why companies like Red Hat (and Cygnus Solutions, which Tiemann founded) have prospered even in the face of economic arguments about why they should not. "Successful open source software companies 'discover' markets where transaction costs far outweigh all other costs, outcompete the proprietary alternatives for all the good reasons that even the economic nay-sayers already concede (e.g., open source is simply a better development model to create and maintain higher-quality, more rapidly innovative software than the finite limits of proprietary software), and then—and this is the important bit—help clients achieve strategic objectives using open source as a platform for their own innovation. With open source, better/faster/cheaper by itself is available for the low, low price of zero dollars. As an open source company, we don't cry about that. Instead, we look at how open source might create a new inflection point that fundamentally changes the economics of existing markets or how it might create entirely new and more valuable markets."
Lights, Camera, Open Source: Hollywood Turns to Linux for New Code Sharing Initiative (Linux Journal)
Linux Journal covers the new Academy Software Foundation (ASWF), which is a project aimed at open-source collaboration in movie-making software that was started by the Academy of Motion Picture Arts and Sciences (AMPAS) and the Linux Foundation. "Still at the early stages, the ASWF has yet to develop any of its own projects, but there is interest in having them host a number of very popular projects, such as Industrial Light & Magic’s OpenEXR HDR image file format, color management solution OpenColorIO, and OpenVDB, which is used for working with those hard-to-handle objects like clouds and fluids. Along with promoting cooperation on the development of a more robust set of tools for the industry, one of the goals of the organization moving forward is to put out a shared licensing template that they hope will help smooth the tensions over licensing. It follows that with the growth of projects, navigating the politics over usage rights is bound to be a tricky task."
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
- DistroWatch Weekly (September 17)
- LineageOS Changelog 20 (September 17)
- This Week in Lubuntu Development (September 13)
- Lunar Linux Weekly News (September 14)
- Qubes OS development update (September 19)
- Reproducible Builds Weekly Report (September 18)
- Ubuntu Weekly Newsletter (September 17)
Development
- Emacs News (September 17)
- What's cooking in git.git (September 14)
- Git Rev News (September 19)
- LLVM Weekly (September 17)
- LXC/LXD/LXCFS Weekly Status (September 17)
- OCaml Weekly News (September 18)
- OpenStack Technical Committee Report (September 18)
- Perl Weekly (September 17)
- Weekly changes in and around Perl 6 (September 17)
- PostgreSQL Weekly News (September 17)
- Python Weekly Newsletter (September 13)
- Ruby Weekly News (September 13)
- This Week in Rust (September 18)
- Wikimedia Tech News (September 17)
Meeting minutes
- Fedora Council minutes (September 19)
- Fedora FESCO meeting minutes (September 17)
- GNOME Foundation Board Minutes (September 11)
- Python Governance Discussion #1 Notes (September 15)
- Python Governance Discussion #2 Notes (September 15)
Calls for Presentations
SCALE 17x Call For Papers Is Now Open
Southern California Linux Expo, SCALE 17x, will be held March 7-10 in Pasadena, CA. The call for papers is open until October 31.
LAC 2019: Call for Participation
Linux Audio Conference will take place March 23-26 in San Francisco, CA. "LAC 2019 invites submissions of papers, posters, and workshops addressing all areas of audio processing based on Linux and open source software. All submissions and presentations are in English. Submitted papers are expected to respect academic standards and must be complete (*a simple abstract is not enough*). Submissions can focus on technical, artistic, and/or scientific issues and can target developers and/or users." The submission deadline is December 7.
openSUSE Summit Nashville
An openSUSE Summit will be held April 5-6, 2019 in Nashville, TN. The call for papers closes January 15. "There is one openSUSE/open source track. There are three talks that can be submitted for the openSUSE Summit Nashville. One is a short talk with a 15-minute limit; the other talk that can be submitted is a long talk with a 45-minute limit. A 90-minute workshop is also an available option for people submitting a talk for the summit."
CFP Deadlines: September 20, 2018 to November 19, 2018
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| September 20 | September 21–September 23 | Orconf | Gdansk, Poland |
| September 30 | December 7 | PGDay Down Under 2018 | Melbourne, Australia |
| October 1 | October 5–October 6 | Open Source Days | Copenhagen, Denmark |
| October 5 | November 12 | Ceph Day Berlin | Berlin, Germany |
| October 7 | October 20–October 21 | OSS Víkend | Košice, Slovakia |
| October 22 | December 3–December 4 | DPDK Summit North America 2018 | San Jose, CA, USA |
| October 26 | January 25–January 27 | DevConf.cz | Brno, Czechia |
| October 31 | March 7–March 10 | SCALE 17x | Pasadena, CA, USA |
| October 31 | November 24–November 25 | DebUtsav Kochi 2018 | Kerala, India |
| November 3 | February 2–February 3 | FOSDEM | Brussels, Belgium |
| November 9 | March 23–March 24 | LibrePlanet | Cambridge, MA, USA |
| November 15 | February 25–February 26 | Vault Linux Storage and Filesystems Conference | Boston, MA, USA |
| November 16 | March 19–March 21 | PGConf APAC | Singapore, Singapore |
| November 18 | January 21 | LCA2019 System Administration Miniconf | Christchurch, New Zealand |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
RISC-V microconference accepted for LPC
There will be a RISC-V microconference at Linux Plumbers Conference, November 13-15 in Vancouver, Canada. "The primary objective of the RISC-V microconference at Plumbers is to initiate a community-wide discussion about the design problems/ideas for different Linux kernel features that will lead to a better, stable kernel for RISC-V."
Announcing Netdev 0x13 conference
The NetDev Society has announced that Netdev 0x13 will be held March 20-22, 2019 in Prague, Czech Republic. "At 0x12 we tried a new format where the first day constitutes developer working sessions via workshops and educational sessions via tutorials. We have received feedback which we are in the process of evaluating and we will be notifying you of any format changes. The window for suggestions is still open..."
Events: September 20, 2018 to November 19, 2018
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| September 17–September 21 | GNU Radio Conference 2018 | Henderson, NV, USA |
| September 17–September 21 | Linaro Connect | Vancouver, Canada |
| September 20–September 23 | EuroBSDcon 2018 | Bucharest, Romania |
| September 21–September 23 | Orconf | Gdansk, Poland |
| September 21–September 23 | Video Dev Days 2018 | Paris, France |
| September 24–September 27 | ApacheCon North America | Montreal, Canada |
| September 24–September 25 | Embedded Recipes 2018 | Paris, France |
| September 26 | Open Source Backup Conference 2018 | Cologne, Germany |
| September 26–September 28 | Kernel Recipes 2018 | Paris, France |
| September 26–September 28 | X.org Developer Conference | A Coruña, Spain |
| September 26–September 28 | LibreOffice Conference | Tirana, Albania |
| September 28–September 30 | All Systems Go! 2018 | Berlin, Germany |
| October 5–October 6 | Open Source Days | Copenhagen, Denmark |
| October 6–October 7 | Linux Days 2018 | Prague, Czech Republic |
| October 9 | PostgresConf South Africa 2018 | Johannesburg, South Africa |
| October 10–October 12 | PyCon ZA 2018 | Johannesburg, South Africa |
| October 12–October 13 | Ohio LinuxFest 2018 | Columbus, OH, USA |
| October 15–October 18 | Tcl/Tk Conference | Houston, TX, USA |
| October 18–October 19 | Osmocom Conference 2018 | Berlin, Germany |
| October 20–October 21 | OSS Víkend | Košice, Slovakia |
| October 21–October 23 | All Things Open | Raleigh, NC, USA |
| October 22–October 24 | Open Source Summit Europe | Edinburgh, UK |
| October 22–October 24 | Embedded Linux Conference Europe | Edinburgh, UK |
| October 23–October 26 | PostgreSQL Conference Europe 2018 | Lisbon, Portugal |
| October 24 | Proxtalks 2018 | Frankfurt am Main, Germany |
| October 24–October 26 | KVM Forum 2018 | Edinburgh, UK |
| October 24–October 26 | PyCon.DE 2018 | Karlsruhe, Germany |
| October 25–October 26 | Free Software and Open Source Symposium | Toronto, Canada |
| October 25 | Real-Time Summit | Edinburgh, UK |
| October 25 | Tracing Summit 2018 | Edinburgh, UK |
| October 25–October 26 | Linux Security Summit | Edinburgh, UK |
| October 25–October 26 | GStreamer Conference 2018 | Edinburgh, Scotland, UK |
| October 27–October 28 | Sonoj Convention | Cologne, Germany |
| October 29–October 31 | LISA18 | Nashville, TN, USA |
| October 29–October 30 | OpenWrt Summit | Lisbon, Portugal |
| November 2 | Software Freedom Law Center’s Fall Conference | New York, NY, USA |
| November 3–November 4 | OpenFest 2018 | Sofia, Bulgaria |
| November 5–November 7 | MesosCon 2018 | New York, NY, USA |
| November 5–November 8 | Open Source Monitoring Conference | Nuremberg, Germany |
| November 8 | Open Source Camp | Nürnberg, Germany |
| November 8–November 11 | QtCon Brazil 2018 | São Paulo, Brazil |
| November 9–November 10 | Seattle GNU/Linux | Seattle, WA, USA |
| November 9 | August Penguin Conference 2018 | Tel Aviv, Israel |
| November 9–November 13 | PyCon Canada 2018 | Toronto, Canada |
| November 12–November 15 | Linux Kernel Summit | Vancouver, Canada |
| November 12 | Ceph Day Berlin | Berlin, Germany |
| November 12 | RabbitMQ Summit 2018 | London, UK |
| November 13–November 15 | OpenStack Summit | Berlin, Germany |
| November 13–November 15 | Linux Plumbers Conference | Vancouver, BC, Canada |
| November 14–November 15 | KubeCon+CloudNativeCon China | Shanghai, China |
| November 15 | NLUUG (fall conference) | Utrecht, The Netherlands |
If your event does not appear here, please tell us about it.
Security updates
Alert summary September 13, 2018 to September 19, 2018
| Dist. | ID | Release | Package | Date |
|---|---|---|---|---|
| CentOS | CESA-2018:2693 | C6 | firefox | 2018-09-13 |
| CentOS | CESA-2018:2692 | C7 | firefox | 2018-09-13 |
| Debian | DSA-4297-1 | stable | chromium-browser | 2018-09-19 |
| Debian | DSA-4293-1 | stable | discount | 2018-09-14 |
| Debian | DLA-1504-1 | LTS | ghostscript | 2018-09-13 |
| Debian | DSA-4294-1 | stable | ghostscript | 2018-09-16 |
| Debian | DLA-1506-1 | LTS | intel-microcode | 2018-09-16 |
| Debian | DSA-4273-2 | stable | intel-microcode | 2018-09-16 |
| Debian | DLA-1507-1 | LTS | libapache2-mod-perl2 | 2018-09-18 |
| Debian | DSA-4296-1 | stable | mbedtls | 2018-09-16 |
| Debian | DLA-1500-2 | LTS | openssh | 2018-09-12 |
| Debian | DSA-4295-1 | stable | thunderbird | 2018-09-16 |
| Debian | DLA-1505-1 | LTS | zutils | 2018-09-15 |
| Fedora | FEDORA-2018-0d0a652afe | F27 | firefox | 2018-09-13 |
| Fedora | FEDORA-2018-f1b1ed38b3 | F27 | ghostscript | 2018-09-17 |
| Fedora | FEDORA-2018-c39ae23dc8 | F28 | ghostscript | 2018-09-17 |
| Fedora | FEDORA-2018-1a85045c79 | F27 | icu | 2018-09-17 |
| Fedora | FEDORA-2018-40decc4158 | F27 | java-1.8.0-openjdk-aarch32 | 2018-09-14 |
| Fedora | FEDORA-2018-c650019e9c | F28 | java-1.8.0-openjdk-aarch32 | 2018-09-14 |
| Fedora | FEDORA-2018-59e4747e0f | F28 | kernel-headers | 2018-09-14 |
| Fedora | FEDORA-2018-59e4747e0f | F28 | kernel-tools | 2018-09-14 |
| Fedora | FEDORA-2018-ec9bc84fda | F27 | libzypp | 2018-09-17 |
| Fedora | FEDORA-2018-6db422c637 | F28 | matrix-synapse | 2018-09-14 |
| Fedora | FEDORA-2018-4a21a8ca59 | F27 | nspr | 2018-09-18 |
| Fedora | FEDORA-2018-1a7a5c54c2 | F28 | nspr | 2018-09-14 |
| Fedora | FEDORA-2018-4a21a8ca59 | F27 | nss | 2018-09-18 |
| Fedora | FEDORA-2018-1a7a5c54c2 | F28 | nss | 2018-09-14 |
| Fedora | FEDORA-2018-4a21a8ca59 | F27 | nss-softokn | 2018-09-18 |
| Fedora | FEDORA-2018-1a7a5c54c2 | F28 | nss-softokn | 2018-09-14 |
| Fedora | FEDORA-2018-4a21a8ca59 | F27 | nss-util | 2018-09-18 |
| Fedora | FEDORA-2018-1a7a5c54c2 | F28 | nss-util | 2018-09-14 |
| Fedora | FEDORA-2018-dbeb27d783 | F27 | okular | 2018-09-18 |
| Fedora | FEDORA-2018-f56ded11c4 | F27 | openssh | 2018-09-13 |
| Fedora | FEDORA-2018-83116f8692 | F27 | pango | 2018-09-13 |
| Fedora | FEDORA-2018-8b1b2373b4 | F27 | zsh | 2018-09-14 |
| Fedora | FEDORA-2018-ec9bc84fda | F27 | zypper | 2018-09-17 |
| Fedora | FEDORA-2018-45183aab17 | F27 | zziplib | 2018-09-13 |
| Mageia | MGASA-2018-0372 | 6 | flash-player-plugin | 2018-09-13 |
| Mageia | MGASA-2018-0373 | 6 | kernel | 2018-09-14 |
| Mageia | MGASA-2018-0375 | 6 | kernel-linus | 2018-09-14 |
| Mageia | MGASA-2018-0374 | 6 | kernel-tmb | 2018-09-14 |
| Mageia | MGASA-2018-0371 | 6 | ntp | 2018-09-13 |
| openSUSE | openSUSE-SU-2018:2742-1 | 15.0 42.3 | GraphicsMagick | 2018-09-17 |
| openSUSE | openSUSE-SU-2018:2724-1 | | chromium | 2018-09-15 |
| openSUSE | openSUSE-SU-2018:2728-1 | 15.0 42.3 | chromium | 2018-09-15 |
| openSUSE | openSUSE-SU-2018:2731-1 | 15.0 | curl | 2018-09-15 |
| openSUSE | openSUSE-SU-2018:2736-1 | 42.3 | curl | 2018-09-15 |
| openSUSE | openSUSE-SU-2018:2734-1 | | ffmpeg-4 | 2018-09-15 |
| openSUSE | openSUSE-SU-2018:2723-1 | 15.0 42.3 | ffmpeg-4 | 2018-09-15 |
| openSUSE | openSUSE-SU-2018:2738-1 | 42.3 | kernel | 2018-09-16 |
| openSUSE | openSUSE-SU-2018:2739-1 | 15.0 | libzypp, zypper | 2018-09-17 |
| openSUSE | openSUSE-SU-2018:2727-1 | | okular | 2018-09-15 |
| openSUSE | openSUSE-SU-2018:2733-1 | 15.0 42.3 | okular | 2018-09-15 |
| openSUSE | openSUSE-SU-2018:2712-1 | 42.3 | python3 | 2018-09-14 |
| openSUSE | openSUSE-SU-2018:2730-1 | 15.0 | spice-gtk | 2018-09-15 |
| openSUSE | openSUSE-SU-2018:2740-1 | 42.3 | tomcat | 2018-09-17 |
| openSUSE | openSUSE-SU-2018:2741-1 | 15.0 | zsh | 2018-09-17 |
| Oracle | ELSA-2018-2692 | OL7 | firefox | 2018-09-12 |
| Oracle | ELSA-2018-2692 | OL7 | firefox | 2018-09-13 |
| Oracle | ELSA-2018-4219 | OL5 | kernel | 2018-09-18 |
| Oracle | ELSA-2018-4214 | OL5 | kernel | 2018-09-14 |
| Oracle | ELSA-2018-4215 | OL6 | kernel | 2018-09-13 |
| Oracle | ELSA-2018-4214 | OL6 | kernel | 2018-09-14 |
| Oracle | ELSA-2018-4216 | OL6 | kernel | 2018-09-14 |
| Oracle | ELSA-2018-4215 | OL7 | kernel | 2018-09-13 |
| Oracle | ELSA-2018-4216 | OL7 | kernel | 2018-09-14 |
| Red Hat | RHSA-2018:2721-01 | OSP13.0 | OpenStack Platform | 2018-09-18 |
| Red Hat | RHSA-2018:2707-01 | EL6 | flash-plugin | 2018-09-13 |
| Red Hat | RHSA-2018:2712-01 | RHS 5.6 5.7 | java-1.7.1-ibm | 2018-09-17 |
| Red Hat | RHSA-2018:2713-01 | RHS 5.8 | java-1.8.0-ibm | 2018-09-17 |
| Red Hat | RHSA-2018:2715-01 | OSP10.0 | openstack-neutron | 2018-09-17 |
| Red Hat | RHSA-2018:2710-01 | OSP13.0 | openstack-neutron | 2018-09-17 |
| Red Hat | RHSA-2018:2714-01 | OSP10.0 | openstack-nova | 2018-09-17 |
| Scientific Linux | OPENAFS-SA-2018-001:2:3 | SL6 SL7 | OpenAFS | 2018-09-12 |
| Scientific Linux | SLSA-2018:2693-1 | SL6 | firefox | 2018-09-12 |
| Scientific Linux | SLSA-2018:2692-1 | SL7 | firefox | 2018-09-12 |
| Slackware | SSA:2018-256-01 | | ghostscript | 2018-09-13 |
| Slackware | SSA:2018-257-01 | | php | 2018-09-14 |
| SUSE | SUSE-SU-2018:2717-1 | SLE11 | curl | 2018-09-14 |
| SUSE | SUSE-SU-2018:2715-1 | SLE12 | curl | 2018-09-14 |
| SUSE | SUSE-SU-2018:2714-1 | SLE15 | curl | 2018-09-14 |
| SUSE | SUSE-SU-2018:2716-1 | OS7 SLE12 | libzypp, zypper | 2018-09-14 |
| SUSE | SUSE-SU-2018:2719-1 | SLE11 | openssh-openssl1 | 2018-09-14 |
| SUSE | SUSE-SU-2018:2704-1 | | podman | 2018-09-13 |
| SUSE | SUSE-SU-2018:2709-1 | SLE15 | spice-gtk | 2018-09-14 |
| SUSE | SUSE-SU-2018:2699-1 | SLE12 | tomcat | 2018-09-13 |
| Ubuntu | USN-3722-6 | 12.04 | clamav | 2018-09-18 |
| Ubuntu | USN-3722-5 | 14.04 16.04 18.04 | clamav | 2018-09-18 |
| Ubuntu | USN-3765-2 | 12.04 | curl | 2018-09-17 |
| Ubuntu | USN-3765-1 | 14.04 16.04 18.04 | curl | 2018-09-17 |
| Ubuntu | USN-3761-2 | 14.04 16.04 18.04 | firefox | 2018-09-13 |
| Ubuntu | USN-3761-3 | 14.04 16.04 18.04 | firefox | 2018-09-17 |
| Ubuntu | USN-3768-1 | 14.04 16.04 18.04 | ghostscript | 2018-09-19 |
| Ubuntu | USN-3767-2 | 12.04 | glib2.0 | 2018-09-19 |
| Ubuntu | USN-3767-1 | 14.04 16.04 18.04 | glib2.0 | 2018-09-19 |
| Ubuntu | USN-3747-2 | 18.04 | openjdk-lts | 2018-09-12 |
| Ubuntu | USN-3766-1 | 14.04 16.04 18.04 | php5, php7.0, php7.2 | 2018-09-18 |
| Ubuntu | USN-3766-2 | 12.04 | php5 | 2018-09-19 |
Kernel patches of interest
Kernel releases
Architecture-specific
Core kernel
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Page editor: Rebecca Sobol
