LWN.net Weekly Edition for July 9, 2020
Welcome to the LWN.net Weekly Edition for July 9, 2020
This edition contains the following feature content:
- Linux Mint drops Ubuntu Snap packages: an Ubuntu derivative takes a stand against Snaps.
- Btrfs at Facebook: how this next-generation filesystem is being used at massive scale and where the remaining problems are.
- Sleepable BPF programs: (some) BPF programs can lose their insomnia.
- Netflix releases open-source crisis-management tool: a look at Netflix's Dispatch system.
- Home Assistant improves performance in 0.112 release: what's in the latest Home Assistant version.
- Hugo: a static-site generator: a fast and simple way to populate web sites with content.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Linux Mint drops Ubuntu Snap packages
The Linux Mint project has made good on previous threats to actively prevent Ubuntu Snap packages from being installed through the APT package-management system without the user's consent. This move is the result of "major worries" from Linux Mint about Snap's impact on user choice and software freedom. Ubuntu's parent company, Canonical, seems open to finding a solution to satisfy the popular distribution's concerns — but it too has interests to consider.
Backstory
The Linux Mint distribution is based on Debian and Ubuntu, providing over 30,000 packages from these projects. These packages are provided using the well-known APT packaging system used by both upstream distributions. Ubuntu, however, in 2014 started distributing software in parallel to APT using a technology called Snap.
Snap is a self-contained package-deployment system designed to make it easier to manage dependencies of an application in a Linux environment. Developed by Canonical, Snap is designed so that its packages contain all of the specific dependencies a software package needs to run, bundled into a single filesystem image. This allows a software package to run on a system that has otherwise incompatible versions of needed libraries, or even to have two different versions of a single software package with different dependencies easily coexist on a single machine. Essentially, it allows one package to be created per architecture that can run on any common Linux distribution.
The technology solves important package-management problems for Canonical and Ubuntu. It also has a strategic business value, as it allows Canonical-managed software to be installed on a competing distribution. The technology problem Canonical wants to solve is to simplify support for software packages, such as Chromium, across the multiple versions of Ubuntu. Relying strictly on APT requires independent packages to be maintained for each Ubuntu version, since various Ubuntu releases ship with different and potentially incompatible libraries. This represents a large workload that Canonical would rather not deal with, and to which Snap provides an elegant technical solution.
From a business perspective, widespread adoption of Snap as a universal package-distribution technology would put Canonical in a strong position to control Linux software distribution. This fact is not lost on Canonical's competition — Red Hat supports the similar Flatpak technology. Unlike Snap, however, the Flatpak project aims to be an independent community and a "true upstream open source project, dedicated to providing technology and services that can be used by all, with no vendor lock-in."
The problems with Linux Mint came to a head when Ubuntu moved Chromium to Snap distribution in Ubuntu 19.10. On the surface, that isn't a problem in and of itself — the Linux Mint project can always start providing its own Chromium APT packages. The problem was the decision to change the Ubuntu chromium-browser APT package itself upstream in Ubuntu. Previously, that package would simply install Chromium directly. With the change, it would instead install the Snap package-management tools first and then install the Snap equivalent of the Chromium package — without making it clear to the user what was happening.
This behavior isn't limited to Chromium, either; Canonical is using this approach on other packages as well, such as gnome-software. This decision to use APT to install a secondary, commercially controlled package-management system, which then installs the software the user wanted, is the center of the controversy with Linux Mint. As was reported on the Linux Mint blog when the Chromium change was made:
A self-installing Snap Store which overwrites part of our APT package base is a complete NO NO. It's something we have to stop and it could mean the end of Chromium updates and access to the snap store in Linux Mint.
What's at stake and recent events
The recent decision by the Linux Mint project isn't about Chromium or any single package, but rather about the entire approach Canonical is taking to Snap and APT. Snap packages are effectively black-boxes; they cannot be reproduced independently as the packaging data is controlled by the package maker alone. Further, the Snap package manager is locked into Canonical's library (called the Snap Store) so independent third-party Snap repositories aren't currently an option, either. While Snap packages can be downloaded and installed from other sources manually, doing so requires passing along a rather ominous --dangerous flag.
According to Canonical Engineering Manager Ken VanDine, the decision to replace APT packages with Snap is driven both by efficiency and by a desire to apply pressure in ways beneficial to Canonical. Regarding moving the gnome-software APT package to Snap, VanDine said:
By shipping such a key application as a snap it will continue to keep pressure on to ensure we keep improving the experience while also reducing our maintenance burden for the LTS and future releases of Ubuntu.
So while the Linux Mint project took action specifically when Ubuntu's decision impacted the popular chromium-browser package, it seems this was more of a tipping point for the project than an isolated concern. According to the project, these moves from Canonical "could reduce access to free (as in beer) software and free (as in freedom) software." It wasn't a sudden decision, either — a year ago the Linux Mint project wanted to find a reasonable solution to the problem by working with Canonical; however, nothing seems to have come of those conversations. In fact, it is unclear if any conversations happened at all.
What is known is that, when Ubuntu 20.04 was released a year later, the APT package that installed Chromium still installed the Snap version without the consent (or necessarily even knowledge) of the user. In response, the Linux Mint project made good on its previous threat; starting with Linux Mint 20, "Chromium won't be an empty package which installs snapd behind your back. It will be an empty package which tells you why it's empty and tells you where to look to get Chromium yourself." In addition, Linux Mint 20's APT package manager "will forbid snapd from getting installed." The project doesn't appear to be outright forbidding users from using Snap if they want to, but doing so would require explicit action from the user as "by default APT won't allow repository packages [to install Snap] on your behalf." Presumably this applies to any package that Ubuntu has migrated to Snap upstream — including both chromium-browser and gnome-software.
It appears the decision had an effect, and prompted Canonical to at least discuss a change in course according to recent reporting by ZDNet. A representative from Canonical, quoted by ZDNet, said:
We would welcome Linux Mint to engage with us and our community to discuss such topics, as we do with other distributions, and work together going forward.
Canonical's community manager for Ubuntu engineering services, Alan Pope, provided a justification specifically for the migration of Chromium to Snap. The justification, however, didn't cite the idea of keeping "pressure" expressed by VanDine regarding other packages like gnome-software:
Maintaining a single release of Chromium is a significant time investment for the Ubuntu Desktop Team working with the Ubuntu Security team to deliver updates to each stable release. As the teams support numerous stable releases of Ubuntu, the amount of work is compounded. Comparing this workload to other Linux distributions which have a single supported rolling release misses the nuance of supporting multiple Long Term Support (LTS) and non-LTS releases. [...]
A Snap needs to be built only once per architecture, and will run on all systems that support Snapd. This covers all supported Ubuntu releases including 14.04 with Extended Security Maintenance (ESM), as well as other distributions like Debian, Fedora, Mint, and Manjaro.
It will be interesting to see what, if anything, comes out of this to facilitate a resolution between the parties. In its blog posts, Linux Mint has mentioned various changes to Snap that might help get to a solution, such as allowing third parties to run independent Snap package servers. As the project wrote, "snap could work both as a client and a file format, if it didn't lock us into a single store." Doing so, however, would likely run contrary to Canonical's strategic goals.
What's next
It's important to realize that, fundamentally, a technology like Snap would be useful for those who create packages, including proprietary software vendors. The problem in this case doesn't seem to be the technology, but Canonical's sole control over it — and the way it's being introduced through APT. As the Linux Mint project stated in its 2019 post:
[Snap] was supposed to make it possible to run newer apps on top of older libraries and to let 3rd party editors publish their software easily toward multiple distributions [...] What we didn't want it to be was for Canonical to control the distribution of software between distributions and 3rd party editors, to prevent direct distribution from editors, to make it so software worked better in Ubuntu than anywhere else and to make its store a requirement.
Ultimately there are multiple interests at play here leading to uncertainty on how things will work out. On one hand, you have companies like Canonical that see the strategic value of controlling the technology behind application distribution, and is using it to apply pressure where it wants to see change. On the other hand, you have distributions like Linux Mint that are prepared to actively block the technology while it is controlled by a single commercial interest. Hopefully this latest move by Linux Mint will prompt a change toward more openness, as a packaging system like Snap implemented in an open way would be a win for the entire Linux community.
Btrfs at Facebook
The Btrfs filesystem has had a long and sometimes turbulent history; LWN first wrote about it in 2007. It offers features not found in any other mainline Linux filesystem, but reliability and performance problems have prevented its widespread adoption. There is at least one company that is using Btrfs on a massive scale, though: Facebook. At the 2020 Open Source Summit North America virtual event, Btrfs developer Josef Bacik described why and how Facebook has invested deeply in Btrfs and where the remaining challenges are.
Every Facebook service, Bacik began, runs within a container; among other things, that makes it easy to migrate services between machines (or even between data centers). Facebook has a huge number of machines, so it is impossible to manage them in any sort of unique way; the company wants all of these machines to be as consistent as possible. It should be possible to move any service to any machine at any time. The company will, on occasion, bring down entire data centers to test how well its disaster-recovery mechanisms work.
Faster testing and more
All of these containerized services are using Btrfs for their root filesystem. The initial use case within Facebook, though, was for the build servers. The company has a lot of code, implementing the web pages, mobile apps, test suites, and the infrastructure to support all of that.
The Facebook workflow dictates that nobody commits code directly to a repository. Instead, there is a whole testing routine that is run on each change first. The build system will clone the company repository, apply the patch, build the system, and run the tests; once that is done, the whole thing is cleaned up in preparation for the next patch to test. That cleanup phase, as it turns out, is relatively slow, averaging two or three minutes to delete large directory trees full of code. Some tests can take ten minutes to clean up, during which that machine is unavailable to run the next test.
The end result was that developers were finding that it would take hours to get individual changes through the process. So the infrastructure team decided to try using Btrfs. Rather than creating a clone of the repository, the test system just makes a snapshot, which is a nearly instantaneous operation. After the tests are run, the snapshot is deleted, which also appears to be instantaneous from a user-space point of view. There is, of course, a worker thread actually cleaning up the snapshot in the background, but cleaning up a snapshot is a lot faster than removing directories from an ordinary filesystem. This change saved a lot of time in the build system and reduced the capacity requirement — the number of machines needed to do builds and testing — by one-third.
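Snapshot creation and deletion are normally driven with the btrfs(8) command, but underneath each is a single ioctl() call on the filesystem. The following rough, hypothetical sketch (not Facebook's code) uses the older v1 ioctl interface from <linux/btrfs.h> to snapshot a subvolume and then delete the snapshot again; the paths and names are invented and error handling is minimal:

/*
 * Hypothetical sketch of what "btrfs subvolume snapshot" and
 * "btrfs subvolume delete" do underneath, using the v1 ioctls.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

/* Snapshot the subvolume at 'src' as 'name' inside the directory 'dstdir'. */
static int snapshot_create(const char *src, const char *dstdir, const char *name)
{
    struct btrfs_ioctl_vol_args args = { 0 };
    int srcfd = open(src, O_RDONLY);
    int dstfd = open(dstdir, O_RDONLY);
    int ret = -1;

    if (srcfd >= 0 && dstfd >= 0) {
        args.fd = srcfd;        /* the subvolume being snapshotted */
        strncpy(args.name, name, sizeof(args.name) - 1);
        /* Only metadata is written; data blocks are shared with the source. */
        ret = ioctl(dstfd, BTRFS_IOC_SNAP_CREATE, &args);
    }
    if (srcfd >= 0)
        close(srcfd);
    if (dstfd >= 0)
        close(dstfd);
    return ret;
}

/* Delete the snapshot 'name' found under 'parentdir'. */
static int snapshot_delete(const char *parentdir, const char *name)
{
    struct btrfs_ioctl_vol_args args = { 0 };
    int dirfd = open(parentdir, O_RDONLY);
    int ret = -1;

    if (dirfd >= 0) {
        strncpy(args.name, name, sizeof(args.name) - 1);
        ret = ioctl(dirfd, BTRFS_IOC_SNAP_DESTROY, &args);
        close(dirfd);
    }
    return ret;
}

int main(void)
{
    if (snapshot_create("/srv/repo", "/srv", "repo-test") == 0)
        printf("snapshot created\n");
    if (snapshot_delete("/srv", "repo-test") == 0)
        printf("snapshot deleted\n");
    return 0;
}

The snapshot shares all of its data blocks with the source subvolume, which is why creating it is nearly instantaneous; BTRFS_IOC_SNAP_DESTROY likewise returns quickly and leaves the real cleanup to the background thread mentioned above.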
After that experiment worked out so well, the infrastructure team switched fully to Btrfs; the entire build system now uses it. It turns out that there was another strong reason to switch to Btrfs: its support for on-disk compression. The point here is not just saving storage space, but also extending its lifetime. Facebook spends a lot of money on flash storage — evidently inexpensive, low-quality flash storage at that. The company would like this storage to last as long as possible, which implies minimizing the number of write cycles performed. Source code tends to compress well, so compression reduces the number of blocks written considerably, slowing the process of wearing out the storage devices.
This work, Bacik said, was done by the infrastructure team without any sort of encouragement by Facebook's Btrfs developers; indeed, they didn't even know that it was happening. He was surprised by how well it worked in the end.
Then, there is the case of virtual machines for developers. Facebook has an "unreasonable amount" of engineering staff, Bacik said; each developer has a virtual machine for their work. These machines contain the entire source code for the web site; it is a struggle to get the whole thing to fit into the 800GB allotted for each and still leave room for some actual work to be done. Once again, compression helps to save the day, and works well, though he did admit that there have been some "ENOSPC issues" (problems that result when the filesystem runs out of available space).
Another big part of the Facebook code base is the container system itself, known internally as "tupperware". Containers in this system use Btrfs for the root filesystem, a choice that enables a number of interesting things. The send and receive mechanism can be used both to build the base image and to enable fast, reliable upgrades. When a task is deployed to a container, a snapshot is made of the base image (running a version of CentOS) and the specific container is loaded on top. When that service is done, cleanup is just a matter of deleting the working subvolume and returning to the base snapshot.
Additionally, Btrfs compression once again reduces write I/O, helping Facebook to make the best use of cheap flash drives. Btrfs is also the only Linux filesystem that works with the io.latency and io.cost (formerly io.weight) block I/O controllers. These controllers don't work with ext4 at all, he said, and there are some problems still with XFS; he has not been able to invest the effort to make things work better on those filesystems.
An in-progress project concerns the WhatsApp service. WhatsApp messages are normally stored on the users' devices, but they must be kept centrally when the user is offline. Given the number of users, that's a fair amount of storage. Facebook is using XFS for this task, but has run into unspecified "weird scalability issues". Btrfs compression can help here as well, and snapshots will be useful for cleaning things up.
But Btrfs, too, has run into scalability problems with this workload. Messages are tiny, compressed files; they are small enough that the message text is usually stored with the file's metadata rather than in a separate data extent. That leads to filesystems with hundreds of gigabytes of metadata and high levels of fragmentation. These problems have been addressed, Bacik said, and it's "relatively smooth sailing" now. That said, there are still some issues left to be dealt with, and WhatsApp may not make the switch to Btrfs in the end.
The good, the bad, and the unresolved
Bacik concluded with a summary of what has worked well and what has not. He told the story of tracking down a bug where Btrfs kept reporting checksum errors when working with a specific RAID controller. Experience has led him to assume that such things are Btrfs bugs, but this time it turned out that the RAID controller was writing some random data to the middle of the disk on every reboot. This problem had been happening for years, silently corrupting filesystems; Btrfs flagged it almost immediately. That is when he started to think that, perhaps, it's time to start trusting Btrfs a bit more.
Another unexpected benefit was the help Btrfs has provided in tracking down microarchitectural processor bugs. Btrfs tends to stress the system's CPU more than other filesystems; features like checksumming, compression, and work offloaded to threads tend to keep things busy. Facebook, which builds its own hardware, has run into a few CPU problems that have been exposed by Btrfs; that made it easy to create reproducers to send to CPU vendors in order to get things fixed.
In general, he said, he has spent a lot of time trying to track down systemic problems in the filesystem. Being a filesystem developer, he is naturally conservative; he worries that "the world will burn down" and it will all be his fault. In almost every case, these problems have turned out to have their origin in the hardware or other parts of the system. Hardware, he said, is worse than Btrfs when it comes to quality.
What he was most happy with, though, was perhaps the fact that most Btrfs use cases in the company have been developed naturally by other groups. He has never gone out of his way to tell other teams that they need to use Btrfs, but they have chosen it for its merits anyway.
All is not perfect, however. At the top of his list were bugs that manifest when Btrfs runs out of space, a problem that has plagued Btrfs since the beginning; "it's not Btrfs if there isn't an ENOSPC issue", he said, adding that he has spent most of his career chasing these problems. There are still a few bad cases in need of fixing, but these are rare occurrences at this point. He is relatively happy, finally, with the state of ENOSPC handling.
There have been some scalability problems that have come up, primarily with the WhatsApp workload as described above. These bugs have highlighted some interesting corner cases, he said, but haven't been really difficult to work out. There were also a few "weird, one-off issues" that mostly affected block subsystem maintainer Jens Axboe; "we like Jens, so we fixed it".
At the top of the list of problems still needing resolution are quota groups; their overhead is too high, he said, and things just break at times. He plans to solve these problems within the next year. There are users who would like to see encryption at the subvolume level; that would allow the safe stacking of services with differing privacy requirements. Omar Sandoval is working on that feature now.
Then there is the issue of RAID support, another longstanding problem area for Btrfs. Basic striping and mirroring work well, and have done so for years, he said. RAID 5 and 6 have "lots of edge cases", though, and have been famously unreliable. These problems, too, are on his list, but solving them will require "lots of work" over the next year or two.
Sleepable BPF programs
When support for classic BPF was added to the kernel many years ago, there was no question of whether BPF programs could block in their execution. Their functionality was limited to examining a packet's contents and deciding whether the packet should be forwarded or not; there was nothing such a program could do to block. Since then, BPF has changed a lot, but the assumption that BPF programs cannot sleep has been built deeply into the BPF machinery. More recently, classic BPF has been pushed aside by the extended BPF dialect; the wider applicability of extended BPF is now forcing a rethink of some basic assumptions.
BPF programs can now do many things that were not possible for classic BPF programs, including calling helper functions in the kernel, accessing data structures ("maps") shared with the kernel or user space, and synchronizing with spinlocks. The core assumption that BPF programs are atomic has not changed, though. Once the kernel jumps into a BPF program, that program must complete without doing anything that might put the thread it is running in to sleep. BPF programs themselves have no way of invoking any sort of blocking action, and the helper functions exported to BPF programs by the kernel are required to be atomic.
As BPF gains functionality and grows toward some sort of sentient singularity moment, though, the inability to block is increasingly getting in the way. There has, thus, been interest in making BPF programs sleepable for some time now, and that interest has recently expressed itself as code in the form of this patch set from Alexei Starovoitov.
The patch set adds a new flag, BPF_F_SLEEPABLE, that can be used when loading BPF programs into the kernel; it marks programs that may sleep during their execution. That, in turn, informs the BPF verifier about the nature of the program, and brings a number of new restrictions into effect. Most of these restrictions are the result of the simple fact that the BPF subsystem was never designed with sleepable programs in mind. Parts of that subsystem have been updated to handle sleeping programs correctly, but many other parts have not. That is likely to change over time but, until then, the functionality implemented by any part of the BPF subsystem that still expects atomicity is off-limits to sleepable programs.
For example, of the many types of BPF programs supported by the kernel, only two are allowed to block: those run from the Linux security module subsystem and tracing programs (BPF_PROG_TYPE_LSM and BPF_PROG_TYPE_TRACING). Even then, tracing programs can only sleep if they are attached to security hooks or are attached to functions that have been set up for error injection. Other types of programs are likely to be added in the future, but the coverage will never be universal. Many types of BPF programs are invoked from within contexts that, themselves, do not allow sleeping — deep within the network packet-processing code or attached to atomic functions, for example — so making those programs sleepable is just not going to happen.
Sleepable BPF programs are also limited in the types of maps they can access; anything but hash or array maps is out of bounds. There is a further restriction with hash maps: they must be preallocated so that elements need not be allocated while a sleepable program is running. Most of the internal housekeeping within maps is currently done using read-copy-update (RCU), a protection scheme that breaks down if a BPF program is blocked while holding a reference to a map entry. The hash and array maps have been extended to use RCU tasks, which can handle sleeping code, for some operations. Once again, it seems likely that support for the combination of BPF maps and sleepable programs will grow over time.
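As a purely illustrative sketch (not taken from the patch set), a hash map meant to be shared with a sleepable program would be declared in the usual way using libbpf's BTF-defined map syntax; the key point is simply that the BPF_F_NO_PREALLOC flag is not set, so all entries are preallocated when the map is created:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* Hypothetical preallocated hash map.  Hash maps are preallocated by
 * default; passing BPF_F_NO_PREALLOC in map_flags would make this map
 * unusable from a sleepable program. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);
    __type(value, __u64);
} scratch SEC(".maps");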
Sleepable BPF programs, thus, run with a number of restrictions that do not apply to the atomic variety. With Starovoitov's patch set, there is exactly one thing they can do that's unavailable to atomic BPF programs; it takes the form of a new helper function:
long bpf_copy_from_user(void *dest, u32 size, const void *user_ptr);
This helper is a wrapper around the kernel's copy_from_user() function, which copies data from user space into the kernel. User-space data may not be resident in RAM when a call like this is made, so callers must always be prepared to block while one or more pages are brought in from secondary storage. This has prevented BPF programs from reading user-space data directly; now sleepable BPF programs will be able to do so. One potential use for this ability would be to allow security-related BPF programs to follow user-space pointers and get a better idea of what user space is actually up to.
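To make that concrete, here is a hypothetical, untested sketch of a sleepable LSM program using the new helper; it is not taken from the patch set. It assumes a BTF-enabled kernel whose headers declare bpf_copy_from_user(), along with clang and a libbpf recent enough to load programs placed in "lsm.s/" sections with the BPF_F_SLEEPABLE flag set. The program logs the first bytes of the command line of every newly executed program by reading them directly from user-space memory:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

char LICENSE[] SEC("license") = "GPL";

/* The ".s" suffix in the section name marks the program as sleepable. */
SEC("lsm.s/bprm_committed_creds")
int BPF_PROG(log_exec_args, struct linux_binprm *bprm)
{
    struct task_struct *task = (struct task_struct *)bpf_get_current_task();
    unsigned long arg_start;
    char arg0[64] = {};

    /* arg_start is a user-space address inside the new program's stack. */
    arg_start = BPF_CORE_READ(task, mm, arg_start);

    /* This copy may fault pages in from storage; that is why the helper
     * is available only to sleepable programs. */
    bpf_copy_from_user(arg0, sizeof(arg0), (const void *)arg_start);
    bpf_printk("exec: %s", arg0);
    return 0;
}

A non-sleepable program attached to the same hook would have to fall back to bpf_probe_read_user(), which simply fails if the target page is not resident in memory.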
This patch set is in its fifth revision as of this writing and seems likely to find its way into the mainline during the 5.9 merge window. After that, the restrictions on what sleepable BPF programs can do are likely to start getting in the way of users, so it would not be surprising to see work loosening those restrictions showing up relatively quickly. For some use cases, at least, BPF insomnia should soon be a thing of the past.
Netflix releases open-source crisis-management tool
Earlier this year, Netflix developed and released a new Apache-licensed project named Dispatch. It is designed to coordinate the response to and the resolution of security-related incidents, but the project aims for more than just that. Rather, it hopes to be valuable for any type of one-off incident that needs coordination across an organization, such as a service outage.
The idea behind Dispatch is to serve as a central clearinghouse for incidents, specifically focusing on four areas: resource management (Netflix's phrase describing both the data about the incident and the data about the response), individual engagement, incident life-cycle, and what Netflix calls "incident learning" — the ability to build off past incidents to inform future ones. Under the hood, the project runs on Python 3.7+ with FastAPI for the server, a PostgreSQL database, and VueJS for the web-based UI.
End users configure and interact with Dispatch primarily through a web interface, and the provided user documentation explains day-to-day use of the tool. For lower-level administrative and development needs, multiple tools are accessible via the command-line.
Fundamentally, Dispatch is a dispatcher and coordinator of data across many pre-existing tools within an organization. Rather than taking the approach of creating a "batteries included" solution, Dispatch takes advantage of the existing APIs provided by common tools used in an organization, such as Google's office suite (GSuite), Jira, and Slack. Unfortunately, the project does not currently support any open-source alternatives to these tools out of the box.
Workflows defined in Dispatch, triggered when an incident is reported, can automatically do things such as create Slack rooms for relevant personnel and GSuite documents so that participants can begin coordinated work toward a solution. As the incident progresses, the work being done is all available within Dispatch so that it can be referenced.
For example, consider a security incident requiring coordination across multiple departments. There are developers, operations staff, business stakeholders, and others who need to be involved. Dispatch rules can be created to do things like:
- Create a Slack channel for the incident, pulling in the appropriate development and operations personnel based on the nature of the incident
- Create a Jira ticket to track the issue
- Create GSuite documents and spreadsheets for supplementary information
- Create a calendar invite for a Zoom call to discuss the resolution of the problem
When a participant is brought into an incident by Dispatch, they are provided with the context and information they will need to contribute; the Dispatch user documentation shows a representative example of the message a participant receives when joining an incident's Slack channel.
These resources are then all tracked together by Dispatch as a single entity — not only enabling coordination, but also automatically notifying the relevant participants as the situation progresses. Once an incident has been resolved, this collection of information is retained for post-mortem analysis or to reference in future incidents. This approach enables various people who normally may not work directly together at all to focus on a shared problem by providing an overarching coordinated context.
Dispatch plugins are required to integrate with external services. The project ships with plugins for GSuite, Jira, PagerDuty, Slack, and Zoom. These are nice, but as mentioned earlier, it would be helpful to also have integration with some open-source tools. A cursory search didn't turn up any plugins written by third parties for those tools, yet. That being said, if all that is standing in the way is a missing plugin, the project provides documentation on building a new plugin to address that need.
The concept of using existing tools and coordinating them in this fashion has its benefits: one of the biggest is that people within the organization are enabled to participate in important one-off incidents more effectively. When trying to move quickly to address a one-off security incident, not having to learn a new tool to do so is a positive. This makes sense for an organization with a staff the size of Netflix, where teams may not all work directly together. Notably it could also be overkill for smaller organizations, where there is more day-to-day interaction between all of the personnel.
Dispatch is an interesting tool, but is a new open-source project. So far, the project doesn't appear to have formed a strong community outside of the original developers at Netflix, and it is difficult to understand exactly how many outside contributors there have been. The GitHub project page indicates 29 contributors in the history of the project, and judging by the credits given in the release logs, there have been at least a few from outside Netflix. Since the project was announced in February 2020, there have been four releases — the last being June 17. The project's GitHub page also indicates that quite a few people have taken interest in the code, including a few security folks from both Airbnb and Cloudflare (according to their GitHub profiles). It's too early to tell if that means a community is starting to solidify around the project, but it does mean that it has at least caught a few eyes.
For readers who are interested in playing with the code, the project provides installation instructions for setting things up via Docker. As expected, the source code is also available for those who would like to take a look.
Home Assistant improves performance in 0.112 release
The Home Assistant project has released version 0.112 of its open-source home automation hub, which we have previously covered; this is the eighth release of the project this year. While previous releases have largely focused on new integrations and enhancements to the front-end interface, this release shifts the focus toward improving the performance of the database. There are significant database changes, and multiple potential backward-compatibility breaks, to understand before attempting an upgrade to take advantage of the improvements.
According to the release notes written by contributor Franck Nijhof, better performance has been a major goal of this release with a focus on both the logbook and history components. This builds on the work of the previous release (v0.111) from a performance perspective, which focused on reducing the time it takes to initialize the hub at startup.
Faster historic data
logbook and history are essentially different views of the same data — the historic state of an entity at any given point in time. The history component is more of a raw view of the state changes of entities, while logbook is a more user-friendly, higher-level representation of the data in reverse-chronological order. Over time, as Home Assistant runs, it produces a great deal of historic data regarding the home. Every time a light turns on or a sensor takes a reading, it is recorded in the database and made accessible by these components via the web interface. The volume of data can be quite large, and in previous versions it resulted in poor performance when attempting to access it in the UI.
Home Assistant 0.112 offers "absolutely game-changing performance improvements to the logbook and history panels", Nijhof said. "Honestly, I avoided using the logbook in the past because of the slowness it had." The majority of the performance improvements seem to be tied to schema improvements in the underlying database. Users who are upgrading can expect the first time Home Assistant 0.112 starts to take anywhere from minutes to hours while a data migration is performed. The time the migration takes, and thus the initial post-upgrade startup time, depends on the quantity of data that needs to be migrated to the new schema.
Beyond schema changes, logbook also improves the user interface it provides, with more robust date and time filtering controls to make it easier to dig into the hub's data. Additionally, logbook now tracks the person associated with an entry in the log, making it easier to understand why a particular state change happened (knowing "John" turned on the kitchen lights versus an automation rule).
Improvements to automation condition syntax
Another notable improvement relates to designing automation rules whose conditions deal with multiple entities. It is a common practice to place conditions involving multiple entities' state values on an automated action ("automation" in Home Assistant terms) before it executes. In the past this had to be done verbosely, by creating a separate condition for each entity. Starting with version 0.112, multiple conditions can be collapsed into a single condition with a list of the entities it applies to. For example, let's assume there is an automation action that should only occur if both the kitchen and living room lights are on. In previous releases, this could be accomplished as follows:
condition:
  - condition: state
    entity_id: light.kitchen
    state: 'on'
  - condition: state
    entity_id: light.living_room
    state: 'on'
As of version 0.112, these conditions can be expressed more compactly as follows:
condition:
  - condition: state
    entity_id:
      - light.kitchen
      - light.living_room
    state: 'on'
Providing a list to entity_id isn't the only improvement to the state condition either, as lists can also now be provided to indicate multiple state values to accept in the conditional:
# Instead of creating multiple condition entries
# for two different acceptable states, one can now
# be used.
condition:
  - condition: state
    entity_id: alarm_control_panel.home
    state:
      - armed_home
      - armed_away
General improvements
Version 0.112 brings a few notable UI changes and improvements. For an integration that offers specialized configuration or development tooling (such as MQTT), those integration-specific items have been consolidated. Previously these tools were scattered across the configuration and development tool sections of the interface. With version 0.112, integration-specific tools have been moved into a dedicated configuration section for the integration.
The 0.112 release makes additional headway toward the goal of providing UI-based integration configuration (as opposed to requiring direct editing of YAML files). In total it adds twelve new UI-configurable integrations. Other changes include information pages — like system-level logs (these capture Python exceptions, for example, rather than hub events like logbook) — that have been moved from the development tools section into the server control section of the configuration UI. As Nijhof wrote, "[...] the logs and information pages used to be in the development tools panel, but they didn't really belong there. They aren't really tools for developing, they provide information on your setup."
Upgrading will require being mindful of compatibility breaks; the release notes specifically draw attention to the use of the hidden attribute in automations, scenes, and scripts. The project has been working on deprecating this attribute for some time, and it has been completely removed in this release, so beginning with 0.112 it will simply be ignored. This will likely cause problems during the upgrade for users whose older-style configurations still use hidden; they will need to remove the attribute from the relevant configurations. The release notes also indicate various other backward-compatibility breaks related to specific integrations, making the complete list worth a review before upgrading.
Finally, the 0.112 release added four brand new integrations and nine new platforms. This notably includes a completely rewritten integration for the Smappee power-monitoring platform. This rewrite was contributed by Smappee itself, a sign of Home Assistant's expanding acceptance into the broader commercial ecosystem. Complete details on these new integrations can be found in the release notes.
With every release, Home Assistant continues to mature and we'll continue to follow the project's progression as it does. Overall there are a lot of improvements in this release, and plenty to look forward to when upgrading to it.
Hugo: a static-site generator
Static web-site generators take page content written in a markup language and render it into fully baked HTML, making it easy for developers to upload the result and serve a web site simply and securely. This article looks at Hugo, a static-site generator written in Go and optimized for speed. It is a flexible tool that can be configured for a variety of use cases: simple blogs, project documentation, larger news sites, and even government services.
Background
Static-site generators have probably been around since the invention of HTML: most sites have a common header and footer, and site owners don't want to copy and paste that manually into each of their pages. Back then, developers would be more likely to write a few dozen lines of custom Perl or Bash to string their HTML together. That approach definitely works, but today's static-site generators are faster and more powerful than a simple custom script.
A lot of sites handle common headers (and the like) by dynamically rendering the content for every page view; for example, by using PHP connected to a database. In 2020, developers might be more likely to use a React frontend connected to an HTTP API that serves the content. Both of these setups, however, are overkill when a site just needs to serve some mostly static content.
Many personal sites and blogs, as well as project and company documentation sites, can be generated statically. Serving pre-rendered HTML is faster and more reliable than serving dynamically rendered content. Wikipedia lists some of the advantages that static web sites provide:
- Improved security over dynamic websites (dynamic websites are at risk of web shell attacks if a vulnerability is present)
- Improved performance for end users compared to dynamic websites
- Fewer or no dependencies on systems such as databases or other application servers
Of course, some web sites (particularly web applications) will always require dynamic content. Even LWN, where the article contents are static, needs to be able to serve different content for logged-in subscribers and non-logged-in visitors. So there will always be a need for dynamic servers and APIs. But when a site's content is static and public, or when any dynamic content can be handled on the client side, a static-site generator can significantly simplify the system.
Because static-site generators are relatively easy to build, there are many open-source options available. All of the major programming languages have fairly popular offerings, for example: Jekyll in Ruby, Pelican in Python, JBake in Java, Sculpin in PHP, Hexo in JavaScript, as well as hundreds of others. Here we look at Hugo, which is an active and well-maintained open-source project.
Hugo was created in 2013 by Steve Francia (now a member of the core Go team), but Bjørn Erik Pedersen took over as the lead developer in 2015. The project is hosted on GitHub; it is currently 43,000 lines of Go source (excluding comments) licensed under the permissive Apache License 2.0. It has a thorough documentation site built, of course, using Hugo. The project also has plenty of involvement from the community, with a relatively high-traffic discussion forum and a large number of user-submitted themes. Pedersen said in 2017 that Hugo's infrastructure (but not development) has some corporate sponsorship: "Netlify hosts our sites for free and Discourse keeps the forum running. Travis, Appveyor and CircleCI for builds. But other than that, no sponsors."
It doesn't matter too much what programming language a site generator is written in (I personally use Jekyll but have never needed to write Ruby), though developers often lean toward a tool that's written in a language they're familiar with. Even if a developer uses Hugo, they will probably never have to write Go, though if they are building or customizing a theme, they may need to dip into Go's HTML templating syntax.
According to BuiltWith, Hugo usage has grown sharply over the last three years. It is used by several notable projects, including the redesigned Smashing Magazine site, the 1Password Support site and blog, the Let's Encrypt web site, as well as the US government's Digital.gov site.
How it works
When run without options, hugo traverses the Markdown files in the content subdirectory and outputs rendered HTML to a subdirectory named public. All of this is configurable — the default config file is config.toml in the root of the project. Output is rendered according to a theme, which is a handful of Go HTML templates that define exactly what HTML is generated. The theme includes CSS styling as well as any images or JavaScript files needed for the theme.
Before writing this article, I set up a simple test site with three articles and an "About" page — the kind of thing one might use for a personal web site or blog. It took about 30 minutes to choose a theme, configure Hugo, and add some dummy content. Hugo can be installed from popular operating system package managers, pre-built binaries on GitHub (for Linux, various BSDs, macOS, and Windows), or built from source. For Hugo, building from source is as simple as installing Go (if it is not already installed), cloning the project, and typing go build. On my machine it took about 20 seconds from clone to running "hugo version".
During development, typing "hugo server" starts a local web server to serve the rendered content from RAM. In this mode, Hugo automatically watches the files and rebuilds the HTML as needed. There's even some JavaScript in the rendered page to automatically reload it in the web browser as soon as that piece of content is changed (using LiveReload with a web socket). Hugo's built-in server is based on the production-ready net/http server, but it's usually simpler for deployment to pre-build and upload the rendered files to a static-site host such as Amazon S3, or serve them behind a regular web server such as NGINX.
Like other static-site generators, Hugo uses a simple file format for pages that consists of "front matter" in YAML format, separated from the Markdown content by "---" lines. Here is the content for the first post of my example site:
---
title: "First Post"
date: 2020-07-06T09:33:48+12:00
categories:
- Development
---

The quick brown fox jumps over the lazy dog. Ee equals em cee squared.
Let's try a [link](https://benhoyt.com/) and **some bold text**.

Another paragraph. Hugo seems to be working. Writing Markdown is nice.
To give an idea of what a Hugo theme template looks like, here's the single.html template that's part of the Soho theme I used in my test (it renders a single blog article):
{{ define "main" -}}
<div class="post">
  <h1>{{ .Title }}</h1>

  {{ if ne .Type "page" -}}
  <div class="post-date">
    <time datetime="{{ .Date.Format "2006-01-02T15:04:05Z0700" }}">
      {{ .Date.Format "Jan 2, 2006" }}
    </time>
    · {{ .ReadingTime }} min read
  </div>
  {{- end }}

  {{ .Content }}
</div>
{{- end }}
Hugo supports "taxonomies", which are different ways of categorizing pages, and for each taxonomy it creates a page that lists all the articles with that taxonomy assigned. By default it creates "tags" and "categories" taxonomies, for example the "Development" category on my test site. The documentation shows various ways of ordering taxonomies or adding metadata to them.
The tool supports various ways to control page URLs ("permalinks") in the rendered content, with the default of /:section/:filename/ producing URLs like /posts/third-article/. The :section name comes from the subdirectory name in content (e.g. posts), while :filename is the name of the file where the content came from without any extension. These "directory URLs" are done by creating the rendered HTML file in a directory with an index file: /posts/third-article/index.html.
One of the more advanced features of Hugo is what it calls "shortcodes", which are small snippets of HTML (with parameters) that can be used to augment Markdown's limited functionality without falling back to HTML. There are many built-in shortcodes, for example gist (to include a GitHub Gist), highlight (syntax-highlighted code), and youtube (an inline YouTube video). For example, the following shortcode:
{{< youtube w7Ft2ymGmfc >}}
will produce this HTML output:
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
  <iframe src="https://www.youtube.com/embed/w7Ft2ymGmfc?autoplay=1"
    style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;"
    allowfullscreen title="YouTube Video"></iframe>
</div>
Hugo also includes more advanced features such as generating a table of contents from the headings within an article, alternative markup formats (Markdown is the default), a pipeline for pre-processing CSS and JavaScript, and image processing commands to resize and crop images.
Hugo is billed as being fast, with its home page saying "at <1 ms per page, the average site builds in less than a second". Smashing Magazine's 7500-page site apparently built in around 13 seconds. This speed is due in part to Hugo being written in Go, a language that compiles to native binaries (rather than a bytecode-interpreted language like Python), but also partly due to various benchmarking and performance efforts that contributors have made over the years; for example, see the release notes for version 0.63, released in January 2020.
In general with static-site generators, content authors need to be developers, or at least be familiar with using a text editor, Markdown, and Git. However, web host Netlify has an open-source tool called Netlify CMS that allows non-developers to contribute using a WYSIWYG editor — the tool commits to a Git repository behind the scenes. Hugo's documentation has a list of other similar tools.
Hosting options
With static-site generation, developers can choose to host the rendered output on any host or web server. Popular options are GitHub Pages, Amazon S3, Netlify, or self-hosting using the Caddy or NGINX web servers. Hugo has a documentation page with detailed setup information for various options.
For my test site, I decided to use Amazon S3 (with AWS's Cloudfront CDN in front of it). This is pretty simple to set up, and it will handle many millions of page views for just a few dollars per month. You can of course avoid the cloud entirely and host on whatever server you have around — even a cheap server will be able to serve thousands of static file requests per second.
It would be straightforward to use continuous integration (for example, GitHub Actions) to run a Hugo build on every commit to a site's Git repository, ensuring the rendered version is up to date at all times.
Wrapping up
Hugo doesn't seem to have a public roadmap or fixed release schedule; in the last couple of years there has been a release about every month, with bug-fix releases in between. In July 2018, Pedersen talked about "the road to 1.0", but right now the maintainers seem to be quite content with 0.x version numbers. The release names are often light-hearted, with a "Christmas Edition" and a "40K GitHub Stars Edition".
Static-site generators are a relatively simple way to build and deploy web sites with mostly static content. Hugo's speed, permissive license, and array of features make it an attractive choice for developers, whether it's for a small personal web site or a large content-heavy one. All in all, Hugo is an active project that is worth keeping an eye on.
[We would like to thank Jim Garrison, who suggested this topic.]
Brief items
Security
Sandboxing in Linux with zero lines of code (Cloudflare blog)
The Cloudflare blog is running an overview of sandboxing with seccomp(), culminating in a tool written there to sandbox any existing program. "We really liked the 'zero code seccomp' approach with systemd SystemCallFilter= directive, but were not satisfied with its limitations. We decided to take it one step further and make it possible to prohibit any system call in any process externally without touching its source code, so came up with the Cloudflare sandbox. It’s a simple standalone toolkit consisting of a shared library and an executable. The shared library is supposed to be used with dynamically linked applications and the executable is for statically linked applications."
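For background, a seccomp filter is a small classic-BPF program that the kernel runs for every system call the process makes; the systemd directive and the Cloudflare toolkit described above generate and install such filters without requiring application changes. As a hypothetical illustration of the underlying mechanism (not of Cloudflare's tool), the following program installs a hand-written filter that kills the process if it ever calls unshare(2); a production filter should also check the architecture field of seccomp_data before trusting the system-call number:

#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void)
{
    struct sock_filter filter[] = {
        /* Load the system-call number from struct seccomp_data. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* If it is unshare(), fall through to the kill; otherwise skip it. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_unshare, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* Required so that an unprivileged process may install a filter. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
        perror("prctl(SECCOMP_MODE_FILTER)");
        return 1;
    }

    /* From here on, any call to unshare() terminates the process. */
    printf("sandbox active\n");
    return 0;
}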
Security quotes of the week
There are plenty of reasons to be concerned about TikTok, its connections to China, and the security of the app. But none of that means that the US government has the right to just ban it. While Trump may want to pretend he's a dictator, and Pompeo may want to pretend he works for a dictator, that's not how any of this works.
According to experts to whom ZDNet spoke this week, and a statement from the leaker himself, the list was compiled by scanning the entire internet for devices that were exposing their Telnet port. The hacker then tried using (1) factory-set default usernames and passwords, or (2) custom, but easy-to-guess password combinations.
And if you need more security and privacy principles for the IoT, here's a list of over twenty.
Kernel development
Kernel release status
The current development kernel is 5.8-rc4, released on July 5. Linus said: "The end result is that it's been fairly calm, and there's certainly been discussion of upcoming fixes, but I still have the feeling that 5.8 is looking fairly normal and things are developing smoothly despite the size of this release."
It's worth noting that the 5.8-rc5 release will raise the minimum GCC requirement to version 4.9.
Stable updates: none have been released in the last week. The relatively small 5.7.8, 5.4.51, 4.19.132, 4.14.188, 4.9.230, and 4.4.230 updates are all in the review process; they are due on July 9.
Quote of the week
Distributions
OpenSUSE Leap 15.2 released
The openSUSE Leap 15.2 release is now available; see the announcement for a long list of new features. "In general, software packages in the distribution grew by the hundreds. Data fusion, Machine Learning and AI aren't all that is new in openSUSE Leap 15.2; a Real-Time Kernel for managing the timing of microprocessors to ensure time-critical events are processed as efficiently as possible is available in this release."
Distribution quote of the week
Development
Book: Perl 7: A Risk-Benefit Analysis
Dan Book has done a detailed analysis of the Perl 7 transition. "Large amount of CPAN modules will not work in Perl 7; plans for working around this would either involve every affected CPAN author, which is a virtual impossibility for the stated 1 year time frame; or the toolchain group, a loose group of people who each maintain various modules and systems that are necessary for CPAN to function, who either have not been consulted as of yet or have not revealed their plans related to the tools they maintain. Going into this potential problem sufficiently would be longer than this blog post, but suffice to say that a Perl where highly used CPAN modules don't seamlessly work is not Perl."
Development quote of the week
We're really bad at writing software.
[...] But the people who design and build bridges, they're great at it. Bridges get built on time, on budget, and last for dozens, hundreds, even thousands of years. Bridge building is, if you think about it, kind of awesome. And bridges are such a common occurrence that they’re also incredibly boring. No one is amazed when a bridge works correctly, and everyone is kind of amazed when software does.
Unfortunately, the world is very dependent on software. It might even depend more on software than it does on bridges. So we have to get better at writing software far faster than we got good at building bridges.
Miscellaneous
LPC town hall #2: the kernel report
The Linux Plumbers Conference has announced the second in a brief series of "town hall" events leading up to the full (virtual) conference starting August 24. This one features LWN editor Jonathan Corbet presenting a version of his "Kernel Report" talk covering the current and future state of the kernel-development community. This talk is scheduled for July 16 at 9:00AM US/Mountain time (8:00AM US/Pacific, 3:00PM UTC). Mark your calendars.
The "Open Usage Commons" launches
Google has announced the creation of the Open Usage Commons, which is intended to help open-source projects manage their trademarks. From the organization's own announcement: "We created the Open Usage Commons because free and fair open source trademark use is critical to the long-term sustainability of open source. However, understanding and managing trademarks takes more legal know-how than most project maintainers can do themselves. The Open Usage Commons is therefore dedicated to creating a model where everyone in the open source chain – from project maintainers to downstream users to ecosystem companies – has peace of mind around trademark usage and management. The projects in the Open Usage Commons will receive support specific to trademark protection and management, usage guidelines, and conformance testing." Initial members include the Angular, Gerrit, and Istio projects.
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
Development
- Emacs News (July 6)
- These Weeks in Firefox (July 8)
- What's cooking in git.git (July 6)
- LLVM Weekly (July 6)
- LXC/LXD/LXCFS Weekly Status (July 6)
- OCaml Weekly News (July 7)
- Perl Weekly (July 6)
- PostgreSQL Weekly News (July 5)
- Python Weekly Newsletter (July 2)
- Racket News (July 6)
- Weekly Rakudo News (July 6)
- Ruby Weekly News (July 2)
- This Week in Rust (July 8)
- Wikimedia Tech News (July 6)
Meeting minutes
- Fedora FESCO meeting minutes (July 8)
Calls for Presentations
CFP Deadlines: July 9, 2020 to September 7, 2020
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
July 13 | September 29 - October 1 | ApacheCon 2020 | Online |
July 15 | October 6 - October 8 | 2020 Virtual LLVM Developers' Meeting | online |
July 21 | October 15 - October 17 | openSUSE LibreOffice Conference | Online |
July 27 | August 7 - August 9 | Nest With Fedora | Online |
July 31 | October 20 - October 23 | [Canceled] PostgreSQL Conference Europe | Berlin, Germany |
July 31 | October 29 - October 30 | [Virtual] Linux Security Summit Europe | Virtual |
August 14 | October 2 - October 5 | PyCon India 2020 | Virtual |
August 28 | October 8 - October 9 | PyConZA 2020 | Online |
September 1 | September 14 | GNU Radio Conference | Virtual |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: July 9, 2020 to September 7, 2020
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
July 6–July 12 | [VIRTUAL] SciPy 2020 | Austin, TX, USA |
July 6–July 9 | [Virtual] XenProject Developer and Design Summit 2020 | |
July 10 | [CANCELED] PGDay Russia 2020 | St. Petersburg, Russia |
July 16–July 18 | [CANCELED] Linux Developer Conference Brazil | São Paulo, Brazil |
July 22–July 28 | [VIRTUAL] GUADEC 2020 | Zacatecas, Mexico |
July 23–July 26 | [Virtual] EuroPython 2020 | Online |
July 25–August 2 | Hackers On Planet Earth | Online |
August 6–August 9 | [Canceled] miniDebConf Montreal 2020 | Montreal, Canada |
August 7–August 9 | Nest With Fedora | Online |
August 13–August 21 | Netdev 0x14 | Virtual |
August 20 | [Virtual] RustConf | online |
August 23–August 29 | DebConf20 | online |
August 25–August 27 | Linux Plumbers Conference | Virtual |
August 26–August 28 | [Canceled] FOSS4G Calgary | Calgary, Canada |
August 28 | Linux Kernel Maintainer Summit | Virtual |
September 4–September 11 | Akademy 2020 | Virtual, Online |
If your event does not appear here, please tell us about it.
Security updates
Alert summary July 2, 2020 to July 8, 2020
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
Debian | DSA-4714-1 | stable | chromium | 2020-07-01 |
Debian | DSA-4714-2 | stable | chromium | 2020-07-04 |
Debian | DSA-4716-1 | stable | docker.io | 2020-07-02 |
Debian | DSA-4713-1 | stable | firefox-esr | 2020-07-01 |
Debian | DSA-4715-1 | stable | imagemagick | 2020-07-02 |
Debian | DSA-4717-1 | stable | php7.0 | 2020-07-05 |
Debian | DSA-4719-1 | stable | php7.3 | 2020-07-06 |
Debian | DSA-4720-1 | stable | roundcube | 2020-07-08 |
Debian | DSA-4718-1 | stable | thunderbird | 2020-07-05 |
Fedora | FEDORA-2020-f822ea9330 | F31 | alpine | 2020-07-03 |
Fedora | FEDORA-2020-386249cec2 | F32 | alpine | 2020-07-03 |
Fedora | FEDORA-2020-c9bff9688e | F32 | ceph | 2020-07-06 |
Fedora | FEDORA-2020-77f89ab772 | F31 | chromium | 2020-07-08 |
Fedora | FEDORA-2020-08561721ad | F32 | chromium | 2020-07-02 |
Fedora | FEDORA-2020-8ba9376229 | F31 | firefox | 2020-07-08 |
Fedora | FEDORA-2020-55077d678a | F32 | firefox | 2020-07-03 |
Fedora | FEDORA-2020-1f7fc0d0c9 | F32 | gssdp | 2020-07-04 |
Fedora | FEDORA-2020-3d23d3ea02 | F31 | gst | 2020-07-07 |
Fedora | FEDORA-2020-9e6f5b3ae2 | F32 | gst | 2020-07-07 |
Fedora | FEDORA-2020-1f7fc0d0c9 | F32 | gupnp | 2020-07-04 |
Fedora | FEDORA-2020-df3e1cfde9 | F32 | hostapd | 2020-07-03 |
Fedora | FEDORA-2020-74dd64990b | F32 | libfilezilla | 2020-07-04 |
Fedora | FEDORA-2020-ccd9bdb2eb | F32 | libldb | 2020-07-04 |
Fedora | FEDORA-2020-9c97633708 | F32 | mediawiki | 2020-07-05 |
Fedora | FEDORA-2020-1cb4c3697b | F32 | mutt | 2020-07-03 |
Fedora | FEDORA-2020-8c33e3a771 | F31 | ngircd | 2020-07-08 |
Fedora | FEDORA-2020-e6d1d849c5 | F32 | ngircd | 2020-07-08 |
Fedora | FEDORA-2020-a0b39d58db | F32 | ntp | 2020-07-02 |
Fedora | FEDORA-2020-c52106e48a | F32 | python-pillow | 2020-07-04 |
Fedora | FEDORA-2020-8bdd3fd7a4 | F32 | python36 | 2020-07-04 |
Fedora | FEDORA-2020-ccd9bdb2eb | F32 | samba | 2020-07-04 |
Fedora | FEDORA-2020-de27bb80af | F31 | xpdf | 2020-07-05 |
Fedora | FEDORA-2020-f34d97b1fd | F32 | xpdf | 2020-07-05 |
Mageia | MGASA-2020-0282 | 7 | curl | 2020-07-05 |
Mageia | MGASA-2020-0279 | 7 | docker | 2020-07-05 |
Mageia | MGASA-2020-0274 | 7 | firefox | 2020-07-05 |
Mageia | MGASA-2020-0273 | 7 | libexif | 2020-07-05 |
Mageia | MGASA-2020-0270 | 7 | libupnp | 2020-07-05 |
Mageia | MGASA-2020-0283 | 7 | libvirt | 2020-07-06 |
Mageia | MGASA-2020-0280 | 7 | libvncserver | 2020-07-05 |
Mageia | MGASA-2020-0271 | 7 | libxml2 | 2020-07-05 |
Mageia | MGASA-2020-0276 | 7 | mailman | 2020-07-05 |
Mageia | MGASA-2020-0284 | 7 | mariadb | 2020-07-07 |
Mageia | MGASA-2020-0281 | 7 | ntp | 2020-07-05 |
Mageia | MGASA-2020-0286 | 7 | pdns-recursor | 2020-07-07 |
Mageia | MGASA-2020-0275 | 7 | perl-YAML | 2020-07-05 |
Mageia | MGASA-2020-0269 | 7 | python-httplib2 | 2020-07-05 |
Mageia | MGASA-2020-0285 | 7 | ruby | 2020-07-07 |
Mageia | MGASA-2020-0278 | 7 | tcpreplay | 2020-07-05 |
Mageia | MGASA-2020-0277 | 7 | tomcat | 2020-07-05 |
Mageia | MGASA-2020-0272 | 7 | vlc | 2020-07-05 |
openSUSE | openSUSE-SU-2020:0925-1 | 15.1 | Virtualbox | 2020-07-03 |
openSUSE | openSUSE-SU-2020:0928-1 | 15.1 | chocolate-doom | 2020-07-05 |
openSUSE | openSUSE-SU-2020:0939-1 | 15.2 | chocolate-doom | 2020-07-07 |
openSUSE | openSUSE-SU-2020:0937-1 | 15.2 | coturn | 2020-07-07 |
openSUSE | openSUSE-SU-2020:0935-1 | 15.2 | kernel | 2020-07-07 |
openSUSE | openSUSE-SU-2020:0944-1 | 15.2 | live555 | 2020-07-07 |
openSUSE | openSUSE-SU-2020:0934-1 | 15.1 | ntp | 2020-07-06 |
openSUSE | openSUSE-SU-2020:0917-1 | 15.2 | opera | 2020-07-03 |
openSUSE | openSUSE-SU-2020:0931-1 | 15.1 | python3 | 2020-07-06 |
openSUSE | openSUSE-SU-2020:0940-1 | 15.2 | python3 | 2020-07-07 |
openSUSE | openSUSE-SU-2020:0933-1 | 15.1 | rust, rust-cbindgen | 2020-07-06 |
openSUSE | openSUSE-SU-2020:0945-1 | 15.2 | rust, rust-cbindgen | 2020-07-07 |
Oracle | ELSA-2020-2824 | OL6 | firefox | 2020-07-07 |
Oracle | ELSA-2020-2827 | OL7 | firefox | 2020-07-07 |
Oracle | ELSA-2020-2613 | OL6 | thunderbird | 2020-07-07 |
Oracle | ELSA-2020-2774 | OL8 | virt:ol | 2020-07-07 |
Red Hat | RHSA-2020:2838-01 | EL7.6 | file | 2020-07-07 |
Red Hat | RHSA-2020:2824-01 | EL6 | firefox | 2020-07-06 |
Red Hat | RHSA-2020:2827-01 | EL7 | firefox | 2020-07-06 |
Red Hat | RHSA-2020:2828-01 | EL8 | firefox | 2020-07-06 |
Red Hat | RHSA-2020:2825-01 | EL8.0 | firefox | 2020-07-06 |
Red Hat | RHSA-2020:2826-01 | EL8.1 | firefox | 2020-07-06 |
Red Hat | RHSA-2020:2846-01 | EL7.6 | gettext | 2020-07-07 |
Red Hat | RHSA-2020:2833-01 | EL7.6 | kdelibs | 2020-07-07 |
Red Hat | RHSA-2020:2831-01 | EL7.2 | kernel | 2020-07-07 |
Red Hat | RHSA-2020:2832-01 | EL7.3 | kernel | 2020-07-07 |
Red Hat | RHSA-2020:2851-01 | EL7.6 | kernel | 2020-07-07 |
Red Hat | RHSA-2020:2854-01 | EL7 | kernel-alt | 2020-07-07 |
Red Hat | RHSA-2020:2842-01 | EL7.6 | microcode_ctl | 2020-07-07 |
Red Hat | RHSA-2020:2850-01 | EL8.0 | nghttp2 | 2020-07-07 |
Red Hat | RHSA-2020:2823-01 | EL8.1 | nghttp2 | 2020-07-06 |
Red Hat | RHSA-2020:2848-01 | EL8 | nodejs:10 | 2020-07-07 |
Red Hat | RHSA-2020:2849-01 | EL8.1 | nodejs:10 | 2020-07-07 |
Red Hat | RHSA-2020:2852-01 | EL8 | nodejs:12 | 2020-07-07 |
Red Hat | RHSA-2020:2847-01 | EL8.1 | nodejs:12 | 2020-07-07 |
Red Hat | RHSA-2020:2835-01 | EL7.6 | php | 2020-07-07 |
Red Hat | RHSA-2020:2844-01 | EL7.6 | qemu-kvm | 2020-07-07 |
Red Hat | RHSA-2020:2817-01 | RHSC | rh-nginx116-nginx | 2020-07-02 |
Red Hat | RHSA-2020:2839-01 | EL7.6 | ruby | 2020-07-07 |
Red Hat | RHSA-2020:2840-01 | EL7.6 | tomcat | 2020-07-07 |
Scientific Linux | SLSA-2020:2824-1 | SL6 | firefox | 2020-07-07 |
Scientific Linux | SLSA-2020:2827-1 | SL7 | firefox | 2020-07-07 |
Slackware | SSA:2020-186-01 | | libvorbis | 2020-07-04 |
Slackware | SSA:2020-189-01 | | seamonkey | 2020-07-07 |
SUSE | SUSE-SU-2020:1164-2 | SLE15 | LibVNCServer | 2020-07-07 |
SUSE | SUSE-SU-2020:1873-1 | SLE15 | LibVNCServer | 2020-07-07 |
SUSE | SUSE-SU-2020:0111-2 | SLE15 | Mesa | 2020-07-07 |
SUSE | SUSE-SU-2019:2463-2 | SLE15 | SDL2 | 2020-07-07 |
SUSE | SUSE-SU-2019:3033-2 | SLE15 | djvulibre | 2020-07-07 |
SUSE | SUSE-SU-2019:3184-2 | SLE15 | ffmpeg | 2020-07-07 |
SUSE | SUSE-SU-2020:14421-1 | SLE11 | firefox | 2020-07-08 |
SUSE | SUSE-SU-2020:1417-2 | SLE15 | freetds | 2020-07-08 |
SUSE | SUSE-SU-2020:0594-2 | SLE15 | gd | 2020-07-07 |
SUSE | SUSE-SU-2020:1300-2 | SLE15 | gstreamer-plugins-base | 2020-07-07 |
SUSE | SUSE-SU-2020:0819-2 | SLE15 | icu | 2020-07-08 |
SUSE | SUSE-SU-2020:1511-2 | SLE15 | java-11-openjdk | 2020-07-07 |
SUSE | SUSE-SU-2020:1621-2 | SLE15 | libEMF | 2020-07-08 |
SUSE | SUSE-SU-2020:1553-2 | SLE15 | libexif | 2020-07-08 |
SUSE | SUSE-SU-2019:2971-2 | SLE15 | libjpeg-turbo | 2020-07-06 |
SUSE | SUSE-SU-2020:0629-2 | SLE15 | librsvg | 2020-07-07 |
SUSE | SUSE-SU-2020:1297-2 | SLE15 | libvpx | 2020-07-08 |
SUSE | SUSE-SU-2020:1839-1 | OS7 OS8 OS9 SLE12 SES5 | mozilla-nspr, mozilla-nss | 2020-07-03 |
SUSE | SUSE-SU-2020:14418-1 | SLE11 | mozilla-nspr, mozilla-nss | 2020-07-06 |
SUSE | SUSE-SU-2020:1850-1 | SLE15 | mozilla-nss | 2020-07-06 |
SUSE | SUSE-SU-2020:1864-1 | SLE12 | nasm | 2020-07-07 |
SUSE | SUSE-SU-2020:1843-1 | SLE15 | nasm | 2020-07-06 |
SUSE | SUSE-SU-2019:2425-2 | SLE15 | nmap | 2020-07-08 |
SUSE | SUSE-SU-2020:14415-1 | SLE11 | ntp | 2020-07-01 |
SUSE | SUSE-SU-2020:1823-1 | SLE15 | ntp | 2020-07-02 |
SUSE | SUSE-SU-2019:3192-2 | SLE15 | opencv | 2020-07-08 |
SUSE | SUSE-SU-2020:1859-1 | OS7 OS8 SLE12 SES5 | openldap2 | 2020-07-06 |
SUSE | SUSE-SU-2020:14419-1 | SLE11 | openldap2 | 2020-07-06 |
SUSE | SUSE-SU-2020:1855-1 | SLE12 | openldap2 | 2020-07-06 |
SUSE | SUSE-SU-2020:1856-1 | SLE15 | openldap2 | 2020-07-06 |
SUSE | SUSE-SU-2020:1695-2 | SLE15 | osc | 2020-07-08 |
SUSE | SUSE-SU-2020:1682-2 | SLE15 | perl | 2020-07-07 |
SUSE | SUSE-SU-2020:1857-1 | SLE12 | permissions | 2020-07-06 |
SUSE | SUSE-SU-2020:1858-1 | SLE15 | permissions | 2020-07-06 |
SUSE | SUSE-SU-2020:1860-1 | SLE15 | permissions | 2020-07-06 |
SUSE | SUSE-SU-2020:1661-2 | SLE15 | php7 | 2020-07-07 |
SUSE | SUSE-SU-2019:2891-2 | SLE15 | python-ecdsa | 2020-07-08 |
SUSE | SUSE-SU-2020:1822-1 | SLE15 | python3 | 2020-07-02 |
SUSE | SUSE-SU-2020:1828-1 | SLE12 | systemd | 2020-07-02 |
SUSE | SUSE-SU-2020:1842-1 | SLE12 | systemd | 2020-07-04 |
SUSE | SUSE-SU-2020:1580-2 | SLE15 | texlive-filesystem | 2020-07-08 |
SUSE | SUSE-SU-2020:1591-2 | SLE15 | thunderbird | 2020-07-08 |
SUSE | SUSE-SU-2020:1841-1 | SLE15 | tomcat | 2020-07-04 |
SUSE | SUSE-SU-2020:1819-1 | SLE15 | unbound | 2020-07-01 |
SUSE | SUSE-SU-2020:1396-2 | SLE15 | zstd | 2020-07-03 |
Ubuntu | USN-4420-1 | 18.04 20.04 | cinder, python-os-brick | 2020-07-07 |
Ubuntu | USN-4415-1 | 16.04 18.04 19.10 20.04 | coturn | 2020-07-06 |
Ubuntu | USN-4408-1 | 16.04 18.04 19.10 20.04 | firefox | 2020-07-02 |
Ubuntu | USN-4416-1 | 16.04 18.04 19.10 | glibc | 2020-07-06 |
Ubuntu | USN-4407-1 | 16.04 18.04 19.10 20.04 | libvncserver | 2020-07-02 |
Ubuntu | USN-4414-1 | 16.04 18.04 | linux, linux-aws, linux-aws-hwe, linux-gcp, linux-gcp-4.15, linux-gke-4.15, linux-hwe, linux-kvm, linux-oem, linux-oracle, linux-raspi2, linux-snapdragon | 2020-07-02 |
Ubuntu | USN-4411-1 | 20.04 | linux, linux-aws, linux-gcp, linux-kvm, linux-oracle, linux-riscv | 2020-07-02 |
Ubuntu | USN-4412-1 | 18.04 19.10 | linux, linux-azure, linux-gcp, linux-gcp-5.3, linux-hwe, linux-kvm, linux-oracle, linux-oracle-5.3 | 2020-07-02 |
Ubuntu | USN-4413-1 | 18.04 | linux-gke-5.0, linux-oem-osp1 | 2020-07-02 |
Ubuntu | USN-4410-1 | 20.04 | net-snmp | 2020-07-02 |
Ubuntu | USN-4417-2 | 12.04 14.04 | nss | 2020-07-06 |
Ubuntu | USN-4417-1 | 16.04 18.04 19.10 20.04 | nss | 2020-07-06 |
Ubuntu | USN-4418-1 | 16.04 18.04 19.10 20.04 | openexr | 2020-07-06 |
Ubuntu | USN-4409-1 | 12.04 14.04 16.04 18.04 19.10 20.04 | samba | 2020-07-02 |
Kernel patches of interest
Kernel releases
Architecture-specific
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Rebecca Sobol