LWN.net Weekly Edition for April 23, 2020
Welcome to the LWN.net Weekly Edition for April 23, 2020
This edition contains the following feature content:
- Debian discusses Discourse: another large project confronts the question of moving from email to an online forum system.
- Python keyword argument auto-assignment: an attempt to solve a problem many Python developers didn't know they had.
- The integrity policy enforcement security module: a new security module focused on keeping a system from running unapproved binaries.
- How to unbreak LTTng: recent changes have created problems for the out-of-tree LTTng tracing subsystem; what can be done to fix it?
- Proactive compaction for the kernel: a proposed memory-management change to make huge pages more readily available.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Debian discusses Discourse
Much of the free software we run every day was developed over email, and the developers of that software, who may have been using email for decades, tend to be somewhat attached to it. The newer generation of developers, though, has proved remarkably resistant to the charms of email-based communication. That has led to an ongoing push to replace email with other forms of communication; often the "other form" of choice is a web-based system called Discourse. Moving to Discourse tends to be controversial; LWN covered related discussions in the Fedora and Python projects in 2018. Now it is Debian's turn to confront this question.

Debian's discussion began on April 10, when Neil McGovern announced the existence of a Discourse instance at discourse.debian.net. He asked for feedback on whether the project was interested in Discourse, and said that he thought it might make sense to move some mailing lists over:
One invariant over the Debian project's long history is that, if one asks for feedback, one will get it. Indeed, the "asking" part is generally unnecessary. Feedback is exactly what McGovern got.
Moves to systems like Discourse often raise concerns about usability, and that was certainly the case here. Developers who do not want to have to fire up a web browser to communicate with other members of a project are not in short supply anywhere. It has often been pointed out that Discourse offers email integration, and just as often that this integration falls rather short of how fully email-based communication works. Discourse cannot be used offline, doesn't work with terminal-based workflows, and requires a JavaScript-enabled browser that may be too resource-intensive for the PDP-11-based development system that somebody is surely still using. There was little new in that part of the debate.
Moderation, trust levels, and gamification
One of the features that Discourse proponents tend to like is the support for distributed moderation of discussions. A discussion on a Debian list seems sure to turn out developers who are opposed to moderation in any form, and indeed one such duly put in an appearance. Overall, though, opposition to moderation was not a strong theme in the discussion; Russ Allbery pointed out that the mailing list on which the discussion was being held is moderated, and said that this is a good thing:
Given the recent history of attacks on the Debian project, arguments against moderation of the communication channels seem less likely than usual to find wide support.
That said, the way in which Discourse handles moderation did raise a few eyebrows. Rather than having specific people designated as moderators, Discourse spreads that task among the "trusted" members of the community. There are, by default, five trust levels; new users start at level 0 and work their way up from there. At level 3, users can flag posts and cause them to be hidden.
Movement through the trust levels is managed automatically by the system (with the exception of the highest level, which requires manual promotion). Moving up requires that the user spend a specific amount of time on the site, read a certain number of articles, hand out and receive "likes", and more. To implement this mechanism, Discourse tracks the amount of time spent reading each article. Reaching level 3 requires visiting the site 50 out of the last 100 days, replying to at least ten different topics, viewing at least 25% of new topics, and more. Users can be demoted back to level 2 if they fail to maintain that level of performance.
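The default level-3 requirements mentioned above can be sketched as a simple predicate (a hypothetical illustration only; the real Discourse implementation tracks many more signals, including likes given and received):

```python
def meets_tl3_defaults(days_visited_of_last_100, topics_replied_to,
                       pct_new_topics_viewed):
    """Sketch of the default trust-level-3 requirements described
    above; Discourse itself checks many more signals than these."""
    return (days_visited_of_last_100 >= 50
            and topics_replied_to >= 10
            and pct_new_topics_viewed >= 25)
```

A user who visited on 60 of the last 100 days, replied to 12 topics, and viewed 30% of new topics would qualify; missing any one threshold, as the demotion rules suggest, drops the user back to level 2.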
This aspect of Discourse repels a number of Debian developers for a couple of reasons. Debian folks are naturally resistant to the idea of a communication system that is monitoring their activity, tracking the time spent on each topic, and making decisions based on that data. Many of them use free software precisely to get away from that kind of thing. They also dislike the whole "gamification" aspect of this system — a feeling that is only made stronger by the extensive system of "badges" handed out by the system to encourage various types of activity. As Ansgar Burchardt described it:
Current project leader Sam Hartman agreed that this mechanism "does not seem to meet Debian's needs very well for trust". McGovern said that this mechanism can be reconfigured or turned off entirely if the project desires, but he also emphasized that it does help to make the moderation system work well.
Archival and new developers
Another area of concern is archival. Email is easily archived by anybody with the storage space to accumulate a mailbox full of messages; Discourse conversations, instead, live within the instance's database. This seems more like a problem that needs to be worked out than a serious blocking issue, though. Charles Plessy pointed out that email archives are often messy and do not make finding required information easy; he suggested that a system like Discourse might work better for the task of getting useful information out of the archive. There is also, McGovern pointed out, a mechanism to automatically summarize the discussion on a topic (seemingly by picking out the most "liked" posts) that makes it easier to figure out what the state of a discussion is.
The tone of the discussion overall struck many as hostile toward the idea of using Discourse. That may at least partly be the result of a fear that the project's mailing lists will be turned off and use of Discourse will be mandatory. But there were still positive comments about aspects of the system. For example, Allbery said multiple times that he is fully comfortable with an email-based workflow, but still sees things to like in Discourse. These include splitting threads that have wandered off topic, pinning messages with important information where all will see them, the quick ability to indicate agreement with a "+1" click, and more.
And, underneath the whole discussion, is an awareness that, while current developers may be comfortable with email, the newer developers they might like to attract often are not. As Steve McIntyre put it:
Moving to a more "contemporary" system might help to keep some of those developers around, but there is the opposite problem noted by Raphael Hertzog: web forums can also drive developers away. Finding a solution that appeals to everybody is not a straightforward task.
McGovern, meanwhile, has expressed some frustration with the topic and suggested that he might give up on the whole thing. That would be understandable, but might also be unfortunate; Debian might not, in the end, prove as hostile to the idea as it seems now. With some work, it may be possible to move a Discourse instance in the direction that current Debian developers will accept while appealing to new developers; Marco Möller posted some ideas that could be a good starting point. But this is Debian, so the process of reaching consensus on a change like this is never going to be smooth.
Python keyword argument auto-assignment
A recent thread on the python-ideas mailing list explores adding a feature to Python, which is the normal fare for that forum. The problem being addressed is real, but may not be the highest-priority problem for the language on many people's lists. Function calls that have multiple keyword arguments passed from a variable of the same name (e.g. keyword=keyword) require developers to repeat themselves and can be somewhat confusing, especially to newcomers. The discussion of ways to fix it highlighted some lesser-known corners of the language, however, regardless of whether the idea will actually result in a change to Python.
Some background
Python functions and methods can take both positional and keyword arguments (or parameters), as follows:
def fun(a, b, kword=None):
pass
That would define the function fun() taking two positional arguments (a and b) and one keyword argument (kword). In general, all arguments can be passed using their name, though arguments passed positionally must go first. The following would be legal and illegal ways to call fun():
fun(42, None)
fun(a=42, b=None)       # both legal; a=42, b=None, kword=None
fun(1, 2, 3)
fun(1, b=2, kword=3)    # both legal; a=1, b=2, kword=3
fun(1, b=2, 3)
fun(a=1, 2, kword=3)    # both illegal; SyntaxError
A change for Python 3.8 added the ability to specify positional-only parameters to functions so that library authors can prevent users from passing them as keywords. That came from PEP 570 ("Python Positional-Only Parameters"), which is the counterpart of a way to add keyword-only arguments that was added to the language in 2006 based on PEP 3102 ("Keyword-Only Arguments"). Keyword-only arguments are specified using "*" in the argument list of the function definition, so when positional-only parameters came along, "/" was, a bit controversially, chosen to indicate them.
One other change in Python 3.8 also played a role in the discussion: the addition of debug support for f-strings (i.e. "formatted strings"). With this feature, programmers do not have to repeat themselves when printing variable values, as is often done for debugging. For example, both of the following will result in the same output (note that the debug "=" form calls repr() on the value by default, so the "!s" conversion is needed to print the string without quotes):
foo = 23
bar = 'forty two'
print(f'foo={foo} bar={bar}')
print(f'{foo=!s} {bar=!s}')    # both result in: foo=23 bar=forty two
Allowing constructs like "{foo=}" is simply a convenience feature, but one that is likely to see a lot of use.
Keyword arguments
Passing arguments by keyword is generally quite useful, but it can get pretty verbose at times, especially when passing a variable of the same name as the keyword:
some_obj.some_method(config=config,
history=history,
context=context,
parent=parent,
children=children)
That kind of thing most often occurs when a function's parameters are
passed down to another, similar function, so the parameter names are the
same between the two.
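In current Python, the repetition can at least be confined to a list of names by building the keyword dictionary explicitly; this is a sketch of that common workaround (some_method() here is a hypothetical stand-in for the method in the example above):

```python
def forward_kwargs(func, names, scope):
    """Call func(), passing name=scope[name] for each name in names;
    scope is typically locals() at the call site."""
    return func(**{name: scope[name] for name in names})

# Hypothetical stand-in for some_obj.some_method() above:
def some_method(config=None, history=None, context=None, parent=None,
                children=None):
    return (config, history, context, parent, children)

def caller():
    config, history, context = 'cfg', ['h'], {'c': 1}
    parent, children = 'p', []
    # Instead of some_method(config=config, history=history, ...):
    return forward_kwargs(some_method,
                          ('config', 'history', 'context',
                           'parent', 'children'),
                          locals())
```

The repetition does not disappear, of course; it just moves into the tuple of strings, which is part of why proposals like this one keep coming up.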
So, Rodrigo Martins de Oliveira wondered if it would make sense to adapt the syntax used for debug f-strings for calls:
some_obj.some_method(config=, history=, context=, parent=, children=)
He outlined some justifications for the idea, along with the limitations of his proposal; the syntax would only work for function calls, for example. It was not, shall we say, met with universal acclaim; several people posted their opposition, suggesting that it was not really that common of a pattern. As Steven D'Aprano put it:
But it should be noted that arguments of the form "x=x" are often confusing to new Python programmers. The different scope of the two sides of the "assignment" is not necessarily obvious so the usage can be puzzling. Adding different syntax will not make that problem go away, of course, but making it look different from an assignment might be helpful—or not.
The conversation quickly turned to alternate syntax, even though many participating seemed less than entirely enthusiastic about the feature itself. Oliveira suggested using "*" in the call to set off the x=x variables:
self.do_something(positional, *, keyword)
# equivalent to: self.do_something(positional, keyword=keyword)
But D'Aprano was concerned about confusion:
open('example.txt', encoding, newline, errors, mode)
open('example.txt', *, encoding, newline, errors, mode)
Would you agree that the two of those have *radically* different
meanings? The first will, if you are lucky, fail immediately; the second
may succeed. But to the poor unfortunate reader who doesn't know what
the star does, the difference is puzzling.
He noted that the problem is even worse for a function where all of the parameters have the same type so that mixing things up will not cause an immediate failure. That led Alex Hall to propose using the more visually distinct "**" instead. That choice is in reference to the dictionary unpacking operator that is often used with keyword arguments. But Chris Angelico wondered why there needed to be a "mode switch" in the argument list; having "*" (or "**") change the interpretation of subsequent arguments is likely to be confusing and error-prone. The original "keyword=" proposal was easier to see, he thought:
Another suggestion also used "**" but grouped auto-filled keyword arguments using either parentheses (suggested by Dominik Vilsmeier) or curly braces (Hall). That notation would look something like:
requests.post(url, **(data, params)) # one way
requests.post(url, **{data, params}) # another
# equivalent to
requests.post(url, data=data, params=params)
That led D'Aprano to post a "pseudo-PEP" using the curly-bracket notation. It described the usage and interpretation of the feature; he encouraged Oliveira to "mine this for useful nuggets in your PEP". D'Aprano pointed to subprocess.Popen() as having "one of the most complex signatures in the Python standard library", so he used it as a lengthy example of how the feature would interact with other argument-handling features:
subprocess.Popen(
# Positional arguments.
args, bufsize, executable,
*stdfiles,
# Keyword arguments.
shell=True, close_fds=False, cwd=where, creationflags=flags,
env={'MYVAR': 'some value'},
**textinfo,
# Keyword unpacking shortcut.
**{preexec_fn, startupinfo, restore_signals, pass_fds,
universal_newlines, start_new_session}
)
which will expand to:
subprocess.Popen(
# Positional arguments.
args, bufsize, executable,
*stdfiles,
# Keyword arguments.
shell=True, close_fds=False, cwd=where, creationflags=flags,
env={'MYVAR': 'some value'},
**textinfo,
# Keyword unpacking shortcut expands to:
preexec_fn=preexec_fn,
startupinfo=startupinfo,
restore_signals=restore_signals,
pass_fds=pass_fds,
universal_newlines=universal_newlines,
start_new_session=start_new_session
)
Vilsmeier is not so sure that the mechanism truly avoids the mode-switching problem; instead of "**" and ")" as delimiters, it would be "**{" and "}": "I'm not convinced that `**{` and `}` make the beginning and end of the mode 'really obvious'". D'Aprano disagreed with that, unsurprisingly, but also pointed out that multiple "**{}" constructs can be used in a single call.
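As an aside, something close to the proposed grouping can be emulated in current Python with a helper that looks names up in the caller's frame (a sketch only; kw() is a hypothetical helper, and sys._getframe() is a CPython implementation detail rather than a guaranteed part of the language):

```python
import sys

def kw(*names):
    """Build {name: value} from same-named variables in the caller's
    local scope, approximating the proposed **{data, params} grouping."""
    caller_locals = sys._getframe(1).f_locals
    return {name: caller_locals[name] for name in names}

# Hypothetical stand-in for requests.post():
def post(url, data=None, params=None):
    return (url, data, params)

def example():
    data = {'k': 'v'}
    params = {'q': 1}
    # Instead of post(url, data=data, params=params):
    return post('https://example.com', **kw('data', 'params'))
```

The names still appear as strings, though, so tools and type checkers cannot see them; that is one argument for real syntax over tricks of this sort.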
In the end, the idea will only turn into a concrete proposal if Oliveira (or someone) writes a PEP, Angelico said:
Rodrigo, it may be time to start thinking about writing a PEP. If the Steering Council approves, I would be willing to be a (non-core-dev) sponsor; alternatively, there may be others who'd be willing to sponsor it. A PEP will gather all the different syntax options and the arguments for/against each, and will mean we're not going round and round on the same discussion points all the times.
A PEP normally requires a core developer to sponsor it, however; Eric V. Smith duly volunteered for that duty, but for a perhaps counter-intuitive reason: "I'd be arguing that it get rejected". That does not mean the PEP would not have value, though:
Oliveira is up to the task; he plans to work with Smith and Angelico on a PEP, though, of course, he is hoping it gets accepted:
Like Smith, I am rather skeptical that the feature will ever be added to Python, but have certainly been wrong before. It is a useful exercise to get the idea down on "paper", with all of the different alternatives explored, no matter how it turns out in the end. This is not the first time the idea has been raised, nor will it be the last, so having a PEP to point to will help forestall endless rehashing of things considered—and either accepted or rejected—down the road.
The integrity policy enforcement security module
There are many ways to try to keep a system secure. One of those, often employed in embedded or other dedicated-purpose systems, is to try to ensure that only code that has been approved (by whoever holds that power over the system in question) can be executed. The secure boot mechanism, which is intended to keep a computer from booting anything but a trusted kernel, is one piece of this puzzle, but its protection only extends through the process of booting the kernel itself. Various mechanisms exist for protecting a system after it boots; a new option for this stage is the Integrity Policy Enforcement (IPE) security module, posted by Deven Bowers.

IPE is one of a new generation of security modules that has been enabled by the ongoing work to implement module stacking. It does not attempt to provide a full security enforcement mechanism like SELinux, AppArmor, or Smack do; instead, it focuses specifically on the task of vetting attempts to execute code. And, in particular, its enforcement mechanism comes down to a simple question: does the code that the system is proposing to execute come from an appropriately signed disk volume?
IPE is designed to work with dm-verity, which provides integrity checking for block devices. Each dm-verity volume has a root hash, which is derived from the hashes of the individual blocks in that volume. Whenever blocks are read from this volume, the hashes are checked up to the root to ensure that nothing has been tampered with. Assuming everything is working as intended, the data read from a dm-verity volume is guaranteed to be the data that the creator put there and hashed, with no subsequent tampering.
While dm-verity can be used to ensure that nobody has corrupted a disk image, there are still a couple of pieces missing when it comes to ensuring the integrity of the system as a whole. One is ensuring that the root hash for the volume is the one that the creator of the volume intended; that can be done by either storing the hash value separately or applying a cryptographic signature. Even a verified, integrity-protected volume is only of limited use, though, if the system is able to execute code that doesn't come from that volume.
There are a few security modules that can address these problems; IPE attempts to do so in a relatively simple way. When it is active, IPE's entire purpose is to make sure that all execution is done from code found on volumes that are protected by dm-verity, and which have the appropriate hashes or signatures. To that end, it has a simple policy language that the system administrator can use to describe which executables are acceptable.
The first line of a policy declaration is special; it provides a name and a version number:
policy_name="Evil lockdown policy" policy_version=6.6.6
The name simply identifies the policy; the version number is used to prevent a system from being rolled back to an earlier version of the policy. IPE will, though, allow a policy to be overwritten by another with the same version number, for whatever reason.
Everything else describes a portion of the desired policy by tying an operation to an access decision. For example, this rule would allow any attempt to execute a file:
op=EXECUTE action=ALLOW
The set of operations that can be controlled is made up of:
- EXECUTE: execute a file or load a file for execution. This includes calls to mmap() or mprotect() that would create executable memory regions.
- FIRMWARE: the loading of firmware.
- KMODULE: the loading of a kernel module with insmod or modprobe.
- KEXEC_IMAGE and KEXEC_INITRAMFS: booting a new kernel via the kexec mechanism; KEXEC_INITRAMFS controls the provision of an initramfs for the new kernel.
- POLICY: the loading of integrity-measurement policies. Note that this refers to the IMA integrity subsystem, not IPE.
- X509_CERT: the loading of certificates into IMA.
The choices for action= are just ALLOW or DENY.
Thus far, though, we have not made any connection to verified sources of executable code. That is done by adding qualifiers to describe the provenance of specific binaries. The first step is probably to allow running code from a filesystem image that was provided by the secure boot mechanism:
op=EXECUTE boot_verified=TRUE action=ALLOW
On the assumption that the secure boot mechanism has ensured that the initial RAM filesystem is verified, this lets the system run the programs found there. That forms part of the chain that lets the system bootstrap itself to the point of running from the real filesystem. Of course, there is no way for IPE to know for sure that secure boot was used; what this option really does is just enable trust for the initial filesystem. Once that happens, there are two more qualifiers to describe where code can (or cannot) come from:
dmverity_roothash=hash
dmverity_signature=TRUE|FALSE
The first causes the rule to apply to a dm-verity volume whose root-level hash is hash. Normally the action would be ALLOW to enable execution from a known-good volume. It could be set to DENY, though, to specifically disallow a volume that, for example, contains code with a known vulnerability. A rule with dmverity_signature checks whether the executable comes from a volume where the root hash has been signed by a key that appears in the kernel's trusted keyring.
The special DEFAULT qualifier describes what happens when no rule matches a specific situation. For example, a line like the following would deny everything by default, and might be a logical starting point for a real-world configuration:
DEFAULT action=DENY
If there were a need to allow, for example, the loading of kernel modules by default, the above line could be preceded by something like:
DEFAULT op=KMODULE action=ALLOW
The first non-default rule that applies to a given situation makes the decision for the operation in question, so the ordering of rules is important.
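Putting the pieces above together, a minimal deny-by-default policy might look like this (the rules are illustrative, not a recommended configuration):

```
policy_name="Example lockdown policy" policy_version=0.0.1
DEFAULT action=DENY
DEFAULT op=KMODULE action=ALLOW
op=EXECUTE boot_verified=TRUE action=ALLOW
op=EXECUTE dmverity_signature=TRUE action=ALLOW
```

With this policy, execution is allowed only from the boot-verified initial filesystem or from dm-verity volumes whose root hash carries a trusted signature; module loading is allowed by default, and everything else falls through to the deny rule.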
That is the entire policy language in the current patch set. Since it may be desirable to have IPE active before the system has reached a point where it can load a policy, an initial policy can be specified at build time with the SECURITY_IPE_BOOT_POLICY configuration variable. In the absence of an initial policy, IPE is disabled until a policy is loaded, which is done by writing the policy text to /sys/kernel/security/ipe/new_policy; the policy files must be signed with a key found in the trusted keyring. There can be multiple policies loaded at any given time, which is why they have names; a policy can be selected as the active policy by writing its name to the ipe.active_policy sysctl knob. There does not appear to be any restriction on the ability to switch between policies at run time.
One interesting behavioral quirk is that, by default, IPE will ignore any lines in the policy file that it is not able to parse successfully. This is done in the name of compatibility, but it could also have the effect of causing policies to be changed in an undesirable way by way of a typo. The ipe.strict_parse sysctl knob can be used to turn such mistakes into fatal errors.
See this patch for the documentation file included with IPE.
As is always the case with this sort of technology, IPE can be used for good or evil purposes. The most obvious use case is to lock down consumer devices, preventing their owners from making modifications. But it can also be used to, for example, ensure that a router running OpenWrt continues to run the software the owner put there. IPE, at least, is relatively simple, meaning that setting it up on a private machine is feasible without investing a great deal of time.
As of this writing, IPE has not seen a lot of review, so it's difficult to say when, or in what form, it may be merged into the mainline. From a first reading, though, it doesn't appear that there are a whole lot of reasons to keep it out.
How to unbreak LTTng
Back in February, the kernel community discussed the removal of a couple of functions that could be used by loadable modules to gain access to symbols (functions and data structures) that were not meant to be available to them. That change was merged during the 5.7 merge window. This change will break a number of external modules that depended on the removed functions; since many of those modules are proprietary, this fact does not cause a great deal of anguish in the kernel community. But there are a few out-of-tree modules with GPL-compatible licenses that are also affected by this change; one of those is LTTng. Fixing LTTng may not be entirely straightforward.

LTTng is a tracing subsystem; to carry out that sort of task, it must be able to hook into the kernel in a number of fairly deep places. It is unsurprising that LTTng was accessing parts of the kernel that are not deemed suitable for export to modules in general. Losing access to kallsyms_on_each_symbol() deprived LTTng of the ability to find those addresses, thus breaking much of its functionality. That is not welcome news to those who work on — or use — LTTng.
LTTng developer Mathieu Desnoyers has responded to this change with a patch series exporting a number of new symbols; with those available, LTTng can do what it needs to do without using the rather more general kallsyms_on_each_symbol() function. For example, LTTng needs access to stack_trace_save_user() to be able to save user-space stack traces. It also needs access to functions like task_prio(), disk_name(), and get_pfn_blocks_mask(). LTTng obtains kernel information from tracepoints as well, of course, and that usage will increase as tracepoints replace some of the direct internal accesses that were used before. The patch set raises the number of arguments that can be passed to a BPF program from a tracepoint to an eye-opening 13 (to allow more information to be passed out via a specific tracepoint), but that change may prove to be unnecessary in the end.
Anybody who has watched the kernel community for any period of time can probably guess what sort of reception this patch series received. Christoph Hellwig was characteristically blunt: "Which part of every added export needs an in-tree user did you not get?" The kernel community as a whole is strongly resistant to the idea of adding any sort of support for code that is outside of the kernel repository. Much of that resistance comes from a dislike for proprietary kernel modules in general, but there is a bit more to it than that.
LTTng, being free software, should not be affected by any antipathy for proprietary kernel code. But, as Greg Kroah-Hartman explained, there are still reasons to avoid adding support for free, out-of-tree modules. Once those modules are supported in some way, they add constraints to what kernel developers can do. Internal kernel interfaces can be changed as needed; since all of the users of those interfaces are present in the same code base, they can be changed at the same time. If external modules have to be supported, though, it becomes harder to make such changes, since the users cannot be changed to match. Indeed, it becomes difficult to even know when a change might cause problems elsewhere.
Thus, Kroah-Hartman said:
This all suggests that there is not much of a path forward for LTTng. It is unable to function without access to kernel internals, and that access is being expressly denied.
There is, of course, one other option that was first raised by Steve Rostedt: "I guess we should be open to allowing LTTng modules in the kernel as well". If LTTng were actually a part of the mainline kernel, there would no longer be problems with giving access to the resources that it needs.
This is not a new idea. Numerous attempts have been made to get the LTTng code into the mainline kernel, without success. In the early days, before the kernel had any sort of tracing capability at all, adding that feature was a hard sell. Kernel developers now are heavily dependent on tracing for their own work and would strongly resist any attempt to take that capability away, but it was not that long ago that many of the same developers were unconvinced that tracing was needed at all. During that time, getting any tracing features into the kernel was not easy.
Over time, some low-level LTTng code found its way in, but LTTng as a whole has not followed. More recently, in 2011, LTTng was brought into the staging tree by Kroah-Hartman as a first step toward merging it. That move brought about a great deal of hostility, some of which seems familiar; a rather lengthy thread was set off by an attempt to export task_prio(), for example. In the end, LTTng was pushed back out of the staging tree — as it was before and has been ever since.
So LTTng would appear to be in a difficult position: unable to function outside of the kernel, and unable to be merged. Leaving LTTng broken would cause serious harm to a lot of users, though, and seems unlikely to advance the cause of Linux or free software in general. So perhaps the time has come for something to give. If a handful of symbols truly cannot be exported for this subsystem, perhaps some space could be found in the mainline for a widely used tracing subsystem, even if it somehow duplicates some of the functionality that is already there.
Proactive compaction for the kernel
Many applications benefit significantly from the use of huge pages. However, huge-page allocations often incur a high latency or even fail under fragmented memory conditions. Proactive compaction may provide an effective solution to these problems by doing memory compaction in the background. With my proposed proactive compaction implementation, typical huge-page allocation latencies are reduced by a factor of 70-80 while incurring minimal CPU overhead.
Memory compaction is a feature of the Linux kernel that makes larger, physically contiguous blocks of free memory available. Currently, the kernel uses an on-demand compaction scheme. Whenever a per-node kcompactd thread is woken up, it compacts just enough memory to make available a single page of the needed size. Once a page of that size is made available, the thread goes back to sleep. This pattern of compaction often causes a high latency for higher-order allocations and hurts performance for workloads that need to burst-allocate a large number of huge pages.
Experiments in which compaction was manually triggered on a 32GB system with a fragmented memory state show that the system could be brought to a fairly compacted memory state within one second. Such data suggests that a proactive compaction scheme in the kernel could allow allocating a significant fraction of memory as huge pages while keeping allocation latencies low.
My recent patch provides an implementation of the proactive compaction approach. It exposes a single tunable: /sys/kernel/mm/compaction/proactiveness, which accepts values in the range [0, 100], with a default value of 20. This tunable determines how aggressively the kernel should compact memory in the background. The patch reuses the existing, per-NUMA-node kcompactd threads to do background compaction. Each of these threads periodically calculates a per-node fragmentation score (an indicator of the memory fragmentation) and compares it against a threshold, which is derived from the tunable.
The per-node proactive (background) compaction process is started by its corresponding kcompactd thread when the node's fragmentation score exceeds the high threshold. The compaction process remains active until the node's score falls below the low threshold, or one of the back-off conditions (defined below) is met.
Memory compaction involves bringing together "movable" pages at a zone's end to create larger, physically contiguous, free regions at a zone's beginning. If there are "unmovable" pages, like kernel allocations, spread across the physical address space, this process is less effective. Compaction has a non-trivial system-wide impact as pages belonging to different processes are moved around, which could also lead to latency spikes in unsuspecting applications. Thus, the kernel must not overdo compaction and should have a good back-off strategy.
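As a rough illustration of why unmovable pages limit compaction, here is a toy model (not kernel code; all names are illustrative) in which 'F', 'M', and 'U' stand for free, movable, and unmovable pages. Compaction packs movable pages toward the end of the zone, but unmovable pages stay where they are:

```python
def compact(zone):
    """Toy compaction of a zone, given as a list of 'F' (free),
    'M' (movable), and 'U' (unmovable) pages.  Movable pages are
    migrated toward the end of the zone; unmovable pages cannot be
    moved, so they keep splitting the free space at the start."""
    movable = zone.count("M")
    # Unmovable pages keep their positions; every other slot is
    # initially considered free.
    out = [page if page == "U" else None for page in zone]
    free_slots = [i for i, p in enumerate(out) if p is None]
    # Pack the movable pages into the last free slots.
    for i in free_slots[len(free_slots) - movable:]:
        out[i] = "M"
    return ["F" if p is None else p for p in out]

print(compact(list("MFMFFM")))  # movable pages packed at the end
print(compact(list("MUFM")))    # the 'U' page splits the free space
```

In the second example, the unmovable page at index 1 prevents the creation of a contiguous two-page free block at the start of the zone, even though two pages are free in total.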
The patch implements back-offs in the following scenarios:
- When the current node's kswapd thread is active (to avoid interfering with the reclaim process).
- When there is contention on the per-node lru_lock or per-zone lock (to avoid hurting non-background, latency-sensitive contexts).
- When there is no progress (reduction in node's fragmentation score value) after a round of compaction.
If any of these back-off conditions is true, the proactive compaction process is deferred for a specific time interval.
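The loop and back-off logic described above can be sketched as follows (a simplified Python model with hypothetical names; the real implementation lives in the kernel's kcompactd threads and defers by sleeping rather than returning):

```python
def run_proactive_compaction(score, compact_round, low=80,
                             kswapd_active=lambda: False,
                             lock_contended=lambda: False):
    """Toy model of one node's background compaction pass.

    score() returns the node's current fragmentation score;
    compact_round() performs one round of compaction.  The pass
    runs until the score falls below the low threshold, backing
    off (here: returning "deferred") when reclaim is active,
    locks are contended, or a round makes no progress."""
    while score() > low:
        # Back off while kswapd runs or the locks are contended.
        if kswapd_active() or lock_contended():
            return "deferred"
        before = score()
        compact_round()
        # Back off if this round did not reduce the score.
        if score() >= before:
            return "deferred"
    return "done"
```

For example, a node whose score drops from 95 by 8 per round reaches "done" at a score of 79, while a node whose score never moves is deferred after a single round.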
Per-node fragmentation score and threshold calculation
As noted earlier, this proactive compaction scheme is controlled by a single tunable called proactiveness. All required values like per-node fragmentation score and thresholds are derived from this tunable.
A node's score is in the range [0, 100] and is defined as the sum of all the node's zone scores, where a zone's score (Sz) is defined as:
Sz = (Nz / Nn) * extfrag(zone, HUGETLB_PAGE_ORDER)
where:
- Nz is the total number of pages in the zone.
- Nn is the total number of pages in the zone's parent node.
- extfrag(zone, HUGETLB_PAGE_ORDER) is the external fragmentation with respect to the huge-page order in this zone.
In general, this is the way to calculate external fragmentation with respect to any order:
extfrag(zone, order) = ((Fz - Hz) / Fz) * 100
where:
- Fz is the total number of free pages in the zone.
- Hz is the number of free pages available in blocks of size >= 2^order.
The extfrag value is in the range [0, 100] and is defined as 0 when Fz = 0. Weighting each zone's score by Nz/Nn avoids wasting time trying to compact special zones like ZONE_DMA32 and focuses the effort on zones like ZONE_NORMAL, which manage most of the memory (Nz ≈ Nn). For smaller zones (Nz << Nn), the score tends toward 0 and thus can never cross the low threshold value (defined below).
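The two formulas above can be expressed directly (an illustrative sketch; function and parameter names are my own, not kernel identifiers):

```python
def extfrag(free_pages, free_in_blocks):
    """External fragmentation [0, 100] of a zone with respect to
    some order: ((Fz - Hz) / Fz) * 100, defined as 0 when Fz = 0.
    free_in_blocks is Hz, the free pages sitting in blocks of
    size >= 2^order."""
    if free_pages == 0:
        return 0.0
    return (free_pages - free_in_blocks) / free_pages * 100

def zone_score(zone_pages, node_pages, zone_extfrag):
    """A zone's contribution Sz = (Nz / Nn) * extfrag to its
    node's fragmentation score; a small zone (Nz << Nn)
    contributes almost nothing."""
    return zone_pages / node_pages * zone_extfrag
```

For instance, a zone with 1000 free pages of which only 250 sit in huge-page-sized blocks has an extfrag of 75; if that zone holds all of its node's pages, the node's score is also 75, while a zone holding 1% of the node's pages would contribute less than 1.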
The low (Tl) and high (Th) thresholds, against which these scores are compared, are defined as follows:
Tl = 100 - proactiveness
Th = min(10 + Tl, 100)
These thresholds are in the range [0, 100]. Once a node's score exceeds Th, proactive compaction will be done until the score drops below Tl.
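In code form (illustrative Python, not kernel code), the threshold derivation looks like this; the default proactiveness of 20 yields Tl = 80 and Th = 90:

```python
def thresholds(proactiveness):
    """Derive the low and high watermarks from the proactiveness
    tunable (valid range [0, 100]):
        Tl = 100 - proactiveness
        Th = min(10 + Tl, 100)"""
    low = 100 - proactiveness
    high = min(10 + low, 100)
    return low, high

print(thresholds(20))   # the default setting
```

Note that with proactiveness set to 0, both thresholds sit at 100, so background compaction effectively never starts.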
Performance evaluation
For a true test of memory compaction efficacy, the first step is to fragment the system memory such that no huge pages are directly available for allocation (i.e., an initial fragmentation score of 100 for all NUMA nodes). With the system in such a fragmented memory state, any run of a huge-page-heavy workload would highlight the effects of the kernel's compaction policy. With on-demand compaction, a majority of huge-page allocations hit the direct-compaction path, leading to high allocation latencies. With proactive compaction, the expectation is to avoid these latencies except when the compaction process is unable to catch up with the huge-page allocation rate.
For evaluating proactive compaction, a high level of external fragmentation (as described above) was triggered by a user-space program that allocated almost all memory as huge pages and then freed 75% of pages from each huge-page aligned chunk. All the tests were done on an x86_64 system with 1TB of RAM and two NUMA nodes, using kernel version 5.6.0-rc4. The first huge-page-heavy workload was a test kernel driver that allocated as many huge pages as possible, measuring latency for each allocation, until it hit an allocation failure. The driver was able to allocate almost 98% of free memory as huge pages with or without the patch. However, with the vanilla kernel (without the proactive compaction patch), 95-percentile latency was 33799µs while with the patch (with proactiveness tunable set to 20), this latency was 429µs — a reduction by a factor of 78.
To further measure the performance, a Java heap allocation test was used, which allocated 700GB of heap space, after fragmenting the memory as described above. This workload shows the effect of reduced allocation latencies in the run time of huge-page-heavy workloads. On the vanilla kernel, the run time was ~27 minutes, while with the patch (proactiveness=20), the run time came down to roughly four minutes. Tests with higher proactiveness values did not show any further speedups or slowdowns.
Some questions about the behavior of the proactive kcompactd threads were raised by Vlastimil Babka, who has worked on proactive compaction along the way and helpfully reviewed some of these patches. Those questions led to further investigation: for the Java heap test, per-node fragmentation scores were recorded during the program's runtime, together with the kcompactd threads' CPU usage. The data clearly shows that proactive compaction is active throughout the runtime of the test program, and that a kcompactd thread takes 100% of one CPU core while it is active.
The above plot shows changes in per-node fragmentation scores as a Java process tried to allocate 700GB of heap area on a two-node system with 512GB of memory per node. The proactiveness tunable was set to 20, so the low and high thresholds were 80 and 90, respectively. Before the test program was run, the system memory was fragmented such that no huge pages were available for direct allocation. The heap allocation started on Node-0, where the score rose as huge pages were used. Whenever the score rose above 90, proactive compaction was triggered on the node, bringing the score back down to 80. Eventually, all memory on Node-0 was exhausted (around 90 seconds into the run), at which point allocation shifted to Node-1, where the same compaction pattern repeated.
As described earlier, compaction is an expensive operation, and we don't want to pay the cost unless it is able to reduce the external fragmentation. To evaluate the back-off behavior, another test was created where unmovable pages were scattered throughout the physical address space. With the system in such a memory state, compaction cannot do much apart from warming the CPU. The back-off mechanism implemented in this patch correctly identifies such a situation by checking that a node's score does not go down after a round of compaction. When that happens, further rounds of proactive compaction are deferred for some time.
Future work
The patch has already been through some review cycles, which have helped refine many of its aspects. Some kernel developers recognize the need for more proactive compaction and, given these encouraging numbers, the patch will hopefully be accepted, probably after a few more iterations.
As a future direction, I am focusing on refining the per-node fragmentation score and threshold calculations, which drive the background proactive compaction process. Currently, the score calculation does not take into account per-node characteristics like differing TLB latencies, which would be important in heterogeneous systems. Future patches will likely add scaling factors to both score and threshold calculations to account for these per-node characteristics.
Brief items
Kernel development
Kernel release status
The current development kernel is 5.7-rc2, released on April 19. Linus said: "Everything continues to look fairly normal, with commit counts right in the middle of what you'd expect for rc2. And most of the changes are tiny and don't look scary at all."
Stable updates: 5.6.5, 5.5.18, 5.4.33, and 4.19.116 came out on April 17, followed by 5.6.6, 5.5.19, 5.4.34, and 4.19.117 on April 21. The 5.5.x line ends with 5.5.19, so its users will want to start moving forward to 5.6.
The 5.6.7, 5.4.35, 4.19.118, 4.14.177, 4.9.220, and 4.4.220 updates are in the review process; they are due on April 24.
Garrett: Linux kernel lockdown, integrity, and confidentiality
Matthew Garrett has posted an overview of the kernel lockdown capability merged in 5.4. "If you verify your boot chain but allow root to modify that kernel, the benefits of the verified boot chain are significantly reduced. Even if root can't modify the on-disk kernel, root can just hot-patch the kernel and then make this persistent by dropping a binary that repeats the process on system boot. Lockdown is intended as a mechanism to avoid that, by providing an optional policy that closes off interfaces that allow root to modify the kernel."
Distributions
Debian Project Leader Election 2020 Results
The results are in from this year's Debian project leader election; the winner is Jonathan Carter.
Yocto Project 3.1 LTS (Dunfell 23.0.0)
The Yocto Project has announced its 3.1 LTS release of its distribution-building system. Changes include a 5.4 kernel, the removal of all Python 2 code, improvements in the build equivalence mechanism (described in this article), and more.
Distribution quotes of the week
Development
Python 2.7.18, the end of an era
Python 2.7.18 is out. This is the last release and end of support for Python 2. "Python 2.7 has been under active development since the release of Python 2.6, more than 11 years ago. Over all those years, CPython's core developers and contributors sedulously applied bug fixes to the 2.7 branch, no small task as the Python 2 and 3 branches diverged. There were large changes midway through Python 2.7's life such as PEP 466's feature backports to the ssl module and hash randomization. Traditionally, these features would never have been added to a branch in maintenance mode, but exceptions were made to keep Python 2 users secure. Thank you to CPython's community for such dedication."
Development quotes of the week
[...] Users still on Python 2 can use e to compute the instantaneously compounding interest on their technical debt.
But that would require a number of changes, and I don't think that round corners would get us close to there.
Miscellaneous
How to livestream a conference in just under a week (FSF)
On the FSF blog, Zoe Kooyman describes how the LibrePlanet 2020 conference was converted to a virtual conference in a week's time—using free software, naturally. "In 2016, we gained some livestreaming experience when we interviewed Edward Snowden live from Moscow. To minimize the risk of failed recordings due to overly complex or error-prone software systems, we made it a priority to achieve a pipeline with low latency, good image quality, and low CPU usage. The application we used then was Jitsi Meet, and the tech info and scripts we used for streaming from 2016 are available for your information and inspiration. Naturally, for this year, with no time for researching other applications, we opted to build on our experience with Jitsi Meet. We hosted our own instance for remote speakers to connect to and enter a video call with the conference organizers. A screen capture of this call was then simultaneously recorded by the FSF tech team, and streamed out to the world via Gstreamer and Icecast."
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
- Final Bits from the Outgoing DPL (April 19)
- Bits from the new DPL (April 21)
- DistroWatch Weekly (April 20)
- Lunar Linux Weekly News (April 17)
- openSUSE Tumbleweed Review of the Week (April 17)
- Ubuntu Weekly Newsletter (April 18)
Development
- Emacs News (April 20)
- These Weeks in Firefox (April 17)
- What's cooking in git.git (April 20)
- LLVM Weekly (April 20)
- OCaml Weekly News (April 21)
- Perl Weekly (April 20)
- PostgreSQL Weekly News (April 19)
- Python Weekly Newsletter (April 16)
- Weekly Rakudo News (April 20)
- Ruby Weekly News (April 16)
- Wikimedia Tech News (April 20)
Meeting minutes
- Fedora FESCO meeting minutes (April 20)
Calls for Presentations
CFP Deadlines: April 23, 2020 to June 22, 2020
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| April 23 | May 1 - May 2 | openSUSE Virtual Summit | |
| May 1 | August 26 - August 28 | [Canceled] FOSS4G Calgary | Calgary, Canada |
| May 22 | May 28 - May 31 | MiniDebConf Online | |
| June 5 | September 13 - September 18 | The C++ Conference 2020 | Online |
| June 14 | September 4 - September 11 | Akademy 2020 | Virtual, Online |
| June 15 | August 25 - August 27 | Linux Plumbers Conference | Virtual |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: April 23, 2020 to June 22, 2020
The following event listing is taken from the LWN.net Calendar.
If your event does not appear here, please tell us about it.
Security updates
Alert summary April 16, 2020 to April 22, 2020
| Dist. | ID | Release | Package | Date |
|---|---|---|---|---|
| Arch Linux | ASA-202004-14 | | apache | 2020-04-17 |
| Arch Linux | ASA-202004-15 | | chromium | 2020-04-17 |
| Arch Linux | ASA-202004-13 | | git | 2020-04-15 |
| Arch Linux | ASA-202004-16 | | openvpn | 2020-04-19 |
| Arch Linux | ASA-202004-17 | | webkit2gtk | 2020-04-21 |
| Debian | DLA-2178-1 | LTS | awl | 2020-04-18 |
| Debian | DSA-4660-1 | stable | awl | 2020-04-21 |
| Debian | DLA-2180-1 | LTS | file-roller | 2020-04-18 |
| Debian | DSA-4659-1 | stable | git | 2020-04-20 |
| Debian | DLA-2179-1 | LTS | jackson-databind | 2020-04-18 |
| Debian | DSA-4661-1 | stable | openssl | 2020-04-21 |
| Debian | DLA-2181-1 | LTS | shiro | 2020-04-19 |
| Debian | DSA-4658-1 | stable | webkit2gtk | 2020-04-16 |
| Fedora | FEDORA-2020-70fa57d566 | F30 | cacti | 2020-04-15 |
| Fedora | FEDORA-2020-c1745db1aa | F31 | cacti | 2020-04-15 |
| Fedora | FEDORA-2020-70fa57d566 | F30 | cacti-spine | 2020-04-15 |
| Fedora | FEDORA-2020-c1745db1aa | F31 | cacti-spine | 2020-04-15 |
| Fedora | FEDORA-2020-b2df49bb01 | F30 | chromium | 2020-04-15 |
| Fedora | FEDORA-2020-161c87cbc7 | F31 | chromium | 2020-04-18 |
| Fedora | FEDORA-2020-68ab318468 | F30 | firefox | 2020-04-16 |
| Fedora | FEDORA-2020-cdef88bb89 | F31 | git | 2020-04-18 |
| Fedora | FEDORA-2020-97e8a67945 | F31 | golang-github-buger-jsonparser | 2020-04-15 |
| Fedora | FEDORA-2020-73c00eda1c | F30 | kernel | 2020-04-15 |
| Fedora | FEDORA-2020-73c00eda1c | F30 | kernel-headers | 2020-04-15 |
| Fedora | FEDORA-2020-73c00eda1c | F30 | kernel-tools | 2020-04-15 |
| Fedora | FEDORA-2020-5a77f0d68f | F31 | libssh | 2020-04-18 |
| Fedora | FEDORA-2020-68ab318468 | F30 | nss | 2020-04-16 |
| Fedora | FEDORA-2020-b6dbdc3071 | F31 | thunderbird | 2020-04-16 |
| Mageia | MGASA-2020-0174 | 7 | chromium-browser-stable | 2020-04-17 |
| Mageia | MGASA-2020-0175 | 7 | git | 2020-04-17 |
| Mageia | MGASA-2020-0178 | 7 | php | 2020-04-20 |
| Mageia | MGASA-2020-0176 | 7 | python-bleach | 2020-04-20 |
| Mageia | MGASA-2020-0177 | 7 | webkit2 | 2020-04-20 |
| openSUSE | openSUSE-SU-2020:0523-1 | | ansible | 2020-04-16 |
| openSUSE | openSUSE-SU-2020:0540-1 | | chromium | 2020-04-19 |
| openSUSE | openSUSE-SU-2020:0541-1 | 15.1 | chromium | 2020-04-20 |
| openSUSE | openSUSE-SU-2020:0524-1 | 15.1 | git | 2020-04-16 |
| openSUSE | openSUSE-SU-2020:0534-1 | | gnuhealth | 2020-04-17 |
| openSUSE | openSUSE-SU-2020:0535-1 | 15.1 | gstreamer-rtsp-server | 2020-04-17 |
| openSUSE | openSUSE-SU-2020:0539-1 | | mp3gain | 2020-04-19 |
| openSUSE | openSUSE-SU-2020:0522-1 | 15.1 | mp3gain | 2020-04-15 |
| Oracle | ELSA-2020-1379 | OL8 | container-tools:ol8 | 2020-04-15 |
| Oracle | ELSA-2020-1508 | OL6 | java-1.7.0-openjdk | 2020-04-21 |
| Oracle | ELSA-2020-1506 | OL6 | java-1.8.0-openjdk | 2020-04-21 |
| Oracle | ELSA-2020-1317 | OL8 | nodejs:10 | 2020-04-15 |
| Oracle | ELSA-2020-1489 | OL7 | thunderbird | 2020-04-16 |
| Oracle | ELSA-2020-1489 | OL7 | thunderbird | 2020-04-19 |
| Oracle | ELSA-2020-1495 | OL8 | thunderbird | 2020-04-19 |
| Oracle | ELSA-2020-1497 | OL8 | tigervnc | 2020-04-19 |
| Oracle | ELSA-2020-1358 | OL8 | virt:ol | 2020-04-15 |
| Red Hat | RHSA-2020:1487-01 | EL6 | chromium-browser | 2020-04-16 |
| Red Hat | RHSA-2020:1504-01 | EL6 | chromium-browser | 2020-04-21 |
| Red Hat | RHSA-2020:1511-01 | EL7 | git | 2020-04-21 |
| Red Hat | RHSA-2020:1513-01 | EL8 | git | 2020-04-21 |
| Red Hat | RHSA-2020:1518-01 | EL8.0 | git | 2020-04-21 |
| Red Hat | RHSA-2020:1510-01 | EL7.6 | http-parser | 2020-04-21 |
| Red Hat | RHSA-2020:1486-01 | EL7.5 | ipmitool | 2020-04-16 |
| Red Hat | RHSA-2020:1508-01 | EL6 | java-1.7.0-openjdk | 2020-04-21 |
| Red Hat | RHSA-2020:1507-01 | EL7 | java-1.7.0-openjdk | 2020-04-21 |
| Red Hat | RHSA-2020:1506-01 | EL6 | java-1.8.0-openjdk | 2020-04-21 |
| Red Hat | RHSA-2020:1512-01 | EL7 | java-1.8.0-openjdk | 2020-04-21 |
| Red Hat | RHSA-2020:1515-01 | EL8 | java-1.8.0-openjdk | 2020-04-22 |
| Red Hat | RHSA-2020:1516-01 | EL8.0 | java-1.8.0-openjdk | 2020-04-22 |
| Red Hat | RHSA-2020:1509-01 | EL7 | java-11-openjdk | 2020-04-21 |
| Red Hat | RHSA-2020:1514-01 | EL8 | java-11-openjdk | 2020-04-21 |
| Red Hat | RHSA-2020:1517-01 | EL8.0 | java-11-openjdk | 2020-04-22 |
| Red Hat | RHSA-2020:1524-01 | EL6 | kernel | 2020-04-22 |
| Red Hat | RHSA-2020:1493-01 | EL7 | kernel-alt | 2020-04-16 |
| Red Hat | RHSA-2020:1505-01 | EL7.5 | qemu-kvm-ma | 2020-04-21 |
| Red Hat | RHSA-2020:1503-01 | SCL | rh-git218-git | 2020-04-21 |
| Red Hat | RHSA-2020:1523-01 | SCL | rh-maven35-jackson-databind | 2020-04-21 |
| Red Hat | RHSA-2020:1488-01 | EL6 | thunderbird | 2020-04-16 |
| Red Hat | RHSA-2020:1489-01 | EL7 | thunderbird | 2020-04-16 |
| Red Hat | RHSA-2020:1495-01 | EL8 | thunderbird | 2020-04-16 |
| Red Hat | RHSA-2020:1496-01 | EL8.0 | thunderbird | 2020-04-16 |
| Red Hat | RHSA-2020:1497-01 | EL8 | tigervnc | 2020-04-16 |
| Scientific Linux | SLSA-2020:1021-1 | SL7 | GNOME | 2020-04-20 |
| Scientific Linux | SLSA-2020:1180-1 | SL7 | ImageMagick | 2020-04-20 |
| Scientific Linux | SLSA-2020:1037-1 | SL7 | advancecomp | 2020-04-20 |
| Scientific Linux | SLSA-2020:1176-1 | SL7 | avahi | 2020-04-20 |
| Scientific Linux | SLSA-2020:1113-1 | SL7 | bash | 2020-04-20 |
| Scientific Linux | SLSA-2020:1061-1 | SL7 | bind | 2020-04-20 |
| Scientific Linux | SLSA-2020:1101-1 | SL7 | bluez | 2020-04-20 |
| Scientific Linux | SLSA-2020:1050-1 | SL7 | cups | 2020-04-20 |
| Scientific Linux | SLSA-2020:1020-1 | SL7 | curl | 2020-04-20 |
| Scientific Linux | SLSA-2020:1062-1 | SL7 | dovecot | 2020-04-20 |
| Scientific Linux | SLSA-2020:1034-1 | SL7 | doxygen | 2020-04-20 |
| Scientific Linux | SLSA-2020:1080-1 | SL7 | evolution | 2020-04-20 |
| Scientific Linux | SLSA-2020:1011-1 | SL7 | expat | 2020-04-20 |
| Scientific Linux | SLSA-2020:1022-1 | SL7 | file | 2020-04-20 |
| Scientific Linux | SLSA-2020:1338-1 | SL7 | firefox | 2020-04-20 |
| Scientific Linux | SLSA-2020:1420-1 | SL7 | firefox | 2020-04-20 |
| Scientific Linux | SLSA-2020:1138-1 | SL7 | gettext | 2020-04-20 |
| Scientific Linux | SLSA-2020:1511-1 | SL7 | git | 2020-04-21 |
| Scientific Linux | SLSA-2020:1121-1 | SL7 | httpd | 2020-04-20 |
| Scientific Linux | SLSA-2020:1508-1 | SL6 | java-1.7.0-openjdk | 2020-04-21 |
| Scientific Linux | SLSA-2020:1507-1 | SL7 | java-1.7.0-openjdk | 2020-04-21 |
| Scientific Linux | SLSA-2020:1506-1 | SL6 | java-1.8.0-openjdk | 2020-04-21 |
| Scientific Linux | SLSA-2020:1512-1 | SL7 | java-1.8.0-openjdk | 2020-04-21 |
| Scientific Linux | SLSA-2020:1509-1 | SL7 | java-11-openjdk | 2020-04-21 |
| Scientific Linux | SLSA-2020:1524-1 | SL6 | kernel | 2020-04-22 |
| Scientific Linux | SLSA-2020:1016-1 | SL7 | kernel | 2020-04-20 |
| Scientific Linux | SLSA-2020:1045-1 | SL7 | lftp | 2020-04-20 |
| Scientific Linux | SLSA-2020:1051-1 | SL7 | libosinfo | 2020-04-20 |
| Scientific Linux | SLSA-2020:1189-1 | SL7 | libqb | 2020-04-20 |
| Scientific Linux | SLSA-2020:1151-1 | SL7 | libreoffice | 2020-04-20 |
| Scientific Linux | SLSA-2020:1185-1 | SL7 | libsndfile | 2020-04-20 |
| Scientific Linux | SLSA-2020:1190-1 | SL7 | libxml2 | 2020-04-20 |
| Scientific Linux | SLSA-2020:1054-1 | SL7 | mailman | 2020-04-20 |
| Scientific Linux | SLSA-2020:1100-1 | SL7 | mariadb | 2020-04-20 |
| Scientific Linux | SLSA-2020:1003-1 | SL7 | mod_auth_mellon | 2020-04-20 |
| Scientific Linux | SLSA-2020:1126-1 | SL7 | mutt | 2020-04-20 |
| Scientific Linux | SLSA-2020:1167-1 | SL7 | nbdkit | 2020-04-20 |
| Scientific Linux | SLSA-2020:1081-1 | SL7 | net-snmp | 2020-04-20 |
| Scientific Linux | SLSA-2020:1173-1 | SL7 | okular | 2020-04-20 |
| Scientific Linux | SLSA-2020:1112-1 | SL7 | php | 2020-04-20 |
| Scientific Linux | SLSA-2020:1135-1 | SL7 | polkit | 2020-04-20 |
| Scientific Linux | SLSA-2020:1074-1 | SL7 | poppler and evince | 2020-04-20 |
| Scientific Linux | SLSA-2020:1131-1 | SL7 | python | 2020-04-20 |
| Scientific Linux | SLSA-2020:1091-1 | SL7 | python-twisted-web | 2020-04-20 |
| Scientific Linux | SLSA-2020:1132-1 | SL7 | python3 | 2020-04-20 |
| Scientific Linux | SLSA-2020:1116-1 | SL7 | qemu-kvm | 2020-04-20 |
| Scientific Linux | SLSA-2020:1208-1 | SL7 | qemu-kvm | 2020-04-20 |
| Scientific Linux | SLSA-2020:1172-1 | SL7 | qt | 2020-04-20 |
| Scientific Linux | SLSA-2020:1000-1 | SL7 | rsyslog | 2020-04-20 |
| Scientific Linux | SLSA-2020:1084-1 | SL7 | samba | 2020-04-20 |
| Scientific Linux | SLSA-2020:1068-1 | SL7 | squid | 2020-04-20 |
| Scientific Linux | SLSA-2020:1175-1 | SL7 | taglib | 2020-04-20 |
| Scientific Linux | SLSA-2020:1334-1 | SL7 | telnet | 2020-04-20 |
| Scientific Linux | SLSA-2020:1036-1 | SL7 | texlive | 2020-04-20 |
| Scientific Linux | SLSA-2020:1488-1 | SL6 | thunderbird | 2020-04-16 |
| Scientific Linux | SLSA-2020:1489-1 | SL7 | thunderbird | 2020-04-20 |
| Scientific Linux | SLSA-2020:1181-1 | SL7 | unzip | 2020-04-20 |
| Scientific Linux | SLSA-2020:1047-1 | SL7 | wireshark | 2020-04-20 |
| Scientific Linux | SLSA-2020:1178-1 | SL7 | zziplib | 2020-04-20 |
| Slackware | SSA:2020-106-01 | | bind | 2020-04-15 |
| Slackware | SSA:2020-112-01 | | git | 2020-04-21 |
| Slackware | SSA:2020-107-01 | | openvpn | 2020-04-16 |
| SUSE | SUSE-SU-2020:14342-1 | SLE11 | apache2 | 2020-04-21 |
| SUSE | SUSE-SU-2020:1018-1 | OS8 SLE12 SES5 | freeradius-server | 2020-04-17 |
| SUSE | SUSE-SU-2020:1020-1 | SLE12 | freeradius-server | 2020-04-17 |
| SUSE | SUSE-SU-2020:1023-1 | SLE15 | freeradius-server | 2020-04-17 |
| SUSE | SUSE-SU-2020:1021-1 | SLE12 | libqt4 | 2020-04-17 |
| SUSE | SUSE-SU-2020:1058-1 | SLE12 | openssl-1_1 | 2020-04-21 |
| SUSE | SUSE-SU-2020:1057-1 | SLE12 | puppet | 2020-04-21 |
| SUSE | SUSE-SU-2020:1009-1 | | quartz | 2020-04-16 |
| SUSE | SUSE-SU-2020:1027-1 | SLE15 | thunderbird | 2020-04-17 |
| Ubuntu | USN-4336-1 | 18.04 | binutils | 2020-04-22 |
| Ubuntu | USN-4332-1 | 16.04 18.04 19.10 | file-roller | 2020-04-20 |
| Ubuntu | USN-4334-1 | 16.04 18.04 19.10 | git | 2020-04-21 |
| Ubuntu | USN-4330-1 | 12.04 14.04 16.04 18.04 19.10 | php5, php7.0, php7.2, php7.3 | 2020-04-15 |
| Ubuntu | USN-4333-1 | 12.04 14.04 16.04 18.04 19.10 | python2.7, python3.4, python3.5, python3.6, python3.7 | 2020-04-21 |
| Ubuntu | USN-4335-1 | 16.04 | thunderbird | 2020-04-21 |
| Ubuntu | USN-4331-1 | 18.04 19.10 | webkit2gtk | 2020-04-20 |
Kernel patches of interest
Kernel releases
Architecture-specific
Build system
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Documentation
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Page editor: Rebecca Sobol
