
LWN.net Weekly Edition for October 24, 2019

Welcome to the LWN.net Weekly Edition for October 24, 2019

This edition contains the following feature content:

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

BPF and the realtime patch set

By Jake Edge
October 23, 2019

Back in July, Linus Torvalds merged a patch in the 5.3 merge window that added the PREEMPT_RT option to the kernel build-time configuration. That was meant as a signal that the realtime patch set was moving from its longtime status as out-of-tree code to a fully supported kernel feature. As the code behind the configuration option makes its way into the mainline, some friction can be expected; we are seeing a bit of that now with respect to the BPF subsystem.

The thread started with a patch posted by Sebastian Andrzej Siewior to the BPF mailing list. The patch mentioned three problems with BPF running when realtime is enabled and added Kconfig directives to only allow BPF to be configured into kernels that did not have PREEMPT_RT:

Disable BPF on PREEMPT_RT because
  • it allocates and frees memory in atomic context
  • it uses up_read_non_owner()
  • BPF_PROG_RUN() expects to be invoked in non-preemptible context

Siewior said that he had tried to address the memory allocation problems "but I have no idea how to address the other two issues". In that thread, he also gave an overview of what is needed to "play nicely" with the realtime patch set.

Daniel Borkmann replied that the simple approach Siewior took would not actually disable all of BPF, as there are other BPF-using subsystems that would not be affected by the change. Siewior asked for feedback on one possible way to solve that, but David Miller made it clear that he does not think this approach makes sense: "Turning off BPF just because PREEMPT_RT is enabled is a non-starter, it is absolutely essential functionality for a Linux system at this point."

However, as Siewior said, there are fundamental incompatibilities between the implementation of BPF and the needs of the realtime patch set. Thomas Gleixner provided more detail on the problem areas in the hopes of finding other ways to deal with them:

  • #1) BPF disables preemption unconditionally with no way to do a proper RT substitution like most other infrastructure in the kernel provides via spinlocks or other locking primitives.
  • #2) BPF does allocations in atomic contexts, which is a dubious decision even for non RT. That's related to #1
  • #3) BPF uses the up_read_non_owner() hackery which was only invented to deal with already existing horrors and not meant to be proliferated.

Miller replied that BPF is needed by systemd and the IR drivers already; "We're moving to the point where even LSM modules will be implemented in bpf." In his earlier message, he said that turning off BPF would disable any packet sniffing, so that tcpdump and Wireshark would not function. To a certain extent, he oversold the need for BPF, as Gleixner pointed out; Gleixner was running Debian testing with Siewior's patch applied and was not encountering any systemd or other difficulties. Furthermore, even though packet sniffing without BPF requires a copy to user space for each packet, it does still work, Gleixner said, so that is not really an argument for requiring BPF either.

Beyond that, though, he was really looking for feedback on "how to tackle these issues on a technical level". Some of that did start to come about in a sub-thread with BPF maintainer Alexei Starovoitov, who wondered about disabling preemption and noted that he is a "complete noob in RT". Gleixner explained the situation at some length.

Essentially, the realtime kernel cannot disable preemption or interrupts for arbitrarily long periods of time. The realtime patches substitute realtime-aware locks for the spinlocks and rwlocks that do disable preemption or interrupts in the non-realtime kernel. Those realtime-aware locks can sleep, however, so they cannot be used from within code sections that have explicitly disabled preemption or interrupts.

As Starovoitov explained, BPF disables preemption "because of per-cpu maps and per-cpu data structures that are shared between bpf program execution and kernel execution". But, he said, BPF does not call into code that might sleep, so there should be no problems on that score. But that is only when looking at the BPF code from a non-realtime perspective, Gleixner said; because of the lock substitution, code that does not look like it could sleep actually can sleep since the realtime locks (e.g. sleeping spinlocks) do so. That's what makes using preempt_disable() (and local_irq_disable()) problematic in the realtime context. He said that the local_lock() mechanism in the realtime tree might be a way forward to better handle the explicit preemption disabling in BPF.
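As a rough illustration of the direction Gleixner pointed to, here is a sketch of how a preempt_disable() section might be converted to local_lock(). This is hypothetical kernel-style code, not an actual patch: the macro names follow the local_lock() API carried in the realtime tree at the time (they changed before the mechanism reached the mainline), and the lock name bpf_run_lock is invented for illustration.

```c
/* Hypothetical sketch only -- not an actual BPF patch. */
static DEFINE_LOCAL_IRQ_LOCK(bpf_run_lock);

/* Today: preemption is disabled outright around program execution,
 * which forbids taking RT's sleeping spinlocks inside the section. */
preempt_disable();
/* ... run BPF program, touch per-CPU maps ... */
preempt_enable();

/* With a local lock instead: on a non-realtime kernel this compiles
 * down to the same preempt_disable(); on PREEMPT_RT it becomes a
 * per-CPU sleeping lock, so the section stays serialized on each CPU
 * while remaining preemptible. */
local_lock(bpf_run_lock);
/* ... run BPF program, touch per-CPU maps ... */
local_unlock(bpf_run_lock);
```

The attraction of this scheme is that the non-realtime case pays nothing for it, while the realtime case gets a lock that the RT substitution machinery can reason about.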

But, he said, there is still the outstanding problem of BPF making calls to up_read_non_owner(), which allows a read-write semaphore (rwsem) to be unlocked by a process that is not the owner of the lock. That breaks the realtime requirement that the locker is the same as the unlocker in order to deal with priority inheritance correctly.
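To make that problem concrete, here is a hedged sketch of the pattern in question; it is illustrative rather than actual BPF code, though BPF's stack-map build-ID lookup does something along these lines with mmap_sem:

```c
/* Illustrative sketch (not actual BPF code).  One context acquires
 * the rwsem for reading... */
down_read(&mm->mmap_sem);

/* ...and schedules deferred work that releases it later, from a
 * different context that never acquired it. */
up_read_non_owner(&mm->mmap_sem);
```

Because the releasing context is not the owner, a priority-inheritance-aware rwsem on PREEMPT_RT cannot tell whose priority to boost while the lock is held, which is exactly the requirement the realtime patches impose.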

Starovoitov also said that BPF does not have unbounded runtime within the preemption-disabled sections, since it has a bound on the number of instructions that can be in a BPF program. But the limit on the number of instructions was recently raised from 4096 to one million, which will result in unacceptable preemption-disabled windows, as Gleixner noted:

Assuming a instruction/cycle ratio of 1.0 and a CPU frequency of 2GHz, that's 500us of preempt disabled time. Out of bounds by at least one order of [magnitude] for a lot of RT scenarios.

Even the earlier limit of 4096 would result in 2µs of preemption-disabled time, which may be problematic, Clark Williams said; "[...] there are some customer cases on the horizon where 2us would be a significant fraction of their max latency".

The local_lock() scheme seemed viable to Starovoitov, but he thought the overall approach taken by Siewior's patch was backward:

But reading your other replies the gradual approach we're discussing here doesn't sound acceptable ? And you guys insist on disabling bpf under RT just to merge some out of tree code ? I find this rude and not acceptable.

If RT wants to get merged it should be disabled when BPF is on and not the other way around.

But Gleixner did not see things that way at all; he noted that he was planning to investigate local locks as a possible way forward and that there was no insistence on anything. In addition, turning off realtime when BPF was enabled was always an option; "[...] I could have done that right away without even talking to you. That'd have been dishonest and sneaky." He also lamented that the discussion had degraded to that point.

For his part, Starovoitov said that he simply thinks that disabling an existing in-kernel feature in order to ease the path for the realtime patches is likely to "backfire on RT". He suggested that getting the code upstream without riling other subsystem developers would be a better way forward:

imo it's better to get key RT bits (like local_locks) in without sending contentious patches like the one that sparked this thread. When everyone can see what this local_lock is we can figure out how and when to use it.

That's where things were left at the time of this writing. It is not clear that the practical effect of which subsystem disables the other makes any real difference to users. For now, users of BPF will not be able to use the realtime patches and vice versa. Fixing the underlying problems that prevent the two from coexisting certainly seems more important than squabbling over who disables whom; after all, there will be users who want both in their systems. For those who run mainline kernels, though, that definitely cannot happen until realtime gets upstream; once it does, a piecemeal approach to resolving the incompatibilities between BPF and realtime can commence.

Comments (23 posted)

Changing the Python release cadence

By Jake Edge
October 23, 2019

There has been discussion about the release cadence of Python for a couple of years now. The 18-month cycle between major releases of the language is seen by some core developers as causing too much delay in getting new features into the hands of users. Now there are two competing proposals for ways to shorten that cycle, either to one year or by creating a rolling-release model. In general, the steering council has seemed inclined toward making some kind of release-cycle change—one of those Python Enhancement Proposals (PEPs) may well form the basis of Python's release cadence moving forward.

Some history

The idea had undoubtedly been discussed before, but the current push began with a 2018 Python Language Summit presentation on changing to a yearly major release that was given by Łukasz Langa—the release manager for Python 3.8 and 3.9. Roughly a year later, he proposed doubling the release frequency in the first posting of PEP 596 ("Python 3.9 Release Schedule"). That would mean a major Python release every nine months. Around the time that PEP 596 was being discussed, Nick Coghlan created PEP 598 ("Introducing minor feature releases"), which was meant to speed up feature releases; it was an early progenitor of the rolling-release model.

Nine months, as proposed in PEP 596, was too fast of a cycle for many, but the steering council did indicate support for a change of the release cycle; a one-year cycle was seemingly a better fit for most commenters. The council thought that the 3.9 schedule PEP was the wrong place to propose this kind of change, however; it suggested that a new PEP be created to propose the cadence change. To that end, Langa created PEP 602 ("Annual Release Cycle for Python"), returning to his original one-year cadence that was better received than the shorter cycle.

In the discussion of the PEP, Steve Dower suggested an even faster pace of continuous alpha releases. Coghlan was intrigued by Dower's proposal, so they teamed up to create PEP 605 ("A rolling feature release stream for CPython"). When it was posted for discussion, Coghlan noted that he had withdrawn his earlier proposal in favor of the new one.

PEP 605 itself is something of a wall of text, but it is also accompanied by a lengthy design discussion posting. As part of the discussion on both PEPs, though, it became clear that there was some confusion about the background and, in particular, what problems they were both trying to solve. That, naturally, led to yet another PEP, which was authored by Langa, Dower, and Coghlan: PEP 607 ("Reducing CPython's Feature Delivery Latency"). Looking at that PEP would seem to be a good starting point for disentangling the various pieces here.

Rationale

PEP 607 lays out a number of reasons for "delivering smaller collections of features to Python’s users more frequently"; it also describes the reasoning for making major releases at roughly the same time of year (every year for PEP 602 and every other year for PEP 605). The size of the batches of features that are delivered is one of the areas that the PEPs are trying to address. Right now, roughly 18 months' worth of changes are delivered at once, which increases the chances that there will be problems for users while also complicating the process of tracking down where any problem came from because there are more interacting features to wade through. PEP 602 simply reduces the 18-month window to a 12-month window, while PEP 605 would regularly deliver new features every two months for "a subset of Python’s user base that opts in to running a rolling stream of beta releases".

Then there is the problem of latency between the development of features and their delivery to users. Given that missing a particular major release means that a feature will languish for another 18 months, developers may push changes before they are truly ready. Similarly, features that could be in the hands of users are simply hanging around in the repository unused. Once again, the PEP 602 solution is simply shortening the cycle; PEP 605's approach is described this way:

PEP 605 proposes to address it by actively creating a community of Python users that regularly install and use CPython beta releases, providing an incentive for core developers to start shipping changes earlier in the pre-release cycle, in order to obtain feedback before the feature gets locked down in a stable release.

Regularity in the release schedule is helpful to coordinate with other projects (e.g. Linux distributions) and events in the Python world (e.g. PyCon US, core developer sprints), but it is also helpful to regular users—and developers. Rather than consulting a calendar or PEP, the dates of interest will be predictable: major releases in October every year for PEP 602 or August of every other year for PEP 605. Other dates (feature freezes and the like) can be derived from that baseline.

It is also generally the case that major changes in the CPython interpreter or the standard library are not actually tested by users until after a major release. That means the feedback on new features is limited until after they have been solidified, requiring a multi-year deprecation cycle to correct any problems. Both of the PEPs are targeting ways to get feedback earlier in the cycle. PEP 602 would start the alpha period for a new release immediately after the major release (i.e. October of each year), while PEP 605 has a wider scope:

PEP 605 proposes to address this problem by actively promoting adoption of CPython pre-releases for running production workloads (not just for library and application compatibility testing), and adjusting the pre-release management process as necessary to make that a reasonable thing to do.

There are, of course, risks when changing the release cadence, some of which both PEPs have considered. Users and distributors may either skip some major releases or may update every time the language does. PEP 602 may cause some of the "skippers" to skip two releases instead of just one, but there is a prolonged deprecation cycle that will hopefully ensure that doing so will not miss out on deprecation warnings. For non-skippers, the support periods will remain roughly the same as they are today under PEP 602 (roughly 18 months of full support plus 42 months of security-fix-only support).

PEP 605 takes a different approach, obviously. Skippers may simply end up moving to the non-skipper bucket since there will be 24 months between major releases. Non-skippers should find that each release will have prolonged full support (roughly 24 months) with a shorter period of bug-fix-only support (36 months) added on. In addition, the idea is that the two-month beta releases will be more widely used, thus tested, which will lead to "more stable final releases with wider ecosystem support at launch".

Meanwhile, those who follow the CPython master branch should see no real impact, though PEP 605 is trying to entice them into switching to a better tested rolling beta stream. Third-party libraries are obviously a major component of the Python ecosystem; these include things like NumPy, SciPy, Django, and all manner of libraries available in the Python Package Index (PyPI). For those, the major pain point is the number of different Python versions that need to be simultaneously supported. This is an area where it would seem that PEP 605 has a clear advantage in that there will be fewer major releases made. The overall support period for a given release is the same in each PEP (60 months), so there will be roughly twice as many active major releases under PEP 602, though it will only be a minor increase from the current situation.

Which is not to say that PEP 605 is without flaws, though. It is a big change from what the Python ecosystem is used to, while PEP 602 is a fairly straightforward shortening of the existing state of affairs. The main complaint in the PEP 602 discussion thread (aside from those that spawned PEP 605) was about the October date, which is poorly situated for Fedora. For the most part, commenters were either in favor or neutral. In fact, most of the comments were about the still-under-discussion ideas that eventually resulted in PEP 605.

Rolling ... or roiling?

In the PEP 605 discussion, things were more mixed. Langa, unsurprisingly, was opposed to the approach. Paul Moore was skeptical that the larger projects in the ecosystem would participate in the rolling releases, leading to simply a slower release cycle. The term "beta" was discussed, as well, with some wondering if it had baggage that would worry potential adopters. But Coghlan said that there may be some confusion about what he and Dower are aiming for:

There seems to be a fundamental disconnect here, where folks seem to think Steve and I want everyone to start using the rolling releases. We don’t - we only want folks that are thoroughly dissatisfied with the slower full release cadence to use them.

[...]

Globally, my guess is that the latter group would number somewhere in the thousands (somewhere between 0.1% and 1% of our user numbers), which should be sufficient for more robust pre-release feedback. It wouldn’t be the millions that use the stable releases, but that’s a large part of the point: encouraging a larger group of self-selected early adopters to contribute to improving API designs before we lock them down in a stable release for the rest of our multiple-orders-of-magnitude larger overall user community.

PEP 605 proposes that the two-month rolling releases be broken up into alpha releases, which could contain changes to the CPython ABI, and beta releases that would maintain ABI compatibility. That would mean that C-language extensions might need to be reworked and rebuilt for the alpha releases, but should not need to be for the beta releases. The "alpha" and "beta" terms are being used rather differently than most expect. It also leads to a rather unexpected pattern of release numbers, both of which may be contributing to the confusion. For example, the first 3.9 rolling release would presumably be an alpha, thus 3.9.0a1; after that, there would likely be betas for the next few bimonthly releases: 3.9.0b2, 3.9.0b3, ... If there were an ABI-breaking change for the fourth release, though, it would be 3.9.0a4.

That unconventional pattern of release versions was considered potentially confusing by several in the discussion. Fedora Python maintainer Miro Hrončok noted that the distribution would have to hack things to make the versions sort correctly. Moore agreed:

Overall, mixing alphas and betas like this seems like a fairly gratuitous violation of an extremely common convention, and while we can argue all we like that it won’t affect anyone in practice, it’s still going to break a lot of assumptions.

The discussion continues as of this writing. At some point, the council will need to decide between the two—or to stay with the status quo. As council member Brett Cannon said in early October, there will be a need to make a decision fairly soon in order to nail down the 3.9 schedule, which effectively started running on October 14 with the release of Python 3.8. There is also an upcoming election for a new steering council, which will "basically freeze up PEP decision making until mid-December". It should be noted that Coghlan is also a member of the council, but seems likely to recuse himself from a decision on a PEP that he has co-authored.

Though the rolling model seems like a huge departure, it really only affects a self-selected subset of the Python community. The 24-month major release cycle will certainly have more widespread effects, but is not a huge stretch. In fact, while Python has been released on an 18-month cycle for some time, it is documented to be an 18-24-month cycle, so PEP 605 does not really make any changes to that—just to "recent" practice. One might guess that the council will lean toward the least "radical sounding" proposal, thus PEP 602, but one never knows. Langa is the Python 3.9 release manager, which may also be a factor, but certainly PEP 605 and the status quo should not be counted out. It will be interesting to see how it all plays out.

Comments (10 posted)

"git request-pull" and confusing diffstats

By Jonathan Corbet
October 21, 2019
When a kernel subsystem maintainer has a set of commits to send up the chain toward the mainline, the git request-pull command is usually the right tool for the job. But various maintainers have noticed over the years that this command can sometimes generate confusing results when confronted with anything but the simplest of histories. A brief conversation on the linux-kernel mailing list delved into why this situation comes about and what maintainers can do in response.

While git request-pull is intended to be a general-purpose command, it's no secret that its output is designed for the needs of one specific consumer; it contains almost everything Linus Torvalds needs to know when considering a request to pull a tree into the mainline kernel. This information includes the commit upon which the tree is based, the timestamp for the most recent commit in the tree, the tree to pull the commits from, the maintainer's description of the changes, a list of commits (in the git shortlog format), and a diffstat showing which files will be touched. See a typical pull request to see how all of those elements are worked into a single email.

That example is generated from a relatively straightforward development history that looks something like this:

[commit stream]

Generating both the commit log and the diffstat for this history is relatively straightforward, and the pull request looks exactly as one would expect.

Recently, Will Deacon ran into a more complex situation, though. His tree was initially based on 5.4-rc1, but then required a merge of 5.4-rc2 to obtain the dependencies for a fix. The history ended up looking something like this:

[commit stream 2]

When one runs git request-pull for a tree like this, the commit-log portion will look exactly as expected — it contains the commits in the local tree. The diffstat, though, is likely to reflect a large number of unrelated changes, making the pull request look like a scary beast indeed. In essence, that diffstat will reflect not just the local changes, but also everything that was pulled into the local tree when 5.4-rc2 was merged. That makes it hard, at best, to see what the pull request will actually do.

Deacon was surprised that the commit log was correct while the diffstat was wrong. Torvalds explained it this way:

So logs and diffs are fundamentally different.

A log is an operation on a _set_ of commits (that's the whole point - you don't list the beginning and the end, you list all the commits in between), while a diff is fundamentally an operation on two end-points and shows the code difference between those two points.

And the summary versions of those operations (shortlog and diffstat) are no different.

He went on to say that, when there are only two endpoints and the history is simple, it is not hard for a tool like Git to calculate the difference between them. Throwing another merge into the mix complicates the situation, though, by adding another endpoint. The end result is the useless diffstat included in the pull request.

Deacon resolved the issue by merging the current mainline with the tree to be pulled:

[commit stream 3]

The diffstat could then be generated manually, choosing the mainline and the merged tree as the two endpoints; the only differences visible will be those that are not in the current mainline — the changes to be pulled from Deacon's tree, in other words. The clean diffstat was then manually patched into the pull request. The merge itself can then be thrown away; it should not be part of the commit stream sent upstream. As Torvalds explained, performing the merge reduced the diffstat problem back to two simple endpoints, so the result became as one would expect.

Should maintainers perform this sort of dance before sending pull requests upstream? It seems that Torvalds appreciates the effort; he described the result as a "good quality" pull request. He also noted, though, that he often gets pull requests with confusing diffstats and doesn't really have a hard time dealing with them. Still, maintainers who want to be sure that their pull requests are as pleasing to the recipient as possible may want to go to the extra effort.

The best solution, of course, would be to fix git request-pull to do the right thing in this sort of situation. Depending on how complex the merge history is, though, "the right thing" may not always be entirely obvious. It might also, like the merge described above, require changing the repository, which the request-pull command does not currently do. But, as Ingo Molnar noted, it should at least be possible for git request-pull to detect this situation and issue a warning when it arises. Then, at least, developers would not be surprised by a bogus diffstat — something that can easily happen immediately after having sent it upstream.

Comments (10 posted)

Really fixing getrandom()

By Jonathan Corbet
October 17, 2019
The final days of the 5.3 kernel development cycle included an extensive discussion of the getrandom() API and the reversion of an ext4 improvement that was indirectly causing boot hangs due to a lack of entropy. Blocking filesystem improvements because they are too effective is clearly not a good long-term development strategy for the kernel, so there was a consensus that some sort of better solution had to be found. What was lacking was an idea of what that solution should be. It is thus surprising that the problem appears to have been dealt with in 5.4 with little in the way of dissent or disagreement.

The root of the problem in 5.3 was the blocking behavior of getrandom(), which will prevent a caller from proceeding until enough entropy has been collected to initialize the random-number generator. This behavior was subjected to a fair amount of criticism, and few felt the need to defend it. But changing getrandom() is not easy; any changes that might cause it to return predictable "random" numbers risk creating vulnerabilities in any number of security-sensitive applications. So, while various changes to the API were proposed, actually bringing about those changes looks like a multi-year project at best.

There is another way to ensure that getrandom() doesn't block during the bootstrap process: collect enough entropy to ensure that the random-number generator is fully initialized by the time that somebody needs it. That can be done in a number of ways. If the CPU has a hardware random-number generator, that can be used; some people distrust this solution, but mixing entropy from the hardware with other sources is considered to be safe by most, even if the hardware generator is somehow suspect. But not all CPUs have hardware random-number generators, so this solution is not universal in any case. Other potential solutions, such as having the bootloader or early user space initialize the random pool, can work in some situations but are not universal either.

Another possible solution, though, is jitter entropy: collecting entropy from the inherently nondeterministic nature of current hardware. The timing of even a simple sequence of instructions can vary considerably as the result of multiple layers of cache, speculative execution, and other tasks running on the system. Various studies into jitter entropy over the years suggest that it is real; it might not be truly nondeterministic, but it is unpredictable, and data generated using jitter entropy can pass the various randomness test suites.

Shortly after the 5.3 release, Thomas Gleixner suggested using a simple jitter-entropy mechanism to initialize the entropy pool. Linus Torvalds described this solution as "not very reliable", but he clearly thought that the core idea had some merit. He proposed a variant of his own that, after some brief discussion, was committed into the mainline at the end of the 5.4 merge window.

Torvalds's patch adds a new function, try_to_generate_entropy(), which is called if somebody is requesting random data and the entropy pool is not yet fully initialized. It is simple enough to discuss in detail:

    static void try_to_generate_entropy(void)
    {
	struct {
	    unsigned long now;
	    struct timer_list timer;
	} stack;

This function starts by declaring a timer on the stack, which is a relatively rare occurrence in the kernel. Putting it onto the stack was a deliberate choice, though; if the timer function runs on a different CPU, it will cause cache contention (and unpredictable timing) for accesses to this function's stack space.

	stack.now = random_get_entropy();

	/* Slow counter - or none. Don't even bother */
	if (stack.now == random_get_entropy())
	    return;
	timer_setup_on_stack(&stack.timer, entropy_timer, 0);

On most architectures, random_get_entropy() just reads the timestamp counter (TSC) directly. The TSC increments for every clock cycle, so two subsequent calls should always return different values. Just how different they will be, though, is where the unpredictability comes in. The code above is simply verifying that the system on which the kernel is running does indeed have a high-frequency TSC; without that, this algorithm will not work. Most current hardware does have a TSC, fortunately. Assuming the TSC check passes, the timer is prepared for use.

	while (!crng_ready()) {
	    if (!timer_pending(&stack.timer))
		mod_timer(&stack.timer, jiffies+1);
	    mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now));
	    schedule();
	    stack.now = random_get_entropy();
	}

This loop is the core of the jitter-entropy algorithm. The timer declared above is armed to expire in the near future (the next "jiffy", which could arrive anytime between 0ms and 10ms on a 100HZ system); that expiration will happen by way of an interrupt and may occur on a different CPU. The expiration of the timer adds complexity to the instruction stream (and, hopefully, more randomness as well).

Each time through the loop, the TSC is queried and its value is mixed into the entropy pool. If the timer has expired, it is re-armed to run again. Then the scheduler is invoked, adding more complex code and an unpredictable amount of execution before that call returns. The loop itself runs until the entropy pool is deemed to be initialized. Given that a lot can happen even in one jiffy, this loop can be expected to run quite a few times and mix many TSC readings into the entropy pool.

	del_timer_sync(&stack.timer);
	destroy_timer_on_stack(&stack.timer);
	mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now));
    }

The post-loop cleanup gets rid of the timer and mixes in the last bit of entropy.

The timer function itself looks like this:

    static void entropy_timer(struct timer_list *t)
    {
	credit_entropy_bits(&input_pool, 1);
    }

In other words, every time the timer expires, the entropy pool is deemed to have gained one bit of entropy. It takes 128 bits of entropy for the pool to be considered ready, so the jitter loop may have to run for up to 128 jiffies — potentially just over one second on a system with a 100HZ tick frequency — if the system in question has no other sources of entropy. That could result in a perceivable pause during the boot process, but it is far better than blocking outright.

When Torvalds decided to merge this code, he suggested that it might not be the definitive solution to the problem:

I'm not saying my patch is going to be the last word on the issue. I'm _personally_ ok with it and believe it's not crazy, and if it then makes serious people go "Eww" and send some improvements to it, then it has served its purpose.

He was thus following the time-honored practice of submitting a patch in the hope that it would inspire somebody to create a better one. Thus far, though, there has been little in the way of commentary on this change — somewhat surprising, given the diversity of opinions expressed earlier in the discussion. Unless somebody comes along with a better idea or shows that the entropy produced by this algorithm is somehow predictable, this change seems likely to stand. Hopefully it achieves the goal of preventing getrandom() from blocking while, at the same time, ensuring that random numbers from the kernel are always truly unpredictable.

Comments (31 posted)

Implementing alignment guarantees for kmalloc()

October 18, 2019

This article was contributed by Marta Rybczyńska

kmalloc() is a frequently used primitive for the allocation of small objects in the kernel. During the 2019 Linux Storage, Filesystem, and Memory Management Summit, Vlastimil Babka led a session about the unexpected alignment problems developers face when using this function. A few months later, he came back with the second version of a patch set implementing a natural alignment guarantee for kmalloc(). Given the strong opposition it faced initially, it seemed that the change would not be accepted. However, it ended up in Linus Torvalds's tree. Let's explore what happened.

The issue Babka wanted to fix is the fact that kmalloc() sometimes returns objects that are not naturally aligned (that is, aligned to the object size if that size is a power of two). Most of the time, though, kmalloc() does return naturally aligned objects, and some drivers and subsystems have come to depend on that property. The exceptions are when SLUB debugging is enabled or when the SLOB allocator is used. kmalloc() is essentially a wrapper around the SLAB, SLUB, or SLOB allocator, depending on the kernel configuration; interested readers may wish to read an article on the reasons SLUB was introduced and look at a LinuxCon 2014 slide set [PDF] on the three allocators. Unexpectedly returning an unaligned object can cause data corruption and other errors. In response to that problem, Babka proposed to guarantee natural alignment for allocated objects with power-of-two size, so that all alignment expectations are fulfilled.

For and against kmalloc() alignment

In the patch set discussion, Christopher Lameter (the creator of the SLUB allocator) disagreed with the idea of adding natural alignment and noted that kmalloc() has its own alignment limit (KMALLOC_MINALIGN) for a reason: to allow optimized memory layout without wasting memory. The SLOB allocator is an example; it is designed for small embedded systems and to incur minimal overhead. The patch from Babka would change that expected behavior. Also, any future allocators would have to take those new constraints into account and that would prevent them from implementing certain optimizations in their memory layout.

Matthew Wilcox was in favor of Babka's proposal, as there are many subsystems that already depend on the implied alignment behavior. He mentioned examples like the persistent-memory (pmem) and RAM-disk drivers. The XFS filesystem, without an alignment guarantee, would need slab caches for each object size between 512 bytes and PAGE_SIZE, and possibly even more of them depending on what kmalloc() actually guarantees.

Dave Chinner agreed with providing alignment for small objects and argued for aligning large objects (bigger than a page) to page boundaries as well. This need was seen when using pmem with KASAN. He suggested, though, using a GFP flag to tell the allocator to return a naturally aligned object and to fail if it cannot. That would avoid the need for higher-level subsystems to create additional caches. Babka and other developers preferred to deal with the issue without a separate flag.

A heated debate followed about the severity of the issue. Lameter disagreed that the misalignment cases are frequent, or even seen in practice, since the affected drivers are enabled in distribution test systems that run with debug options; any cases of bad alignment should have shown up in that testing, according to him. Christoph Hellwig noted that the breakage often happens only under special conditions, like buffers that cross a page boundary.

From a private NAK to the mainline

Following the debate, Babka asked for formal approval or disapproval of the patch set:

So if anyone thinks this is a good idea, please express it (preferably in a formal way such as Acked-by), otherwise it seems the patch will be dropped (due to a private NACK, apparently).

David Sterba commented that he has had to apply workarounds for misalignment cases and would be happy to remove them when the generic code is fixed. Darrick J. Wong seconded Sterba's opinion and expressed his strong preference for open discussion:

Oh, I didn't realize ^^^^^^^^^^^^ that *some* of us are allowed the privilege of gutting a patch via private NAK without any of that open development discussion inconvenience. <grumble>

Lameter followed up, stating that the options to detect misalignment have been available for years and are ready to use. Wilcox disagreed: the issues only show up when debugging options are enabled, which is precisely when everything else should be working correctly:

People who are enabling a debugging option to debug their issues, should not have to first debug all the other issues that enabling that debugging option uncovers!

Andrew Morton moved the discussion back to the technical subject and asked for verification of the patch's correctness. Lameter confirmed that it is technically fine, while still disagreeing with the intent. That was followed by a number of acknowledgments (Acked-by:) from kernel developers showing their support for Babka's solution.

That series of approvals ended the public discussion; Babka did not resend the patch set or submit a third version. The situation seemed blocked, as the patch set had the support of multiple developers, but not that of the maintainer of the SLUB allocator, which is heavily affected by it. However, the patch was included in Morton's tree and was merged into the mainline on October 7th.

Summary

This discussion shows an example of the kernel community working on a change that affects longstanding behavior. It is no surprise that not all developers agreed with the solution; in this case, though, the one disagreeing was the maintainer of one of the modified subsystems. The final result shows that such changes can still be accepted into the mainline when there is wide support from kmalloc() users and other memory-management developers.

Comments (7 posted)

Page editor: Jonathan Corbet

Brief items

Kernel development

Kernel release status

The current development kernel is 5.4-rc4, released on October 20. "This release cycle remains pretty normal. In fact, the rc's have been a bit on the smaller side of the average of the last few releases, and rc4 continues this, if only barely."

Stable updates: 5.3.7, 4.19.80, 4.14.150, 4.9.197, and 4.4.197 were released on October 18.

Comments (none posted)

Distributions

Tails 4.0

Tails (The Amnesic Incognito Live System) is, as the spelled-out name implies, a privacy-focused distribution designed to run from removable media. Version 4.0 has been released. "We are especially proud to present you Tails 4.0, the first version of Tails based on Debian 10 (Buster). It brings new versions of most of the software included in Tails and some important usability and performance improvements. Tails 4.0 introduces more changes than any other version since years."

Comments (1 posted)

Ubuntu 19.10 (Eoan Ermine) released

Ubuntu has announced the release of 19.10 "Eoan Ermine" in desktop and server editions as well as all of the different flavors: Ubuntu Budgie, Kubuntu, Lubuntu, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu. "The Ubuntu kernel has been updated to the 5.3 based Linux kernel, and our default toolchain has moved to gcc 9.2 with glibc 2.30. Additionally, the Raspberry Pi images now support the new Pi 4 as well as 2 and 3. Ubuntu Desktop 19.10 introduces GNOME 3.34 the fastest release yet with significant performance improvements delivering a more responsive experience. App organisation is easier with the ability to drag and drop icons into categorised folders and users can select light or dark Yaru theme variants. The Ubuntu Desktop installer also introduces installing to ZFS as a root filesystem as an experimental feature." More information can also be found in the release notes.

Full Story (comments: 8)

Distribution quote of the week

Packages are like a super saturated liquid below the freezing point. There are 20,000+ packages and ~400 active packagers. Little events are going to cause either tiny crystals to grow around a package (a module of 1 package like the RHEL perl-CGI) or a giant crystal (rust and java are probably going to grow until anything built with those languages will have to be in the module) and it will happen very very fast.. [much] faster than expected.. and like a crystal growth it will have lots of faults and crack open in ways not expected also.
Stephen J Smoogen

Comments (none posted)

Development

Bazel 1.0 released

Google has announced version 1.0 of its Bazel build system. "A growing list of Bazel users attests to the widespread demand for scalable, reproducible, and multi-lingual builds. Bazel helps Google be more open too: several large Google open source projects, such as Angular and TensorFlow, use Bazel. Users have reported 3x test time reductions and 10x faster build speeds after switching to Bazel."

Comments (15 posted)

Firefox 70 released

Version 70 of the Firefox web browser is out. The headline features include a new password generator and a "privacy protection report" showing users which trackers have been blocked. "Amazing user features and protections aside, we’ve also got plenty of cool additions for developers in this release. These include DOM mutation breakpoints and inactive CSS rule indicators in the DevTools, several new CSS text properties, two-value display syntax, and JS numeric separators." See the release notes for more details.

Comments (33 posted)

LTTng 2.11.0 "Lafontaine" released

After more than two years of development, the Linux trace toolkit next generation (LTTng) project has released version 2.11.0 of the kernel and user-space tracing tool. The release covers the LTTng tools, LTTng user-space tracer, and LTTng kernel modules. It includes a number of new features that are described in the announcement including session rotation, dynamic user-space tracing, call-stack capturing for the kernel and user space, improved networking performance, NUMA awareness for user-space tracing buffer allocation, and more. "The biggest feature of this release is the long-awaited session rotation support. Session rotations now allow you to rotate an ongoing tracing session much in the same way as you would rotate logs. The 'lttng rotate' command rotates the current trace chunk of the current tracing session. Once a rotation is completed, LTTng does not manage the trace chunk archive anymore: you can read it, modify it, move it, or remove it. Because a rotation causes the tracing session’s current sub-buffers to be flushed, trace chunk archives are never redundant, that is, they do not overlap over time, unlike snapshots. Once a rotation is complete, offline analyses can be performed on the resulting trace, much like in 'normal' mode. However, the big advantage is that this can be done without interrupting tracing, and without being limited to tools which implement the 'live' protocol."

Full Story (comments: none)

Development quotes of the week

My theory is this: our average bus factor is one. I don't have any hard evidence to back this up, no hard research to rely on. I'd love to be proven wrong. I'd love for this to change.

But unless economics of technology production change significantly in the coming decades, this problem will remain, and probably worsen, as we keep on scaffolding an entire civilization on shoulders of hobbyists that are barely aware their work is being used to power phones, cars, airplanes and hospitals.

Antoine Beaupré

It certainly seems that companies promote contribution to their projects as a way of getting hired: if you do enough interning or auditioning, eventually they might reward you with a position. In communications I have had, such things have been implied, suggested or even actually stated: keep up with your efforts and maybe there is an opportunity to be had.

You don't have to be a prize-winning economist to realise that with enough people wanting to get ahead, there is little incentive for that reward to be granted. The eager brushing off of volunteer effort is also a familiar story in the Free Software realm amongst organisations and projects alike.

Paul Boddie

Comments (none posted)

Miscellaneous

GNOME's patent-troll counterattack

Rothschild Patent Imaging LLC filed a patent suit against the GNOME Foundation in September, asserting a violation in the Shotwell photo manager. GNOME has now gone on the counterattack, questioning the validity of the patent and whether it applies to Shotwell at all. There is also an unspecified counterclaim to strike back against Rothschild. "We want to send a message to all software patent trolls out there — we will fight your suit, we will win, and we will have your patent invalidated. To do this, we need your help."

Comments (28 posted)

Page editor: Jake Edge

Announcements

Newsletters

Distributions and system administration

Development

Meeting minutes

Calls for Presentations

Call for Submissions for Netdev 0x14 open

Netdev Society has opened the call for submissions for Netdev 0x14, a conference that focuses on networking based on the Linux kernel and its associated user APIs. Netdev will take place March 17-20 in Vancouver, Canada. The submission deadline is January 15.

Full Story (comments: none)

CFP Deadlines: October 24, 2019 to December 23, 2019

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline    Event dates    Event    Location
October 25    March 23-27    [CANCELED] SUSECON 2020    was Dublin, Online only
November 8    February 1-2    FOSDEM 2020    Brussels, Belgium
November 17    February 3    Copyleft Conference    Brussels, Belgium
November 20    March 14-15    [CANCELED] LibrePlanet    Boston, MA, USA
November 22    March 23-27    [CANCELED] Postgres Conference 2020    New York, NY, USA
November 30    March 5-8    Southern California Linux Expo    Pasadena, CA, USA
December 4    March 30-April 2    POSTPONED: KubeCon + CloudNativeCon Europe 2020    Amsterdam, The Netherlands
December 8    January 13    Open ISA (RISC-V, OpenPOWER, etc) Miniconf at Linux.conf.au    Gold Coast, Australia
December 20    March 19-22    [CANCELED] FOSSASIA OpenTechSummit    Singapore, Singapore
December 22    January 14    lca Kernel Miniconf    Gold Coast, Australia
December 22    January 13    LCA Creative Arts Miniconf    Gold Coast, Australia

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Ohio LinuxFest

Ohio LinuxFest will take place November 1-2 in Columbus, OH. The schedule is available. Enthusiast level admission is free; professional training and certification exams are available for a fee. See the registration page for more information.

Comments (none posted)

linux.conf.au 2020 - Conference schedule available

The schedule for LCA (January 13-17 in Gold Coast, Australia) has been announced. "The first two days include 12 miniconfs covering an array of interesting subject areas. Following this, we will be running 80 talks plus eight tutorials across Wednesday, Thursday and Friday of the main conference. The sessions cover a wide variety of topics, including open source software, open hardware, Linux kernel, documentation and community. This is overlayed with our theme for linux.conf.au 2020 of "Who's Watching", focusing on security, privacy and ethics." All of the miniconfs are accepting proposals for sessions until December 8.

Full Story (comments: none)

Events: October 24, 2019 to December 23, 2019

The following event listing is taken from the LWN.net Calendar.

Date(s)    Event    Location
October 25-27    NixCon    Brno, Czech Republic
October 26    The Future is Open    Rochester, NY, USA
October 26-27    Sonoj Convention    Cologne, Germany
October 26    FOSS XR Conference    Amsterdam, The Netherlands
October 27-30    27th ACM Symposium on Operating Systems Principles    Huntsville, Ontario, Canada
October 28-30    USENIX 33rd Large Installation System Administration Conference    Portland, OR, USA
October 28-30    Open Source Summit Europe    Lyon, France
October 28-30    Embedded Linux Conference    Lyon, France
October 30-November 1    KVM Forum 2019    Lyon, France
October 31    Automated Testing Summit 2019    Lyon, France
October 31    Real-time Linux Summit - At ELCE 2019    Lyon, France
October 31-November 1    Linux Security Summit    Lyon, France
October 31-November 1    GStreamer Conference 2019    Lyon, France
November 1-2    Ohio LinuxFest 2019    Columbus, OH, USA
November 2-3    OpenFest 2019    Sofia, Bulgaria
November 2-3    North Bay Python 2019    Petaluma, CA, USA
November 2-3    Konference OpenAlt    Brno, Czechia
November 4-6    Open Infrastructure Summit    Shanghai, China
November 4-7    Open Source Monitoring Conference    Nuremberg, Germany
November 4-8    Tcl 2019 - 26th Annual Tcl/Tk Conference    Houston, TX, USA
November 7    Open Source Camp | #4 Foreman    Nuremberg, Germany
November 9-10    VideoLAN Dev Days 2019    Tokyo, Japan
November 12-15    Linux Application Summit    Barcelona, Spain
November 16-17    Capitole du Libre 2019    Toulouse, France
November 17-22    Supercomputing    Denver, CO, USA
November 19-20    DevOpsDays Galway 2019    Galway, Ireland
November 19-21    KubeCon + CloudNativeCon    San Diego, CA, USA
December 9    Open FinTech Forum    New York, NY, USA

If your event does not appear here, please tell us about it.

Security updates

Alert summary October 17, 2019 to October 23, 2019

Dist. ID Release Package Date
Arch Linux ASA-201910-12 go 2019-10-23
Arch Linux ASA-201910-11 go-pie 2019-10-23
Arch Linux ASA-201910-13 pacman 2019-10-23
Arch Linux ASA-201910-9 sudo 2019-10-16
Arch Linux ASA-201910-10 xpdf 2019-10-23
CentOS CESA-2019:3158 C6 java-1.7.0-openjdk 2019-10-22
CentOS CESA-2019:3157 C7 java-1.7.0-openjdk 2019-10-23
CentOS CESA-2019:3136 C6 java-1.8.0-openjdk 2019-10-22
CentOS CESA-2019:3128 C7 java-1.8.0-openjdk 2019-10-23
CentOS CESA-2019:3127 C7 java-11-openjdk 2019-10-23
CentOS CESA-2019:3067 C7 jss 2019-10-21
CentOS CESA-2019:3055 C7 kernel 2019-10-21
CentOS CESA-2019:2964 C7 patch 2019-10-23
Debian DLA-1966-1 LTS aspell 2019-10-19
Debian DLA-1962-1 LTS graphite-web 2019-10-21
Debian DLA-1968-1 LTS imagemagick 2019-10-21
Debian DLA-1967-1 LTS libpcap 2019-10-22
Debian DLA-1713-2 LTS libsdl1.2 2019-10-17
Debian DLA-1714-2 LTS libsdl2 2019-10-17
Debian DSA-4545-1 stable mediawiki 2019-10-18
Debian DLA-1961-1 LTS milkytracker 2019-10-21
Debian DLA-1965-1 LTS nfs-utils 2019-10-19
Debian DSA-4546-1 stable openjdk-11 2019-10-20
Debian DSA-4548-1 stable openjdk-8 2019-10-21
Debian DLA-1963-1 LTS poppler 2019-10-18
Debian DLA-1963-2 LTS poppler 2019-10-18
Debian DLA-1964-1 LTS sudo 2019-10-17
Debian DSA-4547-1 stable tcpdump 2019-10-21
Debian DLA-1960-1 LTS wordpress 2019-10-17
Fedora FEDORA-2019-f36ac0db92 F30 java-11-openjdk 2019-10-21
Fedora FEDORA-2019-057d691fd4 F30 kernel 2019-10-18
Fedora FEDORA-2019-057d691fd4 F30 kernel-headers 2019-10-18
Fedora FEDORA-2019-057d691fd4 F30 kernel-tools 2019-10-18
Fedora FEDORA-2019-c4cdd73c74 F30 mediawiki 2019-10-18
Fedora FEDORA-2019-65c33bdc2a F29 radare2 2019-10-19
Mageia MGASA-2019-0296 7 e2fsprogs 2019-10-17
Mageia MGASA-2019-0295 7 kernel 2019-10-17
Mageia MGASA-2019-0297 7 libpcap and tcpdump 2019-10-17
Mageia MGASA-2019-0294 7 nmap 2019-10-17
Mageia MGASA-2019-0298 7 sudo 2019-10-17
openSUSE openSUSE-SU-2019:2321-1 GraphicsMagick 2019-10-16
openSUSE openSUSE-SU-2019:2340-1 15.0 dhcp 2019-10-20
openSUSE openSUSE-SU-2019:2341-1 15.1 dhcp 2019-10-20
openSUSE openSUSE-SU-2019:2365-1 15.0 gcc7 2019-10-23
openSUSE openSUSE-SU-2019:2364-1 15.1 gcc7 2019-10-22
openSUSE openSUSE-SU-2019:2343-1 15.0 libpcap 2019-10-21
openSUSE openSUSE-SU-2019:2345-1 15.1 libpcap 2019-10-21
openSUSE openSUSE-SU-2019:2361-1 15.0 libreoffice 2019-10-22
openSUSE openSUSE-SU-2019:2347-1 15.0 15.1 lighttpd 2019-10-21
openSUSE openSUSE-SU-2019:2333-1 15.0 sudo 2019-10-17
openSUSE openSUSE-SU-2019:2344-1 15.0 tcpdump 2019-10-21
openSUSE openSUSE-SU-2019:2348-1 15.1 tcpdump 2019-10-21
Oracle ELSA-2019-3158 OL6 java-1.7.0-openjdk 2019-10-21
Oracle ELSA-2019-3157 OL7 java-1.7.0-openjdk 2019-10-21
Oracle ELSA-2019-3157 OL7 java-1.7.0-openjdk 2019-10-21
Oracle ELSA-2019-3136 OL6 java-1.8.0-openjdk 2019-10-17
Oracle ELSA-2019-3128 OL7 java-1.8.0-openjdk 2019-10-16
Oracle ELSA-2019-3128 OL7 java-1.8.0-openjdk 2019-10-16
Oracle ELSA-2019-3127 OL7 java-11-openjdk 2019-10-16
Oracle ELSA-2019-3127 OL7 java-11-openjdk 2019-10-16
Oracle ELSA-2019-3067 OL7 jss 2019-10-16
Oracle ELSA-2019-3067 OL7 jss 2019-10-16
Oracle ELSA-2019-3055 OL7 kernel 2019-10-17
Red Hat RHSA-2019:3193-01 EL7 firefox 2019-10-23
Red Hat RHSA-2019:3158-01 EL6 java-1.7.0-openjdk 2019-10-21
Red Hat RHSA-2019:3157-01 EL7 java-1.7.0-openjdk 2019-10-21
Red Hat RHSA-2019:3136-01 EL6 java-1.8.0-openjdk 2019-10-17
Red Hat RHSA-2019:3134-01 EL8 java-1.8.0-openjdk 2019-10-17
Red Hat RHSA-2019:3127-01 EL7 java-11-openjdk 2019-10-16
Red Hat RHSA-2019:3135-01 EL8 java-11-openjdk 2019-10-17
Red Hat RHSA-2019:3187-01 EL7.4 kernel 2019-10-23
Red Hat RHSA-2019:3170-01 EL7.4 python 2019-10-22
Red Hat RHSA-2019:3179-01 EL7 qemu-kvm-rhev 2019-10-22
Red Hat RHSA-2019:3168-01 EL7.4 wget 2019-10-22
Scientific Linux SLSA-2019:3158-1 SL6 java-1.7.0-openjdk 2019-10-22
Scientific Linux SLSA-2019:3157-1 SL7 java-1.7.0-openjdk 2019-10-22
Scientific Linux SLSA-2019:3136-1 SL6 java-1.8.0-openjdk 2019-10-21
Scientific Linux SLSA-2019:3128-1 SL7 java-1.8.0-openjdk 2019-10-17
Scientific Linux SLSA-2019:3127-1 SL7 java-11-openjdk 2019-10-17
Scientific Linux SLSA-2019:3067-1 SL7 jss 2019-10-16
Scientific Linux SLSA-2019:3055-1 SL7 kernel 2019-10-18
Slackware SSA:2019-295-01 mozilla 2019-10-22
Slackware SSA:2019-293-01 python 2019-10-20
SUSE SUSE-SU-2019:1353-2 SLE15 bluez 2019-10-18
SUSE SUSE-SU-2019:2736-1 SLE15 SES6 ceph, ceph-iscsi, ses-manual_en 2019-10-22
SUSE SUSE-SU-2019:2727-1 SLE12 dhcp 2019-10-21
SUSE SUSE-SU-2019:2702-1 SLE15 gcc7 2019-10-17
SUSE SUSE-SU-2019:2710-1 SLE15 kernel 2019-10-18
SUSE SUSE-SU-2019:2706-1 SLE15 kernel 2019-10-17
SUSE SUSE-SU-2019:2710-1 SLE15 kernel 2019-10-18
SUSE SUSE-SU-2019:2738-1 SLE15 kernel 2019-10-22
SUSE SUSE-SU-2019:2745-1 SLE12 libcaca 2019-10-22
SUSE SUSE-SU-2019:2686-1 SLE15 libreoffice 2019-10-16
SUSE SUSE-SU-2019:2744-1 SLE12 openconnect 2019-10-22
SUSE SUSE-SU-2019:2737-1 SLE15 openconnect 2019-10-22
SUSE SUSE-SU-2019:2707-1 SLE15 postgresql10 2019-10-17
SUSE SUSE-SU-2019:2730-1 SLE15 procps 2019-10-21
SUSE SUSE-SU-2019:2748-1 SLE12 SES5 python 2019-10-23
SUSE SUSE-SU-2019:2743-1 SLE15 python 2019-10-22
SUSE SUSE-SU-2019:2719-1 SLE12 python-xdg 2019-10-18
SUSE SUSE-SU-2019:2752-1 SLE12 sysstat 2019-10-23
SUSE SUSE-SU-2019:2750-1 SLE15 zziplib 2019-10-23
Ubuntu USN-4155-2 19.10 aspell 2019-10-21
Ubuntu USN-4159-1 16.04 18.04 19.04 19.10 exiv2 2019-10-21
Ubuntu USN-4156-2 12.04 14.04 libsdl1.2 2019-10-16
Ubuntu USN-4164-1 12.04 14.04 16.04 18.04 19.04 19.10 libxslt 2019-10-22
Ubuntu USN-4157-1 19.04 linux, linux-aws, linux-azure, linux-gcp, linux-kvm, linux-raspi2, linux-snapdragon 2019-10-17
Ubuntu USN-4161-1 19.10 linux, linux-aws, linux-azure, linux-gcp, linux-kvm, linux-raspi2 2019-10-21
Ubuntu USN-4163-1 16.04 linux, linux-aws, linux-kvm, linux-raspi2, linux-snapdragon 2019-10-22
Ubuntu USN-4162-2 14.04 linux-azure 2019-10-22
Ubuntu USN-4157-2 18.04 linux-hwe, linux-azure, linux-gcp, linux-gke-5.0 2019-10-22
Ubuntu USN-4163-2 14.04 linux-lts-xenial, linux-aws 2019-10-22
Ubuntu USN-4162-1 16.04 18.04 linux-snapdragon 2019-10-22
Ubuntu USN-4158-1 16.04 18.04 19.04 tiff 2019-10-17
Ubuntu USN-4160-1 16.04 18.04 19.04 uw-imap 2019-10-21
Full Story (comments: none)

Kernel patches of interest

Kernel releases

Linus Torvalds Linux 5.4-rc4 Oct 20
Greg KH Linux 5.3.7 Oct 18
Greg KH Linux 4.19.80 Oct 18
Greg KH Linux 4.14.150 Oct 18
Greg KH Linux 4.9.197 Oct 18
Greg KH Linux 4.4.197 Oct 18

Architecture-specific

Pavel Tatashin arm64: MMU enabled kexec relocation Oct 16
Amit Daniel Kachhap arm64: return address signing Oct 17
Steven Price arm64: Stolen time support Oct 21
Christoph Hellwig RISC-V nommu support v5 Oct 17
kan.liang@linux.intel.com Stitch LBR call stack Oct 21

Core kernel

Development tools

Device drivers

Chris Packham gpio: brcm: XGS iProc GPIO driver Oct 17
min.guo@mediatek.com Add MediaTek MUSB Controller Driver Oct 17
Matti Vaittinen Support ROHM BD71828 PMIC Oct 17
Sheetal Tigadoli Add OP-TEE based bnxt f/w manager Oct 17
Srinivas Kandagatla ASoC: Add support to WCD9340/WCD9341 codec Oct 18
Xingyu Chen add meson secure watchdog driver Oct 18
Nagarjuna Kristam Tegra XUSB gadget driver support Oct 18
Oleksij Rempel add dsa switch support for ar9331 Oct 18
Andreas Kemnade Add rtc support for rn5t618 mfd Oct 21
Manivannan Sadhasivam Add GPIO support for RDA8810PL SoC Oct 21
Baolin Wang Add MMC software queue support Oct 22
Logan Gunthorpe PLX Switch DMA Engine Driver Oct 22
Mateusz Holenko LiteUART serial driver Oct 23
Mika Westerberg thunderbolt: Add support for USB4 Oct 23
Kishon Vijay Abraham I PHY: Add support for SERDES in TI's J721E SoC Oct 23

Device-driver infrastructure

Filesystems and block layer

Memory management

Dave Hansen Memory Tiering Oct 16
Oscar Salvador Hwpoison rework {hard,soft}-offline Oct 17
Roman Gushchin The new slab memory controller Oct 17
Steven Price Generic page walk and ptdump Oct 18

Networking

Toshiaki Makita xdp_flow: Flow offload to XDP Oct 18
Toke Høiland-Jørgensen Add Airtime Queue Limits (AQL) to mac80211 Oct 18

Security-related

Virtualization and containers

Page editor: Rebecca Sobol


Copyright © 2019, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds