
Kernel development

Brief items

Kernel release status

The current development kernel is 4.9-rc5, released on November 13. Linus said: "Things have definitely gotten smaller, so a normal release schedule (with rc7 being the last one) is still looking possible despite the large size of 4.9. But let's see how things work out over the next couple of weeks. In the meantime, there's a lot of normal fixes in here, and we just need more testing."

Stable updates: 4.8.7 and 4.4.31 were released on November 10, followed by 4.8.8 and 4.4.32 on November 15.

Comments (none posted)

Quotes of the week

Microsoft's networking driver interfaces and expectations are documented 1,000 times better than that of Linux.
David Miller

No, never skimp on Changelogs. Nobody reads documentation.
Peter Zijlstra

So what I'd love to see is to have a kernel option that re-introduces some historic root (and other) holes that can be exploited deterministically - obviously default disabled.

I'd restrict this to reasonably 'deterministic' holes, and the exploits themselves could be somewhere in tools/. (Obviously only where the maintainers agree to host the code.)

Ingo Molnar

Comments (3 posted)

Kernel development news

Scheduling for Android devices

By Jonathan Corbet
November 15, 2016

Linux Plumbers Conference
The Android/Mobile microconference at the 2016 Linux Plumbers Conference was a grueling seven-hour affair. Your editor was unable to attend the entire session, but was fortunate enough to be present for a series of talks on scheduling. Adding better awareness of power usage to the scheduler has been a recurring topic for some years; that didn't change this year, but there was also a focus on improving the user experience in general.

Todd Kjos started by talking about scheduling for the Nexus 5X handset, which was the first of Google's devices to be based on the big.LITTLE processor architecture. This handset has four A53 and two A57 cores in it; the A53s are relatively slow and power-efficient, while the A57s offer higher performance. Tuning Android to run well on this device has turned out to be challenging. The objectives are straightforward enough: workloads that the user doesn't care about should run on the A53 cores, but performance-sensitive loads should not get stuck on those cores. When not much is being done, the A57 cores should not be running at all. But achieving those objectives has taken some work.

It would be nice to avoid repeating that work for each new device, but generic support for energy-aware scheduling still has not found its way into the mainline kernel. Instead, each system-on-chip (SoC) vendor provides its own heavily patched kernel with its own solution to the scheduling problem. Some of these vendors, Kjos said, have done a relatively good job with their scheduling patches; others less so. But, either way, these out-of-tree patches mean that vendors have a lot of work to do when new Android releases come along.

All of these solutions are hampered by a lack of knowledge about what is going on — and what the user most cares about — in user space. But the Android framework does have an idea about what is going on there. At any given time, Android knows what the "top app", the app that the user actually sees and interacts with, is. It is aware of other "foreground" apps that are a part of the user experience; they may be drawing other parts of the screen or occupying a part of a split screen. Then, there are the "background" processes that are not currently a part of the user's experience and are, thus, less important.

Beyond that, the Android framework is aware of user interaction events, such as touches and swipes, and some other aspects of the workload, including things like video frame rates. That is a lot of information about what is going on, but that information is not being exploited on most devices currently.

The scheduler on an Android device should provide the best experience for the user while being as energy-efficient as possible. The latter goal implies getting proper energy-aware scheduling support into the mainline kernel. A common solution would allow Android to perform workload-specific tuning in a generic manner and would reduce the amount of SoC-specific code carried by manufacturers and vendors. As CPU topologies become more complex, the need to provide a better, saner strategy grows.

Energy-aware scheduling, as presented by Kjos, consists of three major components. The first of those is the core scheduler changes allowing it to use CPU topology to place running processes optimally. Some of that work has found its way into the mainline, and progress is being made on the rest. Next is improved frequency and voltage selection for the running processors to maximize efficiency; the plan here is to switch over to the mainline schedutil framework, which can make better use of what the scheduler knows to configure the CPU operating parameters. The last piece is the out-of-tree SchedTune subsystem, which allows Android to tweak scheduling policies on the fly.

A digression into SchedTune

Your editor will now indulge in a bit of temporal manipulation. Patrick Bellasi presented the SchedTune governor later that day, but the information provided then is useful for the understanding of the rest of Kjos's talk. In the interest of a better narrative, we'll take a quick look at SchedTune now.

The kernel's current CPU-frequency governors attempt to match the CPU's operating power points (its frequency and voltage) to the workload that is actually being run. A light workload can be run at a relatively low frequency and voltage, saving power, but heavier workloads require the CPU to be running in a more energy-intensive mode. The optimal power point, he said, should be chosen with the user's needs in mind; sometimes response time is crucial to the user's experience, while, at other times, workloads can be run more slowly and nobody will notice.

Unfortunately, the scheduler knows little about which processes are important to the user at any given time. On Android systems, the runtime system does have a good idea of what matters, but there is no good way to communicate that information to the scheduler. The SchedTune governor is meant to be a way to provide that information so that the scheduler can act on it.

In current kernels, the load-tracking subsystem maintains an estimate of just how much load each running process will put onto the system. Recent changes have enabled CPU-frequency governors to make use of that information to set the power points optimally; usually, the objective is to run the CPUs just fast enough to get the anticipated amount of work done. Governor policy can be changed into, for example, a "performance" mode where the objective is to get the work done as quickly as possible, but there is no way to set CPU-frequency policies on a per-task basis or, in other words, to tell the system that it is worthwhile to expend a certain amount of extra energy to get a specific job done.
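The mapping a governor like schedutil uses from the scheduler's utilization estimate to a frequency request can be sketched as follows. This is a simplification of the in-kernel fixed-point code, not the actual implementation; the 25% headroom factor keeps the CPU slightly ahead of the measured load.

```python
# Simplified sketch of a schedutil-style utilization-to-frequency
# mapping: next_freq ≈ 1.25 * f_max * util / capacity, clamped to f_max.
# (Illustrative only; the kernel works in fixed-point arithmetic.)

def schedutil_next_freq(util, capacity, f_max):
    """util and capacity in the scheduler's 0..1024 scale; f_max in kHz."""
    return min(f_max, (f_max + (f_max >> 2)) * util // capacity)

# A half-loaded CPU with a 2 GHz ceiling is asked to run at 1.25 GHz:
print(schedutil_next_freq(512, 1024, 2_000_000))   # 1250000
```

The `>> 2` adds a quarter of the maximum frequency as headroom, so the CPU is never asked to run exactly at the measured utilization.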

SchedTune is implemented as a control-group controller. Each control group has a single tunable knob, called schedtune.boost, which can be used by the runtime system to change how processes within that group are scheduled. This parameter works in an interesting manner: it tweaks the load-tracking code to make the affected processes look heavier (or lighter) than they really are. If a group is boosted by 25%, the scheduler will expect it to use 25% more CPU time than it (probably) actually will, and the CPU-frequency governor will speed up the processor accordingly. "Boosting" a process in this way, thus, does not affect its scheduling priority, but it will affect the speed of the CPU on which it ends up running.
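The boost calculation described above can be illustrated with a small sketch. The margin formula here follows the one used in the SchedTune patches, where the boost percentage is applied to the headroom between the current utilization and full capacity; treat the exact numbers as illustrative.

```python
# Illustrative sketch (not kernel code) of how a SchedTune-style boost
# inflates a task's tracked utilization.  The boost percentage is applied
# to the headroom between the current signal and maximum capacity.

CAPACITY_MAX = 1024  # the scheduler's fixed-point "full capacity" value

def boosted_util(util, boost_pct):
    """Utilization the scheduler acts on for a task in a group with
    schedtune.boost = boost_pct."""
    margin = (CAPACITY_MAX - util) * boost_pct // 100
    return util + margin

# A task tracked at 400/1024 in a group boosted by 25% looks like:
print(boosted_util(400, 25))   # 556
```

Because the margin shrinks as utilization approaches capacity, a fully loaded task cannot be boosted past the maximum, which matches the description above: boosting changes the apparent load, not the scheduling priority.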

Bellasi concluded with some proposals for future enhancements to SchedTune. One of those would be to tweak scheduling further so that processes with a positive boost value would get longer time slices on the CPU. A couple more knobs may be added to the control group to affect the CPUs on which processes may be run; they would specify the minimum and maximum performance capacity that a candidate CPU may have. To summarize, while SchedTune is already in use in shipping products, it is still in a certain amount of flux.

Energy-aware scheduling in the Pixel phone

Returning to Kjos's talk: with these components in mind, he and others set out to implement proper energy-aware scheduling in the Pixel phone; the initiative began after a meeting at the 2015 Linux Plumbers Conference. Some experiments with energy-aware scheduling on a tablet device yielded good results, so a group of engineers from ARM, Qualcomm, and Google decided to collaborate on a solution for the Pixel. The group had a significant challenge: come up with a relatively generic energy-aware scheduling solution that worked at least as well as Qualcomm's out-of-tree QHMP scheduling patches.

Getting there required a number of modifications to the existing energy-aware scheduling code. Its view of big.LITTLE processors doesn't quite fit the Snapdragon 821 used in the Pixel, so some of those assumptions had to be changed. SchedTune and cpusets were adopted to allow the Android framework to specify how processes should be placed in the system and, in particular, whether specific processes should be spread out to idle CPUs or packed into a small number of CPUs. SchedTune was also enhanced to allow tasks to be marked as latency-sensitive.

The kernel's per-entity load tracking mechanism, by which the scheduler tracks how much load each process puts on the system, proved to not be up to the task of getting the best performance out of this processor, so the out-of-tree window-assisted load tracking (WALT) mechanism was used instead. Nobody discussed the WALT patches in detail during the microconference, but they do play a significant part in this work, and thus merit a quick discussion.

Digression 2: WALT

Per-entity load tracking was added to the 3.8 kernel to fill in a gap in the scheduler's understanding: it didn't have any way to know how much load any given process would put on the CPU. That information is useful for load balancing — distributing processes optimally across the CPUs in the system — and for knowing what will happen if a process is moved from one CPU to the next. Each process's load is calculated using a simple geometric series, with the most recent load being counted most heavily.
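A toy model of that geometric series may make the behavior concrete. This is not the kernel's implementation (which uses fixed-point arithmetic over ~1ms periods); it is a sketch assuming the decay factor used by the PELT code, chosen so that a contribution's weight halves every 32 periods.

```python
# Toy model of per-entity load tracking: each period's runnable time is
# weighted by y**age, with y chosen so the weight halves every 32 periods,
# as in the kernel's PELT code.  (Illustrative, not the real fixed-point
# implementation.)

Y = 0.5 ** (1 / 32)   # y**32 == 0.5

def pelt_load(history):
    """history[0] is the most recent period's utilization in [0, 1];
    the result is normalized so a永permanently busy task approaches 1.0."""
    return sum(u * Y**i for i, u in enumerate(history)) * (1 - Y)

# A task busy for only the last 8 periods after a long idle stretch
# is still seen as lightly loaded -- the ramp-up is gradual:
print(round(pelt_load([1.0] * 8 + [0.0] * 120), 3))   # 0.159
```

This gradual ramp is precisely the responsiveness problem described in the next paragraphs: it takes many busy periods before the calculated load reflects a sudden burst of activity.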

This mechanism is an improvement over what came before, but it turns out to not work ideally, especially in the mobile world. The biggest complaint seems to be that it is simply too slow to respond to changes in a process's behavior. A web browser that has been idle for a long time, with a correspondingly low calculated load, may suddenly find itself rendering a complex page and needing a lot of CPU. If its associated load does not rise quickly enough, that browser process will not get enough CPU time (because it must share a CPU with too many others, putting the system out of balance, or because the CPU frequency will not be increased as needed) while it is trying to do its work. The result is slow response as seen by the increasingly grumpy user.

The problem also exists when processes stop using the CPU. Per-entity load tracking will take some time to notice that the load is gone, with the result that the CPU will stay in a higher power state than it actually needs, shortening battery life. To make things worse, processes that are not runnable (waiting on I/O, for example) are counted as part of the load, even though they are not doing anything and may not do anything for some time.

The WALT patches do away with the geometric series for load calculations. Instead, WALT simply tracks the amount of CPU time used by the process during a set of recent "windows" of time. According to the patch description, "N" windows are maintained, and a process's calculated load is the maximum of either the most recently completed window or the average of all N windows. The actual patch, as posted, appears to keep around exactly one 20ms window, so the most recent measurement and the average are one and the same. In essence, WALT has modified per-entity load tracking by chopping off all but the first term in the geometric load-calculation series. There is one other significant change: tracking is only done while the process is running; a process will not contribute to system load while it is waiting for something, and its previous load calculation will still be there once it becomes runnable again.
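The window-based calculation described in the patch can be sketched in a few lines. The window size and the value of N are parameters of the real patches; the ones here are illustrative.

```python
# Sketch of WALT-style demand tracking as described above: the calculated
# load is the maximum of the most recently completed window and the
# average over all N windows.  (Window size and N are illustrative.)

def walt_demand(windows):
    """windows[0] is the most recently completed window's busy time,
    expressed as a fraction of the window length; N = len(windows)."""
    return max(windows[0], sum(windows) / len(windows))

# A task idle for four windows and then fully busy for one is seen at
# full demand immediately, unlike the slowly rising geometric average:
print(walt_demand([1.0, 0.0, 0.0, 0.0, 0.0]))   # 1.0
```

The `max()` is what gives WALT its fast upward response, while the average term (when N > 1) retains some memory of recent history.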

This algorithm causes WALT to respond much more quickly to a process's behavior; if a process begins some sort of CPU-intensive activity, its calculated load will quickly rise to match. It is thus useful for tasks like setting the CPU frequency. It may not work as well as per-entity load tracking for the long-term balancing of processes across the system; for this reason, the latter numbers are still used for load-balancing decisions.

The WALT patches got a bit of a grumpy reception from scheduler maintainer Peter Zijlstra, who would have liked to have seen them before they started shipping in production devices. He agrees that the current load-tracking numbers fall short of what is really needed, but would like to see something like WALT added to the existing load-tracking code, rather than being bolted on alongside it. So the WALT patches are likely to change significantly before they can be considered for merging.

Finishing out the Pixel story

Returning once again to Kjos's talk: WALT, along with SchedTune, was required to achieve the response time and power savings needed for the Pixel phone. The sched-freq CPU governor (still in use in the Pixel) needed some tweaking as well; in particular, it was changed to ramp up CPU frequencies quickly, but to lower them more slowly. That ensures that processes needing CPU time will get it right away, and the CPU won't be yanked out from underneath them too quickly afterward.
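That asymmetric ramping behavior can be sketched simply. The function name and the step size below are illustrative, not the actual sched-freq tunables: requests upward are honored immediately, while downward requests are slewed.

```python
# Minimal sketch of asymmetric frequency rate limiting: ramp up
# immediately, ramp down gradually.  (Names and rates are illustrative,
# not the real sched-freq tunables.)

def next_freq(current, target, down_step):
    if target >= current:                      # ramp up: jump to target
        return target
    return max(target, current - down_step)    # ramp down in steps

# Starting at 600 MHz, a burst of work then a drop in demand:
f = 600_000
for want in (1_800_000, 900_000, 900_000, 900_000):
    f = next_freq(f, want, down_step=300_000)
    print(f)   # 1800000, 1500000, 1200000, 900000
```

The upward jump serves latency; the slow downward slew avoids penalizing a task whose load dips only briefly.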

Once the necessary software components were in place, it was time to set policies in SchedTune and the cpuset controller. The top app at any given time is set to a "spread" policy, meaning it is likely to find itself running on its own CPU. It is also given a 10% boost to ensure that it gets enough CPU time. Foreground tasks also get the "spread" policy, but without the boost, while background tasks are given a "pack" policy, causing them to be concentrated on relatively few CPUs — often just one. Kernel threads and other system processes also run with the pack policy. To further constrain placement, cpusets are used to keep background tasks on the two slower CPUs; the top app and foreground tasks share one fast CPU, while the top app gets the other fast CPU to itself.
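The per-group policy just described can be collected into a single table. The group names below echo Android's stune/cpuset hierarchy, but the exact names and values are illustrative rather than authoritative.

```python
# The Pixel scheduling policy described above, as a lookup table.
# Group names and CPU labels are illustrative (hypothetical), not the
# actual cgroup paths.

POLICY = {
    "top-app":    {"placement": "spread", "boost_pct": 10, "cpus": "one fast CPU to itself"},
    "foreground": {"placement": "spread", "boost_pct": 0,  "cpus": "shared fast CPU"},
    "background": {"placement": "pack",   "boost_pct": 0,  "cpus": "slow CPUs"},
    "system":     {"placement": "pack",   "boost_pct": 0,  "cpus": "slow CPUs"},
}

def settings_for(group):
    """Unknown groups fall back to background treatment."""
    return POLICY.get(group, POLICY["background"])

print(settings_for("top-app")["boost_pct"])   # 10
```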

The results (seen toward the end of Kjos's slides) show that the new scheduling code matched or exceeded the performance of QHMP on almost all metrics. The code has been merged into the Android 3.18 and 4.4 kernel trees, and has been enabled on the Pixel device; it can also be found on the Acer R13 Chromebook and on development boards like the 96Boards HiKey. The move to the schedutil governor should happen within the next year. And, of course, the group is working to get these changes merged upstream.

Epilogue: realtime scheduling

Mechanisms like SchedTune give the Android developers better control over how the scheduler responds to user-space changes. But there is another way to get low-latency response from a Linux kernel: use its realtime capabilities. Juri Lelli gave a brief talk on how Android is using realtime scheduling now, and how things may change in the near future.

The SCHED_FIFO realtime scheduling class, which allows a realtime process to run uninterrupted until either it sleeps or a higher-priority realtime process comes along, is used for some latency-sensitive processes in Android now. In particular, the SurfaceFlinger display manager uses it, as do the audio pipeline and the threads for CPU-frequency management. There are other latency-sensitive processes that are not using realtime scheduling at this time; these include the user-interface and rendering threads.

Why are those threads not using realtime scheduling? The scheduler's load balancing is "naive" when it comes to realtime processes, he said, so they do not get properly distributed across the CPUs. By design, the realtime scheduler will throttle processes once they use about 95% of the available CPU time; that mechanism is there to prevent a runaway realtime process from killing the system completely, but it can also get in the way when that CPU time is needed. Even with that protection, though, there is also a fear that a bug in one of those processes could cause them to take over the CPU and kill the system.
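The "about 95%" figure comes from the kernel's realtime throttling knobs: realtime tasks may collectively run for sched_rt_runtime_us out of every sched_rt_period_us. The values below are the upstream defaults.

```python
# The realtime throttle described above is controlled by two sysctls;
# these are the upstream default values (kernel.sched_rt_period_us and
# kernel.sched_rt_runtime_us).

RT_PERIOD_US = 1_000_000   # accounting period, microseconds
RT_RUNTIME_US = 950_000    # realtime runtime allowed per period

print(RT_RUNTIME_US / RT_PERIOD_US)   # 0.95
```

Setting sched_rt_runtime_us to -1 disables the throttle entirely, which is exactly the runaway-task risk described above.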

These issues notwithstanding, the near-future hope is to move to SCHED_FIFO for the user-interface and rendering threads. Getting there will require a few changes, starting with tweaks to the user-space code that will hopefully be released in the Android Open Source Project in December. The realtime CPU-selection (load balancing) code will be given better energy awareness; those patches have yielded significant efficiency improvements in benchmark testing so far. There is also work on a new TEMP_FIFO scheduling class that would get around the throttling problem by allowing realtime tasks to continue executing, at normal priority under the completely fair scheduler, if they exceed the maximum CPU time allowed in realtime mode.

There is an alternative to changing the behavior of SCHED_FIFO, though: use the deadline scheduler instead. This scheduler can allow latency-sensitive tasks to get their work done with less risk of them taking over the system. There are, however, a number of shortcomings with the deadline scheduler that need to be dealt with first. These include proper control-group support and integration with the schedutil CPU-frequency governor. The deadline scheduler, too, needs a means by which a process can continue executing, at normal priority, when it exceeds the CPU time allotted to it.
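The reason the deadline scheduler carries less risk of a takeover is its admission control: each task declares a (runtime, deadline, period) triple, and the kernel refuses task sets whose parameters are inconsistent or whose total bandwidth exceeds a cap. A simplified sketch of that check (the real test is per-root-domain and more involved; the 95% cap here mirrors the realtime throttle default):

```python
# Simplified SCHED_DEADLINE admission check: each task's parameters must
# satisfy runtime <= deadline <= period, and the summed bandwidth
# (runtime/period) must stay under a cap.  (Illustrative; the in-kernel
# test is per-root-domain.)

def admit(tasks, cap=0.95):
    """tasks is a list of (runtime, deadline, period) in nanoseconds."""
    for runtime, deadline, period in tasks:
        if not (runtime <= deadline <= period):
            return False
    return sum(r / p for r, _, p in tasks) <= cap

# A 30%-bandwidth rendering thread plus a 10% audio thread fit easily:
print(admit([(3_000_000, 10_000_000, 10_000_000),
             (1_000_000,  5_000_000, 10_000_000)]))   # True
```

Because admitted tasks can never exceed their declared bandwidth, a buggy latency-sensitive thread is throttled to its own reservation rather than starving the system.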

Should it be possible to fix these issues, the plan is to use the SurfaceFlinger thread as the first guinea pig for deadline scheduling. Then, perhaps, deadline scheduling will finally see a widespread, real-world deployment.

[Thanks to LWN subscribers for supporting our travel to the event.]

Comments (3 posted)

Topics in live kernel patching

By Jonathan Corbet
November 14, 2016

Linux Plumbers Conference
Getting live-patching capabilities into the mainline kernel has been a multi-year process. Basic patching support was merged for the 4.0 release, but further work has been stalled over disagreements on how the consistency model — the code ensuring that a patch is safe to apply to a running kernel — should work. The addition of kernel stack validation has addressed the biggest of the objections, so, arguably, it is time to move forward. At the 2016 Linux Plumbers Conference, developers working on live patching got together to discuss current challenges and future directions.

This article is not an attempt at a comprehensive summary of a half-day of fast-moving discussion; instead, the goal is to cover some of the more interesting topics as a way of showing the challenges that the live-patching developers must overcome and how they plan to get there.

Unhelpful optimizations

A smart optimizing compiler is necessary for anybody who wants to get reasonable performance from their code, but problems arise if the compiler gets too smart. Developers working with concurrency in the kernel have had to worry about aggressive optimizations for some time; according to Miroslav Benes, live-patching developers have to worry as well. Compiler optimizations can change how code is compiled in subtle ways that can lead to mayhem when a patch is applied.

Starting with the easiest problems before moving on to the trickier ones, Benes noted that the automatic inlining of functions can be a problem if an inlined function must be patched. In that case, the solution is relatively easy; all callers of the function must be changed in the resulting live patch. The -fpartial-inlining option can complicate things by only inlining portions of functions, but it doesn't change the basic nature of the problem.

The -fipa-sra option is a bit more subtle, in that it can lead to the removal of unused function parameters or change the way in which parameters are passed into a function. In other words, it changes the ABI of the function in response to its observations on how the function works. A live patch to that function could change how this optimization operates, leading to a surprising change in the ABI. The good news here is that, when this happens, GCC will change the name of the compiled function, so the broken ABI is immediately obvious. But this can prevent the direct patching of a buggy function; callers must be patched as well.

Code compiled with -fipa-pure-const may change in response to how a function operates; if a function is seen as not accessing memory, the compiler will make assumptions about the state of memory before and after calling it. If a patch changes the function's behavior, those assumptions may no longer hold; once again, it will be necessary to patch callers when this happens.

An "even crazier" option is -fipa-icf, which performs identical code folding. It can cause a function to be entirely replaced by an equivalent found elsewhere in the code, and it can be hard to detect that this change has happened. Code folding is also a problem for the kernel's stack unwinder. Other types of code elimination can happen if GCC thinks that a specific global variable won't change over a given function call. If the function is patched to now change the global variable, the calling code may well be incorrect. This sort of change, too, is hard to detect; it would be nice, he said, to have a GCC option to ask it to create a log of the optimizations it has done.

Perhaps the scariest option is -fipa-ra, which tracks the registers used by called functions and avoids saving those that will not be changed. A patch to the called function could easily cause it to use a new register, leading to data corruption in the calling functions and a likely significant reduction in the continuous uptime that live-patching users were hoping to enjoy. This optimization is hard to detect; it can be thought of as an ABI change for the called function, but no name changes are made. This one, he said, is "not good news." For now, this optimization is disabled by GCC when -pg is turned on, and the Ftrace subsystem, needed for live patching, needs -pg. But there is no inherent reason why those two options need to be incompatible, so this behavior could change at any time.

This list, Benes said, is only a small subset of the optimizations that can create problems for live patches. As compiler developers pursue increasingly aggressive optimizations, this problem is only going to get worse.

Patch building

The kernel has a standard way to apply a live patch, but there is not, yet, any sort of mainlined mechanism for the creation of live patches. Josh Poimboeuf gave a brief summary of the patch-creation tools out there with an eye toward picking one for upstream.

The first of these is kpatch-build. It works by building the kernel both with and without the patch applied, then does a binary diff to see which functions changed. All of the changed functions are then extracted and packaged up into a "Frankenstein kernel module" that is shipped with the live patch. It is a powerful system, he said, with a number of advantages, including the fact that it automatically deals with most of the optimization issues mentioned in the previous talk.

On the other hand, kpatch-build is quite complex. It has to know about all of the special sections used by the kernel, and it has problems with certain kinds of changes. It only works on the x86_64 architecture at the moment; all of those special sections differ across architectures, so turning it into a multi-architecture tool will not be easy. And, he said, kpatch-build is brittle and a maintenance nightmare.

An alternative is to just use the regular kernel build system and its module-building infrastructure. The changed function is copied and pasted into a new module, some boilerplate is added to register the function with the live-patching API, and the job is done. It's easy, but has its own problems; in particular, this module is unable to access non-exported symbols, which the patched function may need to do. This problem can be worked around by using kallsyms_lookup_name(), but this solution is error-prone, slow, and "yucky."
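The symbol-lookup problem has a userspace analogue: the same name-to-address table that kallsyms_lookup_name() consults in the kernel is exported (addresses permitting) through /proc/kallsyms. A sketch of parsing that format, using a literal sample rather than the live file:

```python
# Userspace illustration of kernel symbol lookup: parse the
# /proc/kallsyms format, where each line is
#   <address> <type> <symbol> [module]
# The sample addresses below are made up for illustration.

def lookup(kallsyms_text, name):
    """Return the address of `name` as an int, or None if absent."""
    for line in kallsyms_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2] == name:
            return int(fields[0], 16)
    return None

sample = ("ffffffff810c1230 T schedule\n"
          "ffffffff810c9990 t pick_next_task_fair")
print(hex(lookup(sample, "schedule")))   # 0xffffffff810c1230
```

Note that, on real systems, non-root readers may see all-zero addresses; and doing this from a module at runtime is exactly the slow, error-prone approach the talk called "yucky."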

The third alternative is new; indeed, he posted the proposal the week before the conference. This alternative uses the copy-and-paste approach, but adds an API and a postprocessing tool that allows the generated module to gain access to non-exported symbols. The code works now, though there are a number of possible improvements, including automating the process of attaching to non-exported symbols and detecting interference from compiler optimizations.

In the brief discussion at the end of the talk, it became clear that there were not a lot of concerns about the new tooling, so that is the direction things seem likely to go.

Module dependencies

Live patches can make changes to loadable modules, which leads to an interesting question: what happens if the module isn't present in the system when a patch is applied, but is loaded afterward? The live-patching code currently has some complicated infrastructure designed to detect this case and apply patches to modules as they are loaded. Jessica Yu, who has just taken over as the maintainer of the loadable module subsystem in the kernel, talked briefly about changing this mechanism to require that all affected modules be loaded before a live patch is applied.

Live patches are, themselves, loadable modules. Allowing a patch module to be loaded before any modules it affects requires carrying a fair amount of information and complex infrastructure, and it circumvents the normal module dependency mechanism. As a result, there is a fair amount of code duplication, including a reimplementation of much of the module loader in the live-patching code.

There are a couple of ways that things could be changed. One would be to simply require that all modules being patched be loaded before the patch itself is loaded. That would work, but it forces the loading of code into the kernel that is unneeded and may never be used on any given site. The alternative would be to split the live-patch module into multiple pieces, each of which applies a patch to a single kernel module. Then, only the pieces that are relevant to any given running system need to be loaded.

Making this change would simplify the live-patching code and reduce code duplication, but there's a problem: there isn't an easy way to force a necessary patch module to be loaded when a module needing patching is loaded. The depmod tool just doesn't recognize that sort of dependency. FreeBSD has a nice MODULE_DEPEND() macro, but Linux has never had such a mechanism.

Splitting the patch module, it turns out, could be problematic for any sort of wide-ranging change. CVE-2016-7097 was mentioned as an example; it included a virtual filesystem layer API change that had to be propagated to all filesystems. If it were to be split apart, the result would be a long list of modules that would need to be loaded to apply the patch.

There was a lively discussion on whether the rules concerning live patches for modules should be changed, much of it focused on a question asked by Steve Rostedt: if a module isn't present in the kernel, why not just fix it on disk rather than lurking in the kernel, waiting to patch it should it ever be loaded? Jiri Kosina replied that replacing on-disk modules would be hard from a distributor's point of view; it would introduce modules that no longer belong to the kernel package. Live patches can also be disabled; in that case, modified modules would have to be somehow restored. Some consistency models can also create trouble; it is possible to have both the pre-patch and post-patch code live and running in the kernel at the same time. So it's not obvious that fixing things on-disk is a workable solution, though Rostedt was adamant that it should be considered.

As the discussion wound down, it became fairly clear that the consensus was against changing how module dependencies work in live patching. The mechanism that the kernel has now, in the end, works well enough; it looks like it will not be going away anytime soon.

Other topics

Petr Mladek talked about the problems that come with modifying data structures in live patches. One has to start by locating the affected data and accesses; that is easy with a global variable, harder for data stored in multiple lists, and nearly impossible for uses that have been hidden via casts. Switching to the new values must be done carefully, once all of the code is in a position to handle them. Many techniques, such as the use of shadow structures to add data to existing structures, suffer from performance problems. And the problem of reverting a live patch gets that much harder when data changes have been made.

Miroslav Benes returned to talk about the problems associated with patching functions in the scheduler. It turns out that schedule() is a tricky function to work with, since it returns with a different stack than the one it was called with. This caused difficulties with a 2015 live patch fixing a security problem with x86 local descriptor table handling.

He outlined a solution to the problem involving putting the instruction pointer into the context that is saved when a context switch is made. That information can be used after a patch is applied to ensure that the version of schedule() that restores a given context is the same as the one that saved it. The solution is workable, but it's not clear that it matters that much; security issues in schedule() are rare and there may not be a need to apply another live patch to it anytime soon.

Jiri Kosina led a brief session on future work. The consistency model, as noted above, has been blocked for a long time on related issues. Now that the stack-validation work has been done and, hopefully, kernel-stack tracebacks can be trusted, it should be possible for that work to continue. There will likely be new proposals in that area soon.

In particular, the hybrid consistency model is likely to move forward. It should be reliable now that the stack traces are correct, but there is an associated problem: it requires a kernel built with frame pointers, and that has a significant performance cost — on the order of 10%. Nobody seems to know why turning on frame pointers hurts that badly; simply compiling the kernel with one register disabled does not have the same effect. Mel Gorman is evidently doing some benchmarking to try to track this problem down.

Kosina said that he is currently working on a port to the arm64 architecture. Beyond that, he said, there's not much point in worrying about other possible developments in live patching. The hybrid consistency model is likely to keep the group entertained for quite some time.

The microconference closed with a wide-ranging talk from Balbir Singh; much of it was taken up by low-level PowerPC details that are probably of relatively little interest to those outside the room. He did raise a few larger questions, though. One of those is expanding live patching to user-space code as well; there are, evidently, users who are interested in that capability.

He asked: what are the benefits of using live patching rather than performing a live cluster update? If a cluster can be taken down and upgraded one machine at a time, there is no real need for a live-patching infrastructure. We don't all run clusters, but users whose uptime needs lead them to consider live patching should perhaps be using clusters instead.

His last question had to do with rootkits; a live-patching mechanism is obviously a nice tool by which code can be injected into a running kernel. Kosina said that he doesn't really understand what the worry is in this regard. A live patch is just a module; if an attacker can load modules into the kernel, the game is already over. But, Singh said, there could be a vulnerability in the live patch itself; this is something that has happened to other vendors in the past. Live patching is meant to be a way to quickly close security problems, but, like any other sort of patch, it always runs the risk of introducing new vulnerabilities of its own.

[Thanks to LWN subscribers for supporting our travel to the event.]

Comments (13 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 4.9-rc5 Nov 13
Greg KH Linux 4.8.8 Nov 15
Greg KH Linux 4.8.7 Nov 10
Greg KH Linux 4.4.32 Nov 15
Greg KH Linux 4.4.31 Nov 10
Jiri Slaby Linux 3.12.67 Nov 10

Architecture-specific

Build system

Core kernel code

Martin KaFai Lau bpf: LRU map Nov 11
Peter Zijlstra kref improvements Nov 14

Device drivers

Device driver infrastructure

Gustavo Padovan drm: add explict fencing Nov 11
Shuah Khan Media Device Allocator API Nov 16

Filesystems and block I/O

Memory management

Networking

Security-related

Virtualization and containers

Kirti Wankhede Add Mediated device support Nov 14

Miscellaneous

Page editor: Jonathan Corbet


Copyright © 2016, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds