Kernel development
Brief items
Kernel release status
The current development kernel is 4.2-rc6, released on August 9. Linus said: "So last week I wasn't very happy about the state of the release candidates, but things are looking up. Not only is rc6 finally shrinking noticeably, the issues I was worried about had fixes come in early in the week, and so I don't have anything big pending. Assuming nothing new comes up, I suspect we will end up with the regular release schedule after all (ie in two weeks). Knock wood."
Stable updates: 4.1.5, 3.14.50, and 3.10.86 were released on August 10.
Quotes of the week
Kernel development news
Atomic mode setting design overview, part 2
Today's graphical applications require frame-perfect displays, while also taking advantage of specialized, power-saving hardware in embedded GPUs. To do that, the kernel must be able to update the hardware graphics state atomically — changing multiple independent parameters in a single operation so that users don't see corrupted frames while the change is taking place. The first article of this series looked at why an atomic display update API has become necessary, how it came to be, and what the main driver interfaces look like. Here, we will look into a few more details and build on the material discussed earlier.
Still, a short recap of the main interface can't hurt: an atomic display update is created by duplicating the state structures for display objects like planes or a CRTC and then updating the duplicates, which are collected in struct drm_atomic_state. Once all updates are applied, the new state is provided in its entirety to the driver's ->atomic_check() function for validation. If it passes, then it can be committed using ->atomic_commit(). Just validating updates to figure out in a generic way what's possible is also supported using a check-only mode.
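In rough kernel-C terms, that flow looks something like the following sketch (error paths are elided, and the surrounding glue — dev, plane, and the locking context ctx — is invented for illustration):

    struct drm_atomic_state *state = drm_atomic_state_alloc(dev);
    struct drm_plane_state *plane_state;
    int ret;

    state->acquire_ctx = ctx;   /* the locking context, covered below */

    /* duplicate the plane's state into the update... */
    plane_state = drm_atomic_get_plane_state(state, plane);
    if (IS_ERR(plane_state))
            return PTR_ERR(plane_state);

    /* ...and modify the duplicate, never the live state */
    plane_state->crtc_x = 64;
    plane_state->crtc_y = 64;

    ret = drm_atomic_check_only(state);       /* calls ->atomic_check() */
    if (ret == 0)
            ret = drm_atomic_commit(state);   /* calls ->atomic_commit() */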
Of atomic updates and nuclear options
A big change in upstream atomic mode setting compared to the Android Display Framework (ADF) is that upstream allows any display state to be changed through the atomic ioctl(). ADF only allows plane states to be changed atomically; output routing and display-mode changes (which are called mode-set changes in this article) need separate calls from user space. At least on a cursory look, plane updates and mode-set changes are indeed two different things. On one hand, atomic plane updates are meant to be completed by the next frame displayed. Atomic mode-set updates, on the other hand, can cause a display to go to black for a bit, which is anything but perfect, and they generally take hundreds of milliseconds. But both update the display state in an atomic, all-or-nothing fashion and they can both use the same "list of property changes" transport for the ioctl(). So, purely by just looking at the structures and data passed between user space and the kernel — and not the requested semantics — it makes a lot of sense to treat them as the same operation.
The tension between these two types of updates resulted in a lot of confusion when discussing atomic plans for upstream. It was only resolved by calling full mode sets "atomic mode sets" and calling the plane updates "nuclear page flips". But when you look closer, the distinction between full mode sets and pure plane updates synchronized to the next frame quickly becomes unclear. Often a display mode change on integrated panels only means that the scaler used to fit the displayed area onto the panel needs to be adjusted. A lot of hardware can do that with a small update for the next frame, without the need for a full mode set and blanking the screen. But then there's hardware where some plane configuration changes require reallocation of FIFO space, and that is only possible when the display pipeline is shut down, which looks very much like a mode set.
Furthermore, user space would often like to delay changes that require full mode sets to a more suitable time. For example, tablets and smartphones suspend so often that pretty much any mode set can be delayed without incurring too much energy waste by running in a suboptimal configuration until the next suspend. There, plane changes that need a full mode set shouldn't be done without the consent of user space to avoid a less-than-perfect visual experience for users.
After many bits were expended discussing this problem, the solution merged was the DRM_MODE_ATOMIC_ALLOW_MODESET flag. With this, user space can indicate whether a full mode set (with the downsides of potentially blanking seemingly unrelated screens and taking a long time) is acceptable or not. And, of course, this flag is also obeyed for the test-only mode. Most drivers don't need to bother with this, since the atomic helpers take care of checking for it — as long as the driver correctly indicates to the helper code that it needs or doesn't need a full mode set in one of the special cases mentioned above.
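From user space, the flag is simply passed along with the atomic ioctl(). The following is a hedged sketch using the libdrm wrappers; plane_id and prop_id are invented placeholders that a real client would first discover through the usual property-enumeration ioctl()s:

    drmModeAtomicReqPtr req = drmModeAtomicAlloc();

    /* plane_id and prop_id are placeholders discovered beforehand */
    drmModeAtomicAddProperty(req, plane_id, prop_id, 1);

    /* would this update require a full mode set? */
    if (drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL) == 0)
            drmModeAtomicCommit(fd, req, 0, NULL);   /* no: commit right away */
    else
            /* yes (or the update is invalid outright): defer it, or
             * accept the blanking and delay of a full mode set */
            drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);

    drmModeAtomicFree(req);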
Locking magic using wait/wound mutexes
Next up is how concurrent updates are handled. A while ago, DRM gained per-CRTC locks to allow concurrent updates and changes on different display pipelines. That, in turn, required per-plane locks since planes can (if the hardware supports it) change the CRTC they're connected to. In addition, there's one more lock to protect all the output routing and connector state; making those kinds of changes usually requires at least a few hundred milliseconds and thus will result in stalls no matter what. It's also a rare and usually system-wide operation, so finer-grained locking for it would just be wasted effort.
But one problem with the atomic ioctl() is that user space could pass its list of updates (in the form of object_id, property_id, value triples) in any order it wants. That could easily lead to deadlocks if they are processed in that order, since different objects require different locks. User space could provoke an AB-BA deadlock by creating two updates that list the same two objects but in a different order.
That can be solved by sorting the entire list of triples by object first, but unfortunately that only delays the problem. The first article explained that atomic state-checking code is only allowed to look at or touch the state of objects that are part of the update. This guarantees that concurrent updates won't trample each other. Unfortunately, that also means that drivers must be able to add arbitrary objects and their states to the overall atomic update tracked in the drm_atomic_state structure in their ->atomic_check() callback. That allows the callback to check cross-object limits and make sure that shared resources are not exhausted. Because hardware is crazy, this can't be done outside of the driver code, since every piece of hardware has its own unique shared state. Just grabbing all potentially needed locks up front isn't a great idea either, since that would make all updates synchronous and hence make all the fine-grained locking pointless.
This means a locking scheme where locks can be taken in an arbitrary order is needed. Luckily, the kernel has that with its wait/wound mutex support, which, incidentally, was also added by GPU driver hackers for some pesky memory-management troubles. Wait/wound mutexes are a topic of their own, really, but the summary is that they allow you to grab locks in arbitrary order, can reliably detect deadlocks, and have a means to back off from a deadlock without throwing fairness out the window or requiring costly spinning. They're also rather complex.
No one wants to inflict advanced locking schemes on driver writers unnecessarily, so the locking is all nicely encapsulated in functions provided by the atomic core. When you need to change which CRTC a plane is assigned to, just call drm_atomic_set_crtc_for_plane() and make sure any errors are forwarded correctly up the call chain. Similarly, if a driver needs to consult other state objects in its state-validation code (e.g. to check limits on shared resources), it can duplicate any state into the atomic update it needs and locks will be acquired correctly behind the scenes. The only important bit is to correctly forward the -EDEADLK error code and always use the provided functions to duplicate and update state structures. With that, magic simply happens. Well, as long as a driver's atomic check code is otherwise correct and doesn't touch hardware state nor anything else that might persist outside of just the state structures.
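For illustration, here is a hedged sketch of such a driver check function; other_crtc and the shared-PLL test are invented stand-ins for hardware-specific shared state:

    static int foo_atomic_check(struct drm_device *dev,
                                struct drm_atomic_state *state)
    {
            struct drm_crtc_state *crtc_state;

            /* pull another CRTC into the update; locking happens implicitly */
            crtc_state = drm_atomic_get_crtc_state(state, other_crtc);
            if (IS_ERR(crtc_state))
                    return PTR_ERR(crtc_state);   /* forwards -EDEADLK */

            if (shared_pll_overcommitted(crtc_state))   /* invented check */
                    return -EINVAL;

            return drm_atomic_helper_check(dev, state);
    }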
For users of the DRM-internal atomic interface, things are a notch more complex, but not a lot. Wait/wound locking state is tracked in a struct drm_modeset_acquire_ctx, which is initialized using drm_modeset_acquire_init(). To make all the transparent lock-taking work when duplicating state objects into an atomic update, this is also stored in drm_atomic_state->ctx. As with drivers, any locks needed will be implicitly acquired when duplicating state objects; the only thing needed is to actually handle deadlocks signaled with the -EDEADLK error code.
Since the deadlock backoff logic will drop locks, already duplicated state objects might become stale and need to be released first with drm_atomic_state_clear(). Then a call to drm_modeset_backoff() will drop any acquired mode-set locks and block on the contended lock until it's available. After that, the entire sequence needs to be restarted right after the initialization of the acquire context — that must not be reinitialized since it contains the queue ticket that ensures forward progress in the wait/wound locking algorithm. Finally, when everything has succeeded, locks can be dropped with drm_modeset_drop_locks() and the acquire context is finalized with drm_modeset_acquire_fini().
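Put together, the canonical retry loop looks roughly like the following sketch (real code would also handle other error codes and rebuild the update after a backoff):

    struct drm_modeset_acquire_ctx ctx;
    int ret;

    drm_modeset_acquire_init(&ctx, 0);
    state->acquire_ctx = &ctx;
    retry:
    /* (re)assemble the duplicated states here, then check or commit */
    ret = drm_atomic_commit(state);
    if (ret == -EDEADLK) {
            drm_atomic_state_clear(state);   /* duplicated states are stale */
            drm_modeset_backoff(&ctx);       /* drop locks, wait for the contended one */
            goto retry;
    }
    drm_modeset_drop_locks(&ctx);
    drm_modeset_acquire_fini(&ctx);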
Note that it's really easy to test all that special backoff code: enabling lock debugging and CONFIG_DEBUG_WW_MUTEX_SLOWPATH will cause the wait/wound mutex code to inject spurious deadlock conditions. With that, all the deadlock backoff code can be exhaustively tested just with single-threaded test cases. And of course wait/wound mutexes fully support all the Linux lock debugging and testing tools such as lockdep.
Helper-library design
The other piece that took years to settle is the driver helper-library design. A mid-layer approach, as in ADF, was out of the question because hardware has too many special cases for any such framework to remain reasonably simple. There was already an extensive set of helpers used by most drivers anyway, so preferably drivers shouldn't have to rewrite all their callbacks and support code when converting to atomic updates. Directly reusing the existing callbacks was done for proof-of-concept conversions, but fundamentally there were a few troubles with the existing helpers and their driver callbacks:
- They had no real support for checking cross-object constraints. Properly checking those is a major goal of the atomic changes. Checking for such arbitrary cross-device constraints is exposed to user space with the DRM_MODE_ATOMIC_TEST_ONLY flag, which is described in the first article.
- There was no support for doing atomic updates of planes. For sane hardware that has lockout bits to prevent any updates until they have all been written to registers, this is easy to fix with just pre- and post-commit hooks. But in general, hardware needs more flexibility there. There were also no provisions for separating the checks from the commit phase of plane updates.
- Any output routing changes required updating and enabling the primary plane too. Often that's not what you want; for example, displaying a video overlay with a black background for the letterbox or pillarbox bars doesn't need a primary plane at all. The hardware can do this, so it should be made possible.
- Finally, the legacy helpers were designed for the very first XRandR implementation. In the almost ten years that have passed since, DRM developers learned an awful lot of lessons about what works, what should be simplified (by providing stricter guarantees in helpers), and where more flexibility was needed to make the helpers more useful.
All in all, a full rewrite of the helper code was in order, while still making conversions fairly easy. The big change, certainly, is that the helper library doesn't just expose one monolithic template function for each hook, but also exposes separate pieces, for example to handle plane-only updates or only output configuration changes. Aside from a few restrictions imposed by the atomic semantics, they can be called in arbitrary order, as best suits the hardware. This way drivers can mix and match between helper code and their own — or just augment helper code as needed (for checking or changing global state, for example).
A good example would be drivers supporting runtime power management (PM). The old legacy helpers updated planes in between disabling outputs and enabling them with the new configuration. For best backward compatibility, that's also what the atomic helpers do, but this is not the best for drivers supporting runtime PM. Such drivers will disable the hardware when the display pipelines are off and, hence, plane updates will get lost if they are done in the middle. It's better to do plane updates as the final step of committing the update to hardware, and the helpers allow this.
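As a hedged sketch of that ordering, a runtime-PM-friendly commit sequence built from the exported helper pieces might look like the following; the function name is invented and a real driver would also handle vblank waits and cleanup:

    static void foo_atomic_commit_tail(struct drm_atomic_state *state)
    {
            struct drm_device *dev = state->dev;

            drm_atomic_helper_commit_modeset_disables(dev, state);
            drm_atomic_helper_commit_modeset_enables(dev, state);

            /* planes last: every affected pipe is now powered up,
             * so the plane writes cannot get lost */
            drm_atomic_helper_commit_planes(dev, state);
    }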
Another useful feature of the new helper library is that any derived state used to steer control flow is stored in the state structures — it has to be, due to the check/commit split. That allows drivers to override and control the helpers in fine detail. For example, a driver can force a full mode set (and the required state precomputation) when some shared global resources need to be adjusted, even when the helper wouldn't see a need for a full mode set on its own. Since the exported check functions are idempotent, they can be re-run when a later step uncovers the need (again due to shared resources) to include more objects in the atomic update.
That is useful when a plane update uncovers the need for a full mode set. But a full mode set, in turn, might require recomputations for plane states, creating a loop. Making the exported helpers idempotent and callable in any order helps to break such loops. To avoid surprises, the default check function, drm_atomic_helper_check(), only calls the subordinate helpers to check plane updates and mode-set changes once each. But hardware with complicated dependencies between these two types of updates (like the example earlier where a plane update can require a full mode set, maybe even on an unrelated CRTC) can be handled by calling the helpers multiple times, until a steady state has been reached for the precomputed state. Note that this is only to facilitate state validation; the actual commit of the atomic update to the hardware should still happen in one go, without any iterating.
Also, all the functions to implement legacy callbacks in terms of the atomic driver interfaces are exported and can be called explicitly. Drivers without a need to specially handle legacy entry points can just put the provided helpers directly into the DRM driver operation function tables. This allows drivers to keep some features only supported for legacy interfaces by keeping the code for old platforms around. This is useful for drivers which support ten or more years of hardware iterations where it doesn't make sense to implement proper atomic support for every old special case.
The other big change is simpler semantics for drivers. Thanks to the atomic updates, the new helpers can keep track of state changes much better and guarantee that a disable or enable hook won't be called redundantly — which the old helpers routinely did. They also hide a lot of the potential flexibility of legacy interfaces like the four different display power management signaling (DPMS) levels used for runtime PM on outputs — the atomic helpers only have on and off. This is because modern hardware simply doesn't support intermediate DPMS levels any more. On anything but the oldest of hardware, drivers just clamped the intermediate levels to off. This also allows the atomic helpers to reuse the same enable and disable hooks for runtime PM as for the normal enabling and disabling of outputs, which simplifies drivers. The ->dpms() callbacks are still supported to simplify converting drivers from legacy mode setting to atomic, but they are deprecated. Converted drivers should only have to implement the ->enable() and ->disable() hooks and should be able to delete the DPMS handlers entirely.
Finally, there are transitional helpers to handle the impedance mismatch for planes between the legacy helpers and the new atomic ones. They provide functions like drm_helper_crtc_mode_set_base() that implement the legacy helper callbacks in terms of the new atomic helper callbacks. This allows driver writers to switch over to the new hooks for plane updates in their backend while otherwise still using the same code and overall control flow. That helps a lot in converting big drivers by allowing the conversion to happen in multiple phases that each only deal with a part of the driver. There's also a "how to" that has all of the details for an atomic conversion — what needs to be done and in which exact order.
Overall, there are a number of drivers already converted and the new helpers seem to be working out well. Some small changes in the interfaces of callbacks were needed, but thus far there have been no big problems with the semantics themselves. A big reason for that is that the new code has been modeled after the i915.ko-specific mode-set infrastructure, which has been developed over the past few years with an aim toward atomic updates.
The big lesson learned from i915.ko led to the major difference in the merged code: how an atomic update is assembled before actually committing it to the hardware. In i915.ko all updates were staged using a set of new_foo members embedded into each mode-setting object structure, mirroring all the existing data structures and pointers. The major downside of that approach is that rolling back an update (either because it violates a constraint or because user space only asked to check constraints using TEST_ONLY) needed to carefully undo all these changes. Staging updates in the objects themselves also makes it much harder to have multiple updates in-flight at the same time. With the merged atomic infrastructure, updates are assembled out of free-standing state objects and no permanent mode-setting object structures are touched before actually committing state to the hardware. That makes rollback and handling concurrent updates much easier.
The unresolved gap in the current helpers is handling non-blocking updates. Thus far, drivers need to manage their own work queues for the asynchronous tail of an atomic update and synchronize appropriately when an update depends upon global state, as they already had to do for primary-plane page flips with the old legacy ioctl(). There are ideas floating around to implement this in the atomic helpers, but nothing has come up that is both simple enough to be an improvement over the current state while still being useful for a lot of drivers.
Conclusions
After more than three years in the making, upstream finally has a mode-setting interface and support infrastructure for drivers that covers all of the use cases, from energy-sipping systems-on-chip (SoCs) that need all the special plane hardware and perfect updates of each frame, to big desktops supporting multiple screens. That's a first, since the original kernel mode setting pretty much left embedded systems to fbdev. But given the number of ongoing or already-merged conversions to atomic and entirely new drivers supporting it, it looks like upstream graphics for SoCs is finally happening and is bound to stay. Well, at least for the mode-setting side of GPUs. But even just that would be a big change and a great improvement for all the upstreaming efforts of Android-vendor kernel trees.
Let's all embrace the age of atomic mode setting — that future is here and it is looking good ...
libnvdimm, or the unexpected virtue of unit tests
In the July 9th edition, Jonathan Corbet said: "Over time, the kernel development community has indeed gotten better at merging code that does not require fixing later in the development cycle." That’s good. Now how do we accelerate this trend? One need only look at the recent inclusion of kselftests in the kernel or the ongoing development of the venerable xfstests suite to understand the current state of the art of Linux kernel test capabilities. However, what becomes readily apparent when looking at these test suites is that they are limited to tests that can be solely driven by user-space stimuli, such as generic sysfs attributes, system calls, and on-disk metadata formats.
Granted, this is a very large test surface, but it still leaves a coverage hole for a body of code that, to date, has only executed in the presence of specific hardware (be it virtual or physical), makes up the majority of the kernel source, and sees the highest rate of change: device drivers. The libnvdimm subsystem is one of the latest additions to the kernel's driver infrastructure; it layers services over persistent memory to make access to that memory safer and more reliable. To the best of this author’s knowledge, libnvdimm is the first upstream device-driver subsystem to integrate in-kernel interface mocking techniques in support of unit testing. This article walks through how and why this came to pass, as well as the challenge and promise of driver unit testing going forward.
The trouble with TRBs
While libnvdimm is the first upstream inclusion of this technique, the first overall attempt to use mocking for driver development was part of a patch kit for rearchitecting transfer-request-block (TRB) handling in the USB 3.0 host controller (XHCI) driver. It is helpful to understand the problem the patch kit was trying to solve to see why mocking was proposed as a way to validate the implementation.
Like many I/O devices, XHCI controllers have a ring buffer for describing transfers between memory and the device. An XHCI ring buffer entry is called a TRB, and it describes a transfer of a contiguous range of memory. The total set of TRBs in an XHCI ring need not be contiguous in memory as a ring can be divided into segmented groups of TRBs. These individual ring segments (TRB-segments) can be chained together with "link" TRBs. In the case when a single TRB is only able to describe a subset of the full transfer request (i.e., when performing scatter-gather to multiple discontiguous memory ranges), multiple TRBs can be ganged together to form the transfer-descriptor (TD). This architecture is fairly straightforward until one considers the pile of constraints imposed by the I/O request, the host controller, and the attached device:
- A TD may only cross a TRB-segment boundary (by way of a link-TRB) at a device-defined boundary called the MBP (max burst payload). This means that if the MBP is 1KB and we hit the end of a TRB segment after queuing a 512-byte TRB, things will fail; the driver should instead have enqueued an early link-TRB rather than the 512-byte "transfer"-TRB.
- A single TRB may not cross a 64KB boundary in memory.
- Each discontiguous range to be transferred must have its own TRB (implied since the driver is performing scatter-gather DMA in a TD).
If you are still reading this article after that acronym soup, then perhaps you are also starting to imagine the array of USB peripherals and contrived I/O scenarios needed to fully exercise all the corner-case conditions presented by these overlapping constraints. Instead, these conditions can be explicitly tested by mocking all the interfaces and input conditions that surround the TRB queuing implementation. This is the approach taken in the patch titled "xhci: unit test ring enqueue/dequeue routines".
Move fast and break things
The motivation for exercising the XHCI TRB handling code with a unit test was not driven solely by the complexities of the hardware implementation. After all, the kernel has been quite successful in the absence of widespread driver unit testing since its inception. However, that success has come in part from shipping buggy code to users and fixing it quickly when problems arise. Can this arrangement continue to scale into the future?
While the number of new devices and platforms increases and new developers join the kernel development community, the number of active code reviewers remains relatively flat. A maintainer’s most important job is to say "no", and when that "no" can be automated by a robot like checkpatch, the 0day-kbuild infrastructure, or a unit-test suite, those limited reviewer resources can scale out to higher-order review tasks. The power of unit tests to allow a project to scale was made clear by this author’s time working at Facebook. The risk incurred by onboarding new developers at a high rate was mitigated in part by extensive unit tests. This discipline shifts the "break things" aspect of "move fast" to the developer’s workstation rather than the production environment.
From read-only to rewrite
The potential of unit testing to move bug detection earlier in the development cycle is common knowledge. What was surprising during the development of libnvdimm were the occasions where the process of thinking through and developing a test resulted in early detection of warts in the architecture. This happened several times in small ways; early drafts of an approach that seemed suitable at first later felt kludgy after writing the test, and these were ultimately thrown away. A larger example was a reorganization of the block-translation-table (BTT) implementation from a driver to a library.
First, here is a quick primer on the responsibilities of the BTT code. Non-volatile memory devices provide persistent storage and can be used anywhere a disk is used. Whereas disks perform I/O operations in units of sectors (512 bytes, for example) that are atomic with respect to power loss, memory devices perform I/O in units of individual bytes. With this change comes an important question: what happens if a sector update is interrupted (by a power failure, say) partway through?
Developers (and users) would like to see non-volatile devices be "atomic with respect to power loss," meaning that a write to a sector completes in full, or not at all, and software can depend on not seeing partially updated sectors upon recovery from a crash or power failure event. The BTT is a software mechanism to layer these atomic-sector-update semantics on top of byte-accessible memory. The initial architecture for BTT followed in the footsteps of other kernel drivers that layer various storage semantics on top of another storage device. Device Mapper (DM - volume management) and Multiple Devices (MD - RAID) use a stacked block device arrangement for this purpose.
This stacked arrangement served the implementation well up until the point where code was added to handle occasions when the backing memory for a BTT is discovered to be read-only. The code gymnastics needed to keep the read-only flags synchronized between the "fronting" BTT block device and the "backing" non-volatile memory block device received special attention during the upstream review cycle. It became clear that the implementation would be cleaner with the stacking removed, instead building the BTT in as an intrinsic property of the storage device.

In the typical development model, sans unit tests, the fact that the read-only flags for the fronting and backing devices could get out of sync may not have been caught until well after the kernel had shipped to users. The process of developing a test highlighted the hidden requirement, exposed a deficiency in the architecture, and triggered significant reworking of the code, all in advance of the initial merge of the libnvdimm subsystem into the v4.2-rc1 kernel.
Making a mockery
The unit-test infrastructure for libnvdimm lives in the tools/testing/nvdimm directory. It compiles modified versions of the nfit (the "firmware interface table" that enables discovery of persistent memory arrays), libnvdimm core, and nvdimm storage drivers that consume mocked resources for exercising the implementation.
The general method for inserting mock objects into a C project is via the --wrap option to the linker. In the kernel this is really only suitable for exported symbols. See tools/testing/nvdimm/Kbuild for an example of providing fake resources to the ACPI NFIT driver and the resulting child devices. It is worth noting that --wrap should only ever appear in Kbuild configurations for external (out-of-tree) modules — and testing modules should always be considered out-of-tree, even when they are a part of the kernel source. Beyond making it visible, via a "taint" flag in a kernel oops report, that external modules were loaded into the failing kernel, this convention also keeps apart unit-test implementations that want to mock the same symbol in different ways.
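For illustration, the heart of such a Kbuild fragment looks roughly like this (a sketch modeled on tools/testing/nvdimm/Kbuild, which wraps a longer list of symbols):

    ldflags-y += --wrap=ioremap_nocache
    ldflags-y += --wrap=ioremap_wc

With these flags, every call to ioremap_nocache() or ioremap_wc() in the objects being linked resolves to __wrap_ioremap_nocache() or __wrap_ioremap_wc(), supplied by the test module.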
Lastly, a successful unit test implementation should not render the rest of the kernel inoperable. For example, the libnvdimm test modules provide a __wrap_ioremap() symbol that redirects requests to mock memory resources for test purposes. However, if that routine is handed a real libnvdimm resource, __wrap_ioremap() detects it and passes it through to the actual ioremap() in the base kernel.
I struggled with the kernel build infrastructure to use the linker-provided method for calling "real" symbols from a wrapper. The linker defines a __real_ioremap() by default for this purpose, but that symbol cannot be handed to EXPORT_SYMBOL(). Instead, the Kbuild infrastructure can achieve the same effect as __real_<symbol>() by placing the definition of the wrappers in their own directory. See tools/testing/nvdimm/test/iomap.c for the implementation of the fallback mechanism.
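As a hedged sketch of the fallback pattern, where get_mock_resource() is a hypothetical stand-in for the real lookup in iomap.c:

    /*
     * Sketch only: hand back fake memory for test-owned ranges,
     * otherwise fall through to the real ioremap().
     */
    void __iomem *__wrap_ioremap(resource_size_t offset, unsigned long size)
    {
            void __iomem *mock = get_mock_resource(offset, size);

            if (mock)   /* a test-owned range: return the fake mapping */
                    return mock;

            /*
             * A real resource: call through to the actual ioremap().
             * This call is not rewritten, because this file is linked
             * without --wrap.
             */
            return ioremap(offset, size);
    }
    EXPORT_SYMBOL(__wrap_ioremap);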
Conclusion
The addition of the unit tests was a contentious point during the review process — a fact that is reflected in the changelog. Although there were numerous, pleasantly surprising occasions of early detection of bugs and architecture defects, this approach carries risk. Mock objects by definition take shortcuts and bend the normal rules of the objects they replace. This fact presents a unique maintenance burden in that different test implementations may choose to mock the same symbol in different ways — ways that would be surprising to a typical kernel developer.
In general, code aspects in a project that surprise a typical developer are a tax on review resources. That said, the "tax" is overshadowed by the benefits. The volume and pace of iteration needed to take libnvdimm, without regressing, from its initial posting at v4.1-rc1 to its final, merged state at v4.2-rc1 could not have been achieved without this capability. As the kernel continues to scale and attract new developers, the need for automated code review and testing will continue to grow.
RCU requirements part 3
This is the third and final segment in a series on the requirements for the kernel's read-copy-update (RCU) subsystem and how they came to be. Part 1 covered the fundamental requirements that make RCU what it is, while part 2 got into the requirements driven by software-engineering and parallelism concerns. In this concluding segment, the discussion will become more Linux-specific. In particular, the following topics are covered:

- Linux kernel complications
- Other RCU flavors
- Possible future changes
Needless to say, it wouldn't be an RCU article without the answers to the quick quizzes at the end — but this article only has a single quiz in it.
Linux kernel complications
The Linux kernel provides an interesting environment for all kinds of software, including RCU. Some of the relevant points of interest are as follows:
- Configuration.
- Firmware interface.
- Early boot.
- Interrupts and non-maskable interrupts (NMIs).
- Loadable modules.
- Hotplug CPU.
- Scheduler and RCU.
- Tracing and RCU.
- Energy efficiency.
- Performance, scalability, response time, and reliability.
This list is probably incomplete, but it does give a feel for the most notable Linux kernel complications. Each of the following sections covers one of the above topics.
Configuration
RCU's goal is automatic configuration, so that almost nobody needs to worry about RCU's Kconfig options. And, for almost all users, RCU does in fact work well “out of the box.”
However, there are specialized use cases that are handled by kernel boot parameters and Kconfig options. Unfortunately, the Kconfig system will explicitly ask users about new Kconfig options, which requires that almost all of them be hidden behind a CONFIG_RCU_EXPERT Kconfig option. Commits doing this hiding are making their way to mainline.
This all should be quite obvious, but the fact remains that Linus Torvalds recently had to remind me of this requirement.
Firmware interface
In many cases, the kernel obtains information about the system from the firmware, and sometimes things are lost in translation. Or the translation is accurate, but the original message is bogus.
For example, some systems' firmware over-reports the number of CPUs, sometimes by a large factor. If RCU naively believed the firmware, as it used to do, it would create too many per-CPU kernel threads. Although the resulting system will still run correctly, the extra threads needlessly consume memory and can cause confusion when they show up in ps listings.
RCU must therefore wait for a given CPU to actually come online before it can allow itself to believe that the CPU actually exists. The resulting “ghost CPUs” (which are never going to come online) cause a number of interesting complications.
Early boot
The Linux kernel's boot sequence is an interesting process, and RCU is used early, even before rcu_init() is invoked. In fact, a number of RCU's primitives can be used as soon as the initial task's task_struct is available and the boot CPU's per-CPU variables are set up. The read-side primitives (rcu_read_lock(), rcu_read_unlock(), rcu_dereference(), and rcu_access_pointer()) will operate normally early on, as will rcu_assign_pointer().
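These compose into the usual read-side pattern, which therefore works even this early in boot (gp and the worker function here are hypothetical):

    struct foo *p;

    rcu_read_lock();
    p = rcu_dereference(gp);        /* gp: an RCU-protected global pointer */
    if (p)
            do_something_with(p);   /* hypothetical read-side work */
    rcu_read_unlock();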
Although call_rcu() may be invoked at any time during boot, callbacks are not guaranteed to be invoked until after the scheduler is fully up and running. This delay in callback invocation is due to the fact that RCU does not invoke callbacks until it is fully initialized, and this full initialization cannot occur until after the scheduler has initialized itself to the point where RCU can spawn and run its kernel threads. In theory, it would be possible to invoke callbacks earlier; however, this is not a panacea because there would be severe restrictions on what operations those callbacks could invoke.
Perhaps surprisingly, synchronize_rcu(), synchronize_rcu_bh() (discussed below), and synchronize_sched() will all operate normally during very early boot, the reason being that there is only one CPU and preemption is disabled. This means that the call to synchronize_rcu() (or one of its friends) is itself a quiescent state and thus a grace period, so the early-boot implementation can be a no-op.
Quick Quiz 14: So what happens with synchronize_rcu() during scheduler initialization for CONFIG_PREEMPT=n kernels? (The answer appears at the end of this article.)
I learned of these boot-time requirements as a result of a series of system hangs.
Interrupts and NMIs
The Linux kernel has interrupts, and RCU read-side critical sections are legal within interrupt handlers and within interrupt-disabled regions of code, as are invocations of call_rcu().
Some Linux kernel architectures can enter an interrupt handler from non-idle process context, and then just never leave it, instead stealthily transitioning back to process context. This trick is sometimes used to invoke system calls from inside the kernel. These “half-interrupts” mean that RCU has to be very careful about how it counts interrupt nesting levels. I learned of this requirement the hard way during a rewrite of RCU's dyntick-idle code.
The Linux kernel has non-maskable interrupts (NMIs), and RCU read-side critical sections are legal within NMI handlers. Thankfully, RCU update-side primitives, including call_rcu(), are prohibited within NMI handlers.
The name notwithstanding, some Linux kernel architectures can have nested NMIs, which RCU must handle correctly. Andy Lutomirski recently surprised me with this requirement; he also kindly surprised me with an algorithm that meets that requirement.
Loadable modules
The Linux kernel has loadable modules, and these modules can also be unloaded. After a given module has been unloaded, any attempt to call one of its functions results in a segmentation fault. The module-unload functions must therefore cancel any delayed calls to loadable-module functions; for example, any outstanding mod_timer() must be dealt with via del_timer_sync() or similar.
Unfortunately, there is no way to cancel an RCU callback; once you invoke call_rcu(), the callback function is going to eventually be invoked, unless the system goes down first. Because it is normally considered socially irresponsible to crash the system in response to a module unload request, we need some other way to deal with in-flight RCU callbacks.
RCU therefore provides rcu_barrier(), which waits until all in-flight RCU callbacks have been invoked. If a module uses call_rcu(), its exit function should therefore prevent any future invocation of call_rcu(), then invoke rcu_barrier(). In theory, the underlying module-unload code could invoke rcu_barrier() unconditionally, but in practice this would incur unacceptable latencies.
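The resulting module-exit pattern looks roughly like the following sketch, where unregister_foo_hooks() is a hypothetical stand-in for whatever stops new call_rcu() invocations:

    static void __exit foo_exit(void)
    {
            unregister_foo_hooks();   /* hypothetical: no new call_rcu() after this */
            rcu_barrier();            /* wait for in-flight callbacks to finish */
            /* only now is it safe for the module text to go away */
    }
    module_exit(foo_exit);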
Nikita Danilov noted this requirement for an analogous filesystem-unmount situation, and Dipankar Sarma incorporated rcu_barrier() into RCU. The need for rcu_barrier() for module unloading became apparent later.
Hotplug CPU
The Linux kernel supports CPU hotplug, which means that CPUs can come and go. It is of course illegal to use any RCU API member from an offline CPU. This requirement was present from day one in DYNIX/ptx, but on the other hand, the Linux kernel's CPU-hotplug implementation is “interesting.”
The Linux kernel CPU-hotplug implementation has notifiers that are used to allow the various kernel subsystems (including RCU) to respond appropriately to a given CPU-hotplug operation. Most RCU operations may be invoked from CPU-hotplug notifiers, including even normal synchronous grace-period operations such as synchronize_rcu(). However, expedited grace-period operations such as synchronize_rcu_expedited() are not supported, due to the fact that current implementations block CPU-hotplug operations, which could result in deadlock.
In addition, all-callback-wait operations such as rcu_barrier() are also not supported, due to the fact that there are phases of CPU-hotplug operations where the outgoing CPU's callbacks will not be invoked until after the CPU-hotplug operation ends, which could also result in deadlock.
Scheduler and RCU
RCU depends on the scheduler, and the scheduler uses RCU to protect some of its data structures. This means the scheduler is forbidden from acquiring the run-queue locks and the priority-inheritance locks in the middle of an outermost RCU read-side critical section unless it also releases them before exiting that same RCU read-side critical section. This same prohibition also applies to any lock that is acquired while holding any lock to which this prohibition applies. Violating this rule results in deadlock.
For RCU's part, the preemptible-RCU rcu_read_unlock() implementation must be written carefully to avoid similar deadlocks. In particular, rcu_read_unlock() must tolerate an interrupt where the interrupt handler invokes both rcu_read_lock() and rcu_read_unlock(). This possibility requires rcu_read_unlock() to use negative nesting levels to avoid destructive recursion via the interrupt handler's use of RCU.
This pair of mutual scheduler-RCU requirements came as a complete surprise.
As noted above, RCU makes use of kernel threads, and it is necessary to avoid excessive CPU-time accumulation by these threads. This requirement was no surprise, but RCU's violation of it when running context-switch-heavy workloads when built with CONFIG_NO_HZ_FULL=y did come as a surprise [PDF]. RCU has made good progress toward meeting this requirement, even for context-switch-heavy CONFIG_NO_HZ_FULL=y workloads, but there is room for further improvement.
Tracing and RCU
It is possible to use tracing on RCU code, but tracing itself uses RCU. For this reason, rcu_dereference_raw_notrace() is provided for use by tracing, which avoids the destructive recursion that could otherwise ensue. This API is also used by virtualization in some architectures, where RCU readers execute in environments in which tracing cannot be used. The tracing folks both located the requirement and provided the needed fix, so this surprise requirement was relatively painless.
Energy efficiency
Interrupting idle CPUs is considered socially unacceptable, especially by people with battery-powered embedded systems. RCU therefore conserves energy by detecting which CPUs are idle, including tracking CPUs that have been interrupted from idle. This is a large part of the energy-efficiency requirement, so I learned of this via an irate phone call.
Because RCU avoids interrupting idle CPUs, it is illegal to execute an RCU read-side critical section on an idle CPU. (Kernels built with CONFIG_PROVE_RCU=y will splat if you try it.) The RCU_NONIDLE() macro and _rcuidle event tracing are provided to work around this restriction. In addition, rcu_is_watching() may be used to test whether or not it is currently legal to run RCU read-side critical sections on this CPU. I learned of the need for diagnostics on the one hand and RCU_NONIDLE() on the other while inspecting idle-loop code. Steven Rostedt supplied _rcuidle event tracing, which is used quite heavily in the idle loop.
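For example, a tracepoint buried deep in the idle loop might be wrapped as follows (the tracepoint name here is invented):

    /* tell RCU to pay attention for the duration of this one statement */
    RCU_NONIDLE(trace_foo_idle_event(cpu));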
It is similarly socially unacceptable to interrupt an nohz_full CPU running in user space. RCU must therefore track nohz_full user-space execution. And in CONFIG_NO_HZ_FULL_SYSIDLE=y kernels, RCU must separately track idle CPUs on the one hand and CPUs that are either idle or executing in user space on the other. In both cases, RCU must be able to sample state at two points in time, and be able to determine whether or not some other CPU spent any time idle and/or executing in user space.
These energy-efficiency requirements have proven quite difficult to understand and to meet; for example, there have been more than five clean-sheet rewrites of RCU's energy-efficiency code, the last of which was finally able to demonstrate real energy savings running on real hardware [PDF]. As noted earlier, I learned of many of these requirements via angry phone calls; flaming me on the Linux-kernel mailing list was apparently not sufficient to fully vent their ire at RCU's energy-efficiency bugs.
Performance, scalability, response time, and reliability
Expanding on the earlier discussion (in part 2), RCU is used heavily by hot code paths in performance-critical portions of the Linux kernel's networking, security, virtualization, and scheduling subsystems. RCU must therefore use efficient implementations, especially in its read-side primitives. To that end, it would be good if preemptible RCU's implementation of rcu_read_lock() could be inlined; however, doing this requires resolving #include issues with the task_struct structure.
The Linux kernel supports hardware configurations with up to 4096 CPUs, which means that RCU must be extremely scalable. Algorithms that involve frequent acquisitions of global locks or frequent atomic operations on global variables simply cannot be tolerated within the RCU implementation. RCU therefore makes heavy use of a combining tree based on the rcu_node structure. RCU is required to tolerate all CPUs continuously invoking any combination of RCU's runtime primitives with minimal per-operation overhead. In fact, in many cases, increasing load must decrease the per-operation overhead; witness the batching optimizations for synchronize_rcu(), call_rcu(), synchronize_rcu_expedited(), and rcu_barrier(). As a general rule, RCU must cheerfully accept whatever the rest of the Linux kernel decides to throw at it.
The Linux kernel is used for realtime workloads, especially in conjunction with the -rt patchset. The realtime-latency response requirements are such that the traditional approach of disabling preemption across RCU read-side critical sections is inappropriate. Kernels built with CONFIG_PREEMPT=y therefore use an RCU implementation that allows RCU read-side critical sections to be preempted. This requirement made its presence known after users made it clear that an earlier realtime patch did not meet their needs, in conjunction with some RCU issues encountered by a very early version of the -rt patchset.
In addition, RCU must make do with a sub-100-microsecond realtime latency budget. In fact, on smaller systems with the -rt patchset, the Linux kernel provides sub-20-microsecond realtime latencies for the whole kernel, including RCU. RCU's scalability and latency must therefore be sufficient for these sorts of configurations. To my surprise, the sub-100-microsecond realtime latency budget applies to even the largest systems [PDF], up to and including systems with 4096 CPUs. This realtime requirement motivated the grace-period kernel thread, which also simplified handling of a number of race conditions.
Finally, RCU's status as a synchronization primitive means that any RCU failure can result in arbitrary memory corruption that can be extremely difficult to debug. This means that RCU must be extremely reliable, which in practice also means that RCU must have an aggressive stress-test suite. This stress-test suite is called rcutorture.
Although the need for rcutorture was no surprise, the current immense popularity of the Linux kernel is posing interesting—and perhaps unprecedented—validation challenges. To see this, keep in mind that there are well over one billion instances of the Linux kernel running today, given Android smartphones, Linux-powered televisions, and servers. This number can be expected to increase sharply with the advent of the celebrated Internet of Things.
Suppose that RCU contains a race condition that manifests on average once per million years of runtime. This bug will be occurring about three times per day across the installed base. RCU could simply hide behind hardware error rates, given that no one should really expect their smartphone to last for a million years. However, anyone taking too much comfort from this thought should consider the fact that in most jurisdictions, a successful multi-year test of a given mechanism, which might include a Linux kernel, suffices for a number of types of safety-critical certifications. In fact, rumor has it that the Linux kernel is already being used in production for safety-critical applications. I don't know about you, but I would feel quite bad if a bug in RCU killed someone. Which might explain my recent focus on validation and verification.
Other RCU flavors
One of the more surprising things about RCU is that there are now no fewer than five flavors, or API families. In addition, the primary flavor that has been the sole focus up to this point has two different implementations, non-preemptible and preemptible. The other four flavors are listed below, with requirements for each described in a separate section.
Bottom-half flavor
The softirq-disable (AKA "bottom-half", hence the "_bh" abbreviations) flavor of RCU, or RCU-bh, was developed by Dipankar Sarma to provide a flavor of RCU that could withstand the network-based denial-of-service attacks researched by Robert Olsson. These attacks placed so much networking load on the system that some of the CPUs never exited softirq execution, which in turn prevented those CPUs from ever executing a context switch, which, in the RCU implementation of that time, prevented grace periods from ever ending. The result was an out-of-memory condition and a system hang.
The solution was the creation of RCU-bh, which does local_bh_disable() across its read-side critical sections, and which uses the transition from one type of softirq processing to another as a quiescent state in addition to context switch, idle, user mode, and offline. This means that RCU-bh grace periods can complete even when some of the CPUs execute in softirq indefinitely, thus allowing algorithms based on RCU-bh to withstand network-based denial-of-service attacks.
Because rcu_read_lock_bh() and rcu_read_unlock_bh() disable and re-enable softirq handlers, any attempt to start a softirq handler during the RCU-bh read-side critical section will be deferred. In this case, rcu_read_unlock_bh() will invoke softirq processing, which can take considerable time. One can, of course, argue that this softirq overhead should be associated with the code following the RCU-bh read-side critical section rather than rcu_read_unlock_bh(), but the fact is that most profiling tools cannot be expected to make this sort of fine distinction. For example, suppose that a three-millisecond-long RCU-bh read-side critical section executes during a time of heavy networking load. There will very likely be an attempt to invoke at least one softirq handler during that three milliseconds, but any such invocation will be delayed until the time of the rcu_read_unlock_bh(). This can of course make it appear at first glance as if rcu_read_unlock_bh() was executing very slowly.
The RCU-bh API includes rcu_read_lock_bh(), rcu_read_unlock_bh(), rcu_dereference_bh(), rcu_dereference_bh_check(), synchronize_rcu_bh(), synchronize_rcu_bh_expedited(), call_rcu_bh(), rcu_barrier_bh(), and rcu_read_lock_bh_held().
Sched flavor
Before preemptible RCU, waiting for an RCU grace period had the side effect of also waiting for all pre-existing interrupt and NMI handlers. However, there are legitimate preemptible-RCU implementations that do not have this property, given that any point in the code outside of an RCU read-side critical section can be a quiescent state. Therefore, RCU-sched was created, which follows “classic” RCU in that an RCU-sched grace period waits for pre-existing interrupt and NMI handlers. In kernels built with CONFIG_PREEMPT=n, the RCU and RCU-sched APIs have identical implementations, while kernels built with CONFIG_PREEMPT=y provide a separate implementation for each.
Note well that in CONFIG_PREEMPT=y kernels, rcu_read_lock_sched() and rcu_read_unlock_sched() disable and re-enable preemption, respectively. This means that if there was a preemption attempt during the RCU-sched read-side critical section, rcu_read_unlock_sched() will enter the scheduler, with all the latency and overhead entailed. Just as with rcu_read_unlock_bh(), this can make it look as if rcu_read_unlock_sched() was executing very slowly. However, the highest-priority task won't be preempted, so that task will enjoy low-overhead rcu_read_unlock_sched() invocations.
The RCU-sched API includes rcu_read_lock_sched(), rcu_read_unlock_sched(), rcu_read_lock_sched_notrace(), rcu_read_unlock_sched_notrace(), rcu_dereference_sched(), rcu_dereference_sched_check(), synchronize_sched(), synchronize_rcu_sched_expedited(), call_rcu_sched(), rcu_barrier_sched(), and rcu_read_lock_sched_held(). However, anything that disables preemption also marks an RCU-sched read-side critical section, including preempt_disable() and preempt_enable(), local_irq_save() and local_irq_restore(), and so on.
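In other words, code along the following lines is a legitimate RCU-sched reader, paired with synchronize_sched() on the update side (a sketch; gp and the helper names are invented):

    struct foo *p, *old;

    /* reader: any preemption-disabled region is an RCU-sched critical section */
    preempt_disable();
    p = rcu_dereference_sched(gp);
    if (p)
            do_something_with(p);   /* must not sleep here */
    preempt_enable();

    /* updater */
    old = gp;
    rcu_assign_pointer(gp, new_foo);
    synchronize_sched();            /* waits for all such readers */
    kfree(old);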
Sleepable RCU
For well over a decade, someone saying “I need to block within an RCU read-side critical section” was a reliable indication that this someone did not understand RCU. After all, if you are always blocking in an RCU read-side critical section, you can probably afford to use a higher-overhead synchronization mechanism. However, that changed with the advent of the Linux kernel's notifiers, whose RCU read-side critical sections almost never sleep, but sometimes need to. This resulted in the introduction of sleepable RCU, or SRCU.
SRCU allows different domains to be defined, with each such domain defined by an instance of an srcu_struct structure. A pointer to this structure must be passed into each SRCU function, for example, synchronize_srcu(&ss), where ss is the srcu_struct structure. The key benefit of these domains is that a slow SRCU reader in one domain does not delay an SRCU grace period in some other domain. That said, one consequence of these domains is that read-side code must pass a “cookie” from srcu_read_lock() to srcu_read_unlock(), for example, as follows:
    int idx;

    idx = srcu_read_lock(&ss);
    do_something();
    srcu_read_unlock(&ss, idx);
As noted above, it is legal to block within SRCU read-side critical sections; however, with great power comes great responsibility. If you block forever in one of a given domain's SRCU read-side critical sections, then that domain's grace periods will also be blocked forever. Of course, one good way to block forever is to deadlock, which can happen if any operation in a given domain's SRCU read-side critical section can block waiting, either directly or indirectly, for that domain's grace period to elapse. For example, this results in a self-deadlock:
    1 int idx;
    2
    3 idx = srcu_read_lock(&ss);
    4 do_something();
    5 synchronize_srcu(&ss);
    6 srcu_read_unlock(&ss, idx);
However, if line 5 acquired a mutex that was held across a synchronize_srcu() for domain ss, deadlock would still be possible. Furthermore, if line 5 acquired a mutex that was held across a synchronize_srcu() for some other domain ss1, and if an ss1-domain SRCU read-side critical section acquired another mutex that was held across an ss-domain synchronize_srcu(), deadlock would again be possible. Such a deadlock cycle could extend across an arbitrarily large number of different SRCU domains. Again, with great power comes great responsibility.
Unlike the other RCU flavors, SRCU read-side critical sections can run on idle and even offline CPUs. This ability requires that srcu_read_lock() and srcu_read_unlock() contain memory barriers, which means that SRCU readers will run a bit slower than would RCU readers. It also motivates the smp_mb__after_srcu_read_unlock() API, which, in combination with srcu_read_unlock(), guarantees a full memory barrier.
The SRCU API includes srcu_read_lock(), srcu_read_unlock(), srcu_dereference(), srcu_dereference_check(), synchronize_srcu(), synchronize_srcu_expedited(), call_srcu(), srcu_barrier(), and srcu_read_lock_held(). It also includes DEFINE_SRCU(), DEFINE_STATIC_SRCU(), and init_srcu_struct() APIs for defining and initializing srcu_struct structures.
Tasks RCU
Some forms of tracing use “trampolines” to handle the binary rewriting required to install different types of probes. It would be good to be able to free old trampolines, which sounds like a job for some form of RCU. However, because it is necessary to be able to install a trace anywhere in the code, it is not possible to have read-side markers such as rcu_read_lock() and rcu_read_unlock(). In addition, it does not work to have these markers in the trampoline itself, because there would need to be instructions following rcu_read_unlock(). Although synchronize_rcu() would guarantee that execution reached the rcu_read_unlock(), it would not be able to guarantee that execution had completely left the trampoline.
The solution, in the form of Tasks RCU, is to have implicit read-side critical sections that are delimited by voluntary context switches, that is, calls to schedule(), cond_resched_rcu_qs(), and synchronize_rcu_tasks(). In addition, transitions to and from user-space execution also delimit tasks-RCU read-side critical sections.
The tasks-RCU API is quite compact, consisting only of call_rcu_tasks(), synchronize_rcu_tasks(), and rcu_barrier_tasks().
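As a hedged sketch of the trampoline-freeing case described above (the structure and helper names are invented):

    /* hypothetical trampoline bookkeeping */
    struct tramp {
            struct rcu_head rh;
            /* ...the trampoline instructions... */
    };

    static void tramp_free_cb(struct rcu_head *rh)
    {
            kfree(container_of(rh, struct tramp, rh));
    }

    static void tramp_free(struct tramp *t)
    {
            /* t has already been unhooked from every probe site */
            call_rcu_tasks(&t->rh, tramp_free_cb);
    }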
Possible future changes
One of the tricks that RCU uses to attain update-side scalability is to increase grace-period latency with increasing numbers of CPUs. If this becomes a serious problem, it will be necessary to rework the grace-period state machine so as to avoid the need for the additional latency.
Expedited grace periods scan the CPUs, so their latency and overhead increases with increasing numbers of CPUs. If this becomes a serious problem on large systems, it will be necessary to do some redesign to avoid this scalability problem.
RCU disables CPU hotplug in a few places, perhaps most notably in the expedited grace-period and rcu_barrier() operations. If there is a strong reason to use expedited grace periods in CPU-hotplug notifiers, it will be necessary to avoid disabling CPU hotplug. This would introduce some complexity, so there had better be a very good reason.
The tradeoff between grace-period latency on the one hand and interruptions of other CPUs on the other hand may need to be re-examined. The desire is of course for zero grace-period latency as well as zero inter-processor interrupts undertaken during an expedited grace period operation. While this ideal is unlikely to be achievable, it is quite possible that further improvements can be made.
The multiprocessor implementations of RCU use a combining tree that groups CPUs so as to reduce lock contention and increase cache locality. However, this combining tree does not spread its memory across NUMA nodes nor does it align the CPU groups with hardware features such as sockets or cores. Such spreading and alignment is currently believed to be unnecessary because the hot-path read-side primitives do not access the combining tree, nor does call_rcu() in the common case. If you believe that your architecture needs such spreading and alignment, then your architecture should also benefit from the rcutree.rcu_fanout_leaf boot parameter, which can be set to the number of CPUs in a socket, NUMA node, or whatever. If the number of CPUs is too large, use a fraction of the number of CPUs. If the number of CPUs is a large prime number, well, that certainly is an “interesting” architectural choice. More flexible arrangements might be considered, but only if rcutree.rcu_fanout_leaf has proven inadequate, and only if the inadequacy has been demonstrated by a carefully run and realistic system-level workload.
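For example, a system built from 16-CPU sockets might be booted with the following kernel command-line fragment (the value is purely illustrative):

    rcutree.rcu_fanout_leaf=16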
Please note that arrangements that require RCU to remap CPU numbers will require extremely good demonstration of need and full exploration of alternatives.
There is an embarrassingly large number of flavors of RCU, and this number has been increasing over time. Perhaps it will be possible to combine some at some future date.
RCU's various kernel threads are reasonably recent additions. It is quite likely that adjustments will be required to more gracefully handle extreme loads. It might also be necessary to be able to relate CPU utilization by RCU's kernel threads and softirq handlers to the code that instigated this CPU utilization. For example, RCU callback overhead might be charged back to the originating call_rcu() instance, though probably not in production kernels.
Summary
This document has presented more than two decades' worth of RCU requirements. Given that the requirements keep changing, this will not be the last word on this subject, but at least it serves to get an important subset of the requirements set forth.
Acknowledgments
I am grateful to Steven Rostedt, Lai Jiangshan, Ingo Molnar, Oleg Nesterov, Borislav Petkov, Peter Zijlstra, and Andy Lutomirski for their help in rendering this article human readable, and to Michelle Rankin for her support of this effort.
Answer to the quick quiz
Quick Quiz 14: So what happens with synchronize_rcu() during scheduler initialization for CONFIG_PREEMPT=n kernels?
Answer: In CONFIG_PREEMPT=n kernels, synchronize_rcu() maps directly to synchronize_sched(). Therefore, synchronize_rcu() works normally throughout boot in CONFIG_PREEMPT=n kernels. However, the code must work in CONFIG_PREEMPT=y kernels, so it is still necessary to avoid invoking synchronize_rcu() during scheduler initialization.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Device drivers
Device driver infrastructure
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet