
Kernel development

Brief items

Kernel release status

The current 2.6 prepatch is 2.6.24-rc7, released by Linus on January 6. It contains a fair number of fixes and an implementation of /proc/slabinfo for the SLUB allocator (which was discussed in last week's Kernel Page). About the long release cycle, he says "I'll be charitable and claim it's because it's all stabilizing, and not because we've all been in a drunken stupor over the holidays." The short-form changelog can be found in the release announcement; see the long-format changelog for all the details.

The mainline git repository contains, as of this writing, a few dozen post-rc7 patches.

The current stable 2.6 kernel is 2.6.23.13, released on January 9. This update is only of interest to people using the w83627ehf hardware monitoring driver, but they should be very interested: "I have had a private report that this bug might have caused permanent hardware damage. There is no definitive proof at this point, but unfortunately due to the lack of documentation I really can't rule it out."

For older kernels: 2.6.16.58-rc1 was released on January 6 with about a dozen fixes, a few of which are security-related.

Comments (none posted)

Kernel development news

Quotes of the week

What guarantees that it doesn't happen before we get to callback? AFAICS, nothing whatsoever...

And if it does happen, we'll get rdev happily freed (by rdev_free(), as ->release() of &rdev->kobj) by the time we get to delayed_delete(). Which explains what's going on just fine.

-- Al Viro shows how to debug kernel problems

I consider the fact that I can spend full-time working on Linux to be a blessing. But if you don't feel that way, my condolences, and please do what you need to do so you can stay in your happy place.
-- Ted Ts'o shows how to respond with class to trolls

Comments (4 posted)

2.6.24 - some statistics

By Jonathan Corbet
January 9, 2008
As of this writing, the 2.6.24 kernel is getting close to a release - though there is likely to be one more -rc version to look at first. The rate of change has slowed significantly, though, and the final regressions are being chased down. So it seems like a suitable time to look at the patches which went into this kernel and where they came from.

This is, in many ways, a record-breaking development cycle. Over 10,000 individual changesets have been merged this time around, with a net growth of almost 300,000 lines of code. 950 developers contributed this code; of those, 358 contributed just one patch. By comparison, the previous cycle (2.6.23) merged some 6200 patches from about 860 developers. Given that, it's not surprising that the 2.6.24 cycle has been a little longer than some of its predecessors.

Without further ado, here is the list of top contributors to this kernel:

Most active 2.6.24 developers

By changesets:

  Thomas Gleixner              362   3.6%
  Bartlomiej Zolnierkiewicz    205   2.0%
  Adrian Bunk                  190   1.9%
  Ralf Baechle                 176   1.8%
  Pavel Emelyanov              146   1.5%
  Ingo Molnar                  141   1.4%
  Tejun Heo                    138   1.4%
  Paul Mundt                   131   1.3%
  Johannes Berg                119   1.2%
  Al Viro                      116   1.2%
  Takashi Iwai                 115   1.1%
  Jeff Garzik                  107   1.1%
  David S. Miller              102   1.0%
  Matthew Wilcox                97   1.0%
  Jens Axboe                    89   0.9%
  Krzysztof Helt                89   0.9%
  Stephen Hemminger             86   0.9%
  Rusty Russell                 86   0.9%
  Alan Cox                      85   0.8%
  Herbert Xu                    84   0.8%

By changed lines:

  Thomas Gleixner            46358   5.9%
  Zhu Yi                     35133   4.5%
  Auke Kok                   25861   3.3%
  Michael Buesch             24480   3.1%
  Ivo van Doorn              22178   2.8%
  Matthew Wilcox             20416   2.6%
  Adrian Bunk                19050   2.4%
  Larry Finger               15003   1.9%
  David S. Miller            14315   1.8%
  Andy Gospodarek            13814   1.8%
  Nathanael Nerode           12821   1.6%
  Jeff Dike                  11103   1.4%
  Johannes Berg              10118   1.3%
  Ralf Baechle                9555   1.2%
  Scott Wood                  9328   1.2%
  Krzysztof Helt              8162   1.0%
  Kumar Gala                  8002   1.0%
  Jeff Garzik                 7689   1.0%
  David Gibson                7284   0.9%
  Michael Hennerich           7181   0.9%

By either method of counting, Thomas Gleixner comes out at the top of the list by virtue of his work on the i386/x86_64 architecture merger. Bringing those architectures together and making the result work well was a huge job; this effort will continue into future development cycles. (For the curious: files that were simply renamed were not counted as "changed lines" in the generation of these numbers.) Note that many of these patches also carry a signoff by Ingo Molnar, but git only stores the name of a single "author" for each changeset.

Other contributors of large numbers of changesets in 2.6.24 include Bartlomiej Zolnierkiewicz (lots of IDE driver patches), Adrian Bunk (cleanups all over the kernel tree), Ralf Baechle (MIPS architecture work), Pavel Emelyanov (mostly network and PID namespaces), Tejun Heo (serial ATA and a number of sysfs cleanups), Johannes Berg (wireless networking), and Al Viro (mostly annotation patches and related fixes). If one looks at the number of changed lines, the list of developers changes almost completely: Zhu Yi (iwlwifi driver), Auke Kok (e1000 driver), Michael Buesch (wireless networking and the b43 driver), Ivo van Doorn (rt2x00 wireless driver), Matthew Wilcox (SCSI, especially advansys and sym53c8xx drivers), Adrian Bunk (cleanups and code deletions), Larry Finger (mainly addition of the b43 legacy driver), and David Miller (networking and SPARC64).

If one assigns developers' contributions to employers and totals the results, the following numbers emerge (note that these tables have been updated since initial publication to fix an error):

Most active 2.6.24 employers

By changesets:

  (None)                1417  14.1%
  (Unknown)             1108  11.1%
  Red Hat               1045  10.4%
  IBM                    819   8.2%
  Novell                 680   6.8%
  Intel                  446   4.5%
  linutronix             369   3.7%
  Oracle                 240   2.4%
  SWsoft                 212   2.1%
  CERN                   205   2.0%
  Movial                 190   1.9%
  Linux Foundation       190   1.9%
  MIPS Technologies      176   1.8%
  Renesas Technology     140   1.4%
  (Academia)             132   1.3%
  Freescale              126   1.3%
  MontaVista             122   1.2%
  Analog Devices         115   1.1%
  (Consultant)           112   1.1%
  NetApp                 101   1.0%

By lines changed:

  (None)              140730  18.0%
  (Unknown)           121511  15.5%
  Intel               114990  14.7%
  Red Hat              58858   7.5%
  IBM                  51777   6.6%
  linutronix           47968   6.1%
  Novell               29856   3.8%
  Movial               19093   2.4%
  Freescale            15262   1.9%
  Analog Devices       14971   1.9%
  MIPS Technologies    11726   1.5%
  SWsoft                8331   1.1%
  Linux Foundation      7917   1.0%
  Oracle                7777   1.0%
  Atmel                 7125   0.9%
  CERN                  6618   0.8%
  Renesas Technology    6414   0.8%
  Google                6373   0.8%
  MontaVista            6026   0.8%
  NetApp                5620   0.7%

In many ways, these lists look similar to those posted for past kernels. But there are a few things which jump out this time around:

  • Intel has made it to the top of the "by lines changed" list - and not just by a little bit. This happened by virtue of the work done by four of the top-20 developers, but also by dozens of others who contributed to the 2.6.24 kernel. Intel has a lot of people working on the kernel, many of whom spend little time in the limelight.

  • Movial found its way onto the list for the first time as a result of having hired a very active developer.

  • The amount of work done by people known to be hacking on their own time has grown a bit. This change is mostly a result of more complete information on our side - many developers have moved out of the "unknown" category. Quite a bit of the no-employer work this time around was done on the wireless networking tree; since much of the interesting work in this area currently involves reverse engineering, perhaps it is not surprising that relatively few companies are willing to sponsor it.

All told, some 130 distinct employers were identified for the contributors to 2.6.24. That is a lot of companies to be working on one body of code.

Looking at the Signed-off-by headers of patches is always interesting; if one removes the signoffs added by the authors themselves, what is left is a list of the gatekeepers - those who channel the code into the mainline. The people who signed off on the most patches which they did not write are:

Sign-offs in the 2.6.24 kernel

By developer:

  Andrew Morton            1679  17.6%
  David S. Miller           894   9.4%
  Jeff Garzik               631   6.6%
  Ingo Molnar               626   6.6%
  John W. Linville          413   4.3%
  Mauro Carvalho Chehab     367   3.9%
  Greg Kroah-Hartman        337   3.5%
  Paul Mackerras            305   3.2%
  Jaroslav Kysela           284   3.0%
  James Bottomley           260   2.7%
  Linus Torvalds            250   2.6%
  Thomas Gleixner           216   2.3%
  Bryan Wu                  166   1.7%
  Takashi Iwai              115   1.2%
  Jens Axboe                113   1.2%
  Len Brown                 113   1.2%
  Avi Kivity                107   1.1%
  Roland Dreier             107   1.1%
  Ralf Baechle               96   1.0%
  Adrian Bunk                88   0.9%

By employer:

  Red Hat              2935  30.2%
  Linux Foundation     1929  19.9%
  (None)                823   8.5%
  (Unknown)             736   7.6%
  Novell                636   6.6%
  IBM                   584   6.0%
  Intel                 318   3.3%
  linutronix            216   2.2%
  Analog Devices        175   1.8%
  SGI                   141   1.5%
  Oracle                133   1.4%
  Cisco                 107   1.1%
  Qumranet              107   1.1%
  NetApp                106   1.1%
  MIPS Technologies      96   1.0%
  Movial                 88   0.9%
  (Consultant)           85   0.9%
  Renesas Technology     84   0.9%
  Cendio                 43   0.4%
  CERN                   40   0.4%

There are not a lot of changes here from previous development cycles. While quite a few developers add signoffs to code and pass it on, they work for a relatively small number of companies - 7 employers account for 70% of the non-author signoffs.
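To make the counting concrete: a patch accumulates one Signed-off-by line per hop as it travels up the maintainer chain, so the tag block of a typical changeset might look like the following (all names here are invented for illustration):

  Signed-off-by: Jane Hacker <jane@example.com>             [the author]
  Signed-off-by: Subsystem Maintainer <subsys@example.com>  [gatekeeper]
  Signed-off-by: Tree Maintainer <tree@example.com>         [gatekeeper]

The tables above count only the second kind of signoff - those added by someone other than the patch's author.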

Finally, given that we are starting a new year, it is worth taking a quick look back at the entirety of 2007. In 2007, Linus merged just over 30,000 changesets (more than 80 per day, every day) from 1900 developers working for (at least) 200 companies. All told, they changed over 2 million lines of code, growing the kernel by more than 750,000 lines. The kernel developers are, in other words, touching over 5,000 lines of code every day - that is a high rate of change.

The top contributors over the course of the year (by changesets) were:

Top contributors in 2007

By developer (changesets):

  Ralf Baechle                 507   1.7%
  Thomas Gleixner              485   1.6%
  David S. Miller              468   1.6%
  Adrian Bunk                  439   1.5%
  Tejun Heo                    394   1.3%
  Ingo Molnar                  351   1.2%
  Paul Mundt                   351   1.2%
  Al Viro                      337   1.1%
  Bartlomiej Zolnierkiewicz    330   1.1%
  Andrew Morton                319   1.1%
  Stephen Hemminger            302   1.0%
  Patrick McHardy              277   0.9%
  Alan Cox                     270   0.9%
  Takashi Iwai                 269   0.9%
  Trond Myklebust              256   0.9%
  David Brownell               254   0.8%
  Avi Kivity                   229   0.8%
  Jeff Dike                    227   0.8%
  Jeff Garzik                  216   0.7%
  Jean Delvare                 215   0.7%

By employer:

  (None)               4881  16.2%
  Red Hat              3441  11.4%
  (Unknown)            2933   9.7%
  IBM                  2379   7.9%
  Novell               2054   6.8%
  Intel                1060   3.5%
  Linux Foundation      784   2.6%
  Oracle                677   2.2%
  (Consultant)          631   2.1%
  MIPS Technologies     507   1.7%
  linutronix            507   1.7%
  Renesas Technology    394   1.3%
  (Academia)            392   1.3%
  SWsoft                384   1.3%
  SGI                   368   1.2%
  MontaVista            342   1.1%
  CERN                  330   1.1%
  Freescale             291   1.0%
  NetApp                279   0.9%
  Astaro                277   0.9%

It should be noted that the employer numbers are more approximate than usual. Some developers changed employers in 2007, but LWN, as a matter of policy, does not maintain a database of developers and their employers over time. Still, the picture is relatively constant - the same companies continue to contribute approximately the same percentage of the patches going into the kernel over relatively long periods of time.

Overall, the picture that results from all these numbers is one of a widespread and healthy development community. There appears to be no shortage of jobs for kernel developers, but also room for those who work outside of the office. The kernel truly is a common resource, with literally thousands of people working to improve it. And it shows no signs of slowing down anytime soon.

Your editor would like to profusely thank Greg Kroah-Hartman for his help in improving these statistics.

Comments (6 posted)

The Linux trace toolkit's next generation

By Jake Edge
January 9, 2008

Instrumenting a running kernel for debugging or profiling is on the wish list of many administrators and developers. Advocates of OpenSolaris like to point to DTrace as a feature that Linux lacks, though SystemTap has started to close that gap. The Linux Trace Toolkit next generation (LTTng) takes a different approach and was recently submitted for inclusion in the kernel (in two patches: arch independent and arch dependent).

LTTng relies upon kernel markers to provide static probe points for its kernel tracing activities. It also provides the ability to trace userspace programs and combine that data with kernel tracing data to give a detailed view of the internals of the system. Unlike other tools, LTTng takes a post-processing approach, storing the data away as efficiently as possible for later analysis. This is in contrast to SystemTap and DTrace which have their own mini-languages that specify what to do as each trace point is reached.

One of the major design goals of LTTng is to have as little impact on the system as possible, not only when it is actually tracing events, but also when it is disabled. Kernel hackers are quite resistant to debugging facilities that add any significant performance penalty when not in use. In addition, any significant delays while tracing is enabled may change the system's timing enough that the bug or condition being studied no longer occurs. For these reasons, LTTng avoids the breakpoint-interrupt overhead incurred by various dynamic tracing solutions, relying on static markers instead.
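As a rough illustration (not code from the LTTng submission itself), placing a static probe point with the kernel markers API looks something like the following; the marker name, format string, and surrounding function are invented for this sketch:

#include <linux/marker.h>

static void handle_request(int cpu, long nr_bytes)
{
        /* Expands to (almost) nothing unless a probe has been
         * connected to the "subsys_request" marker at run time. */
        trace_mark(subsys_request, "cpu %d bytes %ld", cpu, nr_bytes);

        /* ... the real work of the function ... */
}

A tracer such as LTTng can then attach a probe function to the "subsys_request" marker at run time and record the formatted arguments with minimal overhead.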

Another major design goal is to provide monotonically increasing timestamp values for events. The original LTT uses timestamps derived from the kernel Network Time Protocol (NTP) time, which can fluctuate somewhat as adjustments are made – sometimes going backward. LTTng uses a timestamp derived from the hardware clocks that will work on various processor architectures and clock speeds. In addition, the timestamps can be correlated between different processors in a multi-processor system.

As LTTng gathers its data, it uses relayfs to get the data to a userspace daemon (lttd) that writes the data to disk. The daemon is started from the lttctl command-line tool, which controls the tracing settings in the kernel via a netlink socket. A user wishing to investigate tracing could use lttctl to start and stop a trace; once the trace is complete, the data could be viewed and analyzed.

The LTT viewer (LTTV) is the program that is used to analyze the data gathered. It provides both GUI and text-based viewers to interpret the binary data generated by LTTng and present it to the user. Multi-gigabyte files of tracing data are not uncommon when using LTTng, so a tool like LTTV is indispensable for visualization and filtering to allow the user to focus on the events of interest. LTTV has a plugin mechanism that allows users to develop their own display and analysis tools, while using the LTTV framework and filtering capabilities.

An advantage of using static probe points – though some may see it as a disadvantage – is that they can be maintained with the kernel code they target. If the kernel markers patch is merged, subsystems can add probe points at places they find interesting or useful, and those markers will be carried along in the kernel source and updated as the kernel changes. Other solutions rely on matching an external list of probes against the version of the running kernel, which can result in mismatches and incorrect traces. SystemTap will also be able to use any markers that are added to the kernel, so users who want the capabilities it provides will benefit as well.

LTTng is being developed at the École Polytechnique de Montréal with support from quite a few Linux companies. It has the looks of a very well thought out framework that builds upon the tracing work that has been done before. It certainly won't make it into 2.6.24, but it would seem to have a good chance of making it into a future mainline kernel.

Comments (2 posted)

RCU part 3: the RCU API

January 7, 2008

This article was contributed by Paul McKenney

[Editor's note: this is the third and final installment in Paul McKenney's "What is RCU?" series. The first and second parts remain available for those who might have missed them. Many thanks to Paul for letting LWN run these articles.]

Introduction

Read-copy update (RCU) is a synchronization mechanism that was added to the Linux kernel in October of 2002. RCU is most frequently described as a replacement for reader-writer locking, but has also been used in a number of other ways. RCU is notable in that RCU readers do not directly synchronize with RCU updaters, which makes RCU read paths extremely fast, and also permits RCU readers to accomplish useful work even when running concurrently with RCU updaters.

This leads to the question "what exactly is RCU?", a question that this document addresses from the viewpoint of the Linux kernel's RCU API.

  1. RCU has a Family of Wait-to-Finish APIs

  2. RCU has Publish-Subscribe and Version-Maintenance APIs

  3. So, What is RCU Really?

These sections are followed by a references section and the answers to the Quick Quizzes.

RCU has a Family of Wait-to-Finish APIs

The most straightforward answer to "what is RCU" is that RCU is an API used in the Linux kernel, as summarized by the pair of tables in this article (the first, below, shows the wait-for-RCU-readers portion of the API, while the second, in the following section, shows the publish/subscribe portion). Or, more precisely, RCU is a family of APIs, with each entry in the first table describing one member of the family.

If you are new to RCU, you might consider focusing on just one member of the family in the following table. For example, if you are primarily interested in understanding how RCU is used in the Linux kernel, "RCU Classic" would be the place to start, as it is used most frequently. On the other hand, if you want to understand RCU for its own sake, "SRCU" has the simplest API. You can always come back for the other members later.

If you are already familiar with RCU, the tables can serve as a useful reference.

The RCU wait-to-finish API family, one entry per member:

RCU Classic
  Purpose: Original
  Availability: 2.5.43
  Read-side primitives: rcu_read_lock(), rcu_read_unlock()
  Update-side primitives (synchronous): synchronize_rcu(), synchronize_net()
  Update-side primitives (asynchronous/callback): call_rcu()
  Update-side primitives (wait for callbacks): rcu_barrier()
  Read-side constraints: No blocking
  Read-side overhead: Preempt disable/enable (free on non-PREEMPT)
  Asynchronous update-side overhead (for example, call_rcu()): sub-microsecond
  Grace-period latency: 10s of milliseconds
  Non-PREEMPT_RT implementation: RCU Classic
  PREEMPT_RT implementation: N/A

RCU BH
  Purpose: Prevent DDoS attacks
  Availability: 2.6.9
  Read-side primitives: rcu_read_lock_bh(), rcu_read_unlock_bh()
  Update-side primitives (synchronous): (none; see text)
  Update-side primitives (asynchronous/callback): call_rcu_bh()
  Update-side primitives (wait for callbacks): (none)
  Read-side constraints: No irq enabling
  Read-side overhead: BH disable/enable
  Asynchronous update-side overhead: sub-microsecond
  Grace-period latency: 10s of milliseconds
  Non-PREEMPT_RT implementation: RCU BH
  PREEMPT_RT implementation: Realtime RCU

RCU Sched
  Purpose: Wait for hardirqs and NMIs
  Availability: 2.6.12
  Read-side primitives: preempt_disable(), preempt_enable() (and friends)
  Update-side primitives (synchronous): synchronize_sched()
  Update-side primitives (asynchronous/callback): (none; see text)
  Update-side primitives (wait for callbacks): (none)
  Read-side constraints: No blocking
  Read-side overhead: Preempt disable/enable (free on non-PREEMPT)
  Asynchronous update-side overhead: N/A
  Grace-period latency: 10s of milliseconds
  Non-PREEMPT_RT implementation: RCU Classic
  PREEMPT_RT implementation: Forced Schedule on all CPUs

Realtime RCU
  Purpose: Realtime response
  Availability: Aug 2005 (-rt patchset)
  Read-side primitives: rcu_read_lock(), rcu_read_unlock()
  Update-side primitives (synchronous): synchronize_rcu(), synchronize_net()
  Update-side primitives (asynchronous/callback): call_rcu()
  Update-side primitives (wait for callbacks): rcu_barrier()
  Read-side constraints: No blocking except preemption and lock acquisition
  Read-side overhead: Simple instructions, irq disable/enable
  Asynchronous update-side overhead: sub-microsecond
  Grace-period latency: 10s of milliseconds
  Non-PREEMPT_RT implementation: N/A
  PREEMPT_RT implementation: Realtime RCU

SRCU
  Purpose: Sleeping readers
  Availability: 2.6.19
  Read-side primitives: srcu_read_lock(), srcu_read_unlock()
  Update-side primitives (synchronous): synchronize_srcu()
  Update-side primitives (asynchronous/callback): N/A
  Update-side primitives (wait for callbacks): N/A
  Read-side constraints: No synchronize_srcu()
  Read-side overhead: Simple instructions, preempt disable/enable
  Asynchronous update-side overhead: N/A
  Grace-period latency: 10s of milliseconds
  Non-PREEMPT_RT implementation: SRCU
  PREEMPT_RT implementation: SRCU

QRCU
  Purpose: Sleeping readers and fast grace periods
  Availability: Not yet in mainline
  Read-side primitives: qrcu_read_lock(), qrcu_read_unlock()
  Update-side primitives (synchronous): synchronize_qrcu()
  Update-side primitives (asynchronous/callback): N/A
  Update-side primitives (wait for callbacks): N/A
  Read-side constraints: No synchronize_qrcu()
  Read-side overhead: Atomic increment and decrement of shared variable
  Asynchronous update-side overhead: N/A
  Grace-period latency: 10s of nanoseconds in absence of readers
  Non-PREEMPT_RT implementation: N/A
  PREEMPT_RT implementation: N/A

Quick Quiz 1: Why were some of the cells in the original rendering of the above table colored green?

The "RCU Classic" column corresponds to the original RCU implementation, in which RCU read-side critical sections are delimited by rcu_read_lock() and rcu_read_unlock(), which may be nested. The corresponding synchronous update-side primitives, synchronize_rcu(), along with its synonym synchronize_net(), wait for any currently executing RCU read-side critical sections to complete. The length of this wait is known as a "grace period". The asynchronous update-side primitive, call_rcu(), invokes a specified function with a specified argument after a subsequent grace period. For example, call_rcu(p,f); will result in the "RCU callback" f(p) being invoked after a subsequent grace period. There are situations, such as when unloading a module that uses call_rcu(), when it is necessary to wait for all outstanding RCU callbacks to complete. The rcu_barrier() primitive does this job.

In the "RCU BH" column, rcu_read_lock_bh() and rcu_read_unlock_bh() delimit RCU read-side critical sections, and call_rcu_bh() invokes the specified function and argument after a subsequent grace period. Note that RCU BH does not have a synchronous synchronize_rcu_bh() interface, though one could easily be added if required.

Quick Quiz 2: What happens if you mix and match? For example, suppose you use rcu_read_lock() and rcu_read_unlock() to delimit RCU read-side critical sections, but then use call_rcu_bh() to post an RCU callback?

In the "RCU Sched" column, anything that disables preemption acts as an RCU read-side critical section, and synchronize_sched() waits for the corresponding RCU grace period. This RCU API family was added in the 2.6.12 kernel, which split the old synchronize_kernel() API into the current synchronize_rcu() (for RCU Classic) and synchronize_sched() (for RCU Sched). Note that RCU Sched does not have an asynchronous call_rcu_sched() interface, though one could be added if required.

Quick Quiz 3: What happens if you mix and match RCU Classic and RCU Sched?

The "Realtime RCU" column has the same API as does RCU Classic, the only difference being that RCU read-side critical sections may be preempted and may block while acquiring spinlocks. The design of Realtime RCU is described in the LWN article The design of preemptible read-copy-update.

Quick Quiz 4: What happens if you mix and match Realtime RCU and RCU Classic?

The "SRCU" column displays a specialized RCU API that permits general sleeping in RCU read-side critical sections, as was described in the LWN article Sleepable RCU. Of course, use of synchronize_srcu() in an SRCU read-side critical section can result in self-deadlock, so should be avoided. SRCU differs from earlier RCU implementations in that the caller allocates an srcu_struct for each distinct SRCU usage. This approach prevents SRCU read-side critical sections from blocking unrelated synchronize_srcu() invocations. In addition, in this variant of RCU, srcu_read_lock() returns a value that must be passed into the corresponding srcu_read_unlock().

The "QRCU" column presents an RCU implementation with the same API structure as SRCU, but optimized for extremely low-latency grace periods in absence of readers, as described in the LWN article Using Promela and Spin to verify parallel algorithms. As with SRCU, use of synchronize_qrcu() can result in self-deadlock, so should be avoided. Although QRCU has not yet been accepted into the Linux kernel, it is worth mentioning given that it is the only RCU implementation that can boast deep sub-microsecond grace-period latencies.

Quick Quiz 5: Why do both SRCU and QRCU lack asynchronous call_srcu() or call_qrcu() interfaces?

Quick Quiz 6: Under what conditions can synchronize_srcu() be safely used within an SRCU read-side critical section?

The Linux kernel currently has a surprising number of RCU APIs and implementations. There is some hope of reducing this number, evidenced by the fact that a given build of the Linux kernel currently has at most three implementations behind four APIs (given that RCU Classic and Realtime RCU share the same API). However, careful inspection and analysis will be required, just as would be required for one of the many locking APIs.

RCU has Publish-Subscribe and Version-Maintenance APIs

Fortunately, the RCU publish-subscribe and version-maintenance primitives shown in the following table apply to all of the variants of RCU discussed above. This commonality can in some cases allow more code to be shared, which certainly reduces the API proliferation that would otherwise occur.

RCU publish-subscribe and version-maintenance primitives
(primitive, kernel version in which it became available, overhead):

List traversal:
  list_for_each_entry_rcu()    2.5.59   Simple instructions (memory barrier on Alpha)

List update:
  list_add_rcu()               2.5.44   Memory barrier
  list_add_tail_rcu()          2.5.44   Memory barrier
  list_del_rcu()               2.5.44   Simple instructions
  list_replace_rcu()           2.6.9    Memory barrier
  list_splice_init_rcu()       2.6.21   Grace-period latency

Hlist traversal:
  hlist_for_each_entry_rcu()   2.6.8    Simple instructions (memory barrier on Alpha)

Hlist update:
  hlist_add_after_rcu()        2.6.14   Memory barrier
  hlist_add_before_rcu()       2.6.14   Memory barrier
  hlist_add_head_rcu()         2.5.64   Memory barrier
  hlist_del_rcu()              2.5.64   Simple instructions
  hlist_replace_rcu()          2.6.15   Memory barrier

Pointer traversal:
  rcu_dereference()            2.6.9    Simple instructions (memory barrier on Alpha)

Pointer update:
  rcu_assign_pointer()         2.6.10   Memory barrier

The first pair of categories operate on Linux struct list_head lists, which are circular, doubly-linked lists. The list_for_each_entry_rcu() primitive traverses an RCU-protected list in a type-safe manner, while also enforcing memory ordering for situations where a new list element is inserted into the list concurrently with traversal. On non-Alpha platforms, this primitive incurs little or no performance penalty compared to list_for_each_entry(). The list_add_rcu(), list_add_tail_rcu(), and list_replace_rcu() primitives are analogous to their non-RCU counterparts, but incur the overhead of an additional memory barrier on weakly-ordered machines. The list_del_rcu() primitive is also analogous to its non-RCU counterpart, but oddly enough is very slightly faster due to the fact that it poisons only the prev pointer rather than both the prev and next pointers as list_del() must do. Finally, the list_splice_init_rcu() primitive is similar to its non-RCU counterpart, but incurs a full grace-period latency. The purpose of this grace period is to allow RCU readers to finish their traversal of the source list before completely disconnecting it from the list header -- failure to do this could prevent such readers from ever terminating their traversal.
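As a sketch of how the list primitives fit together (structure and function names invented, update-side locking omitted):

#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct entry {
        int key;
        struct list_head list;
        struct rcu_head rcu;
};

static LIST_HEAD(entries);

/* Reader: safe against concurrent insertions and deletions. */
int key_present(int key)
{
        struct entry *e;
        int found = 0;

        rcu_read_lock();
        list_for_each_entry_rcu(e, &entries, list) {
                if (e->key == key) {
                        found = 1;
                        break;
                }
        }
        rcu_read_unlock();
        return found;
}

static void free_entry(struct rcu_head *head)
{
        kfree(container_of(head, struct entry, rcu));
}

/* Updater: unlink the element immediately, but free it only
 * after a grace period has allowed all readers to drop it. */
void remove_entry(struct entry *e)
{
        list_del_rcu(&e->list);
        call_rcu(&e->rcu, free_entry);
}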

Quick Quiz 7: Why doesn't list_del_rcu() poison both the next and prev pointers?

The second pair of categories operate on Linux's struct hlist_head, which is a linear linked list. One advantage of struct hlist_head over struct list_head is that the former requires only a single-pointer list header, which can save significant memory in large hash tables. The struct hlist_head primitives in the table relate to their non-RCU counterparts in much the same way as do the struct list_head primitives.

The final pair of categories operate directly on pointers, and are useful for creating RCU-protected non-list data structures, such as RCU-protected arrays and trees. The rcu_assign_pointer() primitive ensures that any prior initialization remains ordered before the assignment to the pointer on weakly ordered machines. Similarly, the rcu_dereference() primitive ensures that subsequent code dereferencing the pointer will see the effects of initialization code prior to the corresponding rcu_assign_pointer() on Alpha CPUs. On non-Alpha CPUs, rcu_dereference() documents which pointer dereferences are protected by RCU.
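A short sketch of the publish/subscribe pattern applied to a non-list structure (names invented; error handling and disposal of any previous version omitted):

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct config {
        int a;
        int b;
};

struct config *live_config;  /* RCU-protected pointer */

/* Publisher: the initializing stores cannot be reordered past
 * the pointer assignment, even on weakly ordered CPUs. */
void publish_config(int a, int b)
{
        struct config *c = kmalloc(sizeof(*c), GFP_KERNEL);

        c->a = a;
        c->b = b;
        rcu_assign_pointer(live_config, c);
}

/* Subscriber: rcu_dereference() guarantees (on Alpha) that the
 * field reads below see the publisher's initialization. */
int config_sum(void)
{
        struct config *c;
        int sum;

        rcu_read_lock();
        c = rcu_dereference(live_config);
        sum = c->a + c->b;
        rcu_read_unlock();
        return sum;
}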

Quick Quiz 8: Normally, any pointer subject to rcu_dereference() should always be updated using rcu_assign_pointer(). What is an exception to this rule?

Quick Quiz 9: Are there any downsides to the fact that these traversal and update primitives can be used with any of the RCU API family members?

So, What is RCU Really?

At its core, RCU is nothing more nor less than an API that supports publication and subscription for insertions, waiting for all RCU readers to complete, and maintenance of multiple versions. That said, it is possible to build higher-level constructs on top of RCU, including the reader-writer-locking, reference-counting, and existence-guarantee constructs listed in the companion article. Furthermore, I have no doubt that the Linux community will continue to find interesting new uses for RCU, just as they do for any of a number of synchronization primitives throughout the kernel.

Finally, a complete view of RCU would also include all of the things you can do with these APIs.

Acknowledgements

We are all indebted to Andy Whitcroft, Jon Walpole, and Gautham Shenoy, whose review of an early draft of this document greatly improved it. I owe thanks to the members of the Relativistic Programming project and to members of PNW TEC for many valuable discussions. I am grateful to Dan Frye for his support of this effort.

This work represents the view of the author and does not necessarily represent the view of IBM.

Linux is a registered trademark of Linus Torvalds.

Other company, product, and service names may be trademarks or service marks of others.

References

This section gives a short annotated bibliography covering the use of RCU, Linux-kernel RCU implementations, background material, and historical perspectives. For more information, see Paul E. McKenney's RCU Page.

Using RCU

  1. Overview of Linux-Kernel Reference Counting (McKenney, January 2007) [PDF]. Overview of Linux-kernel reference counting (including RCU) prepared for the Concurrency Working Group of the C/C++ standards committee.

  2. RCU and Unloadable Modules (McKenney, January 2007). Describes how to unload modules that use call_rcu(), so as to avoid RCU callbacks trying to use the module after it has been unloaded.

  3. Recent Developments in SELinux Kernel Performance. James Morris describes a performance problem in the SELinux Access Vector Cache (AVC), and its resolution via RCU in a patch by Kaigai Kohei.

  4. Using Read-Copy-Update Techniques for System V IPC in the Linux 2.5 Kernel (Arcangeli et al., June 2003) [PDF]. Describes how RCU is used in the Linux kernel's System V IPC implementation.

Linux-Kernel RCU Implementations

  1. The design of preemptible read-copy-update (McKenney, October 2007). Describes a high-performance RCU implementation for realtime use.

  2. Sleepable RCU (McKenney, October 2006). Description of SRCU.

  3. Using Promela and Spin to verify parallel algorithms (McKenney, August 2007). Description of the QRCU patch.

  4. RCU dissertation (McKenney, July 2004) [PDF].
    • Section 2.2.20 (pages 62-64) gives a history of RCU-like mechanisms, a very brief summary of which can be found below.
    • Chapter 4 (pages 71-98) and Appendix C (pages 326-345) review a number of different types of RCU implementations, summarizing a number of earlier papers.
    • Chapter 5 (pages 137-178) gives an overview of a number of "design patterns" guiding use of RCU.
    • Chapter 6 (pages 179-234) describes some early uses of RCU.

  5. Using RCU in the Linux 2.5 Kernel (October 2003). Brief summary of why RCU can be helpful, along with an analogy between RCU and reader-writer locking.

  6. Anyone who is laboring under the misapprehension that the Linux community would never have independently invented RCU should read this netdev posting and this one as well. Both postings pre-date the earliest known introduction of RCU to the Linux community.

Background

  1. Real-Time Linux Wiki. Provides much valuable information on the -rt patchset for both kernel and application developers.

  2. Home of the -rt kernel patchsets.

  3. Memory Ordering in Modern Microprocessors (McKenney, August 2005) [PDF]. Gives an overview of how Linux's memory-ordering primitives work on a number of computer architectures.

Historical Perspectives on RCU and Related Mechanisms

  1. Tornado: Maximizing Locality and Concurrency in a Shared Memory Multiprocessor Operating System (Gamsa et al., February 1999) [PDF]. Independent invention of a mechanism very similar to RCU. Tornado is a research operating system developed at the University of Toronto. This operating system uses its analog to RCU pervasively. Some of the University of Toronto students brought this operating system with them to IBM Research, where it was developed as part of the K42 project.

  2. Read-Copy Update: Using Execution History to Solve Concurrency Problems (McKenney and Slingwine, October 1998) [PDF]. First non-patent publication of DYNIX/ptx's RCU implementation.

  3. Passive Serialization in a Multitasking Environment (Hennessey et al., February 1989). This patent describes an RCU-like mechanism that was apparently used in IBM's VM/XA mainframe hypervisor. This is the earliest known production use of an RCU-like mechanism.

  4. Concurrent Manipulation of Binary Search Trees (Kung and Lehman, September 1980). The earliest known publication of an RCU-like mechanism, using a garbage collector to implicitly compute grace periods.

Answers to Quick Quizzes

Quick Quiz 1: Why were some of the cells in the original rendering of the above table colored green?

Answer: The green API members (rcu_read_lock(), rcu_read_unlock(), and call_rcu()) were the only members of the Linux RCU API that Paul E. McKenney was aware of back in the mid-90s. During this timeframe, he was under the mistaken impression that he knew all that there is to know about RCU.

Back to Quick Quiz 1.

Quick Quiz 2: What happens if you mix and match? For example, suppose you use rcu_read_lock() and rcu_read_unlock() to delimit RCU read-side critical sections, but then use call_rcu_bh() to post an RCU callback?

Answer: If there happened to be no RCU read-side critical sections delimited by rcu_read_lock_bh() and rcu_read_unlock_bh() at the time call_rcu_bh() was invoked, RCU would be within its rights to invoke the callback immediately, possibly freeing a data structure still being used by the RCU read-side critical section! This is not merely a theoretical possibility: a long-running RCU read-side critical section delimited by rcu_read_lock() and rcu_read_unlock() is vulnerable to this failure mode.

This vulnerability disappears in -rt kernels, where RCU Classic and RCU BH both map onto a common implementation.

Back to Quick Quiz 2.

Quick Quiz 3: What happens if you mix and match RCU Classic and RCU Sched?

Answer: In a non-PREEMPT or a PREEMPT kernel, mixing these two works "by accident" because in those kernel builds, RCU Classic and RCU Sched map to the same implementation. However, this mixture is fatal in PREEMPT_RT builds using the -rt patchset, due to the fact that Realtime RCU's read-side critical sections can be preempted, which would permit synchronize_sched() to return before the RCU read-side critical section reached its rcu_read_unlock() call. This could in turn result in a data structure being freed before the read-side critical section was finished with it, which could in turn greatly increase the actuarial risk experienced by your kernel.

In fact, the split between RCU Classic and RCU Sched was inspired by the need for preemptible RCU read-side critical sections.

Back to Quick Quiz 3.

Quick Quiz 4: What happens if you mix and match Realtime RCU and RCU Classic?

Answer: That would be up to you, because you would have to code up changes to the kernel to make such mixing possible. Currently, any kernel running with RCU Classic cannot access Realtime RCU and vice versa.

Back to Quick Quiz 4.

Quick Quiz 5: Why do both SRCU and QRCU lack asynchronous call_srcu() or call_qrcu() interfaces?

Answer: Given an asynchronous interface, a single task could register an arbitrarily large number of SRCU or QRCU callbacks, thereby consuming an arbitrarily large quantity of memory. In contrast, given the current synchronous synchronize_srcu() and synchronize_qrcu() interfaces, a given task must finish waiting for a given grace period before it can start waiting for the next one.

Back to Quick Quiz 5.

Quick Quiz 6: Under what conditions can synchronize_srcu() be safely used within an SRCU read-side critical section?

Answer: In principle, you can use synchronize_srcu() with a given srcu_struct within an SRCU read-side critical section that uses some other srcu_struct. In practice, however, doing this is almost certainly a bad idea. In particular, the following could still result in deadlock:

/* Code path 1: an SRCU read-side critical section on ssa that
 * waits for a grace period on ssb. */
idx = srcu_read_lock(&ssa);
synchronize_srcu(&ssb);
srcu_read_unlock(&ssa, idx);

/* . . . */

/* Code path 2: the mirror image. Each path's reader can block
 * the other path's grace period, so that neither call to
 * synchronize_srcu() ever returns. */
idx = srcu_read_lock(&ssb);
synchronize_srcu(&ssa);
srcu_read_unlock(&ssb, idx);

Back to Quick Quiz 6.

Quick Quiz 7: Why doesn't list_del_rcu() poison both the next and prev pointers?

Answer: Poisoning the next pointer would interfere with concurrent RCU readers, who must use this pointer. However, RCU readers are forbidden from using the prev pointer, so it may safely be poisoned.

Back to Quick Quiz 7.

Quick Quiz 8: Normally, any pointer subject to rcu_dereference() should always be updated using rcu_assign_pointer(). What is an exception to this rule?

Answer: One such exception is when a multi-element linked data structure is initialized as a unit while inaccessible to other CPUs, and then a single rcu_assign_pointer() is used to plant a global pointer to this data structure. The initialization-time pointer assignments need not use rcu_assign_pointer(), though any such assignments that happen after the structure is globally visible must use rcu_assign_pointer().

However, unless this initialization code is on an impressively hot code-path, it is probably wise to use rcu_assign_pointer() anyway, even though it is in theory unnecessary. It is all too easy for a "minor" change to invalidate your cherished assumptions about the initialization happening privately.

Back to Quick Quiz 8.

Quick Quiz 9: Are there any downsides to the fact that these traversal and update primitives can be used with any of the RCU API family members?

Answer: It can sometimes be difficult for automated code checkers such as "sparse" (or indeed for human beings) to work out which type of RCU read-side critical section a given RCU traversal primitive corresponds to. For example, consider the following:

rcu_read_lock();
preempt_disable();
p = rcu_dereference(global_pointer);

/* . . . */

preempt_enable();
rcu_read_unlock();

Is the rcu_dereference() primitive in an RCU Classic or an RCU Sched critical section? What would you have to do to figure this out?

Back to Quick Quiz 9.

Comments (5 posted)

Patches and updates

Kernel trees

Linus Torvalds  Linux 2.6.24-rc7
Greg Kroah-Hartman  Linux 2.6.23.13
Adrian Bunk  Linux 2.6.16.58-rc1

Architecture-specific

Build system

Sam Ravnborg  Kbuild update

Development tools

Rodrigo Rubira Branco (BSDaemon)  ebizzy 0.3 released
Steven Rostedt  mcount tracing utility

Device drivers

Documentation

Filesystems and block I/O

Memory management

Security-related

Virtualization and containers

Benchmarks and bugs

Miscellaneous

Kay Sievers  udev 118 release
Douglas Gilbert  smp_utils-0.93 available
Stephen Hemminger  bridge-utils 1.4
Stephen Hemminger  iproute2-2.6.24-rc7

Page editor: Jonathan Corbet


Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds