
Kernel development

Brief items

Kernel release status

The current development kernel is 4.1-rc5, released on May 24. "So we're on schedule for a normal 4.1 release, if it wasn't for the fact that the timing looks like the next merge window would hit our yearly family vacation. So we'll see how that turns out, I might end up delaying the release just to avoid that (or just delay opening the merge window)."

Stable updates: none have been released in the last week.

Comments (none posted)

Quote of the week

I think that, as experts, we should regularly make mistakes - very public mistakes. That way people don't get lulled into the idea that we can't. Linus has been doing it for years and it seems to be working for him.
Neil Brown

Comments (none posted)

Kernel development news

Firmware signing

By Jonathan Corbet
May 27, 2015
The kernel has had the ability to enforce signature requirements on loadable modules for a few years now. But there are other types of code loaded by the kernel that are not, yet, subject to such checks; firmware loaded into controllers via the kernel is perhaps the primary example. Work is being done to add the ability to enforce signature requirements for firmware blobs, but not everybody is convinced that there is a need for this feature.
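For reference, drivers typically pull firmware into the kernel with the request_firmware() interface, which looks the file up under /lib/firmware; that is the path where any signature check would presumably be applied. Below is a minimal sketch of the usual pattern, with a hypothetical firmware name:

    #include <linux/firmware.h>

    /* Minimal sketch of the usual firmware-loading pattern; the firmware
     * file name is hypothetical.  Signature enforcement would hook into
     * this path before the blob is handed to the driver. */
    static int example_load_firmware(struct device *dev)
    {
        const struct firmware *fw;
        int ret;

        ret = request_firmware(&fw, "vendor/example-fw.bin", dev);
        if (ret)
            return ret;

        /* ... copy fw->data (fw->size bytes) into the device ... */

        release_firmware(fw);
        return 0;
    }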

Luis Rodriguez described his plan on various kernel mailing lists. The kernel's module loader is currently being reworked (by David Howells) to move away from its home-built module-signing mechanism toward the PKCS#7 standard. This is, Luis said, a good time to adopt the same standard for the signing of other files loaded into the kernel — firmware files in particular.

In this plan, the enforcement of signatures on firmware would be (like enforcement of module signatures) optional; one could build a kernel without this capability. Most firmware loaded by Linux drivers is kept in the linux-firmware repository; those blobs would be signed by the firmware maintainer, and would, thus, be loadable by default. To make that scheme work, Luis has proposed that the Linux Foundation create an X.509 key that would be embedded in the kernel source and would be used, in turn, to sign the firmware maintainer's key. The topic of firmware for out-of-tree modules was not brought up; if nothing else, it should always be possible to add other keys to the kernel's keyring to enable loading that firmware.

Andy Lutomirski quickly raised some concerns about how these keys would be handled. In particular, he would like to ensure that the applicability of any given key is as limited as possible. Keys used for module signing should not work for firmware signing, for example. Signatures should also spell out where a firmware blob is meant to be used in order to avoid attacks where firmware is sent to the wrong device.

The more general question, though, as asked by Alan Cox and Greg Kroah-Hartman, was: why bother signing firmware files in the first place? As Greg put it:

I too don't understand this need to sign something that you don't really know what it is from some other company, just to send it to a separate device that is going to do whatever it wants with it if it is signed or not.

Both asserted that, if firmware signatures are to be checked, that checking should be done by the actual device receiving the firmware. Anything else puts the kernel in the position of attesting to the validity of a firmware image that it can't know much about.

The problem with checking signatures in the device, as David Woodhouse pointed out, is that loadable firmware is often used as a way to reduce the cost of the hardware. Putting that sort of cryptographic capability into a device (one which doesn't have its operating software loaded yet, at that) would, instead, add expense and defeat the original purpose. Greg disagreed with the idea that adding signature checking would be prohibitively expensive, but the fact of the matter seems to be that hardware lacking firmware signature-checking capability is not going away anytime soon.

David also made the point that, in the absence of an I/O memory management unit, a rogue device can do anything it wants to the running system. Thus, compromised firmware might indeed be an attractive way to attack a system. Firmware signing, he said, is a way of protecting the operating system; it is not just a service for hardware vendors.

There is another reason for wanting this capability, though; it can be used to verify the provenance of other files loaded into the kernel as well. Luis, in particular, is concerned with the Linux "central regulatory domain agent" (CRDA) subsystem. CRDA describes the legal operating parameters for wireless network interfaces in various jurisdictions worldwide. Different countries have different rules about which frequencies can be used, maximum power levels, and more. The CRDA subsystem ensures that Linux systems play by the rules wherever they may end up.

Luis credits CRDA with having gotten us out of the situation where manufacturers of wireless adapters refused to provide free drivers for their hardware. With CRDA in place, those manufacturers can be confident that their hardware will be operated in a compliant fashion. But that confidence is only merited if the CRDA database cannot be trivially modified by users. To that end, the database is currently signed; that signature is, on some distributions at least, verified in user space before being loaded into the kernel.

The use of subsystem-specific cryptographic code seems like a sure path to problems at some point; getting such code right is not easy, and the number of eyeballs on, say, the CRDA signature-checking code is probably fairly low. So Luis would like to move the checking into the kernel and have it use the same code that the module loader is using. That should reduce the amount of signature-checking code in use and increase confidence that said code is actually working as planned.

In this case, there is no device to offload the responsibility for checking a signature to; this is data that is used directly by the kernel. If the kernel is going to protect itself from a corrupted CRDA database, it needs to do the checking itself. Code that does this checking can check firmware just as easily. So it makes some sense to introduce the feature as a firmware-validation mechanism that can also check the integrity of other files loaded into the kernel.

The end result is that this feature will probably not have too much trouble getting into the kernel once it's clear that the code works as intended. The kernel community as a whole is generally receptive to the addition of this sort of integrity-verification mechanism, as long as the policy choices remain in the hands of the user. Distributions may or may not choose to enable firmware signature checking, but the option will be there for those who want it.

Comments (29 posted)

A fresh look at the kernel's device model

May 27, 2015

This article was contributed by Neil Brown

Understanding the Linux device model is (or should be, at least) central to working with device drivers in Linux — and drivers constitute over half of the kernel code. I've been working with a variety of device drivers at differing levels of involvement for some years but until recently I didn't feel that I really understood the model. This is potentially dangerous as, without a good understanding, it is easy to make poor choices.

The problem, or at least my problem, is firmly rooted in the terminology. The device model involves things called "device" and "driver", "bus" and "class". To be able to understand the model, I need accurate definitions of these terms, and useful definitions are hard to find.

An LWN article from 2003 is an excellent example, as it clearly presents some definitions of the sort that can be found in other documentation and in the source code. It declares a device to be: "A physical or virtual object which attaches to a (possibly virtual) bus". This sounds good and highly general, but it doesn't actually match reality, even the reality of twelve years ago when the article was written.

For example, in the device model, a partition on a hard drive is a "device" much like the hard drive as a whole is. The hard drive as a whole may attach to a "bus", but the partition certainly doesn't: at best it attaches to the whole drive. Also, there are devices that don't attach to anything, let alone a "bus". The devices listed in directories under /sys/devices/virtual are not "attached" to anything. That "virtual" directory is not a special bus called "virtual", it is simply a place to put things that don't belong anywhere else.

Similar oversimplifications are found when trying to find definitions of the other objects. This is very likely because the driver model was still under development and the meanings that would end up being useful had not yet fully crystallized. Now, over a decade later, the available documentation still refers to the same terms and generally uses the same imprecise definitions.

The eye of the beholder

The epiphany that allowed me to form a coherent understanding of the device model was that none of these terms really have an external meaning at all. They are defined purely by the code that implements them. A device is simply any data structure that contains an embedded struct device, no more and no less.
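That definition is easy to see in code: a subsystem creates a device by embedding struct device in some larger structure of its own and registering it, then uses container_of() to get back to its own data. A minimal sketch, with an invented structure name:

    #include <linux/device.h>

    /* A hypothetical "thing" that becomes a device simply by embedding
     * struct device. */
    struct my_thing {
        struct device dev;
        int some_state;
    };

    /* Recover the containing structure from the embedded struct device. */
    static inline struct my_thing *to_my_thing(struct device *dev)
    {
        return container_of(dev, struct my_thing, dev);
    }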

Meaning comes only from the mind of the developer working with the code. Having multiple independent developers will likely result in multiple different meanings. The meanings that seem to be associated with the terms and that are found in documentation are the meanings that the early developers were thinking about. Those ideas have been revised over time, and other developers have had other thoughts.

The definition "anything with a struct device" may be accurate, but is not useful for someone considering the implementation of a new driver or modification to an old one. Similarly "what other developers are thinking" is too nebulous to be useful. With a bit of effort, and some carefully chosen examples, each of these can be fleshed out a bit and together form a picture that is, hopefully, a good start. So, to present my understanding of the device model, and particularly of those four terms, I will present some examples to show what other developers have thought, and what value a struct device provides.

The first example revolves around the TCA6507 chip from Texas Instruments. This is a simple piece of hardware that accepts requests over an "i2c" bus, and responds by draining current through seven separate pins with various on/off patterns. This is particularly intended to pull electrical current through an LED to make it glow, but can equally pull current through a resistor to create logic 0 or 1 levels. This example is chosen because it is the most "device-like" of devices that I am familiar with — it perfectly fits the earlier definition.

The second example is the workqueue mechanism in Linux. It allows arbitrary tasks to be handed off for asynchronous completion, either promptly or after a delay, and will attempt to make optimal use of resources in doing so. It is also the least "device-like" thing I came across.

With these examples in mind, together with the previously mentioned block devices, we can proceed to those definitions.

Devices

A device is an instance. It corresponds to a thing, or maybe an "object" in the most general sense of the word. A device gains its thing-hood primarily by a person thinking that it is something worth identifying. A device may sometimes correspond to a specific piece of hardware like an integrated circuit, but it could equally correspond to a collection of such circuits or just one component of the functionality of a circuit. Hardware need not exist at all — a device could be virtualized or could represent something that has no real physical equivalent at all. It is just a "thing".

The TCA6507 chip is represented in Linux by a device. Each of the seven controllable pins may be connected to something and this may lead to more devices. If a pin is attached to an LED, for example, then there will be a separate device that represents that LED, though arguably it could be seen as representing the signalling capability of the "LED plus pin" combination. Different people will probably look at this in different ways.

If a pin is connected to a "pull-up" resistor and used to signal a logic level, then it will be represented in Linux as a "GPIO" — General Purpose I/O pin. In terms of the device model, all of the pins that are configured as GPIOs are presented as a single "gpiochip" device. So while there is one device for each LED, there is one device for all GPIOs.

Each individual GPIO can be configured and used internally, or may be exported to user space through sysfs. When a GPIO is exported, a new device is created to represent just that one GPIO. This is visible as a directory under /sys/class/gpio; files are available there that can be used to set the output level to 1 or 0.
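Kernel code that owns a GPIO can ask for it to be exported in this way; it is the export call that creates the extra device. A minimal sketch using the legacy integer-based GPIO interface, with a made-up GPIO number and label:

    #include <linux/gpio.h>

    /* Claim GPIO 42 (hypothetical), drive it low, then export it; the
     * export creates a new device visible as /sys/class/gpio/gpio42. */
    if (gpio_request(42, "example-pin") == 0) {
        gpio_direction_output(42, 0);
        gpio_export(42, false);   /* false: user space may not flip direction */
    }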

There are two important lessons in this example. One is that choices are context-dependent and probably very developer-dependent as well. Grouping GPIOs into a "chip" seems to make sense, while doing the same with LEDs doesn't seem to be a priority, though there has been a suggestion that there might be value in that. The second lesson is that one reason to make a "thing" into a "device" in the device model is so that it can appear in sysfs and be directly examined or manipulated.

Moving on to our second example we find something that is not at all "device-like". There are many "things" or "instances" in Linux that are not device-like and are not represented as devices: filesystems and processes are obvious examples. One that is represented as a device is the workqueue.

The workqueue subsystem in Linux creates a "device" to represent each distinct queue. The apparent reason for this is much like the reason for (sometimes) making devices for GPIOs — it allows the thing (i.e. the workqueue) to be examined and managed via sysfs. A thing doesn't have to be a device to appear in sysfs; modules and filesystems are clear counter-examples to that idea. But making something a "device" is a relatively easy and well-worn path to sysfs access.

The compelling reason to use a "device" to represent some "thing" seems to be the interfaces. A "device" not only has standard interfaces in sysfs, it also has standard interfaces for power management, and may make use of internal services (like the devm resource management API) that are only provided to devices. There is also useful functionality for grouping "like" devices together, for varying definitions of "like".
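The devm interfaces are a good illustration of those internal services: resources allocated through them are tied to the device and released automatically when the device goes away. A sketch of the usual pattern in a driver's probe() routine, reusing the hypothetical structure from the earlier example:

    /* Allocate per-device state that is freed automatically when the
     * device is unbound; no explicit kfree() is needed in the error or
     * remove paths. */
    struct my_thing *thing = devm_kzalloc(dev, sizeof(*thing), GFP_KERNEL);
    if (!thing)
        return -ENOMEM;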

Classes

A "class" is both the implementation of a set of devices, and the set of devices themselves. A class can be thought of as a driver in the more general sense of the word. The device model has specific objects called "drivers" but a "class" is not one of those.

All the devices in a particular class tend to expose much the same interface, either to other devices or to user space (via sysfs or otherwise). Exactly how uniform the included devices are is really up to the class though. It is not unusual for there to be optional aspects of an interface that not all devices in a class present. It is not unheard-of for some devices in the same class to be completely different from others.

So far we have met three classes in our examples. A device that represents an LED attached to a TCA6507 is a member of the "leds" class. This class supports the blinking, flashing, and brightness control features of physical LEDs. The class requires an underlying device to be available, such as a TCA6507 or a GPIO or any of various other options. This underlying device must be able to turn the LED on or off, may be able to set the brightness, and might even provide timer functionality to autonomously blink the LED with a given period and duty cycle. The "leds" class hides as much of this detail as it can to provide a simple abstract device.
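Membership in the "leds" class comes down to registering a struct led_classdev whose callbacks wrap whatever the underlying device can do. A rough, hypothetical sketch of what a driver like leds-tca6507 does for each LED (real drivers are rather more involved):

    #include <linux/leds.h>

    static void example_led_set(struct led_classdev *cdev,
                                enum led_brightness value)
    {
        /* ... tell the underlying device (TCA6507, GPIO, ...) to change
         * the pin state accordingly ... */
    }

    static struct led_classdev example_led = {
        .name           = "example:green:status",   /* invented name */
        .max_brightness = LED_FULL,
        .brightness_set = example_led_set,
        /* .blink_set can be supplied if the hardware blinks on its own */
    };

    /* led_classdev_register(parent, &example_led) creates the class device
     * that appears under /sys/class/leds/. */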

Similar to the "leds" class is the "gpio" class; it provides a uniform interface to a variety of devices that can generate (output) or can sense (input) an electrical logic level. If the underlying device can generate an interrupt on a level change, "gpio" can translate that to a notification via poll() or can route it to the interrupt handler for some other device. The gpio class provides both the "gpiochip" devices and the individual "gpio" devices.

The third class we have met is the "disk" class which provides both whole hard drives and partitions within drives. As with the "leds" class, there are a few different interfaces to storage functionality that can be provided and the "disk" class presents a unified interface to that functionality.

There is some obvious similarity between the "gpio" class implementing both "gpios" and "gpiochips", and the "disk" class implementing both "disks" and "partitions". There are also differences. One of those is that "gpio" and "gpiochip" provide completely different interfaces, while "disk" and "partition" have a lot of commonality in their interfaces — both have block sizes and support I/O, but only a "disk" can be "removable".

Another, less obvious difference involves another aspect of the device model. Each "device" can have a "type". This type is often presented in the "uevent" file in the relevant sysfs directory. For example, the command:

    grep DEVTYPE /sys/class/block/*/uevent

will show the type of every block device on your system. The different types in the "gpio" class are not known to the device model, though, so they are not reported by the uevent file. A human or a script would need to deduce the type from the device names if it was important.

Each of the classes listed here can be seen as providing a generic interface over a range of different hardware. This seems to be part of the original intention of the "class" facility. However "generic" isn't a very precise term. What one developer sees as "generic" another developer might see as "specific". These perspectives can change over time too, particularly if a simple or successful interface gets used more broadly than its initial context.

To enforce this point it is worth briefly considering the "backlight" class of devices. A backlight for a graphics display can use a number of different underlying technologies, including a device of the "leds" class. So depending on your perspective, an LED might be a generic interface for signaling, or a specific underlying technology for backlighting. It depends on whose eye is beholding.

Buses

A "bus" is similar to a "class" in several ways, but it has an important difference. While a class is a complete implementation of the devices that are members of that class, the bus is only a partial implementation. For complete functionality, a bus usually works with a set of "drivers". A bus may implement some devices completely by itself, like a class does. Other devices will require a driver to be attached. The choice of driver can be made by the bus, by the driver (which can be asked if it "matches" a given device), or by a request through sysfs.

Our examples so far include two buses. The workqueue subsystem defines a "workqueue" bus to hold the devices that it creates for each workqueue. The set of drivers for this bus is empty. There are no separate implementations and no indication that there ever might be. This is probably the most minimal structure that a bus can have.

The other example, which has not yet been made explicit, is the i2c bus, which is a standard two-wire bus for communicating between integrated circuits, typically all on a single board. This is the bus that is used to control the TCA6507, so the leds-tca6507 driver is written to work with the "i2c" bus in Linux.

The "i2c" bus in the device model is a collection of code that provides interface support between an individual driver like leds-tca6507 and some i2c bus master such as the OMAP I2C controller. It manages bus arbitration, retry handling, and various other protocol details. The "i2c" bus thus supports two different types of device, though the distinction is not directly visible in sysfs as the types are not given textual names.

The i2c_client_type includes all devices that are supported by separate drivers and represent hardware that can be communicated with via the i2c protocol. The i2c_adapter_type, instead, is implemented in the i2c bus code without using a separate driver. It represents the whole bus and exposes a character-special device in /dev (e.g. /dev/i2c-0) that can be used to interact directly with i2c clients, bypassing any driver.
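A client-side driver like leds-tca6507 hooks into the bus by registering a struct i2c_driver; the bus code then matches it against i2c client devices using the driver's ID table. A skeletal sketch with invented names (the real driver does far more in its probe routine):

    #include <linux/i2c.h>
    #include <linux/module.h>

    static const struct i2c_device_id example_ids[] = {
        { "tca6507", 0 },
        { }
    };
    MODULE_DEVICE_TABLE(i2c, example_ids);

    static int example_probe(struct i2c_client *client,
                             const struct i2c_device_id *id)
    {
        /* ... set up the chip, register LED (and perhaps GPIO) devices ... */
        return 0;
    }

    static struct i2c_driver example_driver = {
        .driver   = { .name = "example-tca6507" },
        .probe    = example_probe,
        .id_table = example_ids,
    };
    module_i2c_driver(example_driver);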

By providing code and device support to both the client (or slave) side and the adapter (or master) side of an electrical bus, the Linux "i2c" bus very clearly represents the whole i2c bus. When there is an electrical bus to represent, a device model bus will often fill that role. When there is no electrical bus, as with workqueues, a bus might represent something else entirely.

Devices in a bus tend to reflect specific hardware rather than generic functionality, but once again it is hard to draw a clear line. One of the many drivers for the "usb" bus is the "usbhid" driver that supports any mouse, keyboard or similar "human interface device". This isn't really very specific.

When is a bus not a bus?

I had a particular reason for choosing an "leds" device as one of the examples, and that is because the "leds" class has a particularly interesting structure. Each "leds" device exposes a "trigger" attribute in sysfs. This file contains a list of all possible triggers, with the currently active one surrounded by brackets. For example:

none usb-gadget usb-host cpu0 cpu1 cpu2 cpu3 cpu4 cpu5 cpu6 cpu7
AC-online BAT0-charging-or-full BAT0-charging BAT0-full
BAT0-charging-blink-full-solid [mmc0] rfkill1 phy0rx phy0tx phy0assoc
phy0radio phy0tpt rfkill2 rfkill3 rfkill33

Writing the name of some trigger to this file will locate the driver for that trigger, loading a module if necessary, and will configure the device to use that trigger. This may involve presenting different attributes via sysfs. This mechanism for binding a trigger to an "leds" device is extremely similar to the mechanism that a bus provides to bind a driver to a device. Lots of the details are different, but the core functionality and purpose are the same.
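On the implementation side a trigger does look much like a driver: it registers itself by name and supplies activate and deactivate callbacks that are invoked when an LED is bound to it or released. A rough sketch with a hypothetical trigger name:

    #include <linux/leds.h>

    static void example_trig_activate(struct led_classdev *led_cdev)
    {
        /* ... start driving this LED; perhaps add extra sysfs attributes ... */
    }

    static void example_trig_deactivate(struct led_classdev *led_cdev)
    {
        /* ... stop driving this LED and clean up ... */
    }

    static struct led_trigger example_trigger = {
        .name       = "example",
        .activate   = example_trig_activate,
        .deactivate = example_trig_deactivate,
    };

    /* led_trigger_register(&example_trigger) adds "example" to the trigger
     * list of every LED in the system. */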

We saw earlier that the workqueue subsystem, despite being a bus, had no separate drivers and so could have been a class. Here we see that "leds", despite being a class, has a number of separate drivers (called "triggers") and so could have been a bus. This seems to emphasize the fact that the choice of bus or class, like the particular role of a device, is truly in the eye of the beholder. There is no firm external meaning.

We can get a hint of what meaning one developer saw by examining the declaration and registration of the "workqueue" bus. It is declared:

    static struct bus_type wq_subsys = { /* ... */ };

and registered:

    return subsys_virtual_register(&wq_subsys, NULL);

So, while it is a bus_type, it is named as a "subsys" or "subsystem" and registered as a "virtual subsystem". It seems the eye of at least one developer looked at a "bus" and saw a "subsystem".

There appears, both here and elsewhere, to be a desire to discard the separate concepts of "class" and "bus" and instead just have "subsystems". A subsystem would be exactly what a bus is, but without all the baggage that comes with the name. As a class provides nothing that is not provided by a bus, it could simply be dropped. Whether this transition will ever be complete remains to be seen.

A lesson learned

This, then, is the lesson of the driver model: the implementation provides functionality, not meaning. The meaning comes from the thoughts of developers and is coherent or disjoint in the same measure that those developers are of one mind, or not.

A "device", is just a thing that provides and consumes interfaces. It represents an idea more than it represents any particular hardware. A "bus" is better known as a "subsystem" and is some code to implement devices together with a mechanism to attach separate "drivers" to those devices. A bus (or subsystem) is also the set of device associated with that code. A "class" is just a bus without the mechanism for separate drivers. And a "driver" is code that works in concert with a particular bus to implement certain devices.

These definitions helped me to lose the baggage that I tried to associate with "device" and "bus" and provided clearer understanding. But it is not yet enough for a complete understanding. Devices do not exist in isolation — they have those interfaces for a reason. A full understanding of the device model requires some understanding of how they all fit together and a good place to look at that is in /sys/devices which contains all the devices on a particular system.

So next week we will dive into /sys/devices and find out how that directory tree is structured, what it contains, and what more we can learn about devices.

Comments (10 posted)

A tale of two data-corruption bugs

By Jonathan Corbet
May 24, 2015
There have been two bugs causing filesystem corruption in the news recently. One of them, a bug in ext4, has gotten the bulk of the attention, despite the fact that it is an old bug that is hard to trigger. The other, however, is recent and able to cause data loss on filesystems installed on a RAID 0 array. Both are interesting examples of how things can go wrong, and, thus, merit a closer look.

Extent confusion in ext4

Like many reasonably modern filesystems, ext4 includes a number of performance-enhancing features. One of those is delayed allocation, wherein the filesystem will not immediately allocate specific blocks on the disk for data that an application has just written. By delaying that allocation, the filesystem gives itself some time to see if more data will be written in the near future. If so, space for all of the written data can be allocated contiguously, improving future I/O performance. Delayed allocation works, but it does leave some written data (the "delayed extent") in a sort of limbo state for a brief period where it has no fixed home on the disk. Obviously, when the time comes to flush that data out to persistent storage, the task of allocating the destination blocks can be delayed no further.

Another performance feature is unwritten extents. An application can increase the size of a file with system calls like truncate() or fallocate(). These calls do not actually write data to any new extents added to the file. Allowing anybody to read blocks that have not been written to is clearly a bad idea; at best, the result will be garbage, while, at worst, another user's sensitive information could be disclosed. The filesystem could avoid this problem by writing zeroes to all new blocks once they are allocated, but that's an inefficient use of CPU time and I/O bandwidth, given that the blocks are ordinarily going to be overwritten with real data in the near future. The alternative is to mark the new blocks as being explicitly unwritten until that real data comes along. Attempts to read unwritten blocks will just result in a zero-filled buffer.

Ext4 keeps track of both delayed and unwritten extents in a data structure called the "extent status tree". Something interesting happens, though, if an unwritten extent is added when there are already delayed allocation blocks in the same block range. The entire unwritten extent ends up being marked as delayed as well, because the extent status tree can't track the fact that only part of the extent is delayed.

For example, consider a file that is currently 100 blocks long — blocks 0-99 are written and present on disk. The application writes blocks 100 and 101; the filesystem responds by putting them into the extent status tree as delayed-allocation blocks. The application then uses fallocate() to tell the filesystem to allocate blocks 100-109. Those unwritten blocks also go into the tree, but, since there are already two delayed blocks in that range, the entire range 100-109 is marked delayed as well.

A delayed extent is removed from the tree when the delayed buffers are actually written to disk. But, in this case, there are no delayed buffers for blocks 102-109; as a result, the extent remains as a delayed extent in the tree, even after the actual delayed portion (blocks 100 and 101) has been allocated and written out. There it will stay until another write to one of the affected blocks comes along. At that point, the entire extent will be reallocated (because it is still marked as delayed), losing the previously delayed data that had already been written. That is about the point where alcohol consumption by both administrators and users increases unhealthily.

This bug has been present in the ext4 filesystem for some time; nobody seems to be quite sure when it was introduced. It has remained undetected because it is quite hard to hit; the process, as described by Ted Ts'o, is:

It requires the combination of (a) writing to a portion of a file that was not previously allocated using buffered I/O, (b) an fallocate of a region of the file which is a superset of region written in (a) before it has chance to be written to disk, (c) waiting for the file data in (a) to be written out to disk (either via fsync or via the writeback daemons), and then (d) before the extent status cache gets pushed out of memory, another random write to a portion of the file covered by (a) -- in which case that specific portion of (a) could be replaced by all zeros.
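For the curious, steps (a) through (c) map onto a handful of ordinary system calls. The following hypothetical sketch (file name and offsets invented, 4KB blocks assumed, error handling omitted) follows that sequence; it will not reliably reproduce the bug, since step (d) depends on timing and on the state of the extent status cache:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[2 * 4096];
        int fd = open("testfile", O_RDWR | O_CREAT, 0600);

        /* (a) buffered write to previously unallocated blocks 100 and 101 */
        pwrite(fd, buf, sizeof(buf), 100 * 4096);

        /* (b) fallocate() a superset (blocks 100-109) before writeback runs */
        fallocate(fd, 0, 100 * 4096, 10 * 4096);

        /* (c) push the delayed data out to disk */
        fsync(fd);

        /* (d) a later overwrite within blocks 100-101, while the stale
         *     "delayed" extent is still cached, is where data can be lost */
        return 0;
    }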

Nonetheless, corruption bugs are bad news. This one has been fixed by this patch from Lukas Czerner which was merged for 4.1-rc2. The fix also found its way into the 4.0.3, 3.18.14, 3.14.42, and 3.10.78 stable updates.

Discard discrepancies

About the time the above bug was being fixed, some users started reporting problems with RAID 0 arrays based on ext4; many assumed that they had been a victim of that bug. The truth of the matter was somewhat worse than that, though; they had found a nastier, easier-to-trigger bug that was the result of an overly hasty fix.

Back in April, Joe Landman reported a problem with RAID 0 volumes on the XFS filesystem. After some back-and-forth, Neil Brown tracked it down to a change merged for the 3.14 release. The code in question calculates the number of sectors that fit in the next RAID 0 chunk — in other words, the portion of the I/O request that maps to a single underlying drive. In simplified form, this calculation looks like:

    unsigned sectors = chunk_sects - sector_div(sector, chunk_sects);

The call to sector_div() returns the remainder of the division. What it also does, though, may be surprising: it replaces the value of sector with sector/chunk_sects. In other words, sector_div() is a macro that modifies one of its arguments. The code did not take that modification into account, with the result that it used the wrong value of sector from then on. Neil's fix was to simply reinitialize sector from the bio structure describing the operation in question.
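To make the trap concrete: sector_div() divides its first argument in place and hands back only the remainder. A small illustrative fragment (not the md code itself):

    sector_t sector = 1000;
    unsigned int remainder = sector_div(sector, 512);
    /* remainder is now 488, and sector has been overwritten with
     * 1000 / 512 == 1; any code still expecting the original sector
     * value is in for a surprise. */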

There was only one little problem: by then the bio pointer had been advanced, so the new sector value was wrong. When the RAID 0 code then proceeded to map the sector in the RAID device to a sector in one of the underlying physical devices, it would end up in the wrong place — likely as not, on the wrong device entirely. In practice, the code path that goes wrong in this way would be executed relatively rarely; it requires an I/O operation that crosses multiple RAID 0 chunks. So it is perhaps not entirely surprising that it seems to manifest itself most often with "discard" requests, which can be applied to an entire file at once.

Discard is a mechanism for telling a storage device that a particular range of blocks no longer contains needed data. Its use can improve performance on solid-state drives, which can benefit from the knowledge that a certain range of blocks does not need to be preserved during wear-leveling operations. If told to do so, the ext4 filesystem will generate DISCARD requests when a file is deleted, and the RAID 0 code will pass those requests down to the underlying drives. When this particular bug hits, though, the discard request will go to the wrong place, resulting in the discarding of some random, unrelated data.

This unfortunate "fix" was merged into the mainline during the 4.1 merge window. If it had stayed with the 4.1 kernel, its impact would have been limited; 4.1 has still not seen an official release. But this patch also went into the 4.0.2, 3.19.8, 3.18.14, and 3.14.41 stable updates. The fix to the fix (written by Eric Work) has been pulled into the mainline kernel, but has not, as of this writing, found its way into any stable updates. One assumes that will happen soon, but it is worth noting that 3.19.8 is the end of the 3.19 stable series, so there may be no updated kernel for 3.19 users.

The good news is that the problem was caught reasonably quickly; there should not be huge numbers of users who have updated to one of the affected kernels. The bad news is that, for those users who are affected, there could be silent data corruption that will not be discovered for some time. Anybody who is running one of the affected kernels will — after moving to a safe kernel, of course — want to check the contents of their RAID 0 arrays against a backup.

Keeping data safely is one of the fundamental obligations of an operating system kernel, so data-corruption bugs can shake one's confidence in the whole structure. But, as has been seen here, bugs happen. Sometimes they lurk for years without causing trouble, and sometimes they make their presence known quickly. Such bugs are, fortunately, rare with Linux; with luck, these are the last we will see for a while.

Comments (44 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 4.1-rc5
Luis Henriques Linux 3.16.7-ckt12
Steven Rostedt 3.14.43-rt42
Jiri Slaby Linux 3.12.43
Steven Rostedt 3.12.42-rt58
Steven Rostedt 3.10.79-rt85
Steven Rostedt 3.2.69-rt101
Willy Tarreau Linux 2.6.32.66

Architecture-specific

Core kernel code

Development tools

Device drivers

Device driver infrastructure

Documentation

Filesystems and block I/O

Memory management

Networking

Security-related

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet


Copyright © 2015, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds