Kernel development
Brief items
Kernel release status
The current development kernel is 4.8-rc2, released on August 14. Linus said: "Nothing really strange seems to be going on, so please just go out and test it and report any problems you encounter. It's obviously fairly early in the rc series, but I don't think there was anything particularly worrisome this merge window, so don't be shy."
See this report for the known regressions in the 4.8 kernel.
Stable updates: 4.7.1, 4.6.7, 4.4.18, and 3.14.76 were released on August 16. Note that the 4.6.x series ends with 4.6.7.
Quotes of the week
Kernel development news
The case of the stalled CPU controller
Long-running technical disagreements are certainly well-known in the kernel community. Usually they are eventually resolved and the developers involved move on to new problems. Occasionally, though, stalled consensus can lead to a break between the kernel and its user community. The ongoing dispute over the CPU controller in the new control-group hierarchy is beginning to look like one of those unpleasant cases.

Control groups (cgroups) have been supported by the kernel for nearly ten years; they provide a mechanism by which processes can be grouped in a hierarchical manner and made subject to various resource controllers. It did not take long after the introduction of control groups before users and developers began to realize that there were some fundamental problems with their design; the discussion about fixing those problems got started in earnest in early 2012. Those discussions led to the description of a second version of the cgroup API and the beginning of work to move to that API. At this point, the version-2 API transition is mostly complete, with one glaring exception: the CPU controller.
The CPU controller, as one might expect, controls access to the CPU; it allows different groups of processes to be allocated specific amounts of CPU time and keeps those groups from interfering with each other. The low-level CPU-controller code is able to support the new API without trouble, but the scheduler developers have resisted the merging of that API itself; at this point, the CPU controller is the most significant controller without version-2 support. In an attempt to push things forward (and to say what will happen if things do not move forward), cgroup maintainer Tejun Heo posted a detailed summary of the situation as he sees it. That document is well worth reading for those who are interested in the topic.
Objections to the CPU controller
In short, the scheduler developers object to the new API for two reasons, both stemming from a perceived mismatch between the API and how they feel CPU control should be done. Both relate to fundamental design decisions in the version-2 cgroup API.
In the original cgroup implementation, each thread (of which a process may contain many) can be placed in a separate control group. The version-2 API, instead, requires that all of a process's threads be in the same group. For some controllers, such as the memory-usage controller, putting different threads into different groups makes little sense; all those threads are sharing the same memory, after all, so it is hard to say what it would mean to try to apply different policies to different threads. There are reasonable use cases for applying different CPU-usage policies to different threads, but the unified hierarchy, which is a fundamental design aspect of the version-2 API, requires all controllers to see the same cgroup arrangement. So all threads must be in the same cgroup from the CPU controller's point of view.
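As a concrete illustration of the version-2 interface, a process is placed in a group by writing a process ID into that group's cgroup.procs file, and the whole process moves, threads and all. The minimal sketch below assumes that cgroup v2 is mounted at /sys/fs/cgroup and that a hypothetical "app" group has already been created there.

    /* Minimal sketch: migrating a process into a cgroup v2 group.  The
     * /sys/fs/cgroup/app path is a hypothetical example group.  Writing
     * any thread ID of a process to cgroup.procs moves the entire
     * process, so threads cannot be split across version-2 groups. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int move_to_cgroup(const char *cgroup, pid_t pid)
    {
        char path[256], buf[32];
        int fd, ret = 0;

        snprintf(path, sizeof(path), "%s/cgroup.procs", cgroup);
        fd = open(path, O_WRONLY);
        if (fd < 0)
            return -1;
        snprintf(buf, sizeof(buf), "%d\n", (int)pid);
        if (write(fd, buf, strlen(buf)) < 0)
            ret = -1;
        close(fd);
        return ret;
    }

    int main(void)
    {
        /* Moves the calling process (all of its threads). */
        if (move_to_cgroup("/sys/fs/cgroup/app", getpid()))
            perror("cgroup migration");
        return 0;
    }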
This requirement apparently seems fundamentally wrong to the scheduler developers; nothing in the scheduler itself recognizes the abstraction of a "process" at all. At that level, everything is a thread; applying a coarser policy at the cgroup level takes away an important degree of flexibility for (from their point of view) no gain.
There are users who want to be able to apply different policies to different threads; managing a thread pool is one commonly cited use case. But Heo stands by the design decisions; he also feels that the same interface should not be used at both the administrator level and within an individual process. He has proposed a mechanism called resource groups for the intra-process case, but that proposal has not made a lot of headway thus far.
There is another version-2 design decision that does not sit well with the scheduler developers. In the new API, a control group may contain other control groups, or it may contain processes, but not both; processes can only appear as leaves in the control-group hierarchy. Again, this decision was made to facilitate support for controllers other than the CPU controller. If subgroups and processes appear in the same cgroup, then the two types of object must compete for the same resource. In the CPU case, that competition is easily managed; when a cgroup is "scheduled," the scheduler recurses into the group and chooses one of the entities found therein to run. For many other controllers, though, it is not possible to treat processes and subgroups in the same manner.
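That constraint is enforced directly at the filesystem interface: the kernel refuses to delegate a controller to a group's children while the group itself still contains processes. A minimal sketch, again assuming a hypothetical /sys/fs/cgroup/app group that is still populated:

    /* Sketch of the "processes only in leaves" rule: enabling a
     * controller for the children of a still-populated group is
     * rejected by the kernel.  The path is a hypothetical example. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/sys/fs/cgroup/app/cgroup.subtree_control", O_WRONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* Fails while processes remain in /sys/fs/cgroup/app itself;
         * they must first be moved into leaf subgroups. */
        if (write(fd, "+memory", strlen("+memory")) < 0)
            perror("enabling controller for child groups");
        close(fd);
        return 0;
    }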
The primary objection here seems to be that this restriction stomps on some of the elegance in the CPU-scheduler design; scheduling decisions are applied to "scheduling entities" that can be either processes or groups, and the scheduler itself need not care which. The version-2 API makes some control policies difficult or impossible to achieve but, Heo asserts, that may not matter much:
In summary, Heo says, there are solid reasons for the decisions that were made in the version-2 API. It handles most use cases as-is, and the addition of features like resource groups can fill in the gaps that remain. If there is anybody who still cannot work with the version-2 API, version-1 will continue to be maintained for as long as it has users. The transition is nearly done: the low-level support is there; all that is left to be merged is the API-level code to allow the CPU controller to operate in the unified version-2 hierarchy. But that code has been blocked with, seemingly, no way forward.
What happens now
Heo clearly hopes that, by reopening the discussion, he can bring it to a conclusion and clear the way for the remaining patches to be merged. The discussion so far shows little evidence of that happening, though. In the absence of a solution there, he is planning to do a couple of other things to make this functionality available to users.
One of those is to maintain the necessary patches going forward so that anybody who wants the CPU controller with the version-2 API can easily add it to their kernel. While it is unstated, it seems fairly clear that he is hoping that distributors will apply these patches to make the functionality available to their users. That approach has been used to resolve such logjams in the past; if a patch is widely applied by distributors and widely used, there comes a point where it clearly makes no sense to keep it out of the mainline. That was part of the reasoning that brought the Android patches into the mainline, among others.
The other half of that picture is to ensure that the most widely distributed user of control groups — systemd — is able to use the version-2 API. To that end, he has posted a pull request to add this functionality to systemd, saying: "This commit implements systemd CPU controller support on the unified hierarchy so that users who choose to deploy CPU controller cgroup v2 support can easily take advantage of it." That code was merged into the systemd mainline on August 14.
That action has led to a bit of disagreement in the systemd community, given that systemd normally wants to see features merged upstream before adding code to make use of them — though, it must be said, the bulk of that disagreement seems to come from a single vociferous developer. Lennart Poettering defended the action, saying that the systemd developers want to get the capability into users' hands, and that he hopes to get the kernel patches added to Fedora's kernel as well. Greg Kroah-Hartman added that this is not the first time that support for unmerged features has been added, and that it is often for good reasons:
That is where things stand as of this writing. Predictions can be dangerous, especially when they involve the future, but, in this case, it seems likely that the kernel patches will indeed find their way into a number of distributor kernels. They make the version-2 API more widely useful, and, since most distributors are using systemd at this point, they have an important consumer lined up and ready to use it. Pressure from the user community is a blunt tool to use when patches are stalled but, in this case, it might just work.
Filesystem mounts in user namespaces — one year later
User namespaces allow an ordinary user to set up an environment in which they appear to be running as root and have access to all operations normally reserved to privileged users — with exceptions where such access cannot be made safe for the system as a whole. One of those exceptions is mounting of filesystems with backing store (in other words, most real filesystems as opposed to in-memory filesystems like tmpfs). Work on enabling filesystem mounts in user namespaces has been underway for over a year. While these mounts are still disallowed in the 4.8 kernel, many of the infrastructural changes needed to eventually allow them have been merged.

There are numerous potential problems with allowing filesystem mounts in user namespaces. Most of these result from the fact that user and group ID values are mapped to different values inside user namespaces; this feature allows root access to be turned into something unprivileged outside of the namespace, but it also creates the potential for grave confusion if objects in the filesystem are accessed from multiple namespaces. Additionally, a great deal of privilege information is encoded in filesystems; this includes setuid executables, file capabilities, and security labels. Making filesystem mounts in user namespaces safe requires ensuring that there are no corner cases where privilege can leak across a namespace boundary.
Special and setuid files
As was the case a year ago, the patch set starts by adding a new s_user_ns field to the superblock structure used to represent a mounted filesystem. This field allows privilege-checking code to detect filesystems mounted inside a namespace other than the initial namespace and adjust its privilege checking accordingly.
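A rough, kernel-style sketch of the idea (not the actual 4.8 code, and the helper name is made up for illustration): checks that would once have tested capabilities against the initial namespace can instead test against the namespace recorded in the superblock.

    /* Hedged sketch of what s_user_ns enables.  ns_capable() tests a
     * capability relative to a given user namespace, so "root" inside
     * the mounting namespace passes, while a check against the
     * initial namespace (&init_user_ns) would not. */
    #include <linux/capability.h>
    #include <linux/fs.h>
    #include <linux/user_namespace.h>

    static bool may_administer_fs(struct super_block *sb)
    {
        return ns_capable(sb->s_user_ns, CAP_SYS_ADMIN);
    }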
One obvious problem area with unprivileged filesystem mounting is device files. A device special file gives direct access to a device on the system, subject to the protections applied to the file. A filesystem mounted in a user namespace may be fully under the control of an unprivileged user, who could happily populate it with device files that just happen to allow full access; that, in turn, would be a direct path to a total compromise of the system. Developers working on user namespaces wisely figured out some time ago that this particular feature might be seen as undesirable by the wider community and have put a number of defensive mechanisms in place.
The kernel has long had the ability to block the use of device special files on a given filesystem; it is a matter of setting the nodev flag at mount time. That flag is automatically set now in the few situations (such as tmpfs) where filesystem mounts are allowed inside user namespaces, but it has long been seen as a dangerous area, since there is always potential that the filesystem could be remounted in a way that removes that flag. In 4.8, the nodev flag has been moved internally from the vfsmount structure (representing a specific mount point) to the superblock structure, of which there is a single instance for all points where a given filesystem is mounted. That makes it impossible to strip the nodev flag away once it has been set and, hopefully, heads off attacks based on device special files.
Once a user mounts a filesystem within a user namespace, said user can create setuid files and files with capability masks. As long as those files are contained within that namespace, all is well; they will not give the user any privileges they do not already have. But perils await if those files can somehow be accessed from outside of that namespace and, unfortunately, there are paths (files under /proc, for example) where that might happen. To avoid excessive consumption of CVE numbers in this area, the 4.8 kernel will refuse to respect setuid flags, setgid flags, or file capabilities on a file if that file is accessed from anything other than the namespace in which it was initially mounted.
The next problem area is security labels used by security modules like SELinux or Smack. Security labels are not mapped at namespace boundaries like user and group IDs are, so there is potential for trouble if a process within a user namespace is able to gain security labels that it would not otherwise have. Running executables can be a path by which these labels are acquired, so, once again, the system has to be careful about what it allows within a user namespace, where the files may be entirely under an attacker's control. It must also be impossible for labels created within a namespace to be used to elevate privilege outside that namespace.
Early work in this area simply disabled security labels entirely, but that had unfortunate effects of its own, often rendering filesystems entirely unusable. There seems to be agreement that the ideal solution would be to look at the label applied to the filesystem's backing store and use it for all files within the filesystem in a user namespace. That would give processes in the namespace the same access that they would have outside of it. Unfortunately, it seems to be difficult to obtain that information at the points in the kernel where labels need to be checked, so that approach cannot be used.
The second-best approach is to use the label of the process that mounted the filesystem. Once again, that will keep the process from exceeding the privilege it has anyway. If Smack is running, it will deny access to files whose label does not match the mounting process's label. For SELinux, at this point, the label applied to the mount point applies through the entire filesystem and the labels on individual files are simply ignored. There are suggestions that the SELinux approach may gain sophistication over time.
User and group IDs
There is yet another set of potential pitfalls around user and group IDs in general. When a user namespace is created, a set of ID mappings is set up along with it. Those mappings translate in-namespace user and group IDs to their equivalents outside of the namespace. A user with ID 1000 may start a user namespace where 1000 in the wider world is mapped to 0 (root) inside the namespace; other IDs can be mapped as well. One result of this mapping is a restriction in the range of valid user and group IDs. In the initial namespace, any ID is considered valid by the kernel; inside a user namespace, instead, only IDs that have been explicitly mapped are valid. That is a significant semantic change in how these IDs are handled.
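For the curious, the sketch below shows how such a mapping is established from user space: the process creates a new user namespace, then writes a single "inside outside count" line to /proc/self/uid_map mapping in-namespace UID 0 to its own outside UID.

    /* Minimal sketch: become "root" in a new user namespace by mapping
     * the caller's outside UID to 0 inside.  Requires a kernel that
     * allows unprivileged user namespaces. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned int outer = geteuid();   /* UID as seen outside, e.g. 1000 */
        char map[64];
        int fd;

        if (unshare(CLONE_NEWUSER)) {     /* enter a new user namespace */
            perror("unshare");
            return 1;
        }
        /* Format is "inside outside count": map in-namespace UID 0 to
         * the single outside UID this process started with. */
        snprintf(map, sizeof(map), "0 %u 1", outer);
        fd = open("/proc/self/uid_map", O_WRONLY);
        if (fd < 0 || write(fd, map, strlen(map)) < 0) {
            perror("uid_map");
            return 1;
        }
        close(fd);
        printf("in-namespace uid: %u\n", (unsigned int)getuid()); /* prints 0 */
        return 0;
    }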
When a process within a user namespace accesses a filesystem mounted outside that namespace, its user and group IDs will be mapped accordingly before any access decisions are made. If the filesystem has been mounted within the namespace, though, that mapping should not happen. But how should the kernel respond if that filesystem contains user and group IDs that are not considered valid within the namespace? The answer, in general, is that those values are mapped to the special INVALID_UID and INVALID_GID values, both of which are defined to be -1. That change prevents unknown IDs within the namespace from creating confusion outside the namespace, but it has a few hazards of its own.
For example, the "protected symlinks" mechanism, if enabled, prevents a process from following a symbolic link if the user ID of the link owner differs from that of the directory in which the link is found. This restriction (along with a couple of others) is meant to thwart temporary file vulnerabilities and related unpleasantness when a privileged process is fooled into opening a file it was not meant to open. But consider a situation where the link and containing-directory owners do not match, but where both are owned by IDs that are not valid in the namespace; they will both be mapped to INVALID_UID and will, within the namespace, appear to match. The function may_follow_link(), where these checks are made, has been duly changed to ensure that the IDs are valid.
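A simplified, hedged sketch of the shape of that check (the real logic in may_follow_link() is more involved, and the helper below is made up for illustration):

    /* With unmapped IDs, both owners would be INVALID_UID and would
     * wrongly compare as equal; validity is therefore checked before
     * the owners are compared at all. */
    #include <linux/uidgid.h>

    static bool symlink_follow_ok(kuid_t dir_uid, kuid_t link_uid, kuid_t fsuid)
    {
        if (!uid_valid(dir_uid) || !uid_valid(link_uid))
            return false;              /* never trust unmapped IDs */
        /* Follow if the follower owns the link, or the link owner
         * matches the directory owner. */
        return uid_eq(fsuid, link_uid) || uid_eq(dir_uid, link_uid);
    }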
Creating files with invalid IDs can only lead to trouble later on, so that must be prohibited. A more interesting problem arises when attempting to modify a file that already has an invalid ID. Any changes to a file will cause its inode structure to be written back to persistent storage, but the inode will contain the INVALID_UID and INVALID_GID values, which should not be written back. The solution adopted here is to simply disallow modification of files with invalid IDs altogether, rather than try to fix all the ways that a corrupt inode might be written. So such files will be read-only on filesystems mounted within a user namespace, even if their permissions would allow modification. There is one exception planned, though it has not been implemented yet: changing the IDs to valid values will be allowed; doing so will potentially make the file writable.
Not done yet
The list of potential problems does not end there. Disk quotas must be handled properly and not be confused by invalid IDs. The integrity management subsystem must be taught to not be fooled by filesystems mounted in user namespaces. Access-control lists must be checked to ensure that they do not contain invalid IDs. And so on, including, presumably, some special cases that nobody has thought about. For this reason, Eric Biederman, who took over the patch set from Seth Forshee and made it suitable for merging, said in the pull request: "I am not considering even obvious relaxations of permission checks until it is clear there are no more corner cases that need to be locked down and handled generically."

Finding all of those cases may take a while yet, and it may take even longer for developers to convince themselves that no traps remain.
But there is another aspect to the problem that was extensively discussed in 2015, but which has not come up at all this time around. Most Linux filesystems are simply not designed to be robust in the face of deliberately hostile on-disk filesystem images. As long as mounting a filesystem has been a privileged operation, that has not normally been a problem; getting an administrator to mount a random ext4 or XFS filesystem image can take quite a bit of social engineering. Once filesystem mounts are possible within user namespaces, that situation changes: any attacker with a local account can mount malicious filesystems. They will also be able to change the underlying filesystem at any time, further widening the range of potential mischief. Said mischief undoubtedly includes a number of opportunities to fully compromise the kernel.
Hardening filesystems against attacks from below is not a simple matter; such hardening has never been a design goal for the filesystems in common use. Doing so in a way that preserves performance will be even more difficult. So it is not really clear when it might truly be safe to allow unprivileged users to mount arbitrary filesystems. Even after the kernel reaches a point where this capability might be enabled, one might well expect distributors and administrators to keep it disabled for a long time. There is clear value in this feature, but it may be some time before it's made widely available.
Bus1: a new Linux interprocess communication proposal
Anyone who has been paying attention to Linux kernel development in recent years would be aware that IPC — interprocess communication — is not a solved problem. There are certainly many partial solutions, from pipes and signals, through sockets and shared memory, to more special-purpose solutions like Cross Memory Attach and Android's binder. But it seems there are still some use cases that aren't fully addressed by current solutions, leading to new solutions being occasionally proposed to try to meet those needs. The latest proposal is called "bus1". While that isn't a particularly interesting name, it could be much worse: it could have been named after a town in Massachusetts like Wayland, Dracut, Plymouth, and others.
The focus for bus1 is much the same as that for recent "kdbus" proposals — to provide kernel support for D-Bus — and the implementation has strong similarities with binder, which occupies a similar place in the IPC space. The primary concerns seem to be low-overhead message passing between established peers and multicast, which are useful for remote procedure calls and distributing status updates, respectively.
David Herrmann announced bus1 on the kernel-summit mailing list with the goal of having a session for discussing the new functionality at the upcoming summit in November. The announcement was accompanied by a link to the current code and documentation, which is pleasingly well organized and thorough. Any imperfections are easily explained by the fact that bus1 is under active development; they did not interfere with my ability to form the following picture of the structure, strengths, and, occasionally, weaknesses of bus1.
Peers, nodes, and handles
All communication mediated by bus1 travels between "peers", where a peer is a kernel abstraction somewhat like a socket. A process accesses a peer through a file descriptor which, in the current implementation, is obtained by opening the character special device /dev/bus1. Tom Gundersen, one of the authors, acknowledged that having a dedicated system call to create a peer, and to perform the various other operations that currently use ioctl(), might make more sense for eventual upstream submission. Devices and ioctl() have been used so far because they make out-of-tree development easier.
A peer is not directly addressable, but it can hold an arbitrary number of addressable objects known as "nodes". Bus1 maintains minimal state about a node beyond a reference counter and linkage into other data structures; it serves only as a rendezvous point. When a message is sent to a node, it is delivered to the peer that owns the node, and the peer is told "this message was sent to that node". The application managing the node can interpret that as a particular object, or as a particular service, or whatever is appropriate.
Nodes themselves do not have externally visible names, but are identified by "handles" that are a little bit like file descriptors, a little bit like the "watch descriptors" used by inotify, and a lot like the object descriptors used by binder. As such, a handle acts like a "capability". If a peer holds a handle that identifies a particular node, then it has implicit permission to send messages to that node, and hence to whatever object some application associates with that node. If, instead, it doesn't have a handle for a particular node, the only way to get one is to ask another node to provide it, and that is where permission checking will happen.
A handle is a 64-bit number (allocated by bus1) that only has meaning in the context of a particular peer, in the same way that a file descriptor only has meaning in the context of the process that owns it. A handle refers to a node that belongs to some peer; it might be the same peer that owns the handle, or it might be a different peer. In the latter case, the handle effectively acts as a link between two peers, though neither can directly determine any details of the other.
To create a new node, an application performs any of the various ioctl() operations that accept a handle, passing the special reserved handle number of "3". This causes a new node to be allocated; the handle number for that node will replace the reserved number in the argument to ioctl().
When a peer is created by opening /dev/bus1, it has no nodes and no handles. Nodes local to the peer can be created on demand as described above, but an extra step is required to get a handle for a node in another peer. The BUS1_CMD_HANDLE_TRANSFER ioctl() can be used for this. This ioctl() is called on the file descriptor for one peer and is given the file descriptor for the destination peer along with the handle to transfer. A new handle will be created for the second peer referring to the node pointed to by the original handle.
The question of how to arrange for a process to have two different peer-descriptors so it can call BUS1_CMD_HANDLE_TRANSFER is not addressed by bus1. One option would be to have a daemon listening on a Unix-domain socket. An application that wants to communicate on that daemon's bus would open /dev/bus1 and send the resulting file descriptor over a socket to the daemon. The daemon would then perform the handle transfer and tell the application what the new handle is.
This mechanism could be repeated to give the first peer a handle that can be used to reach the second, but once there is this one point of communication, it is possibly easier to use the more general approach of message passing.
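The file-descriptor hand-off in that handshake is ordinary SCM_RIGHTS passing over a Unix-domain socket; only the bus1-specific ioctl() is new. A minimal sketch of the client side (the socket and the protocol around it are hypothetical):

    /* Sketch: hand an open /dev/bus1 peer descriptor to a daemon over
     * a Unix-domain socket, so the daemon can perform the
     * BUS1_CMD_HANDLE_TRANSFER on the caller's behalf. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int send_peer_fd(int sock, int peer_fd)
    {
        char dummy = 'p';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;            /* pass a file descriptor */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &peer_fd, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }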
Messages, queues, and pools
The core functionality of any IPC mechanism is to pass messages. Bus1 allows a message to be sent via a peer to a list of nodes by specifying a list of destination handles. This message has three application-controlled segments and a fourth segment that is imposed by bus1.
The three application-controlled segments all contain resources to be passed to the message recipient(s). They are: a block of uninterpreted data that can be assembled from multiple locations as is done by writev(2), a list of handles, and a list of file descriptors. The handles and file descriptors are mapped to references to internal data structures when a message is sent, and mapped back to handles and descriptors relevant to the receiver when it is received. In order to give an application control over its open files, the receiving application can request that files not be mapped to local file descriptors. This is particularly useful when combined with the "peek" version of message reception which reports the content of the message without removing it from the incoming queue.
If a peer is sent a handle for a node that it already has a handle for, then the handle it is given will be exactly the same as the handle it already has. This means that handles can be compared for equality by just comparing the 64-bit values. This contrasts with file descriptors in that when a file descriptor is received, a new local descriptor is allocated even if the process already has the same file open on a different descriptor.
The fourth segment of a message identifies the sender of the message; it contains the sender's process, thread, user, and group IDs. Each of these numbers is mapped appropriately if the message travels between namespaces. These details are similar to those contained in the SCM_CREDENTIALS message that can be passed over a Unix-domain socket, but with an important difference: when using SCM_CREDENTIALS, the sending process provides the credentials and the kernel validates them; with bus1, instead, the sending process has no control at all. This means that if the sending process is running a setuid program and has different real and effective user IDs, it cannot choose which one to send. Bus1 currently insists on always sending the real user and group IDs.
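For comparison, this is the structure a sender fills in itself when using the Unix-domain socket mechanism mentioned above; the kernel merely verifies that the claimed values are ones the caller may use, whereas bus1 stamps the message on its own.

    /* The credentials block a sender supplies with SCM_CREDENTIALS.
     * A setuid program could legitimately put geteuid() here instead,
     * a choice bus1 removes by always recording the real IDs itself. */
    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <unistd.h>

    static void fill_creds(struct ucred *creds)
    {
        creds->pid = getpid();
        creds->uid = getuid();   /* or geteuid(): the sender decides */
        creds->gid = getgid();
        /* The structure is then attached to sendmsg() as an
         * SCM_CREDENTIALS control message, and the kernel checks that
         * the claimed values are ones the caller is allowed to use. */
    }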
Note that the identification of the sender does not include a handle by which a reply might be sent. If the sender wants a reply, it must explicitly pass a handle as part of the message; there is no implicit return path.
As mentioned, a list of recipient handles can be given, and the message will be delivered to all of the recipients, if possible. This provides a form of multicast, though there is no native support for a publish/subscribe multicast arrangement where multicast messages are sent to a well-known address and recipients can indicate which addresses they are listening on. If publish/subscribe is needed, it would have to be layered on top with some application maintaining subscription lists and sending messages to those lists as required.
Each peer has a "pool" of memory, currently limited to 256MB, in which incoming messages are placed. When a message is sent to a particular peer, space is allocated in that peer's pool and the data segment of the message is copied directly from the sender's memory into the recipient's pool. The recipient can map that pool into virtual memory and will get read-only access to the data. When it has finished with the data it can tell bus1 that section (or "slice") of the pool is free for reuse.
The message-transfer process copies the data to the pool, reserves a little extra space in the pool to store the translated handles and file descriptors, and adds a fixed-sized structure with other message details to a per-peer queue. Once there is a message on the queue, the peer's file descriptor will report to poll() or select() that it is readable; an ioctl() request can then be made to find out where in the pool the message is and to collect other details like the sender's credentials. It is only when this request is made that the handles and file descriptors are translated and their details added to the pool.
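Putting the receive side together, a hedged sketch of how a recipient might get at the data, assuming (as the description above suggests) that the pool is exposed read-only through mmap() on the peer's file descriptor; the mapping size and the way the slice offset is obtained are stand-ins for bus1 specifics.

    /* Hedged sketch of receive-side pool access. */
    #include <stddef.h>
    #include <sys/mman.h>

    #define POOL_SIZE (256u << 20)    /* pools are capped at 256MB */

    static const void *map_pool(int peer_fd)
    {
        void *pool = mmap(NULL, POOL_SIZE, PROT_READ, MAP_SHARED,
                          peer_fd, 0);

        /* A received message is then read at pool + slice_offset, where
         * slice_offset is reported by the message-reception ioctl(), and
         * the slice is released back to bus1 once processed. */
        return pool == MAP_FAILED ? NULL : pool;
    }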
In order to avoid denial-of-service attacks, each user has quotas limiting the amount of space they can consume in pools by sending messages and the number of handles and file descriptors they can have sent that haven't been processed yet. The default quota on space is 256MB in a total of at most 16,383 messages. As soon as the receiving application accepts the message, whether it releases the pool space or not, the charge against the sender is removed.
Total message ordering
An important property that the bus1 developers put some effort into providing is a global ordering of messages. This doesn't mean that every message has a unique sequence number, but instead provides semantics that are just as good for practical purposes, without needing any global synchronization.
The particular properties that are ensured are "consistency" (if two peers receive the same two messages, they will both see them in the same order), and "causality" (if there is any chance of causality between two messages, then the message relating to the cause will be certain to arrive before the message relating to the result). This is achieved using local clocks at each peer that are synchronized in a manner similar to that used for Lamport clocks. For the fine details it is best to read the documentation section on message ordering.
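The core of such a scheme is small; the sketch below shows the Lamport-style update rules that provide the causality property (illustrative only; bus1's internal details differ).

    /* Each peer keeps a local logical clock.  Sending ticks the clock
     * forward and stamps the message; receiving fast-forwards the
     * clock past the sender's stamp, so any later message from the
     * receiver is ordered after the one that might have caused it. */
    #include <stdint.h>

    struct peer_clock {
        uint64_t now;
    };

    /* Stamp an outgoing message. */
    static uint64_t clock_send(struct peer_clock *c)
    {
        return ++c->now;
    }

    /* Account for an incoming message's timestamp. */
    static void clock_receive(struct peer_clock *c, uint64_t stamp)
    {
        if (stamp > c->now)
            c->now = stamp;
        c->now++;
    }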
Like binder, only better?
As I absorbed all the details about bus1, I was struck by its similarities to Android's binder. The message structure is similar, the handles are similar, and the use of a pool to receive messages is similar. Though I haven't covered them here, bus1 delivers node destruction messages when a node is destroyed in a similar manner to binder. Given this, a useful perspective might be provided by looking for differences.
The most obvious difference is that binder strongly unifies the concept of a peer with that of a process. In binder, the message-receipt pool is reserved in one process's address space, and each object ID, the equivalent of a bus1 handle, is a per-process identifier. Having all these concepts connected with a file descriptor in bus1, instead of with a process, is a clear improvement in flexibility and simplicity.
Binder has an internal distinction between requests and replies, and a dedicated send/wait/receive operation so that a complete remote procedure call can be effected in a single system call. Bus1 doesn't have this and so would require separate send, wait, and receive steps. This may seem like a minor optimization, but there is an important underlying benefit that this brings to binder.
Unifying all the steps into a single operation provides binder with the concept of a transaction; the message and reply can be closely associated into a single abstraction. If the recipient of the message needs to perform other IPC calls as part of handling the message, binder can see those as part of the same transaction, which will continue until the reply comes back to the originating process. Since all of this can be identified as a single transaction, binder is able to temporarily elevate the priority of every process involved to match the priority of the calling process, effectively allowing priority inheritance across IPC calls. The introduction to bus1 that was posted to the kernel-summit list identified priority inheritance as an important requirement for an IPC system, but the current code and documentation don't give any hint about how that will be implemented. Until we can see the design for priority inheritance, we cannot know if discarding the transaction concept of binder is a good simplification, or a bad loss of functionality.
Any IPC mechanism requires some sort of shared namespace for actors to find each other. Both binder and bus1 largely leave this to other layers, though in slightly different ways. In binder there is a single well-known object ID — the number "0". Only a single privileged process can receive messages sent to "0". An obvious use of this would be to support registration and lookup in a global namespace. The process listening on ID 0 would be a location broker that checked the privileges of any processes registering a name, and then would direct any request for that name to the associated object. Bus1 doesn't even have this "ID 0". Whatever handshake is used to allow BUS1_CMD_HANDLE_TRANSFER to be called must also make sure that each new client knows how to contact any broker that it might need.
Finally, binder has nothing like the global message-ordering guarantees that bus1 provides, but bus1 has nothing like the thread-pool management that is built into binder. The importance of either of these cannot be known without considerable experience working in this space, so it might be a worthy topic to explore at the Kernel Summit. These differences, together with the lack of a request/reply distinction in bus1, are probably enough that it would be unwise to hope that bus1 might eventually replace binder in the kernel.
Summary
It is early days yet for bus1. Though it has been under development for at least eight months (based on Git history) and is based on even older ideas, there has been little public discussion. The follow-up comments on the kernel-summit email thread primarily involved people indicating their interest rather than commenting on the design. From my limited perspective, though, it is looking positive. The quality of the code and documentation is excellent. The design takes the best of binder, which is a practical success as a core part of the Android platform, and improves on it. And the development team appears to be motivated towards healthy, informed community discussion prior to any acceptance. The tea-leaves tell me there are good things in store for bus1.
Patches and updates
Kernel trees
Architecture-specific
Build system
Core kernel code
Device drivers
Device driver infrastructure
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous