LWN.net Weekly Edition for October 10, 2013
DRI3 and Present
The direct rendering infrastructure (DRI) allows applications to access the system's accelerated graphics hardware without going through the display server (e.g. X or Wayland). That interface has seen two revisions over the years (DRI1, DRI2) and is poised for another. Keith Packard described DRI3 (or DRI3000 as he likes to call it) at this year's X.Org Developers Conference (XDC), along with the status and plans for the related "Present" extension. Present is a way to get new window content from a pixmap to the screen in a VBLANK-synchronized way, as Packard put it in August.
Packard noted that he had rewritten his presentation three times over the past week or so due to conversations he had at the Linux Plumbers Conference, LinuxCon North America, and XDC. There will likely be more revisions to DRI3 and Present based on the answers to some questions that he had for those in the audience, he said. But the Present extension, which comes from the verb "present", will be keeping that name, though he had looked in vain for something better.
Requirements
DRI3 is the "simplest extension I have ever written", he said, and it makes all of the direct rendering pain that came with DRI1 and DRI2 "just go away". It is designed to make the connection between dma-bufs and pixmaps. A dma-buf is a kernel DMA buffer, while a pixmap is an X rendering target. The extension "just links those two together" so that an application can draw into a dma-buf, hand it to the X server, and it becomes a pixmap with the same contents. Or the reverse.
DRI3 will also allow an application and display server to share synchronization objects. That way, the server can know when the application has finished writing to a buffer before it starts putting it on the screen. In current Intel graphics hardware, this is trivial because the kernel will serialize all access to dma-bufs, but that is not true of other graphics hardware. So there is a requirement to be able to share sync objects between applications and the server.
The third requirement for DRI3 is that it needs to tell the client which rendering device it should be talking to. With DRI2, a filename was passed to the client over the X protocol. The client opened the device and the "lovely DRI2 authentication dance" followed. DRI3 will do things differently, Packard said.
Status
"I think the DRI3 extension is done", he said, noting that there is nothing significant that he knows of left to be done in the extension itself. It has an X protocol in the server along with C bindings (XCB). There are no plans for an Xlib binding as it should all be hidden by the direct rendering libraries (e.g. Mesa). There is a Mesa DRI3 loader and a GLX API, so one can run Mesa with DRI3 today without any trouble. In fact, performance is a lot better because the Present extension is more efficient than DRI2 in swapping buffers. In addition, the Intel Mesa and 2D drivers are "all hooked up".
There are a few things still to be done, including an EGL API on the application side. He lamented the fact that the choice of window system API necessitates a different API for GL, so that he has to write another implementation in Mesa to bind the EGL API to DRI3/Present. "Thanks GL." It turns out that due to Adam Jackson's GLX rewrite, Packard won't have to write a GL loader for DRI3 for the X server. The server has yet another, separate binding to the window system APIs, Packard said, that really should switch to DRI3, but Jackson is removing all that code with his rewrite.
There is a need for more test cases as well. DRI3 passes all the tests he has now, but there are a number of OpenGL extensions that DRI3 will support (e.g. OML sync, OML swap) that have not been supported before. There are few tests in Piglit for those. There are three for OML sync, which pass for DRI3 (and fail for DRI2), he said. That is a "100% improvement", but "as usual, more testing is needed".
DRI3 extension details
In an "eye-chart slide" (slide 5 in his deck [PDF]), Packard listed the requests provided by the DRI3 extension. There are only four: Open, PixmapFromBuffer, BufferFromPixmap, and FenceFromFD. The Open request opens a direct rendering manager (DRM) device and prepares it for rendering. One can select a particular rendering provider if there is more than one available. The X server will pass back a file descriptor to the opened device. Currently that means the X server can do the authentication dance, but once DRM render nodes are available in the kernel, it can transparently switch to using those. There is no other information provided to the client beyond the file descriptor; there is a function in libdrm that takes a file descriptor and figures out the proper Mesa driver to load, so there was no need for additional information.
PixmapFromBuffer and BufferFromPixmap are symmetrical operations. For PixmapFromBuffer, the client creates a dma-buf and passes a file descriptor of the buffer to the X server. The server then creates a pixmap referencing that dma-buf. BufferFromPixmap is the reverse operation, where the server maps a pixmap DRM object to a dma-buf and sends the file descriptor of the dma-buf to the client.
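To make that flow concrete, here is a minimal sketch of a client using these requests through the XCB binding. The function names follow the DRI3 binding as it later shipped in libxcb; error handling is omitted, and the details should be read as illustrative rather than definitive:

    #include <stdlib.h>
    #include <xcb/xcb.h>
    #include <xcb/dri3.h>

    /* Open: ask the X server for a file descriptor to the rendering device */
    static int dri3_open_device(xcb_connection_t *c, xcb_window_t root)
    {
        xcb_dri3_open_cookie_t cookie = xcb_dri3_open(c, root, 0 /* provider */);
        xcb_dri3_open_reply_t *reply = xcb_dri3_open_reply(c, cookie, NULL);
        int fd = xcb_dri3_open_reply_fds(c, reply)[0];

        free(reply);
        return fd;    /* libdrm can select the right Mesa driver from this fd */
    }

    /* Elsewhere, given a dma-buf file descriptor: */
    /* PixmapFromBuffer: wrap an existing dma-buf fd in a server-side pixmap;
     * size, width, height, stride, depth, and bpp describe the buffer layout */
    xcb_pixmap_t pixmap = xcb_generate_id(c);
    xcb_dri3_pixmap_from_buffer(c, pixmap, root, size, width, height,
                                stride, depth, bpp, dmabuf_fd);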
The FenceFromFD request creates an XSyncFence object on the server based on the file descriptor sent by the client. That descriptor refers to a page of shared memory that contains a futex. The synchronization is only in one direction: clients can wait for the server to signal using the futex. It would be nice to be able to go both ways, but he doesn't know how to make the single-threaded X server wait for the futex without blocking everything else that it is doing. In addition, Intel graphics hardware doesn't need this synchronization mechanism, so Packard needs other vendors to "tell me how to make it actually useful".
Present details
The Present extension provides three requests (PresentPixmap, PresentNotifyMSC, and PresentSelectInput) and three events (PresentIdleNotify, PresentConfigureNotify, and PresentCompleteNotify). The PresentPixmap request takes a pixmap and makes it the contents of a window. PresentNotifyMSC allows requesting a notification when the current "media stream counter" (MSC)—a frame counter essentially—reaches a particular value, while PresentSelectInput chooses which Present events the client will receive.
Those events include a notification when a pixmap is free for reuse (PresentIdleNotify), when the window size changes (PresentConfigureNotify), or when the PresentPixmap operation has completed (PresentCompleteNotify). One would think PresentConfigureNotify would be redundant because the core protocol ConfigureNotify event gives the same indication. But there is no way for the Present extension to know whether the client has requested ConfigureNotify events itself, so that it can figure out whether to pass the core event along or not. Thus a new event.
There is a long list of parameters for PresentPixmap that appear on Packard's slide #7. It includes a serial number that is returned in a matching PresentCompleteNotify event, the valid and update areas of the pixmap (which are the regions with correct and changed pixels respectively), x and y offsets, an XSyncFence object, options, and so on. The interface supports VBLANK-synchronized updates for sub-window regions and it will allow page flips even for small updates. In addition, it separates the presentation completion event from the buffer idle event, which allows for more flexibility.
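For reference, a PresentPixmap call through the XCB binding looks roughly like the sketch below; the argument list mirrors the parameters just described (following the binding as it later stabilized in xcb's present.h), so the exact signature should be treated as illustrative:

    #include <xcb/present.h>

    xcb_present_pixmap(c, window, pixmap,
                       serial,                  /* echoed in PresentCompleteNotify */
                       XCB_NONE, XCB_NONE,      /* valid and update regions (XFixes) */
                       0, 0,                    /* x and y offsets */
                       XCB_NONE,                /* target CRTC */
                       XCB_NONE, XCB_NONE,      /* wait and idle sync fences */
                       XCB_PRESENT_OPTION_NONE, /* options */
                       0, 0, 0,                 /* target MSC, divisor, remainder */
                       0, NULL);                /* notifies */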
Discussion
After that, much of Packard's presentation took the form of a discussion of various features and how they might be implemented. For example, the media stream counters were something of an "adventure". The MSC is a monotonically increasing counter of the number of frames since power on, but that model doesn't necessarily work well for applications that can move between different monitors, for example. Suspend/resume, virtual terminal (VT) switches, and display power management signaling (DPMS) also add to the complexity.
In the current design, there is a counter for each window and those counters still run at 60Hz when the monitor is turned off. That's not ideal, he said, as the counters should slow down, but not stop because applications are not prepared to deal with stopped counters. He wondered how fast the fake counters should run. Jackson suggested deriving something from the kernel timer slack value, which was met with widespread approval.
Another discussion topic was DRM SyncFences, which are currently implemented using futexes. That may not be ideal as futexes are not select()/poll()-friendly. As mentioned earlier, the X server cannot wake up when a fence is signaled, so there may need to be a kernel API added to get a signal when the futex is poked, he said. Someone from the audience suggested using socketpair() instead, which he plans to investigate.
Currently, protocol version 1 of Present has been implemented (which is different than what he presented). There are XCB and Xlib bindings and it is in the Intel Mesa and 2D drivers. Packard said that he would love to see Present added to a non-Intel driver to prove (or disprove) the synchronization mechanism it provides.
To close, there was also some discussion of the Present Redirection feature, which would pass pixmaps for redirected windows directly to a compositor. The "simple plan" of forwarding PresentPixmap requests directly to the compositor is what he has been working on, but something more complicated is probably required. Redirection seems targeted at improving XWayland performance in particular.
The overall plan is to get DRI3 and Present into the X server 1.15 release, which is currently scheduled for late December.
[I would like to thank the X.Org Foundation for travel assistance to Portland for XDC.]
The Thing System
"The Internet of Things" is a tech buzzword that provokes a lot of eye-rolling (right up there with "infotainment"), but the concept that it describes is still an important one: the pervasive connectivity of simple devices like sensors and switches. That connectivity makes it possible to automate the monitoring and control of physical systems like houses, offices, factories, and the like—from any computing device that has access to the internet. Getting to that blissful automated future remains a challenge, however; one that a new project called The Thing System is addressing with free software.
Sensors and devices with built-in connectivity are widespread and affordable: wireless weather stations, remote-controllable light bulbs and switches, web-accessible thermostats, and so on. The hurdle in deploying these products in a "smart home" scenario is that rarely do any two speak the same protocol—much less provide any APIs that would allow them to interact with each other. In many cases, they do not even use the same bits of spectrum for signaling; it is only in recent years that WiFi and Bluetooth Low Energy have become the norm, replacing proprietary protocols over RF.
Due to the difficulty of merging products from a host of different vendors, most whole-home automation packages rely on a central server process that monitors the state of all of the devices, queues and processes events, and provides a way to create macros or scripts (activating one device based on the status of another, for example). But it is often at the scripting interface that such projects begin to inflict pain on the user. MisterHouse and Linux MCE, for example, both support a wide variety of hardware, but they are often criticized for the difficulty of defining home automation "scenes" (which is home automation slang for a collection of lighting or device settings to be switched on as a group).
Control all the things
The complexity problem is an area where the developers of The Thing System believe they can make significant strides. The project's mantra is that "things should work like magic", a notion that it defines in a fairly specific way. Server-side logic should recognize patterns and trigger events to solve a problem without requiring intervention from the user. The example provided on the wiki is that an environmental sensor would log carbon dioxide levels, and the master process would run the ventilation system's fans until the levels return to normal—without any user intervention whatsoever.
Clearly, such logic requires a detailed knowledge of the sensors and devices that are available in the building as well as a semantic understanding of what they can do—e.g., not just that there is (for example) a Nest thermostat, but that the thermostat activating the fan can lower the carbon dioxide level. The project defines a taxonomy of ten device types: climate, lighting, switch, media, presence, generic sensor, motive (that is, devices that implement some sort of physical motion, like raising or lowering blinds), wearable, indicator, and gateway.
Several of these categories are self-explanatory, although it is important to note that they may include both monitoring and control. For instance, the climate category includes weather sensors and thermostat controllers. Others are a bit more abstract: gateways are pseudo-devices that implement the Thing System APIs but connect to a remote Internet service in the background, and indicators are defined to be simple status lights.
Still, the device categories are detailed where it is appropriate; lighting devices can implement simple properties like brightness percentage (from zero to 100%), less-common properties like RGB color (a feature found in the Philips Hue smart-bulb product line), and even minutiae like the number of milliseconds it takes to transition from one state to another.
The Thing System's master process is called the steward; it runs on Node.js and is purported to be lightweight enough to function comfortably on a small ARM system like the Raspberry Pi or BeagleBone Black. When it starts up, the steward reads in a JavaScript module for each type of device that is present in the building. There is a sizable list of supported devices at present, although not all implement every API. After loading in the necessary modules, the steward then attempts to discover each device actually present on the network. It does so via Simple Service Discovery Protocol (SSDP), scanning for Bluetooth LE devices, looking at attached USB devices, and scanning for devices offering services over known TCP ports.
The steward's basic interface is a web-based client; it presents a grid of buttons showing each discovered device, and for the well-supported devices there is usually a simple web control panel that the user can bring up by clicking on the device's button. The project's documentation consistently refers to these interfaces as "simple clients," however—meaning, for example, that they let the user switch lamps on and off or adjust the light's color, but that they do not implement magic.
The magic house
Sadly, The Thing System's magic seems to still be a ways from making its public debut. However, the project has developed several interesting pieces of the puzzle along the way. First, there is the Simple Thing Protocol, which defines a WebSockets interface for the steward to talk to programmable devices that (out of the box) offer only a vendor-specific API. That makes supporting new hardware a bit easier, and it establishes a generic interface with which it is significantly easier to define scenes and events. In contrast, scripting in other home automation systems like MisterHouse typically requires knowledge of low-level configuration details for each device.
The project also defines the Thing Sensor Report Protocol, which is a simpler, unidirectional protocol for read-only sensors to send data to the steward. Here again, most of the other well-known home-automation projects add direct support for reading data for each sensor type. This is difficult to support in the long run, particularly when one considers that many simple sensors (say, thermometers or ambient light sensors) change configuration every time the manufacturer releases an update. Many home automation enthusiasts simply build such sensors themselves from off-the-shelf electronic components or Arduino-like hardware. Either way, if the scene descriptions and scripts assume details about the sensors, the result is a hard-to-maintain configuration that eventually falls into disuse.
Finally, The Thing System's scripting syntax is abstract enough to be understandable to novices, without being too vague. The key notion is activities, which bind a status reading from one device to an action performed on another device. For example, when a thermometer's temperature property exceeds a pre-set value, the air conditioning device is sent the on command. This temperature event might be defined as:

    { path      : '/api/v1/event/create/0123456789abcdef'
    , requestID : '1'
    , name      : 'gettin too hot'
    , actor     : 'climate/1'
    , observe   : 'comfortthreshold'
    , parameter : 'temperature'
    }
It is not quite as simple as If This Then That, but by comparison to the competition, it is straightforward.
The Thing System will probably be more interesting to revisit when more advanced client applications (with lots of magic) become available, but the overall design is good. Being web-based may be a turn-off to some, but it is clearly the most future-proof way to offer support for commercial hardware. Like it or not, most of the devices that manufacturers release in the next few years will be accessible over HTTP or (hopefully) HTTPS. Similarly, whether one loves or hates JavaScript, the fact remains that it is far easier to find JavaScript programmers who might contribute device drivers or clients to the project than it is to find Perl programmers willing to do the same (as MisterHouse requires).
Ultimately, what makes a home automation system viable is its ease of use and its ability to support the hardware that is actually available in stores today. The Thing System looks to be making progress on both of those fronts.
Shumway lands in Firefox
Mozilla has merged the code for Shumway, its JavaScript-based Flash runtime, into the latest builds of Firefox. The feature must be switched on manually, but it still marks a milestone for a project that Mozilla initially described as an R&D venture. Shumway is still a work in progress, but it brings Firefox users one step closer to eliminating a plugin that few will miss.
We first looked at Shumway in November 2012, shortly after it first went public. The goal of the project has always been to implement a "web-native" runtime for Flash—that is, a virtual machine to run Shockwave/Flash (SWF) files not through a browser plugin, but by translating SWF content into standard HTML, CSS, and JavaScript. In 2012, Shumway was an effort branded with the "Mozilla Research" moniker (though whether that designation makes it an official project or a personal investigation is not clear), and the initial announcement came with the caveat that Shumway was "very experimental". The project's GitHub README still says that "integration with Firefox is a possibility if the experiment proves successful", although it now seems that Mozilla is moving forward from the research stage to testing in the wild.
Mozilla certainly has valid reasons for pursuing a project like Shumway: no matter how much people claim that they hate Flash, there remains a lot of Flash content on the web, and several of Mozilla's target platforms will soon have no official Flash implementation. For starters, Adobe announced its intention to drop Linux support from its traditional Netscape Plugin API (NPAPI) plugin. Google, however, is pressing forward with its own Flash support for Chrome on Linux, which would leave Firefox with a noticeable feature gap by comparison.
Then, too, there is the mobile device sector to consider: Adobe dropped Flash for mobile browsers in 2011, and so far Google's Chrome team says it has no plans to implement support for the format on Android. If Mozilla were able to deliver reasonable performance in a plugin-less Flash VM for the Android builds of Firefox, then there would certainly be an interested audience. There are already third-party browsers for Android that do support Flash, and some users are evidently willing to jump through hoops to install the old, 2011-era plugin on today's Android releases.
But regardless of whether or not Flash support is vital to any person's web browsing experience, Shumway being made available in Firefox itself is news in its own right. Previously, the Shumway VM was installable as a Firefox extension [XPI], although interested users could also see an online demo of the code running in a test page. The Shumway code was merged on October 1. Builds are available from the Firefox Nightly channel; if all goes according to schedule, the feature will arrive for the general public in Firefox 27 in January 2014.
In order to test the nightly build's Shumway implementation, users must open about:config and change the shumway.disabled preference to false. With Shumway activated, users can test drive Flash-enabled sites or open SWF files. There are five demos linked to at the project's "Are we Flash yet?" page, although the links actually attempt to open the files in the "Shumway Inspector" debugging tool—which, in my tests, reports all of the files as corrupt and renders none of them. It is straightforward enough to download the files from the examples directory of the Shumway GitHub repository and open them directly, however. The "pac" demos are a simple Pac-Man character that can be navigated with the keyboard arrow keys; pac.swf is implemented in ActionScript 2 (Flash's scripting language) and pac3.swf in ActionScript 3. The "racing" demos are an animated car-racing video game (again available for both ActionScript 2 and 3); there is also an MP3 player demo and a 2D falling-boxes demo that shows off a simple physics engine. Together, these and the other demos in the Shumway GitHub repository demonstrate basic interactivity, media playback, and vector graphics.
It can also be instructive to open up some of the SWF demos and tests from the other free software Flash players Gnash and Lightspark. Shumway is a more ambitious project in some respects than either Lightspark or Gnash, in that it targets support for both ActionScript 2 and ActionScript 3. Adobe introduced ActionScript 3 with Flash 9, and with it incorporated a new virtual machine model. Neither of the other free software projects has had the time or the resources to tackle both; Gnash implements the older standard and Lightspark the newer. Shumway's ActionScript support is built on the Tamarin engine, the ActionScript virtual machine that Adobe contributed to Mozilla in 2006. The non-ActionScript contents of a Flash file are parsed by Shumway and translated to standard HTML5 elements.
That said, Shumway is currently considerably slower than the other two projects. In the Firefox nightly, it runs its own demos, but not at anything approaching a usable speed (for instance, there is significant lag between key presses in the racing game and actual movement on screen).
Performance aside, the other important factor in replacing Adobe's binary Flash plugin is correctness—whether or not Shumway can render arbitrary Flash content found in the wild in a usable fashion. This is a tricky question, the answer to which varies considerably based on each individual experiment. I tested Shumway against a random assortment of free Flash games, several business web sites, and an interactive house-painting visualizer from a home-improvement store (don't ask...). To my surprise, Shumway had no trouble with the home-improvement app and played most of the games, albeit slowly enough to prevent me from becoming competitive, and with occasional visual artifacts. YouTube, however, did not work—which might also be attributable to other issues, such as the video codecs used. The canonical response to that is that YouTube is best used in HTML5 <video> mode, of course, putting it outside of Shumway's purview entirely.
The project has made considerable progress since we last looked at it in 2012, but there is also a lot of work remaining. For starters, despite Adobe's periodic insistence that the Flash specification is open, there are evidently a significant number of undocumented APIs. Flash is also, to put it mildly, a large and complex format that incorporates a lot of (literally) moving parts; the testing process would be long under even the best circumstances. But the same argument could be made for PDF, which Firefox has been interpreting with a built-in viewer since Firefox 19. That project, too, started off as an in-house experiment at Mozilla Labs (which does not appear to be synonymous with Mozilla Research).
Perhaps the success of PDF.js bodes well for the future of Shumway—after all, a full year after Shumway's initial debut, the world does not seem to be any closer to reaching the much-ballyhooed end of the Flash era. Then again, if Flash files can be rendered into normal web content without a buggy, proprietary plugin, perhaps its continued existence is not that big of a problem.
Security
Two LSS talks
The 2013 Linux Security Summit (LSS) had many different talks over its first day. It started with Ted Ts'o's keynote, then had several "refereed" talks (two of which were covered here: Embedded integrity and Kernel ASLR). The day ended with a selection of short topics, which were mostly updates on various security subsystems (SELinux, Smack, and Integrity). Unfortunately, there isn't enough time to write all of them up, but we will complete our LSS coverage with reports from two kernel-related talks.
LSM composition
Composing (or stacking) Linux Security Modules (LSMs) has been a perpetual topic in kernel security circles. We have looked at the problem and solutions many times over the years (most recently in August). Casey Schaufler has been working on a solution to the problem over the last few years. He presented the problem and his solution to the assembled security developers looking for guidance on where to go from here.
The problem in a nutshell is that there can be only one LSM active on any given boot of the kernel. But there are multiple use cases for having more than one active. Many think that certain combinations (e.g. SELinux and Smack) do not make sense, but there are reports of people who want to be able to combine arbitrary LSMs. In addition, only allowing a single LSM restricts systems with multiple containers to only using one security model, when it might be desirable to support all of them for use in different containers.
There is also the problem of special-purpose LSMs. Whenever someone brings up an idea for a new way to tighten security in the kernel core, they are almost inevitably pointed in the direction of the LSM API. But, since many distributions already ship with the single LSM slot filled, those smaller LSMs are unlikely to be used. Yama started off as a special-purpose LSM, but it was eventually manually—unconditionally—stacked so that it could coexist. That was meant as a temporary change until stacking was added to the kernel, but without a stacking solution, its temporary nature is somewhat in question.
Schaufler's proposal still follows the broad outlines we described a year ago. It has added the ability to stack any and all of the existing LSMs, which was not working at that point. It has also added a user-space interface that has separate directories for each LSM under /proc/PID/attr. It now tries to deal with labeled networking by restricting the different mechanisms (NetLabel, XFRM, and secmark) each to a single LSM per boot. The first LSM that asks for a given network labeling scheme "gets it". The details are available in Schaufler's slides [PDF] as well as the patches. But the point of his talk was mostly to get feedback and ideas on whether it was an idea worth moving forward with.
Some were not particularly happy with the user-space interface and/or the networking changes, believing that they added too much complexity. Others seemed skeptical that stacking was ever a sensible thing to do. But several folks spoke up from the audience about how they currently use multiple LSMs and have to carry out-of-tree patches to make it all work. In addition, the standard stacking arguments were restated. There is a clear demand for the feature—whether that is enough to overcome the objections remains to be seen.
In a post-presentation discussion, Paul Moore and Schaufler explored the possibility of pushing forward the stacking core, while leaving some of the "messier" pieces (like the user-space interface and labeled networking handling) as later additions. Most or all of the stated use cases would be fully served by a "simplified stacking" solution. The other pieces could continue to be worked on, or possibly dropped if there were no real users for them. That sounded like the approach that will be tried next, but, so far, patches have not yet appeared.
Core kernel anti-patterns
There are lots of known "anti-patterns" for kernel code, like busy waiting or hardcoding values, but security anti-patterns are not as well-known, Kees Cook said at the beginning of his talk. He and others have been spending some time trying to find "obvious" bugs in the kernel, some of which fall into the anti-pattern category. His talk was meant to document some of them to hopefully avoid them in the future.
It is frustrating that he can so easily find security holes in the kernel, he said. In addition, Dan Carpenter has been using smatch to find more examples of these anti-patterns once someone has found the first instance. Cook suggested that perhaps checkpatch.pl could be extended to catch some of this bad code long before it ever reached the kernel. He also suggested that kernel developers just go look for other examples of "something ugly" when they find such bugs—surprisingly often they will find many more instances.
Format strings are one source of these errors. For example:
    printk(buffer);

If the user can influence what goes into the buffer, they can put format specifiers into it, which can cause havoc. Fixing the problem is as easy as:

    printk("%s", buffer);

GCC can be used to help find these kinds of problems, using the -Wformat and -Wformat-security warning/error flags, but it is, unfortunately, "dumb" about const char *, so it will complain even for static buffers that are not exploitable.
A related problem is the addition of the "%n" format specifier, which writes the number of characters written to an address that is passed as an argument on the stack. It was not added to the kernel until 2009 and is only used for padding calculations in procfs output. But it is the format specifier of choice for those performing format string attacks. He would like to see support for that specifier removed entirely: "I don't care about prettiness if it leaves %n as an attack vector."
String manipulation is another area with lots of low-hanging fruit. He noted that strncpy() is generally a safer call than some others (e.g. strcpy()), but you have to check the length of the destination, not the source. Code like:

    strncpy(dest, src, strlen(src));

can sometimes be found, and it will leave the dest string without NUL termination. He suggested that when purposely leaving the destination unterminated, one should use memcpy() to make the intent clear.
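When the destination really should be a terminated C string, the rule he described looks like this minimal sketch (dest being a hypothetical fixed-size buffer):

    char dest[64];

    strncpy(dest, src, sizeof(dest) - 1);   /* bound by the destination... */
    dest[sizeof(dest) - 1] = '\0';          /* ...and terminate explicitly */

The kernel's strlcpy() helper encodes the same destination-bounded pattern in a single call.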
Another problem that is fairly easy to find is unchecked copy_*_user() calls. The return from those is the number of bytes not copied, which typically indicates some kind of error. So calling those routines without checking the return value can lead to security holes. Various graphics drivers are currently guilty, he said.
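A minimal sketch of the checked pattern (kbuf, ubuf, and len are hypothetical names):

    if (copy_from_user(kbuf, ubuf, len))
        return -EFAULT;   /* a non-zero return means some bytes were not copied */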
Reading from the same user-space location twice can lead to a race condition where the value changes between the two reads. It is a hard race to win, but still a problem. This often happens when the first part of a structure being read from user space is the length of the data. The length is read in, the structure is allocated, then the whole thing (length and all) is read into the new structure. If the length changes between the reads, it can lead to problems. He has found this anti-pattern in both the kernel and U-Boot.
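Here is a sketch of the buggy shape he described, with hypothetical names; the fix is to re-validate (or simply overwrite) the copied length after the second read:

    struct req {
        u32 len;
        u8  data[];
    };

    u32 len;
    struct req *req;

    if (get_user(len, &ureq->len))          /* first read of the length */
        return -EFAULT;
    req = kmalloc(sizeof(*req) + len, GFP_KERNEL);
    if (!req)
        return -ENOMEM;
    if (copy_from_user(req, ureq, sizeof(*req) + len)) {   /* second read */
        kfree(req);
        return -EFAULT;
    }
    req->len = len;   /* user space may have changed it between the reads */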
A problem similar to the double-read occurs in drivers for unusual devices. USB human interface devices (HID) have a complex way of describing the data being delivered. In a week's time, he found 12 CVEs in that code using malicious hardware. He verified each using a Facedancer, a software-defined USB device, which allows him to write a Python script that acts like a USB device. In the future, he plans to look for problems in the mass storage and webcam USB drivers.
Cook said these kinds of bugs are an indication that the "many eyes" theory is failing in some areas. He knows this because he keeps finding the same kinds of bugs whenever he has time to look. There are tools that could help, including stronger GCC default flags and using the GCC plugins from the PaX project. Coccinelle and smatch are also useful. It is important that we get more proactive, he said, and keep these anti-patterns from ever getting into the kernel to begin with.
[I would like to thank LWN subscribers for travel assistance to New Orleans for LSS.]
Brief items
Security quotes of the week
The people who have been operating these surveillance systems should be ashamed of their work, and those who have been overseeing the operation of these systems should be ashamed of themselves. We need to better understand the scope of the damage done to our global infrastructure so we can repair it if we have any hope of avoiding a complete surveillance state in the future. Getting the technical details of these compromises in the hands of the public is one step on the path toward a healthier society.
Attacking Tor: how the NSA targets users' online anonymity (The Guardian)
Writing at The Guardian, Bruce Schneier explains in his latest Edward Snowden–related piece that the US National Security Agency (NSA) had tried unsuccessfully to mount an attack against the Tor network, in hopes of bypassing the service's anonymity protections. Nevertheless, the NSA is still able to identify Tor traffic and track individual Tor users (despite not knowing their identities), which can lead to further surveillance. "After identifying an individual Tor user on the internet, the NSA uses its network of secret internet servers to redirect those users to another set of secret internet servers, with the codename FoxAcid, to infect the user's computer. FoxAcid is an NSA system designed to act as a matchmaker between potential targets and attacks developed by the NSA, giving the agency opportunity to launch prepared attacks against their systems." By targeting a Tor user, the agency could then leverage attacks like browser exploits to get into the user's system; nevertheless, so far the design of Tor itself seems to be functioning as planned.
New vulnerabilities
aircrack-ng: code execution
Package(s): aircrack-ng
CVE #(s): CVE-2010-1159
Created: October 7, 2013
Updated: October 18, 2013
Description: From the Gentoo advisory:
A buffer overflow vulnerability has been discovered in Aircrack-ng. A remote attacker could entice a user to open a specially crafted dump file using Aircrack-ng, possibly resulting in execution of arbitrary code with the privileges of the process or a Denial of Service condition.
kfreebsd-9: multiple vulnerabilities
Package(s): kfreebsd-9
CVE #(s): CVE-2013-5691 CVE-2013-5710
Created: October 8, 2013
Updated: October 9, 2013
Description: From the CVE entries:
The (1) IPv6 and (2) ATM ioctl request handlers in the kernel in FreeBSD 8.3 through 9.2-STABLE do not validate SIOCSIFADDR, SIOCSIFBRDADDR, SIOCSIFDSTADDR, and SIOCSIFNETMASK requests, which allows local users to perform link-layer actions, cause a denial of service (panic), or possibly gain privileges via a crafted application. (CVE-2013-5691)
The nullfs implementation in sys/fs/nullfs/null_vnops.c in the kernel in FreeBSD 8.3 through 9.2 allows local users with certain permissions to bypass access restrictions via a hardlink in a nullfs instance to a file in a different instance. (CVE-2013-5710)
nginx: code execution
Package(s): nginx
CVE #(s): CVE-2013-2028
Created: October 7, 2013
Updated: October 9, 2013
Description: From the CVE entry:
The ngx_http_parse_chunked function in http/ngx_http_parse.c in nginx 1.3.9 through 1.4.0 allows remote attackers to cause a denial of service (crash) and execute arbitrary code via a chunked Transfer-Encoding request with a large chunk size, which triggers an integer signedness error and a stack-based buffer overflow.
poppler: denial of service
Package(s): poppler
CVE #(s): CVE-2013-1789
Created: October 7, 2013
Updated: October 9, 2013
Description: From the CVE entry:
splash/Splash.cc in poppler before 0.22.1 allows context-dependent attackers to cause a denial of service (NULL pointer dereference and crash) via vectors related to the (1) Splash::arbitraryTransformMask, (2) Splash::blitMask, and (3) Splash::scaleMaskYuXu functions.
rubygems: denial of service
Package(s): rubygems
CVE #(s): CVE-2013-4363
Created: October 4, 2013
Updated: October 9, 2013
Description: From the Fedora advisory:
Previously a security flow was found on rubygems for validating versions with a regular expression which is vulnerable to denial of service due to backtracking. Although this was thought to be fixed in the previous rubygems, the fix was found to be incomplete and the incompleteness is now assigned as CVE-2013-4363.
torque: authentication bypass
Package(s): torque
CVE #(s): CVE-2013-4319
Created: October 9, 2013
Updated: October 20, 2014
Description: From the Debian advisory:
John Fitzpatrick of MWR InfoSecurity discovered an authentication bypass vulnerability in torque, a PBS-derived batch processing queueing system. The torque authentication model revolves around the use of privileged ports. If a request is not made from a privileged port then it is assumed not to be trusted or authenticated. It was found that pbs_mom does not perform a check to ensure that connections are established from a privileged port. A user who can run jobs or login to a node running pbs_server or pbs_mom can exploit this vulnerability to remotely execute code as root on the cluster by submitting a command directly to a pbs_mom daemon to queue and run a job.
xen: information leak
Package(s): xen
CVE #(s): CVE-2013-1442
Created: October 7, 2013
Updated: October 9, 2013
Description: From the CVE entry:
Xen 4.0 through 4.3.x, when using AVX or LWP capable CPUs, does not properly clear previous data from registers when using an XSAVE or XRSTOR to extend the state components of a saved or restored vCPU after touching other restored extended registers, which allows local guest OSes to obtain sensitive information by reading the registers.
xinetd: privilege escalation/code execution
Package(s): xinetd
CVE #(s): CVE-2013-4342
Created: October 8, 2013
Updated: November 15, 2016
Description: From the Red Hat advisory:
It was found that xinetd ignored the user and group configuration directives for services running under the tcpmux-server service. This flaw could cause the associated services to run as root. If there was a flaw in such a service, a remote attacker could use it to execute arbitrary code with the privileges of the root user.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.12-rc4, released on October 6. Linus's comments were: "Hmm. rc4 has more new commits than rc3, which doesn't make me feel all warm and fuzzy, but nothing major really stands out. More filesystem updates than normal at this stage, perhaps, but I suspect that is just happenstance. We have cifs, xfs, btrfs, fuse and nilfs2 fixes here."
Stable updates: 3.11.4, 3.10.15, 3.4.65, and 3.0.99 were released on October 5. Greg has included a warning that the long-lived 3.0 series will be coming to a close "within a few weeks", so users of that kernel should be thinking about moving on.
Quotes of the week
Then, wondering if you can blame someone else for not fixing it up since then, you run 'scripts/get_maintainer.pl' and realize that you are the maintainer for it as well.
Time to just back away slowly from the keyboard and forget I ever even opened those files...
Kernel development news
Kernel address space layout randomization
Address-space layout randomization (ASLR) is a well-known technique to make exploits harder by placing various objects at random, rather than fixed, addresses. Linux has long had ASLR for user-space programs, but Kees Cook would like to see it applied to the kernel itself as well. He outlined the reasons why, along with how his patches work, in a Linux Security Summit talk. We looked at Cook's patches back in April, but things have changed since then; the code was based on the original proposal from Dan Rosenberg back in 2011.
Attacks
There is a classic structure to many attacks against the kernel, Cook said. An attacker needs to find a bug either by inspecting kernel code, noticing something in the patch stream, or following CVEs. The attacker can then use that bug to insert malicious code into the kernel address space by various means and redirect the kernel's execution to that code. One of the easiest ways to get root privileges is to execute two simple functions as follows:

    commit_creds(prepare_creds());

The existence of those functions has made things "infinitely easier for an attacker", he said. Once the malicious code has been run, the exploit will then clean up after itself. For an example, he pointed to Rosenberg's RDS protocol local privilege escalation exploit.
These kinds of attacks rely on knowing where symbols of interest live in the kernel's address space. Those locations change between kernel versions and distribution builds, but are known (or can be figured out) for a particular kernel. ASLR disrupts that process and adds another layer of difficulty to an attack.
ASLR in user space randomizes the location of various parts of an executable: stack, mmap region, heap, and the program text itself. Attacks have to rely on information leaks to get around ASLR. By exploiting some other bug (the leak), the attack can find where the code of interest has been loaded.
Randomizing the kernel's location
Cook's kernel ASLR (KASLR) currently only randomizes where the kernel code (i.e. text) is placed at boot time. KASLR "has to start somewhere", he said. In the future, randomizing additional regions is possible as well.
There are a number of benefits to KASLR. One side effect has been moving the interrupt descriptor table (IDT) away from the rest of the kernel to a location in read-only memory. The unprivileged SIDT instruction can be used to get the location of the IDT, which could formerly have been used to figure out where the kernel code was located. Now it can't be used that way because the IDT is elsewhere, but it is also protected from overwrite because it is read-only.
ASLR is a "statistical defense", because brute force methods can generally be used to overcome it. If there are 1000 locations where the item of interest could reside, brute force will find it once and fail 999 times. In user space, that failure will lead to a crash of the program, but that may not raise the kind of red flags that crashing 999 machines would. The latter is the likely outcome from a wrong brute force guess against KASLR.
On the other hand, KASLR is not compatible with hibernation (i.e. suspend to disk). That is a solvable problem, Cook said, but is not interesting to him. The amount of space available for the kernel text to move around in is another problem. The code must be 2M aligned because of page table restrictions, and the space available is 2G. In a "perfect world", that would give 1024 slots for the location. In the real world, it turns out to be a fair amount less.
There are also some steps that need to be taken to protect against information leaks that can be used to determine where the kernel was loaded. The kptr_restrict sysctl should be enabled so that kernel pointers are not leaked to user space. Similarly, dmesg_restrict should be used as dmesg often has addresses or other information that can be used. Also, log files (like /var/log/messages) should have permissions for root-only access.
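Both settings can be applied persistently with a sysctl fragment like the following (the specific values are an assumption about local policy, not a universal recommendation):

    kernel.kptr_restrict = 1
    kernel.dmesg_restrict = 1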
The last source of leaks he mentioned is conceptually easy to fix, but has run into resistance from the network subsystem maintainers. The INET_DIAG socket API uses the address of a kernel object as a handle. That address is opaque to user space, but it is a real kernel pointer, so it can be used to determine the kernel location. Changing it to some obfuscated value would fix the problem, but the network maintainers are not willing to do so, he said.
In a completely unconfined system, especially one with local untrusted users, KASLR is not going to be very useful, Cook said. But, on systems that use containers or have heavily contained processes, KASLR can help. For example, the renderer process in the Chrome browser is contained using the seccomp-BPF sandbox, which restricts an exploit to the point where it shouldn't be able to get the information needed. It is also useful to protect against attacks via remote services since there are "many fewer leaks" available remotely.
Implementation
KASLR has been added to Chrome OS, Cook said. It is in the Git tree for the distribution's kernel and will be rolled out in the stable release soon. He has a reputation for "bringing disruptive security changes to people who did not necessarily want them", he said with a smile, but KASLR was actually the "least problematic" of his proposed changes. Part of the reason for that is that "several other very smart people" have helped, including Rosenberg, other Google developers, and folks on the kernel mailing list.
Cook's patches change the boot process so that it determines the lowest safe address where the kernel could be placed. It then walks the e820 regions counting kernel-sized slots. From those, it chooses a slot randomly using the best random number source available. Depending on the system, that would be from the RDRAND instruction, the low bits from a RDTSC (time stamp counter), or bits from the timer I/O ports. After that, it decompresses the kernel, handles the relocation, and starts the kernel.
The patches are currently only for 64-bit x86, though Cook plans to look at ARM next. He knows a "lot less" about ARM, though, so he is hoping that he can "trick someone into helping me", he said.
The current layout of the kernel's virtual address space only leaves 512M for the kernel code—and 1.5G for modules. Since there is no need for that much module space, his patches reduce that to 1G, leaving 1G for the kernel, thus 512 possible slots (as it needs to be 2M aligned). The number of slots may increase when the modules' location is added to KASLR.
A demonstration of three virtual machines, with one running a "stock" kernel and two running the KASLR code, was up next. Looking at /proc/kallsyms and /sys/kernel/debug/kernel_page_tables on each showed different addresses. Cook said that he was unable to find a measurable performance impact from KASLR.
The difference in addresses makes panics harder to decode, so the offset of the slot used to locate the kernel has been added to that output. He emphasized that information leaks are going to be more of a problem for KASLR-enabled systems, noting that it is somewhat similar to Secure Boot now making a distinction between root and kernel ring 0. For the most part, developers didn't care about kernel information leaks, but that needs to change.
There are some simple steps developers can take to avoid leaking kernel addresses, he said. Using the "%pK" format for printing addresses will show regular users 0, while root still sees the real address (if kptr_restrict is enabled, otherwise everyone sees the real addresses). The contents of dmesg need to be protected using dmesg_restrict and the kernel should not be using addresses as handles. All of those things will make KASLR an effective technique for thwarting exploits—at least in restricted environments.
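For example (with ptr being some hypothetical kernel object address):

    /* %pK shows zeros to unprivileged readers when kptr_restrict is set */
    pr_info("object at %pK\n", ptr);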
[I would like to thank LWN subscribers for travel assistance to New Orleans for LSS.]
Optimizing CPU hotplug locking
The 3.12 development cycle has seen an increased level of activity around scalability and, in particular, the reduction of locking overhead. Traffic on the linux-kernel mailing list suggests that this work will extend into 3.13, if not beyond. One of several patch sets currently under development relates to CPU hotplugging — the process of adding CPUs to (or removing them from) a running system.
CPU hotplugging adds complications to a number of kernel subsystems; the fact that processors can come and go at arbitrary times must always be taken into account. Needless to say, hotplug operations must be restricted to times when the kernel is prepared for them; to that end, the kernel provides a reference count mechanism to allow any thread to block CPU hotplugging. The reference count is raised with get_online_cpus() to indicate that the set of online CPUs should not be changed; the reference count is decremented with put_online_cpus().
The implementation of get_online_cpus() in current kernels is relatively straightforward:
    mutex_lock(&cpu_hotplug.lock);
    cpu_hotplug.refcount++;
    mutex_unlock(&cpu_hotplug.lock);
Code that is managing an actual hotplug operation will acquire cpu_hotplug.lock (after waiting for the reference count to drop to zero if need be) and hold it for the duration of the operation. This mechanism ensures that no thread will see a change in the set of active CPUs while it holds a reference, but there is a bit of a problem: each reference-count change causes the cache line containing the lock and the count to bounce around the system. Since calls to get_online_cpus() and put_online_cpus() can happen frequently in the core kernel, this bouncing can be hard on performance.
The really sad fact in this case, though, is that CPU hotplug events are exceedingly rare; chances are that, in most running systems, there will never be a hotplug event until the system shuts down. This kind of pattern argues for a different approach to locking, where the common case is as fast as it can be made to be. That is exactly what Peter Zijlstra's CPU hotplug locking patch set sets out to do. To reach that goal, Peter has had to create a custom locking mechanism — a practice which is frowned upon whenever it can be avoided — and incorporate a new RCU-based synchronization mechanism as well. The patch series shows the evolution of this approach; this article will follow in the same path.
The new locking scheme
Peter's patch adds a couple of new variables related to CPU hotplugging:
- __cpuhp_refcount is the new form of the reference count controlling hotplug operations. Unlike its predecessor, though, it is a per-CPU variable, so each CPU can tweak its own count without causing cache-line contention.
- __cpuhp_state is an enum with three values: readers_fast, readers_slow, and readers_block.
"Readers," in the context of this locking mechanism, are threads that call get_online_cpus(); they need the set of online CPUs to stay stable but make no changes to it. A "writer," instead, is a thread executing an actual CPU hotplug operation.
The state starts out as readers_fast, an indication that no CPU hotplugging activity is going on and that, thus, readers can take the fast path through the locking code. With that in mind, here is a simplified form of the core of the new get_online_cpus():
    if (likely(__cpuhp_state == readers_fast))
        __this_cpu_inc(__cpuhp_refcount);
    else
        __get_online_cpus();
So, when things are in the readers_fast state, get_online_cpus() turns into a simple, per-CPU increment operation, with no cache-line contention. Otherwise the slow-path code (found in __get_online_cpus()) must be run. The put_online_cpus() code looks similar; when no CPUs are coming or going, all that is needed is a per-CPU decrement operation.
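The fast path of put_online_cpus() would thus look like this mirror-image sketch of the fragment shown above (a sketch following the description, not the literal patch):

    if (likely(__cpuhp_state == readers_fast))
        __this_cpu_dec(__cpuhp_refcount);
    else
        __put_online_cpus();   /* slow path: decrement, then wake a waiting writer */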
When it is time to add or remove a CPU, the hotplug code will make a call to cpu_hotplug_begin(). This function begins with these three lines of code:
    __cpuhp_state = readers_slow;
    synchronize_sched();
    __cpuhp_state = readers_block;
The assignment to __cpuhp_state puts an end to the fast-path reference count operations. A call to synchronize_sched() (a read-copy-update primitive that waits for each CPU to schedule at least once) is necessary to ensure that no thread is still running in the hot-path code in either get_online_cpus() or put_online_cpus(). Once that condition is assured, the state is changed again to readers_block. That will cause new readers to block (as described below), but there may still be old readers running, so the cpu_hotplug_begin() call will block until all of the per-CPU reference counts fall to zero.
At this point, it is worth looking at what happens in the __get_online_cpus() slow path. If that code sees __cpuhp_state as readers_slow, it will simply increment the per-CPU reference count and return in the usual manner; it is still possible to obtain a reference in this state. If, instead, it sees readers_block, it will increment an (atomic) count of waiting threads, then block on a wait queue without raising the reference count. The __put_online_cpus() slow path is simpler: it decrements the reference count as usual, then calls wake_up() to wake any thread that might be waiting in cpu_hotplug_begin().
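Putting that description into heavily simplified, reconstructed code (with the wait queue and waiting-reader counter named hypothetically, and memory barriers omitted):

    void __get_online_cpus(void)
    {
        if (__cpuhp_state == readers_slow) {
            /* references may still be taken, just not via the fast path */
            __this_cpu_inc(__cpuhp_refcount);
            return;
        }
        /* readers_block: wait without taking a reference */
        atomic_inc(&cpuhp_waiting_readers);
        wait_event(cpuhp_readers, __cpuhp_state != readers_block);
        atomic_dec(&cpuhp_waiting_readers);
        __this_cpu_inc(__cpuhp_refcount);   /* now back in readers_slow */
    }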
Returning to that function: cpu_hotplug_begin() will return to its caller once all references have been returned (all of the per-CPU reference counts have dropped to zero). At that point, it is safe to carry out the CPU hotplug event, changing the set of online CPUs; afterward, a call is made to cpu_hotplug_done(). That function reverses what was done in cpu_hotplug_begin() in the following way:
    __cpuhp_state = readers_slow;
    wake_up_all(&cpuhp_readers);
    synchronize_sched();
    __cpuhp_state = readers_fast;
It will then wait until the count of waiting readers drops to zero before returning. This wait (like the entire hotplug operation) is done holding the global hotplug mutex, so, while that wait is happening, no other CPU hotplug operations can begin.
This code raises some interesting questions, starting with: why does cpu_hotplug_done() have to set the state to readers_slow, rather than re-enabling the fast paths immediately? The purpose here is to ensure that any new readers that come along will see all of the changes made by the writer while readers were blocked. The extra memory barriers in the slow path will ensure that all CPUs see the new state of the world correctly. The synchronize_sched() call is needed to ensure that any thread that might try to block will have done so; that means, among other things, that the count of waiting readers will be complete.
Why does cpu_hotplug_begin() explicitly block all readers? This behavior turns the CPU hotplug locking mechanism into one that is biased toward writers; the moment a writer comes along, new readers are blocked almost immediately. Things are done this way because there could be a lot of readers in a large and busy system; if they cannot be blocked, writers could be starved indefinitely. Given that CPU hotplug operations are so rare, there should be no real performance issues resulting from blocking readers and allowing hotplug operations to proceed as soon as possible.
What is the purpose of the count of waiting readers? A single writer can put readers on hold, but those readers should be allowed to proceed before a second hotplug operation can be carried out. By waiting for the count to drop to zero, cpu_hotplug_done() ensures that every reader that was blocked will be able to proceed before the next writer clogs up the works again.
The end result of all this work is that, most of the time, the locking overhead associated with get_online_cpus() will be replaced by a fast, per-CPU increment operation. There is a cost paid in the form of more complex locking code and, perhaps, more expensive hotplug operations, but a CPU hotplug event is not something that needs to be optimized for. So it seems like a net win.
rcu_sync
Interestingly enough, though, Peter's patch still wasn't fast enough for some people. In particular, the synchronize_sched() calls were seen as being too expensive. To address this problem, Oleg Nesterov put together a patch adding a new "rcu_sync" mechanism. In brief, the API looks like:
    struct rcu_sync_struct;

    void rcu_sync_enter(struct rcu_sync_struct *rss);
    void rcu_sync_exit(struct rcu_sync_struct *rss);
    bool rcu_sync_is_idle(struct rcu_sync_struct *rss);
An rcu_sync structure starts out in the "idle" state; it can be moved out of that state with one or more rcu_sync_enter() calls. When an equal number of rcu_sync_exit() calls have been made, the structure will test as idle again. The state changes are made using RCU so that, in particular, rcu_sync_exit() works via an ordinary RCU callback rather than calling synchronize_sched().
To use this infrastructure with CPU hotplugging, Peter defined the "idle" state as meaning that no hotplug operations are underway; then, calls to rcu_sync_is_idle() can replace tests against the readers_fast state described above — and the synchronize_sched() calls as well. That should make things faster — though the extent of the speedup is not entirely clear.
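Based on the API above, the reader fast path might then be sketched like this, with the writer calling rcu_sync_enter() in cpu_hotplug_begin() and rcu_sync_exit() in cpu_hotplug_done(); this is a hedged reconstruction, not the actual patch:

    /* Sketch only: the reader fast path rewritten around rcu_sync. */
    static struct rcu_sync_struct cpuhp_rss;   /* idle == no hotplug underway */

    void get_online_cpus(void)
    {
        rcu_read_lock_sched();
        if (rcu_sync_is_idle(&cpuhp_rss)) {
            /* Fast path: no writer is active; take a per-CPU reference. */
            __this_cpu_inc(__cpuhp_refcount);
            rcu_read_unlock_sched();
            return;
        }
        rcu_read_unlock_sched();
        __get_online_cpus();               /* the slow path, as before */
    }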
After all this work is done, a simple mutex-protected reference count has been replaced by a few hundred lines of complex, one-off locking code. In the process, the kernel has gotten a little bit harder to understand. This new complexity is unfortunate, but it seems to be an unavoidable by-product of the push for increased scalability. Getting the best performance out of a highly concurrent system can only be made so simple.
The Android Graphics microconference
At the 2013 Linux Plumbers Conference, a number of upstream community developers and Google Android developers got together for the Android + Graphics micro-conference to discuss several aspects of the Android graphics stack, how collaboration could be improved, and how that functionality could be merged upstream.
Sync
Erik Gilling of Google's Android team started things off by covering some background on the Android Sync infrastructure, why it was introduced and why it's important to Android.
Sync is important because it allows for better exploitation of parallelism in graphics and media pipelines. A pipeline can be thought of as a collection of devices manipulating buffers in series; naively, each step must finish completely before the output buffer is passed to the next step. In many cases, though, each step involves some overhead work, such as setting up the buffer to display, that does not strictly depend on the data being processed. That overhead work can be done in parallel, while the buffer is still being filled, as long as there is some interlocking so that one step can signal to the next that it has actually finished. This agreement between steps is the "synchronization contract."
Android through version 3.0 (Honeycomb) didn't use any sort of explicit fencing; the synchronization contract was implicit and not necessarily well defined. That caused problems: driver writers often misunderstood or mis-implemented the contract, leading to bugs that were very difficult to debug. Additionally, because the contract was implicit and its implementation was spread across a number of drivers, some of them proprietary, it was very hard to change the contract in order to improve performance.
To address these issues, Android's explicit synchronization mechanism was implemented. Sync is a synchronization framework that allows SurfaceFlinger (Android's display manager) to establish a timeline and set sync points on that timeline. Other threads and drivers can then block on a sync point, waiting until the timeline counter has crossed that point. There can be multiple timelines, managed by various drivers; the Sync interface allows sync points from different timelines to be merged. This is how the SurfaceFlinger and BufferQueue processes manage the synchronization contract across a number of drivers and processes.
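To make the mechanism concrete, here is a minimal, hand-written sketch of how a consumer in the pipeline might use the user-space sync interface; it assumes the libsync helpers (sync_merge() and sync_wait()), whose details vary between Android releases, and is not code from SurfaceFlinger:

    #include <sync/sync.h>
    #include <unistd.h>

    /*
     * Sketch: merge two fences (buffer rendered, display ready) and
     * block until every sync point in the merged fence has signaled.
     */
    int wait_for_scanout(int rendered_fd, int display_ready_fd)
    {
        int merged = sync_merge("scanout", rendered_fd, display_ready_fd);
        if (merged < 0)
            return merged;

        int err = sync_wait(merged, 1000);  /* wait up to one second */
        close(merged);
        return err;
    }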
As Erik described the various sync timelines and the ability to merge sync points across them, Maarten Lankhorst, the author of dma-buf fences and wait/wound mutexes, raised the question of whether circular deadlocks were possible with Android's sync framework. Erik believed they were not, and made a convincing case, but he admitted that he has not had to work with systems that have video RAM (which has complicated locking requirements that led to the need for wait/wound mutexes), so the result of the discussion wasn't entirely clear.
Tom Cooksey from ARM mentioned that, in graphics development, debugging why things did not happen in the expected order is really hard, and that the Android Sync debugging infrastructure makes it much easier. Maarten noted that, for dma-buf fences, the in-kernel lockdep infrastructure can also be used to prove locking correctness. But, it was pointed out, that only works because fences are not held across system calls.
There was also some question of how to handle the unwinding of hardware fences and other error conditions, which Erik thought should be resolved by resetting the GPU. Rob Clark thought that wasn't a very good solution; he worried that resetting the GPU can take a long time and might interfere with concurrent users of the GPU.
In trying to figure out what the next steps would be, Rob said he didn't have any objection to adding sync-point arguments to the various functions, as long as they were optional. He thought that the explicit sync points could either be built on top of dma-buf fences or fall back to them. Erik mentioned that, while the Android sync points aren't tied to anything specific in the kernel, they are really only used for graphics buffers, so he thought tying sync points to dma-bufs might be doable. There didn't seem to be any objections to this approach, but it also wasn't clear that all sides were in agreement, so the participants promised to continue the discussion of unifying the lower-level primitives on the mailing list.
The atomic display framework
Greg Hackmann from the Android team then discussed the atomic display framework (ADF), which he created while trying to develop a version of HWComposer based on the kernel mode setting (KMS) interface. During that work, Greg ran into a number of limitations and issues with KMS, so he built ADF as a simple display framework on top of dma-buf and Android Sync. ADF thus represents something of an ideal interface for Android, and Greg wanted to see whether folks thought the KMS interface could be extended to provide the same functionality.
One of the limitations discussed was the absence of atomic screen updates. There is the out-of-tree atomic mode setting and nuclear pageflip patch set [PDF], but in that implementation updates to the screen are done by deltas, updating just a portion of the screen. Android, instead, prefers to update the entire screen to reduce the amount of state that needs to be kept as well as to avoid problems with some system-on-chip (SoC) hardware.
There is also the problem that KMS does not handle ganged CRTCs (CRT controllers that generate output streams to displays), split-out planes, or custom pixel formats well, and that the modeling primitives KMS uses to manage the hardware do not map very well onto some of the newer SoCs. Further, there was no good way for KMS to exchange sync-point data.
In addition, Greg sees the KMS interface as fairly complex, requiring drivers to implement quite a bit of boilerplate code and to handle many cases that aren't very common. The concern is that, if the API is large, SoC driver writers will implement only the parts they immediately need and will likely make mistakes on the edge cases. There was some discussion that KMS could perhaps use some helper functions, like the fbdev (Linux framebuffer device) helpers in the DRM layer that automatically provide an fbdev interface for DRM drivers.
As a result, ADF's design is a bit simplified, representing displays as a collection of overlay engines and interfaces which can be interconnected in any configuration the hardware supports. ADF uses a structure that wraps dma-buf handles with Sync data and formatting metadata. ADF then does sanity checks on buffer lengths and pixel formatting, deferring to driver-specific validation if custom formats are in use. ADF also manages any waiting that is required on the sync handle before flipping the page, and provides driver hooks for mode-setting and events like DPMS changes and vsync signals.
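As a rough illustration of that structure, the information ADF carries for each buffer could be pictured along these lines; the field names and types here are illustrative guesses based on the description above, not ADF's actual definitions:

    /*
     * Illustrative only: roughly what ADF tracks per buffer, per the
     * description above; names and types are guesses, not ADF's API.
     */
    struct adf_buffer_sketch {
        struct dma_buf *dma_bufs[4];      /* one handle per plane of the format */
        u32 format;                       /* fourcc pixel format code */
        u32 width, height;
        u32 pitch[4], offset[4];          /* per-plane layout, sanity-checked */
        struct sync_fence *acquire_fence; /* waited on before the page flip */
    };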
Rob noted that issues like split-out planes or custom pixel formats are solvable in the KMS API, and in many cases he has specific plans to do so. For others, like ganged CRTCs, he is hesitant and wants to get more information on the hardware before deciding how best to add the requisite functionality.
There was some minor debate about how ADF tends to allow blobs of data to be passed through it from user space to drivers, requiring hardware-specific user-space code. This functionality makes it harder to support other display managers (Wayland, for example) that depend on a hardware-independent interface. Rob noted that, for the most part, Wayland is very similar to SurfaceFlinger, just a few years behind when it comes to things like ganged CRTCs, and that improvements are needed there. But he was also concerned with keeping KMS generic and keeping the Weston user space hardware-independent; perhaps there will need to be some cases with hardware-specific plugins, with a fall-back to a slower, generic implementation.
Folks from the Android team pointed out that it really is hard to anticipate all the constraints and how weird the hardware ends up being, so the issue of where to draw the line between generic interfaces and hardware-specific ones remained unresolved. However, ADF does allow for a simple, non-accelerated recovery console, which would be generic.
There was also further discussion of how atomic mode setting does partial updates or deltas while ADF focuses on full updates. Since Wayland is very similar to SurfaceFlinger, partial updates are not of much use there; it is mostly X that benefits from them. There was some question of whether X should really be targeted by atomic mode setting, but Rob said that, while X is not a target for some things like overlays, it likely will use atomic mode setting. There was also some question as to what a "full frame update" entails, and whether it means updating things like gamma tables as well, as that can be slow on some hardware.
Other KMS extensions
Laurent Pinchart walked through a number of other proposed extensions to KMS. The first was non-memory-backed pipeline sources. Here the issue is that there can be complicated video pipelines in which a capture device writes both to memory and directly to the display at the same time. KMS does not really model this well, and Laurent would like to see some sort of API to handle the case. There was some back and forth with Rob as to whether the DRM framebuffer objects would mostly suffice.
The next issue was memory writeback, where composited frames are written to memory instead of being sent to a display; what is the right API for that? On one hand, it looks much like video capture, so the proposal was to use Video4Linux (V4L) device nodes. Echoing issues raised earlier, Greg noted that, in many situations, it is just a lot simpler to write a custom driver that does the bare minimum of what is needed than to implement the entire V4L interface. Driver writers are often under pressure, so they are unlikely to do the right thing if it requires writing lots of extra code. Hans Verkuil, the V4L maintainer, expressed his exasperation with people who go off and do their own thing, needlessly reinventing the wheel; he said he is very open to addressing problems and improving things. Rob again suggested that V4L may need some internal refactoring and helper functions to make writing a driver easier.
There were also discussions on chained compositors, non-linear pipelines and root planes that don't span the whole display, but it didn't seem like there was much resolution to the issues discussed. Hans finally asked that folks mail the linux-media mailing list, as the V4L developers would be interested in working and collaborating to resolve some of these issues.
Deprecating fbdev
The next short topic was deprecating the fbdev interface, and whether doing so would impact Android, which is a common user of fbdev. For Android, fbdev is particularly useful for very early hardware bringup and for recovery consoles. Greg pointed out that Google was able to bring up a Nexus 10 without fbdev by using ADF, so this would not be a problem for them, assuming the issues in KMS that ADF worked around were resolved.
ION
The discussion then turned to the ION memory allocator. With much of the background for the topic being covered on LWN, I summarized some of the recent responses from the community and the current thinking from the Android developers and asked what would be reasonable next steps to upstreaming ION. The upstream developers were suggesting that the dma-buf delayed allocation method be used, where user space would attach the dma-buf to all the various devices and allow the allocation to happen at map time.
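In kernel terms, that proposal builds on the existing dma-buf attach/map operations. A sketch of an exporter that defers its allocation to map time might look like this; alloc_for_attached_devices(), map_pages_to_sgt(), and the deferred_buf structure are illustrative placeholders, not existing kernel API:

    /*
     * Sketch: a dma-buf exporter's map_dma_buf operation that defers
     * allocation until first map, when all attachments are known.
     */
    static struct sg_table *deferred_map(struct dma_buf_attachment *attach,
                                         enum dma_data_direction dir)
    {
        struct deferred_buf *buf = attach->dmabuf->priv;

        if (!buf->pages) {
            /* Pick memory that satisfies every attached device. */
            buf->pages = alloc_for_attached_devices(attach->dmabuf);
            if (!buf->pages)
                return ERR_PTR(-ENOMEM);
        }
        /* Build and map a scatter-gather table for this device. */
        return map_pages_to_sgt(buf->pages, attach->dev, dir);
    }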
One problem with this approach that the Android developers saw was that it can have permissions issues. It would require one process that has permissions to all the various devices to do the attach; the Android developers would prefer to avoid having one process with permissions to everything, and instead minimize the permissions needed. The way ION currently works is by having gralloc just know the memory constraints for all the devices, but it doesn't need permissions to all of those devices to allocate buffers. Those buffers can then be passed between various processes that have only the limited device permissions for their specific needs.
With respect to the approach of trying to do the constraints solving in kernel space, Colin Cross, an Android developer, brought up that the SoC constraints are insane, and the Android team sees dynamic constraint solving as an impossible problem. One just cannot generally enumerate the various constraints in a sane way, and trying to then map to specific allocators which satisfy those constraints will always be messy. Additionally, since the desktop use case has a more dynamic environment, there may not exist a solution to the given constraints, and in that case one needs to fall back to a slow path where buffers are copied between device-compatible memory regions. The point was made that for Android, slow paths are not an option, and because of that they expect a level of control made possible by customizing the entire stack to each device.
The solution Android took with ION was to provide known allocators (heaps) and then create a device-specific, custom mapping from device to allocator in user space via the custom gralloc implementations. Colin admitted there would always be some messiness and that the Android developers prefer that messiness to exist in user space.
As for next steps, it was proposed that, instead of trying to enumerate generic constraints in the dma-buf parameters structure, there could be a table of allocators, with each device linking to its compatible allocators at attach time. That way, the same heap allocators that ION uses could be shared, with two different ways of triggering an allocation; it would allow some shared infrastructure even if there could not be an agreed-upon top-level interface for allocations. Rob seemed agreeable, but Colin brought up the downside that, with allocators enumerated per device, every driver would have to be updated to declare compatibility whenever a new heap is added. That is a fair point, but it is the cost of doing constraint solving in the kernel rather than via custom user-space code and, as long as there was still a direct ION-like interface, it would not have any negative consequences for Android.
Another interesting result was that Colin has been busy addressing some of the other issues with ION that were brought up in the LWN summary and in other discussions. It seems likely that ION will be submitted to staging so that the transition to using shared heaps can be more centrally coordinated.
I'd like to thank all of the discussion participants, along with Laurent Pinchart and Jesse Barker for co-organizing the micro-conference, and Zach Pfeffer for his diligent note taking.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Janitorial
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Fedora's working groups
Last August we covered a proposal to change how Fedora is built that was being pushed by Matthew Miller. This proposal divided Fedora into "rings," with a tightly-controlled core component serving as the base for various outer layers aimed at specific use cases. Since then, work on this idea has progressed somewhat, and the Fedora project is now trying to put together a governance structure that can implement it. In the process it's revealing more of what the project's leaders have in mind and how this plan may raise some tensions with the wider community.

Back in September, Matthew posted a call for nominations for a set of "working groups" that would drive the future of Fedora. Each of these working groups would function as a subcommittee of the Fedora Engineering Steering Committee (FESCo) and handle everything that is specific to its particular topic; FESCo's job will become one of handling issues that affect more than one of these working groups:
The current plan is to have five working groups, two of which are directly tied to the "rings" in the original proposal. There will be a "Base Design" group that will manage the core of the Fedora system; this core includes the kernel, low-level plumbing, and the toolchain needed to build it all. The "Environments & Software Stacks" group is charged with managing the next layer, which includes development environments, desktop environments, and more. These two layers provide the substrate that most real-world uses of Fedora will want to have.
From here, it appears that the Fedora distribution is going to fork into three "products" (workstation, server, and cloud), each of which will be run by its own working group. The plans for these groups are intentionally fuzzy until the groups themselves are formed:
Some participants on the Fedora development list were a little disappointed to see that there was not an embedded product on the list. Red Hat's Stephen Gallagher confirmed that an embedded product is not envisioned for now, though, he noted, there will be a place for ARM in the server and cloud variants. Given the current level of Fedora use in the embedded world (or the lack thereof), this position might make sense, but it would be a shame if Fedora were to show a lack of ambition in this area in the long term.
The biggest fuss, however, was certainly about the process by which the groups will be chosen and the degree of freedom they will have. This process was briefly described in the FESCo ticket system and discussed at the October 2 FESCo meeting. In short, FESCo will assign one of its own members to each of the working groups; that delegate will then select nine members from those who have nominated themselves for the position. Nominations are being gathered on this wiki page and will close on October 14, so anybody with an interest in participating should put their name in soon.
How much latitude will the working groups have to define their various products? During the meeting (IRC log), Miloslav Trmač complained about the proposed rule stating that Red Hat employees could not hold more than half of any working group's seats, saying "I'd rather be upfront with that this needs to work for Red Hat". Peter Jones translated that as "basically if whatever they come up with doesn't work for RH, it's a non-starter", and Matthew added "Right. that's a big elephant that underlies everything Fedora does without actually being in our mission docs."

That was enough to inspire Jóhann B. Guðmundsson, who has a long history of dissatisfaction with Fedora project governance, to dismiss the entire exercise as "nothing but an utter and total shenanigan on RH behalf". This outburst drew responses from several Red Hat employees, all of whom said that Red Hat had no intention of dictating directions to Fedora. As Matthew put it:

The deliberations of these working groups will be public, so we will be able to see how it actually plays out. Chances are that things will generally go in directions that are agreeable to Red Hat for a simple reason: Red Hat is paying the bulk of the developers who actually get the work done. Nobody could realistically expect that a Fedora working group, no matter how independent, would have the power to direct the activities of Red Hat's staff. So, naturally, directions that are interesting to Red Hat are likely to get more developer time and, thus, to progress more quickly.

In any case, the next step is to name the members of these committees and to let them start the process of deciding what they want to do. The shape of the Fedora 21 release probably will not change much as a result of the new design, but it would be surprising if the releases after that didn't exhibit some new directions. Within a year or two, we could have a Fedora distribution with three distinct flavors, each of which is optimized for a different user community. It will be interesting to watch.
Brief items
OpenBSD ports gets long-term support
Reiner Jung looks at an initiative to provide long-term support for OpenBSD ports. "To address the separate but related needs of OpenBSD users there is a requirement to provide the very latest release of software and to increase the longevity of an existing stable release. These are two challenging propositions and both are real world requirements. There are various reasons for this and for some organisations the ongoing provision of a stable and reliable release which doesn't hinder or impact normal operations, is of paramount importance. To address these needs M:Tier will release "Long Time Support" (LTS) for OpenBSD ports. This new service will be introduced for the forthcoming OpenBSD v5.4 release scheduled for 1st November 2013." Details are available at M:Tier's website. (Thanks to Jasper Lievisse Adriaanse)
Oracle Linux Release 5 Update 10
Oracle has announced the general availability of Oracle Linux Release 5 Update 10 for the x86 (32-bit) and x86_64 (64-bit) architectures. This release has numerous bug fixes and improvements.

PC-BSD 9.2 released
The PC-BSD 9.2-RELEASE images are available. Based on FreeBSD 9.2, this release features a number of improvements to ZFS functionality, GitHub migration, and more.
Distribution News
Debian GNU/Linux
bits from the DPL -- September 2013
Debian Project Leader Lucas Nussbaum presents a few bits about DPL activities during September and early October. Topics include some calls for help, OPW, interviews and talks, assets, sprints, and more.

Debian's Google Summer of Code 2013 Wrap-up
Debian participated in Google Summer of Code with 15 of 16 projects successfully completed. "We would like to thank everybody involved in the program — the thirty mentors, sixteen students, and the DebConf team — for making this a success. We have been told by many students that they will continue their projects or will get involved in other areas of Debian even after the summer and we consider this to be the most significant achievement of our participation."
openSUSE
openSUSE's Google Summer of Code 2013
openSUSE participated in this year's Google Summer of Code with 9 out of 12 successfully completed projects. "It was a nice experience working for this summer. A lot of thanks to all mentors, who took out valuable time out of their busy schedules for the students. We can improve in lots of places, and come back better next year!"
Ubuntu family
Next Ubuntu Developer Summit
The next Ubuntu Developer Summit (UDS) will be held online during November 19-21. The event is free and open to everyone. Sessions need to be set up and registered, with proposals due by November 1.
Other distributions
Woof now in maintenance mode
Barry Kauler has announced that he intends to retire from Puppy Linux and Woof development. Woof is the build system for Puppy Linux and its "puplets". "I don't plan to just suddenly pull the plug, rather just put Woof (and Puppy) in "maintenance mode" for the next year (or as long as I deem necessary), while a few things get sorted out. "Maintenance mode" means that I will continue to work on Woof, but just focused on essential fixes, rather than any new features."
Newsletters and articles of interest
Distribution newsletters
- This Week in CyanogenMod (October 5)
- DistroWatch Weekly, Issue 528 (October 7)
- Tails report (for September)
- Ubuntu Weekly Newsletter, Issue 337 (October 6)
arkOS: An Anti-Cloud Server Management Distribution For Raspberry Pi (Crazy Engineers)
Crazy Engineers has an introduction to arkOS. "The initiative of CitizenWeb Project & Jacob Cook, arkOS is Arch-Linux based Anti-cloud server management custom Linux distribution, specially built for Raspberry Pi promoting decentralization and democratization of Internet. arkOS allows you to host your own Website, Email & also your own private cloud service. All of these functions are managed by a GUI (Graphical User Interface) application known as 'Genesis', a one-stop shop which runs on top of arkOS; where you can add, modify & customize the arkOS nodes, allowing the user to easily install Server applications, plugins, upload - manage files & manage your (own) cloud."
The Klaus Knopper Interview (Everyday Linux User)
Everyday Linux User has an interview with Klaus Knopper, creator of Knoppix. "I am very interested in feedback. Negative feedback with a detailed error description or complaints about things that are not intuitive is actually very valuable for me, it helps me to improve the system or remove software packages that are not working correctly or are superseded by better ones. Of course I'm also happy to receive an occasional "everything is working fine" message, or success stories for data rescue or for getting certain hardware to work again, too, but I take complaints and criticism very seriously, and try to analyze problems and help as far as my free time allows, or explain why some things are just as they are and are going to stay that way (like the missing browser Flash plugin and restrictive security settings in Firefox and Chromium, complaint number one, but I'm not going to change this!)."
Page editor: Rebecca Sobol
Development
Outlining with Concord
Organizing one's thoughts can be the most difficult part of writing, whether one is writing a blog post, a conference talk, or a technical document. Although everyone has a favorite tool, there is a dedicated community of people who are fans of the outliner: a plain-text editor that allows the user to create document structure with tab-indentation or similar minimalist markup.
Despite the lightweight markup and the simplicity of the editing interface, however, outliners can be used to generate and rearrange complex, multi-level documents or even small databases. Emacs Org mode, for example, is a much-loved outliner that supports a variety of uses: creating spreadsheets, calendar management with to-do lists, and much more. But it has been a while since there was a full-featured free software outliner for use on the web, so the recent release of Fargo, and the Concord component that powers it, has attracted considerable attention.
Dave Winer announced Concord on September 16, a few months after releasing Fargo as a free web outliner application that uses Dropbox as its storage back-end. Concord is written in JavaScript as a jQuery plugin that can be easily incorporated into other applications. In the initial announcement, Winer cited a number of popular web applications in which it could be useful, such as Evernote, WordPress, and GitHub. The license is GPLv3, a fact that prompted some griping in the comments on the announcement, from web developers who claim they would like to incorporate the module into products with a non-copyleft license.
For a while, Winer's company Small Picture ran Trex, a web content-management system (CMS) service built on top of Fargo, as a demonstration of Concord's integration capabilities, although the plug was pulled on October 7. Nevertheless, there are outliner fans who have written about the potential benefits of Concord integration for other projects; for example, Matt Jadud announced his plan to use the component in a web-based IDE.
[Image: The Concord editor in Fargo]
Testing Concord or Fargo out for a few minutes might leave one wondering what all the fuss is about—after all, one can write text and indent it at will using the Tab key, and there are a few text mark-up options (bold, italics, and strikethrough), but little else. The real benefits of the tool, however, only reveal themselves when working on a real-world project. Behind the simplistic user interface, Concord saves documents as Outline Processor Markup Language (OPML), an XML format that is specifically designed to support collapsing and expanding sections, nested outline structure, and rearrangement of file contents. Many people may be familiar with OPML solely as the format used to save RSS feed subscriptions, but it does much more, and like most XML, can be transformed into other formats.
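For example, a small two-level outline is stored as nested <outline> elements, along these lines (a hand-written sample, not Fargo's exact output):

    <opml version="2.0">
      <head>
        <title>Article plan</title>
      </head>
      <body>
        <outline text="Introduction">
          <outline text="Why outliners?"/>
        </outline>
        <outline text="Conclusion"/>
      </body>
    </opml>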
Where an outliner interface proves its mettle is in bringing structure to a document that is formed and re-formed as it is being written. A heavyweight editor (in which, say, every element must be explicitly assigned a position and hierarchy level at the outset) gets in the way, whereas tab-based markup lets the author shift items around at essentially full typing speed.
As a web editor, Concord offers several niceties above and beyond the simple tab-based outline. Each level of an outline has a "wedge" (the familiar triangle icon) that expands or collapses sub-levels, and there is a "structure mode" that allows the user to move and rearrange existing outline levels (including promoting and demoting items to different levels) with the keyboard arrow keys.
Each item in the Concord outline also has hidden metadata attributes—such as creator and last-modification time—that can be shown and edited. By default, the editor renders outline contents as HTML (that is, actually displaying bold-tagged text in bold), but it can be switched into markup mode with a keystroke, allowing one to edit the tags directly. The editor also supports inserting JavaScript snippets and commenting out portions of the outline.
Winer, of course, is arguably the biggest fan of outliners one is likely to encounter on the web today. Fargo is reminiscent in many ways of his earlier web outliner project, the now-dormant Frontier. Whether Concord makes a more lasting impression than Frontier remains to be seen. The grumbling over the GPLv3 license suggests that there are at least a few potential users for the tool (squabbling over the merits of copyleft aside).
The biggest beneficiaries of a tool like Concord might be all of the web applications that currently implement their own editor component specifically for a specialized markup language. Consider Wikipedia, for example: Wiki markup is complex and covers hundreds of scenarios that the average article does not require. That makes it more difficult to work with when making a small change or insertion to an article. But tabs and arrow keys? Those are difficult to misunderstand and forget.
Brief items
Quote of the week
GNU Make 4.0 released
GNU Make 4.0 is out. New features include the integration of the Guile extension language, better tracing, a new shell assignment operator (which is "!="), and more.

tar-1.27 released
Version 1.27 of the archiving utility GNU tar has been released. Most notably, this update adds support for storing and extracting POSIX ACLs, extended attributes, and SELinux contexts, in addition to several other new configuration options and switches.
PyQt5 v5.1 Released
PyQt5, the Python bindings for Qt5, has been updated to version 5.1. This release brings full compatibility with Digia's Qt 5.1, plus support for other useful modules like QtSensors and QtSerialPort, and bindings for OpenGL 2.0 and EGL 2.
Newsletters and articles
Development newsletters from the past week
- Caml Weekly News (October 8)
- OpenStack Community Weekly Newsletter (October 4)
- Perl Weekly (October 7)
- PostgreSQL Weekly News (September 30)
- Ruby Weekly (October 3)
- Tor Weekly News (October 9)
Firefox Developer Tools and Firebug
Here's a Mozilla blog entry on current developments with the Firefox developer tools. "The tools have improved a lot lately: black-boxing lets you treat sources as system libraries that won’t distract your debugging flow. Source maps let you debug source generated by transpilers or minimizers. The inspector has paint flashing, a new font panel, and a greatly improved style inspector with tab completion and pseudo-elements. The network monitor helps you debug your network activity."
Garrett: The state of XMir
Matthew Garrett has posted an assessment of where XMir development stands. "This is an unfortunate situation to be in. Ubuntu Desktop was told that they were switching to XMir, but Mir development seems to be driven primarily by the needs of Ubuntu Phone. XMir has to do things that no other Mir client will ever cope with, and unless Mir development takes that into account it's inevitably going to suffer breakage like this. Canonical management needs to resolve this if there's any hope of ever shipping XMir as the default desktop environment."
Page editor: Nathan Willis
Announcements
Brief items
Google Summer of Code 2014 and Google Code-in 2013/14
Google has announced its next Summer of Code program a bit earlier than usual because they will be celebrating an anniversary: "To date, the program has produced 50 million lines of open source code from more than 8,500 student developers—and in 2014, we'll mark the 10th anniversary of Google Summer of Code." The program will be getting larger and, this year, participants are getting larger stipends.
FSF Opens Nominations for the 16th Annual Free Software Awards
The Free Software Foundation (FSF) and the GNU Project have announced the opening of nominations for the 16th annual Free Software Awards. The Free Software Awards include the Award for the Advancement of Free Software and the Award for Projects of Social Benefit. "In the case of both awards, previous winners are not eligible for nomination, but renomination of other previous nominees is encouraged. Only individuals are eligible for nomination for the Advancement of Free Software Award (not projects), and only projects can be nominated for the Social Benefit Award (not individuals)."
Articles of interest
FSFE Newsletter - October 2013
The October issue of the Free Software Foundation Europe newsletter covers the GNU project's 30th anniversary, local Fellowship events, FSFE's participation in a coalition that launched a list of 13 International Principles on the Application of Human Rights to Communication Surveillance, free software in education, an updated Open Hardware License, and several other topics.

Intel powers an Arduino for the first time with new “Galileo” board (ars technica)
Ars technica covers Intel's announcement of the Galileo development board, which contains a Quark 32-bit x86 CPU and is targeted at the "Internet of Things". It was designed in conjunction with Arduino and has connections for existing Arduino "shields" in addition to USB, Ethernet, RS-232 serial, and PCIe. "Intel will be donating 50,000 Galileo boards to universities around the world as part of the collaboration, and it will be available to hobbyists for $60 or less by November 29. That price makes Galileo quite competitive with existing Arduino boards, most of which aren't as feature complete. Intel promises full compatibility with Arduino software and existing hardware, which could make this a very attractive board for complex projects." Galileo is also open hardware, with schematics and other information available at its home page.
White paper: the economic value of the Long Term Support Initiative
The Linux Foundation has announced the availability of a white paper (registration required) estimating the economic value of the Long Term Support Initiative, an effort which supports stable kernel releases for the consumer electronics industry. The resulting value is about $3 million per release. "LTSI is important because device makers are doing significant back-porting, bug testing and driver development on their own, which carries substantial cost in terms of time-to-market, as well as development and engineering effort to maintain those custom kernels. Through collaboration in this initiative, these CE vendors are reducing the duplication of effort currently prevalent in the consumer electronics industry. This new paper helps calculate that total cost savings in more definite terms."
Calls for Presentations
Mini-Debconf in Cambridge, UK
There will be a mini-DebConf November 14-17 in Cambridge, UK. That includes a mini-DebCamp November 14-15 and the regular conference November 16-17. "In terms of talks for the weekend, I've had lots of offers from various people but relatively few detailed proposals. That means that the talk schedule is still very open yet. If you're wanting to talk to people about stuff you've been doing in and around Debian, or you have insights that you'd like to share, now is your time!"
Want to run the Linux Plumbers Conference in 2014?
The Linux Foundation's Technical Advisory Board is currently accepting applications from groups wishing to organize the 2014 Linux Plumbers Conference; the current plan is to co-locate that conference with LinuxCon Europe in Düsseldorf, Germany, but hosting it in Chicago with LinuxCon North America is also a possibility. See this page for information about how to put together a bid; the deadline is November 3.

CFP Deadlines: October 10, 2013 to December 9, 2013
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| November 1 | January 6 | Sysadmin Miniconf at Linux.conf.au 2014 | Perth, Australia |
| November 4 | December 10–11 | 2013 Workshop on Spacecraft Flight Software | Pasadena, USA |
| November 15 | March 18–20 | FLOSS UK 'DEVOPS' | Brighton, England, UK |
| November 22 | March 22–23 | LibrePlanet 2014 | Cambridge, MA, USA |
| November 24 | December 13–15 | SciPy India 2013 | Bombay, India |
| December 1 | February 7–9 | devconf.cz | Brno, Czech Republic |
| December 1 | March 6–7 | Erlang SF Factory Bay Area 2014 | San Francisco, CA, USA |
| December 2 | January 17–18 | QtDay Italy | Florence, Italy |
| December 3 | February 21–23 | conf.kde.in 2014 | Gandhinagar, India |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: October 10, 2013 to December 9, 2013
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| October 12–14 | GNOME Montreal Summit | Montreal, Canada |
| October 12–13 | PyCon Ireland | Dublin, Ireland |
| October 14–19 | PyCon.DE 2013 | Cologne, Germany |
| October 17–20 | PyCon PL | Szczyrk, Poland |
| October 19 | Central PA Open Source Conference | Lancaster, PA, USA |
| October 19 | Hong Kong Open Source Conference 2013 | Hong Kong, China |
| October 20 | Enlightenment Developer Day 2013 | Edinburgh, Scotland, UK |
| October 21–23 | KVM Forum | Edinburgh, UK |
| October 21–23 | LinuxCon Europe 2013 | Edinburgh, UK |
| October 21–23 | Open Source Developers Conference | Auckland, New Zealand |
| October 22–24 | Hack.lu 2013 | Luxembourg, Luxembourg |
| October 22–23 | GStreamer Conference | Edinburgh, UK |
| October 23 | TracingSummit2013 | Edinburgh, UK |
| October 23–25 | Linux Kernel Summit 2013 | Edinburgh, UK |
| October 23–24 | Open Source Monitoring Conference | Nuremberg, Germany |
| October 24–25 | Embedded Linux Conference Europe | Edinburgh, UK |
| October 24–25 | Xen Project Developer Summit | Edinburgh, UK |
| October 24–25 | Automotive Linux Summit Fall 2013 | Edinburgh, UK |
| October 25–27 | Blender Conference 2013 | Amsterdam, Netherlands |
| October 25–27 | vBSDcon 2013 | Herndon, Virginia, USA |
| October 26–27 | T-DOSE Conference 2013 | Eindhoven, Netherlands |
| October 26–27 | PostgreSQL Conference China 2013 | Hangzhou, China |
| October 28–November 1 | Linaro Connect USA 2013 | Santa Clara, CA, USA |
| October 28–31 | 15th Real Time Linux Workshop | Lugano, Switzerland |
| October 29–November 1 | PostgreSQL Conference Europe 2013 | Dublin, Ireland |
| November 3–8 | 27th Large Installation System Administration Conference | Washington DC, USA |
| November 5–8 | OpenStack Summit | Hong Kong, Hong Kong |
| November 6–7 | 2013 LLVM Developers' Meeting | San Francisco, CA, USA |
| November 8 | PGConf.DE 2013 | Oberhausen, Germany |
| November 8 | CentOS Dojo and Community Day | Madrid, Spain |
| November 8–10 | FSCONS 2013 | Göteborg, Sweden |
| November 9–11 | Mini DebConf Taiwan 2013 | Taipei, Taiwan |
| November 9–10 | OpenRheinRuhr | Oberhausen, Germany |
| November 13–14 | Korea Linux Forum | Seoul, South Korea |
| November 14–17 | Mini-DebConf UK | Cambridge, UK |
| November 15–16 | Linux Informationstage Oldenburg | Oldenburg, Germany |
| November 15–17 | openSUSE Summit 2013 | Lake Buena Vista, FL, USA |
| November 17–21 | Supercomputing | Denver, CO, USA |
| November 18–21 | 2013 Linux Symposium | Ottawa, Canada |
| November 22–24 | Python Conference Spain 2013 | Madrid, Spain |
| November 25 | Firebird Tour: Prague | Prague, Czech Republic |
| November 28 | Puppet Camp | Munich, Germany |
| November 30–December 1 | OpenPhoenux Hardware and Software Workshop | Munich, Germany |
| December 6 | CentOS Dojo | Austin, TX, USA |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol