LWN.net Weekly Edition for January 17, 2013
Ubuntu's phone SDK and the QML component zoo
Canonical unveiled its Ubuntu for phones effort the first week of January with an announcement and a video demonstrating the user interface. For many developers, however, that launch was light on detail, and it was the somewhat less flashy Ubuntu Phone software development kit (SDK) that provided the interesting bits. The SDK highlights QML as the UI toolkit of choice — a fact which has attracted attention from other QML-based projects, such as KDE's Plasma Active and Jolla's MeeGo derivative Sailfish OS, which have shown an interest in collaborating on a common API. QML has been the application framework for several free software mobile phone platforms in the past — including some that fizzled out in high-profile fashion; perhaps a collaborative approach is the missing piece.
The Ubuntu Phone SDK
The Ubuntu Phone SDK is provided as a set of .deb packages in a Launchpad Personal Package Archive (PPA), which makes installation a snap for people running Ubuntu or downstream distributions, but a bit of an open question for those running something else. Source is available from Launchpad, but users on RPM-based systems may find satisfying the bleeding-edge dependencies a hassle. The packages include a set of Qt component libraries, plus some demos, examples, and the associated documentation. The development tools are the standard Qt set: the Qt Creator IDE and a QML viewer for interpreting and testing QML code itself.
QML is a JavaScript-based language designed for use in writing application user interfaces. It was introduced in Qt 4.7, coupled with a QtQuick module that defines a basic set of building blocks — graphics primitives like rectangles, gradients, borders, and images; text handling elements like input fields and form validators; animation and transition elements; and an assortment of list, transformation, and state handling elements. Although QML and QtQuick are frequently associated with smartphone development thanks to MeeGo, they are not specific to the mobile space.
The Ubuntu Phone SDK builds on top of the QtQuick base with its own set of components that define application interface elements (such as buttons, tabs, scrollbars, and progress bars), more specific elements for use with lists (such as headers and dividers, and separate elements for those list items that can take one value and those that can include multiple values), and navigation elements geared towards the latest in phone app design trends (such as "sheets" and popovers intended to slide or spring into view and be swiped away by the user).
There is also a separate application theming framework in the SDK, although the documentation warns that it is still under development and subject to change. Currently, theming Ubuntu Phone apps consists of writing a foo.qmltheme file containing CSS-like rules, such as:
.button { color: item.hovered ? "orange" : "#ffcc99"; }

Such rules are then referenced in a separate file that defines the look of the specific elements used in the app. For example, a Button element would refer to the color above with itemStyle.color and pick up the value #ffcc99 (or "orange" while hovered). As one would expect of the visually-conscious Canonical, indications are that Ubuntu phones will ship with a set of pre-defined themes to provide consistency.
Other QML platforms
The theming model is not a big departure from the one used by QtQuick, however. The new QML components defined by the SDK are the principal difference between developing an app for the Ubuntu Phone platform and developing one for any of the other QML-driven platforms. That list includes not only Sailfish and Plasma Active, but also RIM's BlackBerry Cascades and Nokia's still-alive efforts for its MeeGo Harmattan and Symbian platforms. Each of the other platforms has defined its own set of add-on components as well, a fact that did not escape the attention of mobile developers.
Johan Thelin wrote a blog post on January 3 comparing and contrasting the available add-on toolkits by looking at just one component, the CheckBox element. A checkbox is used to capture a binary selection, but Thelin outlined five properties (text, checked, enabled, pressed, and hovered) and three signal types (onCheckedChanged, onClicked, and onPressAndHold) that vary between the available component libraries. This potentially forces developers to rewrite their applications to support more than one QML-based platform.
Nevertheless, Thelin concluded, "as the concepts seems similar, the work to create a cross-platform set of components wrapping the components of each platform should not be an impossible task".
Canonical's Zoltán Balogh was evidently of a similar mind, and popped into the Plasma Active IRC channel on January 9 to discuss the idea with the KDE and Sailfish developers. He subsequently wrote up the outcome of that discussion on the Qt Components mailing list. Balogh, Marco Martin and Aaron Seigo from KDE, and Joona Petrell from Sailfish agreed to continue discussing QML components and APIs with an eye toward "collect[ing] all concepts in various projects" and a "consistent QML component and common API set". Vladimir Minenko from RIM agreed to participate in the discussion as well, as did several developers from Qt's current corporate backer Digia (who are working on a revised set of components to include in Qt 5.1).
One component set to rule them all?
The discussion on the Qt Components list then turned to the task of collecting information on the various components from each platform that were perceived to be of general interest. The list is growing at the Qt Project wiki, but agreement on the principle of cooperation does not make platform differences melt away.
Digia's Jens Bache-Wiig expressed some skepticism that coordination would reap big benefits, noting that the project had "tried this before with Symbian and Meego and partially failed while spending an amazing amount of time arguing over small issues." Nevertheless, Bache-Wiig was "optimistic that we can find at least some shared ground".
Martin Jones argued for defining a "minimal API" that omitted platform-specific properties and signals, but if the early discussions are anything to judge by, differences between the platforms may make even that task a tricky one. For example, Bache-Wiig proposed an initial set of components consisting of Button, ToolButton, CheckBox, Slider, ProgressBar, TextField, TextArea, RadioButton, ButtonRow, Menu, ContextMenu, ToolBar, Switch, and BusyIndicator. But Alan Alpert replied that ToolBar, ToolButton, and ContextMenu could not be used in Cascades, because on that platform the system maintains tighter control over UI elements: "You don't create your own tool bar for your application - it creates the system tool bar for your application when you specify actions for it."
To be sure, RIM may have less market share today than it has in years past, but the other platforms differ as well. Martin Jones pointed out that Sailfish has no "tool bar" or "status bar" at all, and so far has no implementation of radio buttons either. Likewise, Canonical's Zsombor Egri replied that Ubuntu Phone has no "tool bar" either and has yet to make a decision about menus and context menus.
Of course, Sailfish and Ubuntu Phone are new platforms not yet running on released products. For that matter, the widely-anticipated Vivaldi tablet project using Plasma Active came to an abrupt halt in mid-2012. All three projects could decide tomorrow that tool bars are indispensable (or, for that matter, that text entry boxes are passé ... ). Still, it is a welcome sight to see the projects collaborating on the process of developing a component API. Seigo wrote approvingly about the discussion on his blog.
Further down in the same post, he hinted that the collaboration may extend beyond QML components to include "other aspects of the bigger puzzle such as common package formats and delivery strategies." The broad view, he concluded, was that sharing some level of API consistency, even if implemented differently behind the scenes, offered "a far more friendly and interesting story for developers."
Few would argue that offering a friendly and interesting developer story is not critical to the success of a new mobile platform. The bigger obstacle may be attracting hardware partners and mobile carriers — a feat that Ubuntu Phone, Plasma Active, and Sailfish have all yet to accomplish — but providing a partially-shared development story around QML certainly cannot hurt.
Then again, the promise of a device-neutral application platform is also one of the oft-repeated benefits of using HTML5 as a development platform. Here the Ubuntu Phone marketing materials do not paint as clear a picture for developers. HTML5 is touted as an option for developing Ubuntu Phone apps, but so far the SDK and tutorials offer no help for HTML5 developers, and the supporting documentation is silent. Perhaps more is still to come (this is the first release of the Ubuntu Phone SDK, after all), or perhaps Canonical is simply waiting to see where its competitors in the mobile HTML5 game — such as Tizen and Firefox OS — head first.
Fitbit, Linux, and you
Fitbit devices are a family of wearable gadgets that track everyday health metrics (such as motion, calorie burn, and sleep cycles) on a minute-by-minute basis. There are certainly other hardware and software solutions to track such data, but most of them are "sportsband" products, focused on monitoring intense training sessions; the distinguishing feature of the Fitbit is that it monitors the contribution of everyday activity: walking, climbing stairs, and sitting still behind the keyboard. Since the latter activity consumes a large portion of the average software developer's day, the device has unsurprisingly attracted a following among programmers. But as is often the case, Linux users have some hurdles to clear in order to make use of the product.
Fitbit devices work by logging time-stamped readings from internal sensors; the current models incorporate both accelerometers and altimeters to track typical foot traffic. By associating the movement data with some basic facts about the user's size, the service calculates secondary statistics like calorie count, and some logic is used to smooth out the results and to determine when a wearer is (for example) driving on the autobahn rather than running. The sensors themselves are not unique — plenty of smartphones ship with similar offerings — but the devices have gained a loyal following thanks to their low power consumption, small size, and relatively rugged construction.
Sync
Unfortunately the company releases support software only for Windows and Mac OS X. Support software is required because the devices are designed to offload their logs periodically to servers running at fitbit.com, which provides a free web service where users can track their individual metrics over time and compare them against other users. The early Fitbit models came bundled with a USB "base station" that connected wirelessly to the Fitbit devices using the ANT protocol; every so often when the device was in range, it would synchronize the latest activity readings to the web service.
For that first generation of hardware, there is an open source solution for Linux users. In 2011, Kyle Machulis released libfitbit, a Python tool that supports pulling data off of the first-generation Fitbit devices and uploading it to the official web service. In addition to pushing the device's data logs upstream, libfitbit also saves them locally in plain text format. The library does not support the altimeter sensor introduced with the Fitbit "Ultra", however. Machulis has subsequently created OpenYou, an umbrella project housing libfitbit and an array of related health-monitoring device projects — he does not appear to be actively developing libfitbit, but there are several forked branches on GitHub that introduce improvements.
In mid 2012, Paul Burton started another Fitbit sync project called fitbitd that re-implemented libfitbit in C and ran the synchronization service as a daemon, with the client connecting to it over D-Bus. Fitbitd also added support for the Fitbit Ultra's altimeter sensor, allowing users to keep a close eye on their stair-climbing prowess.
But in late 2012 the company updated its line of tracker devices again, and switched radio protocols from ANT to Bluetooth Low Energy (LE). On one hand, the switch to Bluetooth should make implementing Linux support a little simpler, since ANT is proprietary. But on the other, only relatively recent kernels support Bluetooth LE, and thus far neither of the existing open source Fitbit sync projects has undertaken the task of implementing Bluetooth support. The new Fitbit devices do ship with a Bluetooth LE adapter for the PC, so the hardware is in place, but it may be a while before the devices are widespread enough in the developer community for an effort at reverse engineering the Bluetooth traffic to pick up steam.
Nor is there any great hope that the company will provide a Linux synchronization client of its own. Canonical's Jono Bacon reported in January 2012 that he had reached out to Fitbit about writing a Linux driver for the devices, but that after an initial expression of interest, the company stopped replying. Several application developers have asked about Linux support on the Fitbit API development list (most recently in September 2012) and were met with similar silence. In fact, Fitbit has still not introduced an Android synchronization app for the Bluetooth products (admittedly, only recent Android devices support Bluetooth LE), though the company has released an iOS app that synchronizes with the devices over Bluetooth LE.
Data access
Of course, synchronization from a Linux machine is one aspect of using the device, but it is not the only one. For some people, the notion of uploading one's activity data to a remote service is anathema to begin with, and even more so if the data cannot be retrieved. The company offers an API so that developers can link other applications (such as those for jog/run planning or nutrition tracking) to the same user account. The API can be used to retrieve historical data from the web service in JSON format, but it only provides access to daily totals of the covered metrics.
At the moment, the most robust way to retrieve one's data from fitbit.com is through John McLaughlin's Fitbit for Google Apps script, a JavaScript tool that can extract the data directly into a Google Docs spreadsheet. It requires registering as a Fitbit application developer in order to obtain an authorization key, but Fitbit has recommended the script on the mailing list, so the company is clearly not opposed to the idea.
Through the official API, McLaughlin's script can download all of the daily totals from a user account going all the way back to the account's creation. The retrieval period is configurable, so users can run the script once to grab the complete history, then run shorter updates for subsequent daily or weekly additions. The result is a nicely formatted table of time-series data suitable for any inspection or graphing tool.
Daily summary data is certainly useful, but the minute-by-minute data is far more interesting, particularly when it comes to analyzing patterns of activity over the course of the day. Libfitbit allows owners of the ANT models to save their full sensor logs locally, but so far Bluetooth device owners are out of luck. It is conceivable that the company regards high-resolution data as something it can monetize, since the basic data-logging service is free of charge (Fitbit currently offers a paid "pro" version of the web service that somehow acts as a personal trainer to the user; I was not intrigued enough by that idea to pony up the cash necessary to try it out).
In October 2011, Fitbit announced that it would begin allowing access to minute-by-minute "intraday" data "on a case by case basis to a limited number of developers". It is not clear what happened after that announcement; as of January 3, 2013, the company still had not opened intraday data to the public. The developer wiki mentions a Partner API which still requires that the developer contact Fitbit and ask for access, but the limited number of developers granted that access do not seem keen to discuss it.
Others have inquired on the mailing list about "raw" access to the device's sensor logs before they are uploaded to the web service. This would be even more appealing to the privacy conscious, and for those who want to do their own data mining, it would be much faster to simply capture the data locally than to wait for it to take a round trip through the server. Here again Fitbit expressed interest in the concept, but has not subsequently acted on it.
Alternative hardware
Fitbit is far from the only player in the health-monitoring hardware market, but none of the other manufacturers are particularly friendly to Linux and free software either. Machulis's OpenYou project hosts a number of other libraries for communicating with personal monitoring devices, such as Nike's training-oriented Fuelband and the BodyBugg armband. Like libfitbit, both are designed to allow extraction of log data before it is uploaded to the relevant web server, but neither is actively developed. Similarly, there was a 2009-era effort to reverse engineer support for personal monitoring devices manufactured by BodyMedia, but it has not been updated in several years.
The "Up" hardware devices from Jawbone offer a slightly better
experience. There is no Linux software available, but the full data
set can be downloaded from the web service in CSV form. Motorola's
motoACTV is an Android-based wristwatch with health monitoring
features, and it is possible to root the device and side-load third
party applications. Perhaps the strangest entry in the field is the
Zeo line of sleep-monitoring headbands. The company released an
official library
to decrypt the logged data and export it to CSV, and a separate project to
access the raw Zeo data in real-time. Although it was initially advertised
as "under an open source library
" it can currently only
be accessed by agreeing to a severely limiting terms and
conditions document. This dichotomy is perhaps a simple
misunderstanding about licensing, but that does not resolve the
situation for free software sticklers.
Another alternative that is frequently proposed is to make use of the position and accelerometer sensors already found in many smartphones. There has been at least one effort to write an open source pedometer application for Android, but both the reviews of the app and the issues filed on its bug tracker reveal a key difficulty: apparently many Android phones do not allow an application to access and record sensor data when the phone is in an inactive state. Since the goal is to track movement over the course of an entire day, this restriction can render the app useless if one's phone manufacturer chooses to enable it.
Exactly what Fitbit plans to do with the API and with raw sensor data remains an open question. The company previewed another set of new devices at the Consumer Electronics Show in January 2013, but did not announce changes to the developer program. Consequently, Linux users in possession of the newer devices will need to start capturing and decoding Bluetooth LE traffic in order to make use of their hardware. The good news is that so far all of the Fitbit models successfully reverse engineered appear to use the same data format. The bad news is that capturing and reading Bluetooth traffic logs is such a sedentary activity.
A discordant symphony
Last May, IBM announced the completion of its long-awaited contribution of the source code for its "Symphony" OpenOffice.org fork to the Apache Software Foundation. More than six months later, there is no freely-licensed version of Symphony available, and some observers, at least, see no evident signs that any such release is in the works. A look at the situation reveals gears that grind slowly indeed, leading to tension that is not helped by some unfortunate bad feelings between rival development projects.

Apache OpenOffice (AOO) and LibreOffice are both forks of the old OpenOffice.org code base. There is not always a great deal of love lost between these two projects, which, with some justification, see themselves as being in direct competition with each other. That situation got a little worse recently when de facto AOO leader Rob Weir complained about talk in the other camp.
Rob raised the idea of putting out a corrective blog post, but the project consensus seemed to be to just let things slide. Clearly, though, the AOO developers were unhappy with how the "usual misinformed suspects" were describing their work.
The specific suspect in question is Italo Vignoli, a director of the Document Foundation and spokesperson for the LibreOffice project. His full posting can be found on the LibreOffice marketing list. His main complaint was that the Symphony code remained inaccessible to the world as a whole; IBM, he said, did not donate anything to the community at all. This claim might come as a surprise to the casual observer. A quick search turns up Apache's Symphony page; from there, getting the source is just a matter of a rather less quick 4GB checkout from a Subversion repository. Once one digs a little further, though, the situation becomes a bit less clear.
The Apache Software Foundation releases code under the Apache license; they are, indeed, rather firm on that point. The Symphony repository, though, as checked out from svn.apache.org, contains nearly 3,600 files with the following text:
 * Licensed Materials - Property of IBM.
 * (C) Copyright IBM Corporation 2003, 2011. All Rights Reserved.
That, of course, is an entirely non-free license header. Interestingly, over 2,000 of those files also have headers indicating that they are distributable under the GNU Lesser General Public License (version 3). These files, in other words, contain conflicting license information but neither case (proprietary or LGPLv3) is consistent with the Apache license. So it would not be entirely surprising to see a bit of confusion over what IBM has really donated.
The conflicting licenses are almost certainly an artifact of how Symphony was developed. IBM purchased from Sun the right to take the code proprietary; when IBM's code was added to existing, LGPLv3-licensed files, the new headers were added without removing the old. Since this code has all been donated to the Foundation, clearing up the confusion should just be a matter of putting in new license headers. But that has not yet happened.
What is going on here is reminiscent of the process seen when AOO first began as an Apache project. Then, too, a pile of code was donated to the Apache Software Foundation, but it did not become available under the Apache license until the first official release happened, quite some time later. In between there unfolded an obscure internal process where the Foundation examined the code, eliminated anything that it couldn't relicense or otherwise had doubts about, and meditated on the situation in general. To an outsider, the "Apache Way" can seem like a bureaucratic way indeed. It is unsurprising to see this process unfold again with a brand new massive corporate code dump.
There is an added twist this time, though. In June, the project considered two options for the handling of the Symphony code dump. One was the "slow merge" where features would be taken one-by-one from the Symphony tree; the alternative was to switch to Symphony as the new code base, then merge newer OpenOffice.org and AOO features in that direction instead. The "slow" path was chosen, and it has proved to be true to its name. Rob noted 167 bug fixes that have found their way into AOO from Symphony, but there do not appear to be any significant features that have made the move at this point.
One assumes that will change over time. The code does exist, the Foundation does have the right to relicense it, and there are developers who, in time, should be able to port the most interesting parts of it and push it through the Apache process. One might wonder why almost none of that work appears to be happening. If the project was willing to do the work to rebase entirely on top of the Symphony code, it must have thought that some significant resources were available. What are those resources doing instead?
Rob's mention of "larger pieces that will be merged in branches first" points at one possible answer: that work is being done, we just aren't allowed to see it yet. Given the way the AOO and LibreOffice projects view each other, and given that the Apache license gives LibreOffice the right to incorporate AOO code, it would not be surprising to see AOO developers working to defer the release of this code under their license for as long as possible. It would be embarrassing for LibreOffice to show up with Symphony features first, after all.
On the other side, it is not at all hard to imagine that some LibreOffice developers would be happy to embarrass AOO in just that way. Their complaint is not that IBM did not donate the code; what really makes them unhappy is that LibreOffice cannot take that code and run with it yet. It must certainly be frustrating to see useful code languish because the AOO project and the Apache Software Foundation are taking their time in getting around to putting it under the intended license. But IBM chose a channel for the release of this code that puts its ultimate fate under the control of those entities; there is little to be done to change that.
Competition between software projects is not necessarily a bad thing; it can motivate development and enable the exploration of different approaches to a problem. Thus far, it is not clear that the rivalry between AOO and LibreOffice has achieved any of that. Instead, it seems to create duplication of work and inter-project hostility. The grumbling over the Symphony source, which meanwhile sits unused by anybody, seems like another example of that dynamic. With luck, the AOO developers will find a way to release the bulk of Symphony as free software, but one should not expect it to happen in a hurry.
Security
Keeping administrators up to date
Keeping up with distribution security updates is typically straightforward, but finding out about vulnerable packages before they have been patched can be rather harder. There is generally a lag between the report of a vulnerability and the availability of an updated package. In that window, there might well be steps that administrators could take to mitigate or work around the problem, but they can only do so if they are aware of the problem. In our recent article that looked at distribution response to the MoinMoin and Rails vulnerabilities, there was a suggestion that distributions could do more to help notify administrators of known-but-unpatched security holes. As it turns out, a comment on that article led us to one example of just such an early warning system.
The tool in question is debsecan (Debian security analyzer), which helps Debian administrators keep up with the vulnerabilities reported against the packages they have installed. By consulting the Debian security bug tracker, debsecan gets information about entries in the CVE (Common Vulnerabilities and Exposures) and National Vulnerability Database lists that it can correlate with the packages installed on the system. It runs hourly by default, and can email interested parties with its results once per day.
Debsecan was written by Florian Weimer, starting back at the end of 2005; at this point, it is fairly stable and has remained largely unchanged since mid-2010. The program is less than 1500 lines of Python, with just a few dependencies (e.g., libapt-pkg bindings). That dependency and the reliance on the bug tracker make it quite Debian-specific, of course, but the idea behind it is more widely applicable.
Obviously, debsecan depends on the information in the security bug tracker being kept up to date. That is handled by the Debian security team, though volunteers are welcome. The team has put together an introduction to the security bug tracker that describes the process it uses to track security problems for Debian. Other distributions also track security problems, of course, but tools like debsecan that specifically look for problems that have not yet been patched are not common.
Ubuntu carries debsecan in its repositories, but it is too Debian-specific to be directly useful on Ubuntu and, so far, efforts to Ubuntu-ize it have not gone anywhere. At this point, the package is targeted for removal from Ubuntu, because it "conveys information that is just plain wrong" for Ubuntu. For other distributions, package managers (e.g., yum, zypper) will list available updates, and can often filter that list based on security updates, but they do not list unpatched packages.
It is, of course, best if a distribution can keep up with the security problems in its packages, but that can be difficult at times. As with the recent MoinMoin and Rails vulnerabilities, though, there are often ways to mitigate a particular problem—if the administrator is alerted. Even if there is no workaround available, an administrator could choose to completely disable the affected package (or install a patched version from source) while awaiting a distribution update. There is some similarity with the arguments in favor of "full disclosure" here: essentially, the more each individual knows about the vulnerabilities of their software, the more options for handling the problem they have. Without that information, those options are severely limited—in fact, largely non-existent.
One could imagine a cross-distribution project that gathered the same kind of information as the Debian security bug tracker, but in a more distribution-independent fashion. Each distribution could have a tool that processed that data, correlated it to its package names and versions, and then reported on what it found. It could even potentially be extended to help track software that is installed from source.
Keeping up with security updates for source installations can definitely be a problem area. While many larger projects have advisory announcement mailing lists, there are plenty of smaller projects that aren't quite as formal. That means that there are multiple sources of security advisories that an administrator needs to keep track of. By maintaining some kind of list of locally installed packages, coupled with a central storehouse of vulnerabilities, a tool like debsecan could also be used to provide alerts about security holes in locally built, source-installed packages.
There are plenty of reasons that administrators will install from source—new features and bug fixes, compatibility with other packages, and so on. Those packages are often things like fast-moving web frameworks or applications that have high risk profiles. A tool that helped administrators keep up with the security issues in source packages, while also integrating the distribution package vulnerabilities and updates, would be a real boon for Linux.
Brief items
Security quotes of the week
In the end, the old gods of information scarcity and control will indeed die, and more open models will win the future.
New vulnerabilities
389-ds-base: ACL restriction bypass
Package(s): 389-ds-base
CVE #(s): CVE-2012-4450
Created: January 15, 2013
Updated: March 11, 2013
Description: From the CVE entry:
389 Directory Server 1.2.10 does not properly update the ACL when a DN entry is moved by a modrdn operation, which allows remote authenticated users with certain permissions to bypass ACL restrictions and access the DN entry.
asterisk: denial of service
Package(s): asterisk
CVE #(s): CVE-2012-5976 CVE-2012-5977
Created: January 14, 2013
Updated: January 30, 2013
Description: From the CVE entries:
Multiple stack consumption vulnerabilities in Asterisk Open Source 1.8.x before 1.8.19.1, 10.x before 10.11.1, and 11.x before 11.1.2; Certified Asterisk 1.8.11 before 1.8.11-cert10; and Asterisk Digiumphones 10.x-digiumphones before 10.11.1-digiumphones allow remote attackers to cause a denial of service (daemon crash) via TCP data using the (1) SIP, (2) HTTP, or (3) XMPP protocol. (CVE-2012-5976)
Asterisk Open Source 1.8.x before 1.8.19.1, 10.x before 10.11.1, and 11.x before 11.1.2; Certified Asterisk 1.8.11 before 1.8.11-cert10; and Asterisk Digiumphones 10.x-digiumphones before 10.11.1-digiumphones, when anonymous calls are enabled, allow remote attackers to cause a denial of service (resource consumption) by making anonymous calls from multiple sources and consequently adding many entries to the device state cache. (CVE-2012-5977)
autofs: denial of service
Package(s): autofs
CVE #(s): CVE-2012-2697
Created: January 14, 2013
Updated: January 17, 2013
Description: From the Red Hat advisory:
A bug fix included in RHBA-2012:0264 introduced a denial of service flaw in autofs. When using autofs with LDAP, a local user could use this flaw to crash autofs, preventing future mount requests from being processed until the autofs service was restarted.
conga: leaks authentication credentials
Package(s): conga
CVE #(s): CVE-2012-3359
Created: January 14, 2013
Updated: January 17, 2013
Description: From the Red Hat advisory:
It was discovered that luci stored usernames and passwords in session cookies. This issue prevented the session inactivity timeout feature from working correctly, and allowed attackers able to get access to a session cookie to obtain the victim's authentication credentials.
drupal7-context: information disclosure
Package(s): drupal7-context
CVE #(s): CVE-2012-5655
Created: January 14, 2013
Updated: January 21, 2013
Description: From the CVE entry:
The Context module 6.x-3.x before 6.x-3.1 and 7.x-3.x before 7.x-3.0-beta6 for Drupal does not properly restrict access to block content, which allows remote attackers to obtain sensitive information via a crafted request.
freeciv: denial of service
Package(s): freeciv
CVE #(s): CVE-2012-6083
Created: January 15, 2013
Updated: January 16, 2013
Description: From the Mageia advisory:
Malformed network packets could cause denial of service (memory exhaustion or CPU-bound loop) in Freeciv before 2.3.3. See the Freeciv announcement for more details.
java: multiple vulnerabilities
Package(s): java-1.7.0-oracle
CVE #(s): CVE-2012-3174 CVE-2013-0422
Created: January 15, 2013
Updated: January 25, 2013
Description: From the CVE entries:
Unspecified vulnerability in Oracle Java 7 before Update 11 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors, a different vulnerability than CVE-2013-0422. NOTE: some parties have mapped CVE-2012-3174 to an issue involving recursive use of the Reflection API, but that issue is already covered as part of CVE-2013-0422. This identifier is for a different vulnerability whose details are not public as of 20130114. (CVE-2012-3174)
Multiple vulnerabilities in Oracle Java 7 before Update 11 allow remote attackers to execute arbitrary code by (1) using the public getMBeanInstantiator method in the JmxMBeanServer class to obtain a reference to a private MBeanInstantiator object, then retrieving arbitrary Class references using the findClass method, and (2) using the Reflection API with recursion in a way that bypasses a security check by the java.lang.invoke.MethodHandles.Lookup.checkSecurityManager method due to the inability of the sun.reflect.Reflection.getCallerClass method to skip frames related to the new reflection API, as exploited in the wild in January 2013, as demonstrated by Blackhole and Nuclear Pack, and a different vulnerability than CVE-2012-4681 and CVE-2012-3174. NOTE: some parties have mapped the recursive Reflection API issue to CVE-2012-3174, but CVE-2012-3174 is for a different vulnerability whose details are not public as of 20130114. CVE-2013-0422 covers both the JMX/MBean and Reflection API issues. NOTE: it was originally reported that Java 6 was also vulnerable, but the reporter has retracted this claim, stating that Java 6 is not exploitable because the relevant code is called in a way that does not bypass security checks. NOTE: as of 20130114, a reliable third party has claimed that the findClass/MBeanInstantiator vector was not fixed in Oracle Java 7 Update 11. If there is still a vulnerable condition, then a separate CVE identifier might be created for the unfixed issue. (CVE-2013-0422)
See the Oracle Security Alert for additional information.
kde-filesystem: insecure build flags
Package(s): kde-filesystem
CVE #(s): (none)
Created: January 14, 2013
Updated: January 16, 2013
Description: From the Red Hat bugzilla:
Sync FFLAGS and LDFLAGS in the %cmake_kde4 macro with redhat-rpm-config
kexec-tools: executable stack
Package(s): kexec-tools
CVE #(s): (none)
Created: January 15, 2013
Updated: January 16, 2013
Description: Fedora fixed an executable stack issue for ppc32 in kexec-tools 2.0.3-64.
mozilla: cross-site scripting
Package(s): iceape, thunderbird, seamonkey, firefox
CVE #(s): CVE-2013-0751
Created: January 15, 2013
Updated: February 18, 2013
Description: From the CVE entry:
Mozilla Firefox before 18.0 on Android and SeaMonkey before 2.15 do not restrict a touch event to a single IFRAME element, which allows remote attackers to obtain sensitive information or possibly conduct cross-site scripting (XSS) attacks via a crafted HTML document.
mysql: authentication bypass
Package(s): mysql
CVE #(s): CVE-2012-4452
Created: January 14, 2013
Updated: January 17, 2013
Description: From the CVE entry:
MySQL 5.0.88, and possibly other versions and platforms, allows local users to bypass certain privilege checks by calling CREATE TABLE on a MyISAM table with modified (1) DATA DIRECTORY or (2) INDEX DIRECTORY arguments that are originally associated with pathnames without symlinks, and that can point to tables created at a future time at which a pathname is modified to contain a symlink to a subdirectory of the MySQL data home directory, related to incorrect calculation of the mysql_unpacked_real_data_home value. NOTE: this vulnerability exists because of a CVE-2009-4030 regression, which was not omitted in other packages and versions such as MySQL 5.0.95 in Red Hat Enterprise Linux 6.
OpenIPMI: invalid permissions
Package(s): OpenIPMI
CVE #(s): CVE-2011-4339
Created: January 14, 2013
Updated: January 17, 2013
Description: From the CVE entry:
ipmievd (aka the IPMI event daemon) in OpenIPMI, as used in the ipmitool package 1.8.11 in Red Hat Enterprise Linux (RHEL) 6, Debian GNU/Linux, Fedora 16, and other products uses 0666 permissions for its ipmievd.pid PID file, which allows local users to kill arbitrary processes by writing to this file.
pl: code execution
Package(s): pl
CVE #(s): CVE-2012-6090 CVE-2012-6089
Created: January 15, 2013
Updated: December 6, 2013
Description: From the CVE entries:
Multiple stack-based buffer overflows in the expand function in os/pl-glob.c in SWI-Prolog before 6.2.5 and 6.3.x before 6.3.7 allow remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code via a crafted filename. (CVE-2012-6090)
Multiple stack-based buffer overflows in the canoniseFileName function in os/pl-os.c in SWI-Prolog before 6.2.5 and 6.3.x before 6.3.7 allow remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code via a crafted filename. (CVE-2012-6089)
proftpd-dfsg: privilege escalation
Package(s): proftpd-dfsg
CVE #(s): CVE-2012-6095
Created: January 14, 2013
Updated: April 8, 2013
Description: From the Debian advisory:
It has been discovered that in ProFTPd, an FTP server, an attacker on the same physical host as the server may be able to perform a symlink attack allowing to elevate privileges in some configurations.
qemu: buffer overflow
Package(s): qemu-kvm, qemu
CVE #(s): CVE-2012-6075
Created: January 16, 2013
Updated: March 13, 2013
Description: From the Debian advisory:
It was discovered that the e1000 emulation code in QEMU does not enforce frame size limits in the same way as the real hardware does. This could trigger buffer overflows in the guest operating system driver for that network card, assuming that the host system does not discard such frames (which it will by default).
qt: confusing SSL error messages
Package(s): qt
CVE #(s): CVE-2012-6093
Created: January 14, 2013
Updated: February 7, 2013
Description: From the Red Hat bugzilla:
A security flaw was found in the way QSslSocket implementation of the Qt, a software toolkit for applications development, performed certificate verification callbacks, when Qt libraries were used with different OpenSSL version than the one, they were compiled against. In such scenario, this would result in a connection error, but with the SSL error list to contain QSslError:NoError instead of proper reason of the error. This might result in a confusing error being presented to the end users, possibly encouraging them to ignore the SSL errors for the site the connection was initiated against.
rails: code execution and more
Package(s): rails
CVE #(s): CVE-2013-0156
Created: January 10, 2013
Updated: March 16, 2015
Description: From the Debian advisory:
It was discovered that Rails, the Ruby web application development framework, performed insufficient validation on input parameters, allowing unintended type conversions. An attacker may use this to bypass authentication systems, inject arbitrary SQL, inject and execute arbitrary code, or perform a DoS attack on the application. Lots more information can be found in the Rails advisory and this analysis.
rubygem-activerecord: sql injection
Package(s): rubygem-activerecord
CVE #(s): CVE-2012-6496
Created: January 15, 2013
Updated: January 22, 2014
Description: From the CVE entry:
SQL injection vulnerability in the Active Record component in Ruby on Rails before 3.0.18, 3.1.x before 3.1.9, and 3.2.x before 3.2.10 allows remote attackers to execute arbitrary SQL commands via a crafted request that leverages incorrect behavior of dynamic finders in applications that can use unexpected data types in certain find_by_ method calls.
tcl-snack: code execution
Package(s): tcl-snack
CVE #(s): CVE-2012-6303
Created: January 14, 2013
Updated: February 26, 2015
Description: From the Secunia Advisory:
Two vulnerabilities have been discovered in Snack Sound Toolkit, which can be exploited by malicious people to compromise a user's system. The vulnerabilities are caused due to missing boundary checks in the "GetWavHeader()" function (generic/jkSoundFile.c) when parsing either format sub-chunks or unknown sub-chunks. This can be exploited to cause a heap-based buffer overflow via specially crafted WAV files with overly large chunk sizes specified. Successful exploitation may allow execution of arbitrary code.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.8-rc3, released on January 9. "Anyway, another week, another -rc. A fairly normal-sized one." Changesets continue to flow into the mainline repository; along with the usual fixes, they include a new driver for Wilocity wil6210-based WiFi cards.
Stable updates: 3.0.58, 3.4.25, and 3.7.2 were released on January 11; 3.2.37 came out on January 16. Massive updates set to become 3.0.59, 3.4.26, 3.5.7.3, and 3.7.3 are all in the review process as of this writing; they can be expected on or after January 17.
Quotes of the week
Kernel development news
GPIO in the kernel: an introduction
A GPIO (general-purpose I/O) device looks like the most boring sort of peripheral that a computer might offer. It is a single electrical signal that the CPU can either set to one of two values — zero or one, naturally — or read one of those values from (or both). Either way, a GPIO does not seem like a particularly expressive device. But, at their simplest, GPIOs can be used to control LEDs, reset lines, or pod-bay door locks. With additional "bit-banging" logic, GPIOs can be combined to implement higher-level protocols like i2c or DDC — a frequent occurrence on contemporary systems. GPIOs are thus useful in a lot of contexts.

GPIO lines seem to be especially prevalent in embedded systems; even so, there never seems to be enough of them. As one might expect, a system with dozens (or even hundreds) of GPIOs needs some sort of rational abstraction for managing them. The kernel has had such a mechanism since 2.6.21 (it was initially added by David Brownell). The API has changed surprisingly little since then, but that period of relative stasis may be about to come to an end. The intended changes are best understood in the context of the existing API, though, so that is what this article will cover. Subsequent installments will look at how the GPIO API may evolve in the near future.
Naturally, there is an include file for working with GPIOs:
#include <linux/gpio.h>
In current kernels, every GPIO in the system is represented by a simple unsigned integer. There is no provision for somehow mapping a desired function ("the sensor power line for the first camera device," say) onto a GPIO number; the code must come by that knowledge by other means. Often that is done through a long series of macro definitions; it is also possible to pass GPIO numbers through platform data or a device tree.
GPIOs must be allocated before use, though the current implementation does not enforce this requirement. The basic allocation function is:
int gpio_request(unsigned int gpio, const char *label);
The gpio parameter indicates which GPIO is required, while label associates a string with it that can later appear in sysfs. The usual convention applies: a zero return code indicates success; otherwise the return value will be a negative error number. A GPIO can be returned to the system with:
void gpio_free(unsigned int gpio);
There are some variants of these functions; gpio_request_one() can be used to set the initial configuration of the GPIO, and gpio_request_array() can request and configure a whole set of GPIOs with a single call. There are also "managed" versions (devm_gpio_request(), for example) that automatically handle cleanup if the developer forgets.
Some GPIOs are used for output, others for input. A suitably-wired GPIO can be used in either mode, though only one direction is active at any given time. Kernel code must inform the GPIO core of how a line is to be used; that is done with these functions:
int gpio_direction_input(unsigned int gpio);
int gpio_direction_output(unsigned int gpio, int value);
In either case, gpio is the GPIO number. In the output case, the value of the GPIO (zero or one) must also be specified; the GPIO will be set accordingly as part of the call. For both functions, the return value is again zero or a negative error number. The direction of (suitably capable) GPIOs can be changed at any time.
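To make the sequence concrete, here is a minimal sketch of code that claims a line and configures it as an output; the GPIO number and label are hypothetical, and real code would obtain the number from platform data or a device tree:

#include <linux/gpio.h>

/* Hypothetical GPIO number, for illustration only. */
#define STATUS_LED_GPIO 42

static int claim_status_led(void)
{
    int ret;

    /* Reserve the line; the label will show up in sysfs. */
    ret = gpio_request(STATUS_LED_GPIO, "status-led");
    if (ret)
        return ret;

    /* Configure it as an output, initially driven low. */
    ret = gpio_direction_output(STATUS_LED_GPIO, 0);
    if (ret)
        gpio_free(STATUS_LED_GPIO);
    return ret;
}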
For input GPIOs, the current value can be read with:
int gpio_get_value(unsigned int gpio);
This function returns the value of the provided gpio; it has no provision for returning an error code. It is assumed (correctly in almost all cases) that any errors will be found when gpio_direction_input() is called, so checking the return value from that function is important.
Setting the value of output GPIOs can always be done using gpio_direction_output(), but, if the GPIO is known to be in output mode already, gpio_set_value() may be a bit more efficient:
void gpio_set_value(unsigned int gpio, int value);
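Continuing with the hypothetical LED claimed above, toggling the line once it is in output mode is simply a matter of calling gpio_set_value() (msleep() comes from <linux/delay.h>):

#include <linux/delay.h>

/* Blink the LED from the earlier sketch; assumes the line is
   already configured as an output. */
static void blink_status_led(void)
{
    gpio_set_value(STATUS_LED_GPIO, 1);
    msleep(500);
    gpio_set_value(STATUS_LED_GPIO, 0);
}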
Some GPIO controllers can generate interrupts when an input GPIO changes value. In such cases, code wishing to handle such interrupts should start by determining which IRQ number is associated with a given GPIO line:
int gpio_to_irq(unsigned int gpio);
The given gpio must have been obtained with gpio_request() and put into the input mode first. If there is an associated interrupt number, it will be passed back as the return value from gpio_to_irq(); otherwise a negative error number will be returned. Once obtained in this manner, the interrupt number can be passed to request_irq() to set up the handling of the interrupt.
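Putting those pieces together, a rough sketch of connecting a GPIO line to an interrupt handler might look like this; the handler and the falling-edge trigger are illustrative assumptions, and the line must already have been requested and placed into input mode:

#include <linux/gpio.h>
#include <linux/interrupt.h>

static irqreturn_t button_isr(int irq, void *dev_id)
{
    /* A real handler would debounce and act on the event. */
    return IRQ_HANDLED;
}

static int wire_up_button(unsigned int gpio)
{
    int irq = gpio_to_irq(gpio);

    if (irq < 0)
        return irq;
    return request_irq(irq, button_isr, IRQF_TRIGGER_FALLING,
                       "button", NULL);
}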
Finally, the GPIO subsystem is able to represent GPIO lines via a sysfs hierarchy, allowing user space to query (and possibly modify) them. Kernel code can cause a specific GPIO to appear in sysfs with:
int gpio_export(unsigned int gpio, bool direction_may_change);
The direction_may_change parameter controls whether user space is allowed to change the direction of the GPIO; in many cases, allowing that control would be asking for bad things to happen to the system as a whole. A GPIO can be removed from sysfs with gpio_unexport() or given another name with gpio_export_link().
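For completeness, exporting the hypothetical LED from the earlier sketches might look like the following; passing false prevents user space from reversing the line's direction:

#include <linux/gpio.h>

/* Expose the LED under /sys/class/gpio, direction locked. */
static int export_status_led(void)
{
    return gpio_export(STATUS_LED_GPIO, false);
}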
And that is an overview of the kernel's low-level GPIO interface. A number of details have naturally been left out; see Documentation/gpio.txt for a more thorough description. Also omitted is the low-level driver's side of the API, by which GPIO lines can be made available to the GPIO subsystem; covering that API may be the subject of a future article. The next installment, though, will look at a couple of perceived deficiencies in the above-described API and how they might be remedied.
Signing ELF binaries
As part of the effort to support UEFI secure boot on Linux, Matthew Garrett proposed a number of restrictions on kernel features so that signed kernels could not be used to circumvent secure boot. Many of those restrictions were fairly uncontroversial, but disabling kexec() was not one of them, so it was dropped in a later patch set. At the time, there was discussion of how to support kexec() in a secure boot world; Vivek Goyal recently posted an RFC patch set to start down that path.
The kexec() system call is used to replace the running kernel with a different program. It can be used to boot a new kernel without going through the BIOS or other firmware, which is exactly what gets it into trouble for secure boot. A running kernel that has been verified by the secure boot mechanism (and thus is trusted) could boot any unsigned, unverified kernel by way of kexec(). The concern is that it would be used to boot Windows in an insecure environment while making it believe it was running under secure boot—exactly what secure boot is meant to prevent. That, in turn, could lead to Linux bootloaders getting blacklisted, which would make it more difficult to boot Linux on hardware certified for Windows 8.
Goyal's patches add the ability to cryptographically sign ELF executables, then have the kernel verify those signatures. If the binary is signed and the signature verifies, it will be executed. While the patch does not yet implement this, the idea is that a signed binary could be given additional capabilities if it verifies—capabilities that would enable kexec(), for example. If the binary is unsigned, it will always be executed. Only if a signed binary fails to verify does it get blocked from execution.
The patches contain a signelf utility that puts a signature based on the private key argument into a .signature ELF section. The signature is calculated by hashing the contents of the PT_LOAD ELF segments, then cryptographically signing the result. It is based on the module signing code that was recently added to the kernel, but instead of just tacking the signature on at the end of the binary, it puts it into the .signature section.
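To make that concrete, here is a rough user-space sketch of how the PT_LOAD segments of an in-memory (64-bit) ELF image might be walked and fed to a hash; the function names are hypothetical, and the actual signelf utility in Goyal's patch set should be consulted for the real details:

#include <elf.h>
#include <stddef.h>

/* Hand each PT_LOAD segment to a caller-supplied hash-update
   callback. Simplified: 64-bit ELF only, no header validation. */
static void hash_load_segments(const unsigned char *image,
                               void (*update)(const void *, size_t))
{
    const Elf64_Ehdr *ehdr = (const Elf64_Ehdr *)image;
    const Elf64_Phdr *phdr = (const Elf64_Phdr *)(image + ehdr->e_phoff);
    int i;

    for (i = 0; i < ehdr->e_phnum; i++)
        if (phdr[i].p_type == PT_LOAD)
            update(image + phdr[i].p_offset, phdr[i].p_filesz);
}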
Since any shared libraries used by an executable cannot be trusted (so far, at least, there is no mechanism to verify those libraries), only statically linked executables can be signed and verified. The patches do not stop binaries from using dlopen() directly, however, so Goyal said binaries that do so should not be signed. He is targeting the /sbin/kexec binary that is used to launch kdump, so that users can still get crash dumps, even in a secure-boot-enabled system, but there are other possible uses as well.
When the binfmt_elf loader in the kernel detects a binary with the .signature section, it locks the pages of the executable into memory and verifies the signature. Goyal is trying to avoid situations where the binary is modified after the verification has been done, which is why the executable is locked into memory. If the signature does not verify, the process is killed; unsigned binaries are simply executed as usual.
Beyond just adding the capability for kexec(), there are some other pieces of the puzzle that aren't addressed in the patches. The biggest is the need to disable ptrace() on signed binaries. Otherwise, the signed binary could be subverted in various ways—changing the binary passed to kexec(), for example. In addition, the "to do" list has some key and keyring related issues that need to be sorted out.
There is already a mechanism in the kernel to verify the signature of various kinds of files, though. The Integrity Measurement Architecture (IMA) appraisal extension that was added in Linux 3.7 does much of what Goyal needs, as was pointed out by IMA maintainer Mimi Zohar. While the integrity subsystem targets measuring and verifying the whole system, it already does most of the kinds of signature operations Goyal is looking to add. On the other hand, features like disabling ptrace(), locking the binary into memory, and setting capabilities based on signature verification are well beyond the scope of the integrity subsystem. Goyal is currently looking into using the integrity features and adding secure-boot-specific features on top.
Losing the ability to use kexec() on secure boot systems would be rather painful. While Garrett's patches do not actually make that change (because of the outcry from other kernel developers), any distribution that is trying to enable secure boot is likely to do so. Finding a way to support that use case, without unduly risking the blacklist wrath of Microsoft, would be good.
Deadlocking the system with asynchronous functions
Deadlocks in the kernel have become a relatively rare occurrence in recent years. The credit largely belongs to the "lockdep" subsystem, which watches locking activity and points out patterns that could lead to deadlocks when the timing goes wrong. But locking is not the source of all deadlock problems, as was shown by an old deadlock bug that was only recently found and fixed.

In early January, Alex Riesen reported some difficulties with USB devices on recent kernels; among other things, it was easy to simply lock up the system altogether. A fair amount of discussion followed before Ming Lei identified the problem. It comes down to the block layer's use of the asynchronous function call infrastructure used to increase parallelism in the kernel.
The asynchronous code is relatively simple in concept: a function that is to be run asynchronously can be called via async_schedule(); it will then run in its own thread at some future time. There are various ways of waiting until asynchronously called functions have completed; the most thorough is async_synchronize_full(), which waits until all outstanding asynchronous function calls anywhere in the kernel have completed. There are ways of waiting for specific functions to complete, but, if the caller does not know how many asynchronous function calls may be outstanding, async_synchronize_full() is the only way to be sure that they are all done.
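For the curious, a trivial kernel module using this interface might look something like the following sketch; the probe_one() function is a made-up stand-in for real device-setup work, but async_schedule() and async_synchronize_full() are the real interfaces declared in <linux/async.h>:

    /* async_demo.c: a minimal demonstration of the asynchronous
       function call API. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/async.h>
    #include <linux/delay.h>

    static void probe_one(void *data, async_cookie_t cookie)
    {
        msleep(100);    /* stands in for slow device-setup work */
        pr_info("async_demo: probed device %ld\n", (long) data);
    }

    static int __init async_demo_init(void)
    {
        long i;

        /* Start three "probes"; each runs in its own thread. */
        for (i = 0; i < 3; i++)
            async_schedule(probe_one, (void *) i);

        /* Block until every outstanding asynchronous function call,
           anywhere in the kernel, has completed. */
        async_synchronize_full();
        return 0;
    }

    static void __exit async_demo_exit(void)
    {
    }

    module_init(async_demo_init);
    module_exit(async_demo_exit);
    MODULE_LICENSE("GPL");

Each async_schedule() call here starts probe_one() in a separate thread; the init function then blocks until all of them (and any other asynchronous calls pending in the system) are done.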
The block layer in the kernel makes use of I/O schedulers to organize and optimize I/O operations. There are several I/O schedulers available; they can be switched at run time and can be loaded as modules. When the block layer finds that it needs an I/O scheduler that is not currently present in the system, it will call request_module() to ask user space to load it. The module loader, in turn, will call async_synchronize_full() at the end of the loading process; it needs to ensure that any asynchronous functions called by the newly loaded module have completed so that the module will be fully ready by the time control returns to user space.
So far so good, but there is a catch. When a new block device is discovered, the block layer will do its initial work (partition probing and such) in an asynchronous function of its own. That work requires performing I/O to the device; that in turn, requires an I/O scheduler. So the block layer may well call request_module() from code that is already running as an asynchronous function. And that is where things turn bad.
The problem is that the (asynchronous) block code must wait for request_module() to complete before it can continue with its work. As described above, the module loading process involves a call to async_synchronize_full(). That call will wait for all asynchronous functions, including the one that called request_module() in the first place, and which is still waiting for request_module() to complete. Expressed more concisely, the sequence looks like this:
1. sd_probe() calls async_schedule() to scan a device asynchronously.
2. The scanning process tries to read data from the device.
3. The block layer realizes it needs an I/O scheduler, so, in elevator_get(), it calls request_module() to load the relevant kernel module.
4. The module is loaded and initializes itself.
5. At the end of the loading process, the module loader calls async_synchronize_full() to wait for any asynchronous functions called by the just-loaded module.
6. async_synchronize_full() waits for all outstanding asynchronous functions, including the one started in step 1, which is itself still waiting for the request_module() call in step 3 to complete.
That, of course, is a classic deadlock.
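The cycle is easy to model outside the kernel. The following user-space sketch (our own loose analogue using POSIX threads, not kernel code) shows the shape of the problem: a worker that waits for all outstanding workers is, in effect, waiting for itself, so the program hangs by design:

    /* deadlock_demo.c: a user-space analogue of the block-layer deadlock.
       Build with: cc -o deadlock_demo deadlock_demo.c -lpthread
       Note: this program hangs on purpose. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
    static int outstanding;    /* "asynchronous functions" still running */

    /* Analogue of async_synchronize_full(): wait for ALL workers. */
    static void synchronize_full(void)
    {
        pthread_mutex_lock(&lock);
        while (outstanding > 0)
            pthread_cond_wait(&done, &lock);
        pthread_mutex_unlock(&lock);
    }

    /* Analogue of async_schedule(): run fn in its own thread. */
    static void schedule_async(void *(*fn)(void *))
    {
        pthread_t tid;

        pthread_mutex_lock(&lock);
        outstanding++;
        pthread_mutex_unlock(&lock);
        pthread_create(&tid, NULL, fn, NULL);
        pthread_detach(tid);
    }

    static void *scan_device(void *arg)
    {
        printf("scanning device...\n");
        /* Analogue of request_module(), whose final step waits for
           all asynchronous functions -- including this one. */
        printf("loading I/O scheduler module...\n");
        synchronize_full();    /* waits for itself: never returns */

        /* Never reached: the decrement that would wake the waiters. */
        pthread_mutex_lock(&lock);
        outstanding--;
        pthread_cond_broadcast(&done);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        schedule_async(scan_device);    /* step 1: sd_probe() analogue */
        sleep(1);
        printf("main: wedged; interrupt with ^C\n");
        synchronize_full();
        return 0;
    }

Running it prints the two "scanning"/"loading" messages and then wedges, which is essentially what the affected systems did.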
Fixing that deadlock turns out not to be as easy as one would like. Ming suggested that the call to async_synchronize_full() in the module loader should just be removed, and that user space should be taught that devices might not be ready immediately when the modprobe binary completes. Linus was not impressed with this approach, however, and it was quickly discarded.
The optimal solution would be for the module loader to wait only for asynchronous functions that were called by the loaded module itself. But the kernel does not currently have the infrastructure to allow that to happen; adding it as an urgent bug fix is not really an option. So something else needed to be worked out. To that end, Tejun Heo was brought into the discussion and asked to help come up with a solution. Tejun originally thought that the problem could be solved by detecting deadlock situations and proceeding without waiting in that case, but the problem of figuring out when it would be safe to proceed turned out not to be tractable.
The solution that emerged instead is regarded as a bit of a hack by just about everybody involved. Tejun added a new process flag (PF_USED_ASYNC) to mark when a process has called asynchronous functions. The module loader then tests this flag; if no asynchronous functions were called as the module was loaded, the call to async_synchronize_full() is skipped. Since the I/O scheduler modules make no such calls, that check avoids the deadlock in this particular case. The problem obviously remains for any module that is loaded from asynchronous context and calls asynchronous functions of its own, but no other such cases have come to light so far, so it seems like a workable solution.
Even so, Tejun remarked: "It makes me feel dirty but makes the problem go away and I can't think of anything better". The patch has found its way into the mainline and will be present in the 3.8 final release. It would not be entirely surprising, though, if somebody else were to take up the task of finding a more elegant solution in a future development cycle.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Security-related
Page editor: Jonathan Corbet
Distributions
The Grumpy Editor's Fedora 18 experience
Delays in Fedora releases are a standard part of the development cycle; users would likely approach a hypothetical non-delayed release with a great deal of concern. Even so, the Fedora 18 release stands out: originally planned for November 6, 2012, it did not make its actual appearance until January 15 — more than two months later. So it was with some trepidation that your editor set out to install the final Fedora 18 release candidate on his laptop. With so many delays and problems, would this distribution release function well enough to get real work done?
Upgrading
Traditionally, in-place upgrades of Fedora systems have been done with the "preupgrade" tool; reports of preupgrade problems have been prevalent over the years, but your editor never encountered any difficulties with it. With the F18 release, though, preupgrade has been superseded by the new "FedUp" tool. Indeed, the project has committed to FedUp to the degree that the Anaconda installer no longer even has support for upgrades; at this point, the only way to upgrade an existing Fedora system appears to be to use FedUp and a network repository. Given that, one would expect FedUp to be a reasonably well-polished tool.
In truth, upgrading via this path required typing in a rather long command line (though that should get shorter with the official F18 release when the repository moves to a standard place). The tool then set off downloading 1,492 packages for the upgrade without even pausing to confirm that this was the desired course of events. Needless to say, such a download takes a while; there are no cross-release delta RPMs to ease the pain here. At the end of this process, after FedUp had nicely picked up the pair of packages that failed to download the first time, it simply printed a message saying that it was time to reboot.
After the reboot one gets a black screen, a pulsating Fedora logo, and a progress bar that cannot be more than 200 pixels wide. That bar progresses slowly indeed. It is only later that one realizes that this is FedUp's way of telling the user that the system is being updated. One would at least expect a list of packages, a dancing spherical cow, or, at a minimum, a message saying "your system is being upgraded now," but no such luck. To all appearances, it simply looks like the system is taking a very long time to boot. At the end of the process (which appears to have run flawlessly), the system reboots again and one is faced with the new, imposing, gray login screen. Fedora 18 is now in charge.
What do you get?
At first blush, the distribution seems to work just fine. Almost everything works as it did before, the laptop still suspends and resumes properly, etc. Nothing of any great significance is broken by this upgrade; there may have been problems at one point, but, it seems, the bulk of them were resolved by the time the Fedora developers decided that they should actually make a release. (That said, it should be pointed out that using FedUp precluded testing the Anaconda installer, which is where a lot of the problems were.)
One should not conclude that the upgrade is devoid of little irritations, though; such is not the nature of software. Perhaps the most annoying of those irritations resembles the classic "GNOME decided to forget all of your settings" pathology, but it's not quite the same. For whatever reason, somebody decided that the modifier key used with the mouse (to move or resize windows, for example) should be changed from "Alt" to "Super" (otherwise known as the "Windows key"). This is a strange and gratuitous change to the user interface that seems bound to confuse a lot of users. The fix is to go into dconf-editor, click on down to org→gnome→desktop→wm→preferences and change the value of mouse-button-modifier back to "<Alt>".
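For those who would rather avoid clicking through dconf-editor, the same change can presumably be made from a terminal with a single gsettings command (assuming the GSettings schema mirrors the dconf path above):

    $ gsettings set org.gnome.desktop.wm.preferences mouse-button-modifier '<Alt>'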
The GNOME developers, in their wisdom, decided that there was no use for a "log out" option if there is only one user account on the system. Modern systems are supposed to be about "discoverability," but it is awfully hard to discover an option that does not exist at all. Another trip into dconf-editor (always-show-logout under org→gnome→shell) will fix that problem — or one can just create a second user account.
Other glitches include the fact that the compose key no longer works with Emacs (a bug report has been filed for this one). This key (used to "compose" special characters not normally available on the keyboard) works fine with other applications, but not in Emacs. Also worth mentioning, in the hope it saves some time for others: the powertop 2.2 release shipped with F18 has changed the user interface so that the arrow keys, rather than moving between tabs, just shift the content around within the window. The trick is to use the tab key to go between tabs instead.
So what does this release have to offer in the way of new features? There is, of course, the usual array of upgraded packages, starting with a 3.7.2 kernel. Your editor, who has been working with Debian Testing on the main machine in recent months, still misses the rather fresher mix of packages to be found in the Fedora distribution.
Beyond that, there is the ability to use 256 colors in terminal emulator windows; your editor has little love for multicolor terminal windows, but others evidently disagree and will be happy with the wider range of color choices. Many users may also be pleased by the inclusion of the MATE desktop, a fork of the GNOME 2 environment. Your editor gave it a quick try and found that it mostly worked with occasional glitches. For example, the terminal emulator came up with both the foreground and background being black, suggesting that the MATE developers, too, are unenthusiastic about the 256-color feature. At this point, though, MATE feels something like a "70's classics" radio station; even if the music was better then, the world has moved on.
Beyond that, Fedora 18 offers features like Samba 4 and a new wireless hotspot functionality. The latter looks like a useful way to extend hotel Internet service to multiple devices, but your editor was unable to get it to work. There is also the controversial placement of /tmp on a tmpfs filesystem; that can be turned off by the administrator if desired. Detection of MDNS devices (printers and such) should work better even with the firewall in place. An experimental version of a yum replacement, intended to provide better performance, is available in this release. The Eucalyptus cloud manager is now available. And so on.
The list of new features is rather longer than that, naturally; see the F18 feature page and the release notes for a more complete summary. But, for most Fedora users, it will be just another in a long series of releases, just later than most. This release's troubled development cycle does not appear to have led to a less stable distribution at the end.
Brief items
Distribution quotes of the week
A. There's always a huge discussion about release processes, covering almost every previously discussed and documented proposal.
Oh, and someone whines about the name. I haven't seen the headlines that we're late in the release yet though, so that's a refreshing change.
Fedora 18 released
The Fedora 18 release is out. "Fedora is a leading-edge, free and open source operating system that continues to deliver innovative features to many users, with a new release about every six months...or so. :-D But no bull: Spherical Cow, is of course, Fedora's best release yet. You'll go through the hoof when you hear about the Grade A Prime F18 features." See the release notes for details.
Distribution News
Debian GNU/Linux
Bits from Debian Med team
The Debian Med team has a few bits about bug squashing, mentoring, a sprint in Kiel, and more.
Fedora
Appointment to Fedora Board
Rex Dieter has accepted an appointment to the Fedora Project Board. "Many of you know Rex from various areas of the project, including his work within the KDE SIG, initiation of the Community Working Group (CWG), as well as his service as former elected Board Member, among many, many other areas. Rex has proven himself to be fair, wise, and adept in resolving conflicts, and I very much look forward to working with him again."
Cooperative Bug Isolation for Fedora 18
The Cooperative Bug Isolation Project (CBI) is now available for Fedora 18, with instrumented versions of Evolution, GIMP, GNOME Panel, Gnumeric, Liferea, Nautilus, Pidgin, and Rhythmbox.
Reminder: Fedora 16 end of life on 2013-02-12
Now that Fedora 18 is out, Fedora 16's days are numbered. Support ends on February 12.
Other distributions
Oracle Linux 5.9 released
Oracle has announced the release of Oracle Linux 5.9, thus winning the race to be the first RHEL clone to follow the Red Hat Enterprise Linux 5.9 release. See the release notes for details.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 490 (January 14)
- Maemo Weekly News (January 14)
- Ubuntu Weekly Newsletter, Issue 299 (January 13)
Page editor: Rebecca Sobol
Development
Namespaces in operation, part 3: PID namespaces
Following on from our two earlier namespaces articles (Part 1: namespaces overview and Part 2: the namespaces API), we now turn to look at PID namespaces. The global resource isolated by PID namespaces is the process ID number space. This means that processes in different PID namespaces can have the same process ID. PID namespaces are used to implement containers that can be migrated between host systems while keeping the same process IDs for the processes inside the container.
As with processes on a traditional Linux (or UNIX) system, the process IDs within a PID namespace are unique, and are assigned sequentially starting with PID 1. Likewise, as on a traditional Linux system, PID 1—the init process—is special: it is the first process created within the namespace, and it performs certain management tasks within the namespace.
First investigations
A new PID namespace is created by calling clone() with the CLONE_NEWPID flag. We'll show a simple example program that creates a new PID namespace using clone() and use that program to map out a few of the basic concepts of PID namespaces. The complete source of the program (pidns_init_sleep.c) can be found here. As with the previous article in this series, in the interests of brevity, we omit the error-checking code that is present in the full version of the example program when discussing it in the body of the article.
The main program creates a new PID namespace using clone(), and displays the PID of the resulting child:
    child_pid = clone(childFunc,
                      child_stack + STACK_SIZE,   /* Points to start of
                                                     downwardly growing stack */
                      CLONE_NEWPID | SIGCHLD, argv[1]);

    printf("PID returned by clone(): %ld\n", (long) child_pid);
The new child process starts execution in childFunc(), which receives the last argument of the clone() call (argv[1]) as its argument. The purpose of this argument will become clear later.
The childFunc() function displays the process ID and parent process ID of the child created by clone() and concludes by executing the standard sleep program:
printf("childFunc(): PID = %ld\n", (long) getpid()); printf("ChildFunc(): PPID = %ld\n", (long) getppid()); ... execlp("sleep", "sleep", "1000", (char *) NULL);
The main virtue of executing the sleep program is that it provides us with an easy way of distinguishing the child process from the parent in process listings.
When we run this program, the first lines of output are as follows:
    $ su                   # Need privilege to create a PID namespace
    Password:
    # ./pidns_init_sleep /proc2
    PID returned by clone(): 27656
    childFunc(): PID  = 1
    childFunc(): PPID = 0
    Mounting procfs at /proc2
The first two lines of output from pidns_init_sleep show the PID of the child process from the perspective of two different PID namespaces: the namespace of the caller of clone() and the namespace in which the child resides. In other words, the child process has two PIDs: 27656 in the parent namespace, and 1 in the new PID namespace created by the clone() call.
The next line of output shows the parent process ID of the child, within the context of the PID namespace in which the child resides (i.e., the value returned by getppid()). The parent PID is 0, demonstrating a small quirk in the operation of PID namespaces. As we detail below, PID namespaces form a hierarchy: a process can "see" only those processes contained in its own PID namespace and in the child namespaces nested below that PID namespace. Because the parent of the child created by clone() is in a different namespace, the child cannot "see" the parent; therefore, getppid() reports the parent PID as being zero.
For an explanation of the last line of output from pidns_init_sleep, we need to return to a piece of code that we skipped when discussing the implementation of the childFunc() function.
/proc/PID and PID namespaces
Each process on a Linux system has a /proc/PID directory that contains pseudo-files describing the process. This scheme translates directly into the PID namespaces model. Within a PID namespace, the /proc/PID directories show information only about processes within that PID namespace or one of its descendant namespaces.
However, in order to make the /proc/PID directories that correspond to a PID namespace visible, the proc filesystem ("procfs" for short) needs to be mounted from within that PID namespace. From a shell running inside the PID namespace (perhaps invoked via the system() library function), we can do this using a mount command of the following form:
    # mount -t proc proc /mount_point
Alternatively, a procfs can be mounted using the mount() system call, as is done inside our program's childFunc() function:
    mkdir(mount_point, 0555);       /* Create directory for mount point */
    mount("proc", mount_point, "proc", 0, NULL);
    printf("Mounting procfs at %s\n", mount_point);
The mount_point variable is initialized from the string supplied as the command-line argument when invoking pidns_init_sleep.
In our example shell session running pidns_init_sleep above, we mounted the new procfs at /proc2. In real world usage, the procfs would (if it is required) usually be mounted at the usual location, /proc, using either of the techniques that we describe in a moment. However, mounting the procfs at /proc2 during our demonstration provides an easy way to avoid creating problems for the rest of the processes on the system: since those processes are in the same mount namespace as our test program, changing the filesystem mounted at /proc would confuse the rest of the system by making the /proc/PID directories for the root PID namespace invisible.
Thus, in our shell session the procfs mounted at /proc will show the PID subdirectories for the processes visible from the parent PID namespace, while the procfs mounted at /proc2 will show the PID subdirectories for processes that reside in the child PID namespace. In passing, it's worth mentioning that although the processes in the child PID namespace will be able to see the PID directories exposed by the /proc mount point, those PIDs will not be meaningful for the processes in the child PID namespace, since system calls made by those processes interpret PIDs in the context of the PID namespace in which they reside.
Having a procfs mounted at the traditional /proc mount point is necessary if we want various tools such as ps to work correctly inside the child PID namespace, because those tools rely on information found at /proc. There are two ways to achieve this without affecting the /proc mount point used by parent PID namespace. First, if the child process is created using the CLONE_NEWNS flag, then the child will be in a different mount namespace from the rest of the system. In this case, mounting the new procfs at /proc would not cause any problems. Alternatively, instead of employing the CLONE_NEWNS flag, the child could change its root directory with chroot() and mount a procfs at /proc.
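As a brief illustration of the first approach, consider the following sketch (our own, not part of the article's example code; error checking is again omitted): it combines CLONE_NEWPID with CLONE_NEWNS so that the child can mount a procfs over /proc itself, leaving the /proc seen by the rest of the system untouched. It must, of course, be run as root:

    /* pidns_newns.c: create a child in new PID and mount namespaces,
       mount procfs over /proc there, and run ps to show the result.
       Build with: cc -o pidns_newns pidns_newns.c */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mount.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACK_SIZE (1024 * 1024)
    static char child_stack[STACK_SIZE];

    static int childFunc(void *arg)
    {
        /* We are in a private mount namespace: mounting over /proc
           here is invisible to the rest of the system. */
        mount("proc", "/proc", "proc", 0, NULL);

        /* ps now sees only the processes in the new PID namespace. */
        execlp("ps", "ps", "ax", (char *) NULL);
        perror("execlp");
        exit(EXIT_FAILURE);
    }

    int main(int argc, char *argv[])
    {
        pid_t child_pid;

        child_pid = clone(childFunc, child_stack + STACK_SIZE,
                          CLONE_NEWPID | CLONE_NEWNS | SIGCHLD, NULL);
        waitpid(child_pid, NULL, 0);
        exit(EXIT_SUCCESS);
    }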
Let's return to the shell session running pidns_init_sleep. We stop the program and use ps to examine some details of the parent and child processes within the context of the parent namespace:
    ^Z                          Stop the program, placing in background
    [1]+  Stopped                 ./pidns_init_sleep /proc2
    # ps -C sleep -C pidns_init_sleep -o "pid ppid stat cmd"
      PID  PPID STAT CMD
    27655 27090 T    ./pidns_init_sleep /proc2
    27656 27655 S    sleep 600
The "PPID" value (27655) in the last line of output above shows that the parent of the process executing sleep is the process executing pidns_init_sleep.
By using the readlink command to display the (differing) contents of the /proc/PID/ns/pid symbolic links (explained in last week's article), we can see that the two processes are in separate PID namespaces:
    # readlink /proc/27655/ns/pid
    pid:[4026531836]
    # readlink /proc/27656/ns/pid
    pid:[4026532412]
At this point, we can also use our newly mounted procfs to obtain information about processes in the new PID namespace, from the perspective of that namespace. To begin with, we can obtain a list of PIDs in the namespace using the following command:
    # ls -d /proc2/[1-9]*
    /proc2/1
As can be seen, the PID namespace contains just one process, whose PID (inside the namespace) is 1. We can also use the /proc/PID/status file as a different method of obtaining some of the same information about that process that we already saw earlier in the shell session:
    # cat /proc2/1/status | egrep '^(Name|PP*id)'
    Name:   sleep
    Pid:    1
    PPid:   0
The PPid field in the file is 0, matching the fact that getppid() reports that the parent process ID for the child is 0.
Nested PID namespaces
As noted earlier, PID namespaces are hierarchically nested in parent-child relationships. Within a PID namespace, it is possible to see all other processes in the same namespace, as well as all processes that are members of descendant namespaces. Here, "see" means being able to make system calls that operate on specific PIDs (e.g., using kill() to send a signal to a process). Processes in a child PID namespace cannot see processes that exist (only) in the parent PID namespace (or further removed ancestor namespaces).
A process will have one PID in each of the layers of the PID namespace hierarchy starting from the PID namespace in which it resides through to the root PID namespace. Calls to getpid() always report the PID associated with the namespace in which the process resides.
We can use the program shown here (multi_pidns.c) to show that a process has different PIDs in each of the namespaces in which it is visible. In the interests of brevity, we will simply explain what the program does, rather than walking through its code.
The program recursively creates a series of child processes in nested PID namespaces. The command-line argument specified when invoking the program determines how many children and PID namespaces to create:
    # ./multi_pidns 5
In addition to creating a new child process, each recursive step mounts a procfs filesystem at a uniquely named mount point. At the end of the recursion, the last child executes the sleep program. The above command line yields the following output:
    Mounting procfs at /proc4
    Mounting procfs at /proc3
    Mounting procfs at /proc2
    Mounting procfs at /proc1
    Mounting procfs at /proc0
    Final child sleeping
Looking at the PIDs in each procfs, we see that each successive procfs "level" contains fewer PIDs, reflecting the fact that each PID namespace shows only the processes that are members of that PID namespace or its descendant namespaces:
    ^Z                           Stop the program, placing in background
    [1]+  Stopped                 ./multi_pidns 5
    # ls -d /proc4/[1-9]*        Topmost PID namespace created by program
    /proc4/1  /proc4/2  /proc4/3  /proc4/4  /proc4/5
    # ls -d /proc3/[1-9]*
    /proc3/1  /proc3/2  /proc3/3  /proc3/4
    # ls -d /proc2/[1-9]*
    /proc2/1  /proc2/2  /proc2/3
    # ls -d /proc1/[1-9]*
    /proc1/1  /proc1/2
    # ls -d /proc0/[1-9]*        Bottommost PID namespace
    /proc0/1
A suitable grep command allows us to see the PID of the process at the tail end of the recursion (i.e., the process executing sleep in the most deeply nested namespace) in all of the namespaces where it is visible:
    # grep -H 'Name:.*sleep' /proc?/[1-9]*/status
    /proc0/1/status:Name:   sleep
    /proc1/2/status:Name:   sleep
    /proc2/3/status:Name:   sleep
    /proc3/4/status:Name:   sleep
    /proc4/5/status:Name:   sleep
In other words, in the most deeply nested PID namespace (/proc0), the process executing sleep has the PID 1, and in the topmost PID namespace created (/proc4), that process has the PID 5.
If you run the test programs shown in this article, it's worth mentioning that they will leave behind mount points and mount directories. After terminating the programs, shell commands such as the following should suffice to clean things up:
    # umount /proc?
    # rmdir /proc?
Concluding remarks
In this article, we've looked in quite some detail at the operation of PID namespaces. In the next article, we'll fill out the description with a discussion of the PID namespace init process, as well as a few other details of the PID namespaces API.
Brief items
Quotes of the week
Initial release of remotecontrol
Stephen H. Dawson has released the first version of GNU remotecontrol, a free software application for managing IP-enabled thermostats, air-conditioners, and other building automation devices.
Kolab 3 released
Kolab is a web-based "groupware" system with support for email, calendar management, task management, mobile device synchronization, and more. More than seven years after the Kolab 2 release, version 3.0 is available. It includes a new, Roundcube-based web client, better synchronization, and more.
Release 1.45 released
Russ Allbery has released version 1.45 of release, his utility for making software releases. Release can create a tarball from many different version control and build systems, sign and upload packages, and increment versioning information. This release of release adds support for multiple PGP signatures, which may prove useful ... even if it remains somewhat confusing to discuss.
Clasen: Input Sources in GNOME 3.7.4, continued
At his blog, Matthias Clasen has posted part two of his status report about IBus integration and input sources in GNOME 3.7.x. Part one was posted in December; between the two it seems there are many changes coming down the pipe for users who manage multiple input sources.
Newsletters and articles
Development newsletters from the past week
- Caml Weekly News (January 15)
- What's cooking in git.git (January 9)
- What's cooking in git.git (January 11)
- What's cooking in git.git (January 14)
- Haskell Weekly News (January 9)
- OpenStack Community Weekly Newsletter (January 11)
- Perl Weekly (January 14)
- PostgreSQL Weekly News (January 13)
- Ruby Weekly (January 10)
Coker: Android Multitasking
At his blog, Russell Coker examines Android multitasking, particularly as it is revealed in the "Multi Window Mode" supported on some Samsung devices, and is less than overwhelmed. "So while Android being based on Linux does multitask really well in the technical computer-science definition it doesn't do so well in the user-centric definition. In practice Android multitasking is mostly about task switching and doing things like checking email in the background. Having multiple programs running at once is particularly difficult due to the Android model of applications sometimes terminating when they aren't visible."
Page editor: Nathan Willis
Announcements
Brief items
Embedded Linux Conference Europe 2012 videos posted
The folks at Free Electrons have posted videos from the talks given at the Embedded Linux Conference 2012 in Barcelona. While they were at it, they also posted videos from the embedded track at FOSDEM 2012; as they say, "better late than never".
Articles of interest
Cory Doctorow on Aaron Swartz
Cory Doctorow reflects on the life of Aaron Swartz, Reddit co-founder and co-author (at age 14) of the RSS specification, who committed suicide on January 11. "The post-Reddit era in Aaron's life was really his coming of age. His stunts were breathtaking. At one point, he singlehandedly liberated 20 percent of US law. PACER, the system that gives Americans access to their own (public domain) case-law, charged a fee for each such access. After activists built RECAP (which allowed its users to put any caselaw they paid for into a free/public repository), Aaron spent a small fortune fetching a titanic amount of data and putting it into the public domain. The feds hated this. They smeared him, the FBI investigated him, and for a while, it looked like he'd be on the pointy end of some bad legal stuff, but he escaped it all, and emerged triumphant."
Government formally drops charges against Aaron Swartz (ars technica)
Ars technica reports that the United States Attorney has dropped the pending charges against Aaron Swartz. ""In support of this dismissal, the government states that Mr. Swartz died on January 11, 2013," wrote Carmen Ortiz, the United States Attorney for the District Court of Massachusetts. Swartz faced legal charges after he infamously downloaded a huge cache of documents from JSTOR. Over the weekend, Swartz' family said the aggressive legal tactics of the US Attorney's office contributed to his suicide."
First FOSDEM 2013 speaker interviews
As with previous years, Koen Vervloesem is interviewing speakers at FOSDEM (Free and Open Source Software Developers European Meeting), which will be held February 2-3 in Brussels, Belgium. In this edition, eight speakers from the conference are interviewed, more will be coming over the next few weeks. From the Luc Verhaegen interview: "This talk will be a relatively high level description of the current situation with open source 3D/GPU drivers for the ARM ecosystem. It will not only show how far the Lima driver has come in a years time, it will also cover the other ARM GPU projects and the persons driving those. It will end with a demo of the current lima driver work, and then I’ll try to drag as many people as possible over to the X.org devroom where the other ARM GPU developers can go into more detail and demo their stuff."
Software Wars: A film about FOSS, collaboration, and software freedom (Opensource.com)
Opensource.com has an interview with Keith Curtis about a movie he is making, called Software Wars. "Because the movie is an explanation but also a critique of the existing world, this happily forces us to cover things that many technical people don't know. If they all knew what was in the movie, more crazy things would have happened. The trailer is a first attempt at achieving this balance. The final feature will be more polished in every regard. There are a mix of people working on this with different experiences and interests and together we will hammer it out."
January Issue of the TIM Review: Open Source Sustainability
The January issue of the Technology Innovation Management Review (TIM Review) has been published. The editorial theme for this issue is Open Source Sustainability. The articles in this edition are "Editorial: Open Source Sustainability (January 2013)" by Chris McPhee and Maha Shaikh; "Sustainability in Open Source Software Commons: Lessons Learned from an Empirical Study of SourceForge Projects" by Charles M. Schweik; "Sustainability of Open Collaborative Communities: Analyzing Recruitment Efficiency" by Kevin Crowston, Nicolas Jullien and Felipe Ortega; "Going Open: Does it Mean Giving Away Control?" by Nadia Noori and Michael Weiss; "The Evolving Role of Open Source Software in Medicine and Health Services" by David Ingram and Sevket Seref Arikan; "Sustainability and Governance in Developing Open Source Projects as Processes of In-Becoming" by Daniel Curto-Millet; and "Q&A. Is Open Source Sustainable?" by Matt Asay. (Thanks to Martin Michlmayr)
New Books
Learn You Some Erlang for Great Good!--New from No Starch Press
No Starch Press has released "Learn You Some Erlang for Great Good!" by Fred Hébert.
Calls for Presentations
Euro LLVM Conference
The third European LLVM conference will take place April 29-30, 2013 in Paris, France. "This will be a two day conference which aims to present the latest developments in the LLVM world and help strengthen the network of LLVM developers. The format will be similar to that of the previous meetings held in London but with more time for presentations and networking. The meeting is open to anyone whether from business or academia, professional or enthusiast and is not restricted to those from Europe - attendees from all regions are welcome." The CfP deadline is March 1.
samba eXPerience 2013 - call for papers
samba eXPerience 2013 will take place May 14-17 in Göttingen, Germany. The call for papers deadline is February 28.
Prague PostgreSQL Developers Day 2013
Prague PostgreSQL Developers Day will take place May 30 in Prague, Czech Republic. The call for papers closes April 14. "Majority of the talks will be in czech language, but we're looking for two english-speaking guest. We can offer covering travel and hotel expenses up to ~ 500 EUR (should be enough for 2 nights in a hotel and air ticket from around Europe)."
Texas Linux Fest 2013
Texas Linux Fest will take place May 31-June 1 in Austin, Texas. The Call For Papers is open until April 1.
Upcoming Events
Announcing PyCon Australia 2013
PyCon Australia will take place July 5-7 in Hobart, Tasmania. "Once again, we'll have a weekend packed full of amazing content on all aspects of the Python ecosystem, presented by experts and core developers of the tools and frameworks you use every day."
Events: January 17, 2013 to March 18, 2013
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location
---|---|---
January 18–20 | FUDCon: Lawrence 2013 | Lawrence, Kansas, USA
January 18–19 | Columbus Python Workshop | Columbus, OH, USA
January 20 | Berlin Open Source Meetup | Berlin, Germany
January 28–February 2 | Linux.conf.au 2013 | Canberra, Australia
February 2–3 | Free and Open Source software Developers' European Meeting | Brussels, Belgium
February 15–17 | Linux Vacation / Eastern Europe 2013 Winter Edition | Minsk, Belarus
February 18–19 | Android Builders Summit | San Francisco, CA, USA
February 20–22 | Embedded Linux Conference | San Francisco, CA, USA
February 22–24 | Southern California Linux Expo | Los Angeles, CA, USA
February 22–24 | FOSSMeet 2013 | Calicut, India
February 22–24 | Mini DebConf at FOSSMeet 2013 | Calicut, India
February 23–24 | DevConf.cz 2013 | Brno, Czech Republic
February 25–March 1 | ConFoo | Montreal, Canada
February 26–March 1 | GUUG Spring Conference 2013 | Frankfurt, Germany
February 26–28 | ApacheCon NA 2013 | Portland, Oregon, USA
February 26–28 | O’Reilly Strata Conference | Santa Clara, CA, USA
March 4–8 | LCA13: Linaro Connect Asia | Hong Kong, China
March 6–8 | Magnolia Amplify 2013 | Miami, FL, USA
March 9–10 | Open Source Days 2013 | Copenhagen, DK
March 13–21 | PyCon 2013 | Santa Clara, CA, US
March 15–17 | German Perl Workshop | Berlin, Germany
March 15–16 | Open Source Conference | Szczecin, Poland
March 16–17 | Chemnitzer Linux-Tage 2013 | Chemnitz, Germany
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol