
LWN.net Weekly Edition for August 16, 2012

GUADEC: New funding models for open source software

By Nathan Willis
August 15, 2012

Money and free software have never been an easy fit. As Adam Dingle of Yorba explained at GUADEC 2012, the overwhelming majority of companies that produce free software underwrite its development with other income: services, contract work, support plans, and the like. But that dichotomy puzzled Dingle when viewed in light of the recent explosion of successful crowd-sourced funding stories on sites like Kickstarter. His talk asked why the Kickstarter approach is not working for open source — and what could be done about it.

Dingle founded Yorba in 2009, and his first hire was developer Jim Nelson. Nelson joined Dingle on stage to describe the project's background. Yorba is a small non-profit company best known for its GNOME applications, such as the photo organizer Shotwell and the email client Geary. But the group's non-profit status does not mean it is uninterested in paying developers; rather, it is attempting to fund development of its open source applications without subsidizing them with other work. So far, that dream has not been realized. Yorba does break even, but only by taking on contract work. It has a number of GTK+ projects it would like to pursue, but has had to put them on hold simply for lack of time. The company is not looking for "Facebook money", Nelson said, just competitive salaries for its employees. "I think everyone in this room has had that dream at one point", he said, "to be able to be secure enough to write the apps they want to write".

Web donations don't work

But, asked Dingle, why is that so difficult? The issue is not unique to Yorba, he said, but a classic problem. Yorba has a donations page on its Web site, and it knows it has millions of users, but they do not translate into funding. "If every one of our users gave us one dollar once in their lifetime, we'd be fine". Dingle confessed that he had only rarely donated money on similar donation pages at other open source project sites, and a show of hands revealed that around half the GUADEC audience had done so at least once. On the other hand, Dingle said, he has given more money (and on more occasions) to Wikipedia — primarily "because Jimmy Wales asked me to, again and again" in Wikipedia fund-raising drives.

Clearly the Web donation page approach works for Wikipedia, which raised more than US $20 million last year, he said, but on the free desktop it is a lot more difficult. Part of the reason is users' exposure to the campaign: Wikipedia promotes its fundraising drives on the site itself. Desktop projects, however, do not want to put advertisements in their applications, and only a tiny fraction of an application's users ever see the project's Web site, because they install and update the application through their distribution's package management system instead.

There are other difficulties with the donation model, Dingle said. Donation pages are difficult to use because making online payments is not simple, usually requiring a third-party service like PayPal that adds complexity. The offline alternatives (like mailing in a check) can be worse. In either case, it is far harder than clicking the "buy" button in Apple's self-contained iOS application marketplace. Static "donate" buttons also make it difficult to see how one's individual contribution affects the software itself.

A few people have tried to streamline the online donation process, he said, including Flattr. But although Flattr is helpful, Dingle said, "I don't know of any open source project sustaining itself via Flattr — or via donations, period." Even as open source has continued its struggle with funding, he said, a couple of radical new funding methods have exploded on the Internet in other fields. They are not really being used for open source, he said, but perhaps they could be.

Pay as you like

The first new development in funding is the "pay what you want, including zero" approach, which Dingle described as post-funding. This is the approach taken by the Humble Bundle video game sales. The first major use of this model was in 2007, he said, when the band Radiohead released its album In Rainbows online and allowed customers to pay any price they chose for it, including zero.

The Humble Bundle approach is a little different, he explained. There is no "free" option, and Humble Bundle employs other tactics to try to increase the purchase price (such as promising to release games as open source if a target is hit, and providing add-ons if a buyer beats the current average price). But Humble Bundle is instructive for open source projects, because it demonstrates that Linux users are willing to pay for games (which has long been a point of debate in some circles).

The question for open source projects is how to harness this approach and apply it successfully to software development. Dingle identified a few key factors that distinguish the post-funding model from traditional donations. First, there is always a limited time window in which to make the purchase. Second, the post-funding drive works for a completed project, and one that cannot be downloaded outside the drive. In other words, you must pay something to get the work at all.

Both factors could prove difficult for ongoing software projects to emulate. The Ardour audio workstation tries something akin to post-funding on its site, he said; it suggests a $45 donation to download the application. But Ardour is also built for Mac OS X, where users often download applications from the Web; Linux applications still face the challenge of requesting donations through a package manager. Dingle suggested that GUI package managers could support donations directly, but there are social and logistical problems. First, there is no automatic way to send donated funds to the application in question, so it would entail a lot of manual oversight. Second, there will always be people in the community who argue that "we should never ever suggest paying money for free software". Finally, he observed that there was a proposal to add a donation mechanism to Ubuntu's Software Center, but it seems to have no momentum at present.

Crowdfunding

The other new funding method to make a splash in recent years is crowdfunding, in which donations are pledged before work begins; Dingle called it a pre-funding model, in contrast with the post-funding model. Kickstarter is the darling of the crowdfunding world, he said, although there are other players. On Kickstarter, anyone can propose a project, but Kickstarter selectively vets the proposals before accepting them. Once accepted, each campaign has a fixed time limit and a target amount, and the money is awarded in an all-or-nothing fashion at the end of the campaign. Kickstarter also requires projects to offer "rewards" for each funding level, he observed, although there are no real criteria in place governing them. He mentioned the OpenTripPlanner project, which offered a free copy of the application to the lowest tier of donors — even though the application is open source.

Kickstarter has other peculiarities worth watching out for, he said. Projects that are selected for heavy promotion by the Kickstarter team tend to get drastically better results, and the dollar amounts pledged are still rising. He pointed out that nine of the ten highest-funded projects in Kickstarter's history ran in the first half of 2012. But Kickstarter also cannot force a project to complete the work it has pledged to do. The money is awarded in a lump sum, and completion of the project is based on trust.

Despite a handful of standouts (such as Diaspora), Dingle said that Kickstarter has not been a great match for open source projects, for several reasons. First, the site is slanted towards art and design projects; all non-game software projects get dumped in the general "technology" category, where they compete for attention "with all the helicopters, robots, and other things." More importantly, Kickstarter is geared toward getting a new project off the ground, not toward funding the next version of a project that has existed for years. Finally, Kickstarter's selective acceptance makes it difficult to make the cut: proposals need slick sales pitches and face ever-increasing competition.

Others have attempted to build a "Kickstarter for software" before, Dingle said, including Fundry and CofundOS. Neither has achieved much success; Fundry is defunct, and CofundOS currently has five projects tagged with "GTK" — all from 2007. He then listed several reasons why Kickstarter has proven successful while the software-centric sites have not. First, the artistic and non-software projects are "cooler", so they attract more users and build a larger community around the site. Second, Kickstarter's time-limited and all-or-nothing funding creates a sense of excitement around each campaign, while the software sites function more like standing bounties. Third, the software funding sites always included complicated weighted-voting schemes through which the donors determined whether or not a feature had been completed enough to collect the pledged money. That made the process slower and more difficult to understand.

Moving forward

Dingle cautioned that "I don't have an easy answer", but concluded his talk by suggesting that it may be time for a new crowdfunding site for software, one that builds on the lessons from Kickstarter's success and the other sites' failures. Yorba has been discussing the idea, he said. Ideally it would be simple and good-looking like Kickstarter, but scoped only for software (and perhaps just for open source software). Launching it would probably require "bootstrapping" the site with several high-profile projects.

On the other hand, he said, such a site would need to take a different approach in some ways to fit open source projects' needs. One interesting idea would be to allow projects to maintain an indefinite presence on the site, but still run separate, time-limited campaigns for specific development cycles. Or perhaps projects could run campaigns to back specific features, where each development cycle's campaign would determine what gets worked on for that cycle. Whether such a campaign would let donors back specific features, or let developers set "goal" features at different levels, is an open question. It is also unclear how to support libraries on such a site: it is hard enough to fund applications, he said, much less libraries that users might not even realize are there.

Several in the audience had questions in the waning minutes of the session. One asked how a software crowdfunding site could generate the same excitement over novel and crazy projects, which are a big part of Kickstarter's draw. Dingle responded that the site would have to emphasize features. Another audience member observed that Kickstarter projects maintain a lot of buzz through ongoing interaction with the project teams, and asked how a software crowdfunding site could do the same. Dingle agreed, noting that he had donated to a game project and received a near-constant stream of updates and feedback, something that is hard for a small software shop to do, since writing updates means temporarily stopping work on the code. On that question, Nelson also weighed in, saying

We're kind of talking about marketing and promotion, and we have some negative associations with those terms. But Kickstarter has some pretty fancy pitches up there. One thing about GNOME is that we have a lot of designers and artistic people. It's worth the investment.

Of course, none of Dingle and Nelson's observations about crowdfunding and pay-as-you-like finances are limited to the GNOME project itself. All open source software faces the same challenges when it comes to raising the money to keep developers at the keyboard. In recent years, Linux distributors have underwritten the development of desktop software through the sale of enterprise services and support contracts of various forms. Users have grown accustomed to that situation, and there is certainly nothing wrong with it, but Dingle and his colleagues at Yorba have shown that no one needs to accept that as the only viable funding model.

[The author would like to thank the GNOME Foundation for travel assistance to A Coruña for GUADEC.]


GUADEC: porting GNOME to Android

By Nathan Willis
August 15, 2012

Rightly or wrongly, GNOME is regarded by many casual users as "a Linux desktop environment." It also runs on various BSD and Solaris flavors, of course, and there has been considerable discussion recently of developing a "GNOME OS" deliverable suitable for tablets and other devices. But at GUADEC 2012, there was yet another spin on the redefinition of GNOME: porting several underlying GNOME frameworks to Android.

A team of developers from Collabora hosted the relevant session, which was titled D-Bus, PulseAudio, GStreamer and Telepathy In the Palm of your Hand. Reynaldo Verdejo Pinochet acted as ringmaster for the other speakers, and provided an introduction to the overall effort. As many others have observed, GNOME and other open source projects are facing a "form-factor challenge", he said. Desktop approaches to user interfaces do not work as well on small-screen devices, and mobile device users exhibit different usage patterns: rapidly interleaving multiple tasks rather than camping out in one application.

The open source developer community is currently failing to provide a bridge from the desktop to mobile devices, he said. Meanwhile, the system-on-chip vendors are turning their full attention to Android — a platform in whose development the community has no say. Collabora's solution, he said, is not to "fix" Android or to "fix" the users, but instead to enable the GNOME project's technology to run on Android. As a result, developers and users will have more choice, and the GNOME components will continue to be relevant to both.

The bulk of the session was then taken up by a series of short status updates from Collabora developers working on "smallifying" frameworks used by GNOME applications. Covered were Android ports of PulseAudio, Wayland, Telepathy, and GStreamer. Despite the title, D-Bus was not discussed separately in the talk; Verdejo said in an email that the D-Bus port was undertaken as part of the Telepathy effort. Verdejo closed out the session with general information for application developers looking to port GNOME applications to Android.

Porting stories

Arun Raghavan spoke first, explaining the PulseAudio port for Android. Android's native audio subsystem is called AudioFlinger (AF). It is a software mixer like PulseAudio, he explained, but it pales in comparison to PulseAudio for features. "It's not as nice as PulseAudio, and you can look at code if you don't believe me." In particular, PulseAudio provides flexible audio routing (including network routing), dynamically-adjustable latency (which prevents glitches, saves power, and simplifies the work required from applications), and support for passing compressed audio (in contrast to AF, which provides only PCM audio transport).
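
One reason for that feature gap is visible at the client-API level, where PulseAudio keeps the application's share of the work small. As a point of reference (a minimal sketch, not code from the talk; the application and stream names here are arbitrary), playing PCM audio through PulseAudio's "simple" API looks like this:

    #include <stdint.h>
    #include <stdio.h>
    #include <pulse/simple.h>
    #include <pulse/error.h>

    int main(void)
    {
        /* 16-bit little-endian, 44.1kHz, stereo PCM */
        static const pa_sample_spec spec = {
            .format   = PA_SAMPLE_S16LE,
            .rate     = 44100,
            .channels = 2,
        };
        static uint8_t buf[44100 * 4];  /* one second of silence */
        int error;

        /* Connect a playback stream to the default server and sink;
         * routing and latency management happen on the server side. */
        pa_simple *s = pa_simple_new(NULL, "demo-app", PA_STREAM_PLAYBACK,
                                     NULL, "playback", &spec, NULL, NULL,
                                     &error);
        if (s == NULL) {
            fprintf(stderr, "pa_simple_new() failed: %s\n",
                    pa_strerror(error));
            return 1;
        }

        /* Hand off audio data; the daemon adjusts latency dynamically */
        if (pa_simple_write(s, buf, sizeof(buf), &error) < 0)
            fprintf(stderr, "pa_simple_write() failed: %s\n",
                    pa_strerror(error));

        pa_simple_drain(s, &error);
        pa_simple_free(s);
        return 0;
    }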

Although the project's original goal was to install PulseAudio in parallel with AF, Raghavan said, that quickly proved impossible because of how tightly AF is integrated into Android. So instead, Raghavan focused on building and installing PulseAudio as a complete replacement for AF, using the Galaxy Nexus phone as the target platform. First, he successfully compiled PulseAudio and got it to run and play sound on the Galaxy Nexus. He then started work on using PulseAudio to emulate AF's playback API "AudioTrack," a task that he said is almost complete. Still in progress is emulation of AF's policy framework (which controls volume settings, when playback is allowed, and so forth). After that, the next item on the to-do list is "AudioRecord," the AF audio capture API.

Next, Daniel Stone presented the Wayland port. Once again, the goal was to offer Wayland as an alternative to a built-in Android service — in this case, SurfaceFlinger. SurfaceFlinger provides buffer transport functions analogous to Wayland's, and composition functions like those of Wayland's reference compositor, Weston. Collabora's port is mostly complete, and is currently able to run Wayland demos. Still in progress is integrating Wayland into the Android stack so that it can run side-by-side with SurfaceFlinger, a challenge that includes both running Wayland and SurfaceFlinger applications at once, and handling clipboard events, task switching, and other functions. Still to come is styling, which would make Wayland applications seamlessly match the SurfaceFlinger look.

The Wayland work ran into the usual porting problems, he said, but there were special difficulties as well. First, Wayland relies on several kernel features too new to be found in the Android kernel (such as timerfd and signalfd). Second, the Android graphics drivers are "terrible" at buffer management in comparison to the desktop. Whereas Mesa provides full EGLImage sharing through the Direct Rendering Manager (DRM) and seamless zero-copy transfers (which Wayland can take advantage of through the wl_drm extension), EGLImage is an "absolute crapshoot" on Android. The quality of the implementation varies drastically between hardware vendors, and the graphics drivers are almost always closed.

Alvaro Soliverez presented the port of the Telepathy communication framework. Telepathy is modular, supporting a long list of pluggable back-ends (from XMPP and SIP to proprietary networks like MSN) and bindings for a number of programming languages and toolkits. Soliverez's port picked the subset most useful for Android: the Gabble XMPP module, link-local XMPP with Salut (using Avahi service discovery), and Java bindings.

The port is fully functional, he said, and the patches have been sent upstream to Telepathy. Android users can download and install the Telepathy service as a stand-alone application. In addition to the protocols mentioned above, the account management and status management functionality works, and the GLib bindings are in place for developers. Ports of the rest of the framework are underway, he said, although assistance would be welcome.

Verdejo discussed the GStreamer port. GStreamer is already successfully in use on other platforms, where it can serve as a bridge to the operating system's native multimedia framework (such as QuickTime on Mac OS X or DirectShow on Windows). The GStreamer Android port is intended to be a complete port of the framework, putting Android on par with the other platforms. This required the development of two new elements to interface to Android's media layer: GSTSurfaceFlingerSink and GSTAudioFlingerSink. These are sink elements, which allow applications' GStreamer pipelines to play their output on Android.
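
From an application's perspective, such sinks slot into a pipeline like any other GStreamer element. As an illustration (a sketch, not code shown in the session; the element factory name "audioflingersink" is assumed from the GSTAudioFlingerSink type mentioned above), a pipeline playing a test tone through Android's audio layer might look like:

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        GstElement *pipeline, *src, *sink;
        GMainLoop *loop;

        gst_init(&argc, &argv);
        loop = g_main_loop_new(NULL, FALSE);

        pipeline = gst_pipeline_new("android-audio-demo");
        src = gst_element_factory_make("audiotestsrc", "source");
        /* Factory name assumed; the talk named only GSTAudioFlingerSink */
        sink = gst_element_factory_make("audioflingersink", "output");
        if (src == NULL || sink == NULL) {
            g_printerr("could not create pipeline elements\n");
            return 1;
        }

        gst_bin_add_many(GST_BIN(pipeline), src, sink, NULL);
        gst_element_link(src, sink);

        /* Run until interrupted; audio flows out through the sink */
        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        g_main_loop_run(loop);

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }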

But the Android platform "is crippled on purpose", he said, offering no real native development story. Instead, all of the emphasis is placed on writing Dalvik applications. Consequently, Collabora decided to build GStreamer as a system-level service as well. It developed two other elements, GSTPlayer and GSTMetaDataRetriever, which can function as replacements for Android's built-in media playback functionality. Up next are new elements to handle direct multimedia output over OpenGL ES and OpenSL ES, and capture elements for camera and microphone input. The OpenGL ES and OpenSL ES elements will eventually replace the existing Android GStreamer sinks so that GStreamer pipelines will not need to depend on the Android API layer. All of the GStreamer work is being pushed upstream and will be included in the GStreamer SDK.

Developing applications

Verdejo closed out the session with general advice for other developers interested in porting their software to Android. There are two general approaches to consider, he said: the native development kit (NDK) and the complete system-integration route. The NDK is Google's official offering, and it places some requirements on the developer: all of the application's functionality must be in libraries, which are then accessed by Java code using JNI. The libraries must also be bundled with applications.
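
In practice (a hypothetical sketch; the class, library, and method names below are invented for illustration), the bundled C library exposes entry points that follow the JNI naming convention, and the Java side declares them as native methods after loading the library with System.loadLibrary():

    #include <jni.h>

    /* Matches a hypothetical Java class org.example.Player that declares
     *     private native int nativeInit();
     * and loads this library with System.loadLibrary("player"). */
    JNIEXPORT jint JNICALL
    Java_org_example_Player_nativeInit(JNIEnv *env, jobject thiz)
    {
        /* Call into the application's real functionality, which the
         * NDK model requires to live in this bundled shared library. */
        return 0;
    }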

The alternative route is to develop the software integrated into the Android source, which means it must subsequently be distributed as a custom Android build. The system integration route does allow one to make the software functionality available to the entire platform, he said, rather than just to specific applications (which is a limitation of bundling the library with an NDK application).

Whichever route one chooses, he said, there are "gotchas" to look out for. Android exhibits not-invented-here (NIH) syndrome, he said, right down to the basic C library: it uses Bionic, a "bastardized" version of libc with missing APIs and hobbled features. Android also uses unique Git repository and patch management tools written by Google, he added. But the main sticking point is Android's specialized build system, which relies on Android-specific makefiles. Maintaining those alongside the makefiles needed for another platform can be a considerable headache.

To simplify the process, Collabora created a tool called Androgenizer. Developers need to set up only a single top-level Android.mk file for the project and Android target rules in Makefile.am. The target rules call Androgenizer, which generates the makefiles Android expects (using Google's macros and Android-specific variables). Androgenizer can be used to generate makefiles appropriate for either NDK or system-integration Android builds, by checking for an environment variable called ANDROGENIZER_NDK.

The Androgenizer tool does not make building for Android trivial, he said, but once you figure it out, it only takes about 30 minutes for each new project. Although the process is not hard, Verdejo encouraged anyone working on an Android port to get in touch, saying the company is interested in helping.

All things considered, the ports of these and other frameworks from desktop GNOME (there are several other projects on display at the company's Git repository) offer an interesting take on the desktop-versus-mobile "problem." Typically the massive increase in mobile Internet devices is talked about as a sign that desktop projects are doomed; Collabora's GNOME-to-Android ports show that developers do not need to take that as a foregone conclusion, even for a tightly-controlled platform like Android.

[The author would like to thank the GNOME Foundation for travel assistance to A Coruña for GUADEC.]


The GNOME project at 15

By Jonathan Corbet
August 14, 2012
On August 15, 1997, Miguel de Icaza announced the launch of the GNOME project. In the following years, GNOME has seen more than its share of ups and downs; it must be considered one of the community's most successful and most controversial projects. This is a moderately significant anniversary, so it makes some sense to have a look at where GNOME came from and speculate on where the project may be heading.

In the mid-1990s, the Linux desktop experience was provided by such aesthetically striking components as the fvwm window manager, the Xaw toolkit, and the Midnight Commander file manager. That was more than enough for many early Linux users, quite a few of whom, having written their own custom modelines to get X working with their monitors, felt no need for a desktop system beyond what it took to get their xterm windows arranged nicely. Every community has its discontents, though. In this case, one of the first groups to get itself properly organized was the KDE project, which started hacking on an integrated desktop environment in 1996 and which, by the middle of 1997, had some interesting results to show.

The problem with KDE, of course, is that it relied on the Qt toolkit, and Qt was not, at that time, free software. This decision led to epic flame wars that put current squabbles (GNOME 3, or systemd, say) to shame; they just don't make flames like they used to. Some attempts were made to get Trolltech to change Qt to a free license, but Trolltech was not yet interested in considering such a move. It is interesting to speculate on what might have happened had Qt been relicensed in 1997 rather than three years later; one of the deepest divisions within the free software community might just have been avoided.

Then again, maybe not. We're not entirely happy without something to fight about, and Emacs-versus-vi, by virtue of predating Linux entirely, was old and boring even in 1997.

The stated goals of the newly-launched GNOME project were straightforward enough: "We want to develop a free and complete set of user friendly applications and desktop tools, similar to CDE and KDE but based entirely on free software". The project would be based on the GTK toolkit which, to that point, had only really been used with GIMP. The project also planned to make heavy use of the Scheme language — an objective that faded into the background fairly quickly.

GNOME itself remained very much in the foreground. Compared to KDE it had a different license (even after Qt was freed), a different implementation language (C vs. C++), and a different approach to the desktop — all fodder for plenty of heated discussions. Miguel was, from early on, an admirer of Microsoft's ways of doing software development and tried to push many of them into GNOME. Companies were formed around GNOME, including Helix Code/Ximian (eventually acquired by Novell) and Eazel (which followed the classic dotcom trajectory of burning vast amounts of money before its abrupt death). There was clearly a lot of activity around GNOME from the very beginning.

The project produced three major releases: 1.0 in 1999, 2.0 in 2002, and 3.0 in 2011. The 2.0 release provoked a flood of criticism as the result of the project's focus on removing options whenever possible. A perceived arrogance on the developers' part (one described some user-requested window manager options as "crack-ridden bogosity") was not helpful. The GoneME fork was started in response, but did not get very far. Over time, enough features returned to the desktop, and things improved enough overall, that most users made their peace with it and stopped complaining.

The 3.0 release, instead, has provoked a flood of criticism as the result of the removal of options and features. A perceived arrogance on the developers' part has not helped the situation much. The MATE desktop fork has been started in response; it's too early to say just how far it will get. Meanwhile, a few features have found their way back into subsequent 3.x releases, and some users, at least, have made their peace with it and stopped complaining. Others, needless to say, have not.

Where to from here?

Fifteen years in, it would be hard to argue that GNOME has not been a success. The project is arguably the most successful Linux desktop available. It has an advanced code base, lots of developers, a well established foundation with a fair amount of corporate support, and more. There must be no end of free software projects that can only dream of the level of success that GNOME has achieved.

That said, there is a visible level of concern within the project. The relentless criticism of GNOME 3 has proved discouraging to many developers, and the millions of new users that GNOME 3 was supposed to attract have not yet shown themselves. Distributors are making noises about trying other desktops, and Ubuntu, arguably the highest-profile GNOME-based distribution, has gone off in its own direction with yet another fork. Meanwhile, the desktop in general looks like a shrinking target; the cool kids have gone elsewhere, and GNOME seems not to be a part of it. In this situation, what's a project to do?

Allan Day's GNOME OS post shines some light on what at least some of the project's developers are thinking. Much of it looks like a continuation of GNOME's recent work — completing the GNOME 3 user experience, for example. Some of it seems like basic sense: making the system easier to build and test would be near the top of the list. Other parts are unsurprising, but may not get the results the project is after.

The post never uses these words, but the GNOME project clearly wants to put together yet another "app store" infrastructure wherein third parties can offer proprietary applications to users. For whatever reason, enabling proprietary applications has always been seen as the path to success; the whole point of the venerable Linux Standard Base exercise was to attract that kind of application. Making it easier to add applications to the system can only be a good thing, but it will not, on its own, cause either users or developers to flock to the platform.

GNOME also clearly plans to target tablets and handsets. Again, the objective makes sense: that is where a lot of the buzz — and the money — is to be found. The problem here is that this space is already crowded with free (or mostly-free) alternatives. Android dominates this area, of course; platforms like Tizen, Plasma Active, webOS, Firefox OS, and ChromeOS are also looking for users. It is far from clear that GNOME has something to offer that will make it stand out in this crowd, especially since Allan does not expect a "touch-compatible" version of GNOME 3 for another 18 months. As Eitan Isaacson put it recently:

Our weak areas are apparent: We are not mobile and we are very far from it. We will never achieve any significant social critical mass, we have had limited successes in embracing web technologies, but the web will always be a better web. Deploying “apps” is a nightmare.

He has an interesting counter-suggestion: GNOME, he says, should aim to be the platform of choice for content creators. There could be some potential here; this is not an area that large numbers of projects are targeting, and incumbents like Mac OS seem vulnerable. Where content creators lead, others will often follow. There are some obvious obstacles (codecs, for example), but this is a target that could possibly be reached.

Most likely, though, GNOME will continue its drive for mainstream success and those millions of new users. The project might just get there: it retains a solid code base, many talented developers, and a supporting ecosystem. One should never underestimate what a determined group of developers can accomplish when they set their minds to it. The rest of us should either support them or get out of the way and let them follow their path. Watch this space over the next fifteen years, and we'll all see what they are able to achieve.


Page editor: Jonathan Corbet

Security

Stockpiling zero-day vulnerabilities

By Jake Edge
August 15, 2012

Zero-day vulnerabilities (aka zero-days or 0days) are those that have not been disclosed, so they can be exploited before systems can be updated to defend against them. Thus, having a supply of carefully hoarded zero-day vulnerabilities can be advantageous for various people and organizations who might want to attack systems. The market for these zero-days has been growing for some time, which raises some ethical, and perhaps political, questions.

A post to the Electronic Frontier Foundation (EFF) blog back in March was the jumping-off point for a recent discussion of the issue on the DailyDave security mailing list. The EFF post highlighted the fact that these vulnerabilities are for sale and that governments are participating in the market. When vulnerabilities have a market value, there is little or no impetus to actually report and fix the problems; those who buy them are able to protect their systems (and those of their "friends"), while leaving the rest of the world unprotected. The EFF recommended that the US government (at least) ensure that these vulnerabilities be reported:

If the U.S. government is serious about securing the Internet, any bill, directive, or policy related to cybersecurity should work toward ensuring that vulnerabilities are fixed, and explicitly disallow any clandestine operations within the government that do not further this goal. Unfortunately, if these exploits are being bought by governments for offensive purposes, then there is pressure to selectively harden sensitive targets while keeping the attack secret from everyone else, leaving technology—and its users—vulnerable to attack.

In a post about this year's Black Hat security conference, DailyDave list owner Dave Aitel mentioned the EFF post, noting that calls for restricting what zero-day owners can do amount to "giving up freedom for security". He pointed out that any legislative solution is likely to be ineffective but, beyond that, argued that it is a question of freedom. Restricting the kind of code that can be written, or what can be done with that code, is not respecting anyone's freedom, he said. He advocated something of a boycott of the EFF until it changes its position.

While there was some sympathy for his view of the EFF in the thread, there was also some wider discussion of the implications of zero-day hoarding. Michal Zalewski noted that the practice makes us all less safe:

[...] the side effect of governments racing to hoard 0-days and withhold them from the general public is that this drastically increases the number of 0-day vulnerabilities that are known and unpatched at any given time. This makes the Internet statistically less safe, and gives the government a monopoly in deciding who is "important enough" to get that information and patch themselves. The disparity in purchasing power is also troubling, given that governments have tons of "free money" to spend on defense, and are eager to do so, outcompeting any other buyers.

But Bas Alberts pointed out that vulnerabilities are something of a power-leveler between individuals and larger organizations (like governments):

I would go as far as to say that 0day ownership promotes freedom for the individual, regardless of who is selling or buying it. That's coincidental. It is one of the few areas where a sufficiently motivated individual or group of individuals can find, exploit, and develop an offensive capability that rivals that of a nation state. It represents a right to bear arms (RAWR!) on the Electronic Frontier(tm).

The semi-public markets in vulnerabilities may be relatively new, but using vulnerabilities as commodities is not, as Alberts describes:

Vulnerabilities and exploits have always been a commodity ... a commodity of ego, humor and yes *gasp* money. Exploit developers on both sides of the fence have been commoditizing exploits for close to 2 decades, if not longer. They've been commoditized as marketing tools, network tools, performance art, weapons, and political statements ... regardless of whether they were private or public and regardless of who was using them.

But the focus on zero-days is somewhat misplaced, according to Ben Nagy. While they may be a threat, it is not the primary threat to individuals from governments. There are much simpler ways to compromise a system:

They send their targets stock malware and say 'please install by clicking on this photo, love, er... not the government, srsly'. Or, they leverage the fact that they have physical access to the carrier, the internet cafes and so forth. (Or probably they just use humint [human intelligence] cause it's easier).

Legislation is also something of a slippery slope. For one thing, it will be difficult (or impossible) to enforce, even within a government. But, even if it is only applied to the US government—as the EFF post seems to advocate—these kinds of laws have a tendency to grow over time. As David Maynor put it: "If you apply regulations to one part of an industry, at some point regulations will seep to every part like the stench of rotten eggs." He goes on to describe some—seemingly—unlikely scenarios, but his point is clear: if government is not "allowed" to possess zero-day exploits, who will be allowed to?

It is assumed that governments want these kinds of vulnerabilities to attack other countries (a la Stuxnet). As Nagy pointed out, there are easier ways to attack individuals. Security firms also want to stockpile zero-days to protect their customers. There are other reasons to collect vulnerabilities, though.

There are reports that various folks are stockpiling Linux vulnerabilities so that they can "root" their mobile phones and other devices that use it. Presumably, there are iOS fans doing the same thing. Because some device vendors (Apple is the poster child, but various Android vendors aren't far behind) try to prevent users from getting root access, those that want to be able to do what they want with their devices need to find some kind of vulnerability to do so. That may be a "freedom-loving" example, but it suffers from many of the same risks that other types of vulnerability hoarding do.

Zero-day vulnerabilities lose their zero-day status—along with much of their potency—once they are used, reported, or fixed. Someone holding a zero-day cannot know that someone else hasn't also discovered the problem. Any purchased zero-days are certainly known to the seller, at least, but they could also be sold multiple times. If those vulnerabilities fall into the "wrong hands" (however defined), they could be used or disclosed, which makes secrecy paramount in the eyes of the hoarder.

But if the information is to be used to protect certain systems, it has to be disseminated to some extent. Meanwhile, those on the outside are blissfully unaware of a potential problem. It is a tricky problem, but it is a little hard to see how any kind of legislation is going to "fix" it. It may, in fact, not really be a solvable problem at all. As various posters in the thread said, it is tempting to want to legislate against "bad" things, but when trying to define "bad", the devil is in the details.


Brief items

Security quotes of the week

There are many remaining mysteries in the Gauss and Flame stories. For instance, how do people get infected with the malware? Or, what is the purpose of the uniquely named “Palida Narrow” font that Gauss installs?

Perhaps the most interesting mystery is Gauss’ encrypted warhead. Gauss contains a module named “Godel” that features an encrypted payload. The malware tries to decrypt this payload using several strings from the system and, upon success, executes it. Despite our best efforts, we were unable to break the encryption. So today we are presenting all the available information about the payload in the hope that someone can find a solution and unlock its secrets. We are asking anyone interested in cryptology and mathematics to join us in solving the mystery and extracting the hidden payload.

-- Kaspersky Lab asks for decryption help

Starting next week, we will begin taking into account a new signal in our rankings: the number of valid copyright removal notices we receive for any given site. Sites with high numbers of removal notices may appear lower in our results. This ranking change should help users find legitimate, quality sources of content more easily—whether it’s a song previewed on NPR’s music website, a TV show on Hulu or new music streamed from Spotify.
-- Google


SUSE and Secure Boot: The Details (SUSE Blog)

Vojtěch Pavlík explains SUSE's plans for supporting UEFI secure boot on the company's blog. It is similar to the Fedora approach, but creates its own key database for the shim bootloader to use with UEFI "Boot Services Only Variables". These "Machine Owner Keys" (MOKs) can be updated only during execution of the shim, thus allowing users to update them, but protecting them from overwrite by malware. "The enrollment process begins by rebooting the machine and interrupting the boot process (e.g., pressing a key) when the shim loads. The shim will then go into enrollment mode, allowing the user to replace the default SUSE key with keys from a file on the boot partition. If the user chooses to do so, the shim will then calculate a hash of that file and put the result in a “Boot Services Only” variable. This allows the shim to detect any change of the file made outside of Boot Services and thus avoid the tampering with the list of user approved MOKs." Matthew Garrett called it a "wonderfully elegant solution" and suspects that Fedora will adopt it too.


New vulnerabilities

bind: memory leak

Package(s): bind    CVE #(s): CVE-2012-3868
Created: August 10, 2012    Updated: August 15, 2012
Description:

From the Red Hat advisory:

BIND 9 tracks incoming queries using a structure called "ns_client". When a query has been answered and the ns_client structure is no longer needed, it is stored on a queue of inactive ns_clients. When a new ns_client is needed to service a new query, the queue is checked to see if any inactive ns_clients are available before a new one is allocated; this speeds up the system by avoiding unnecessary memory allocations and de-allocations. However, when the queue is empty, and one thread inserts an ns_client into it while another thread attempts to remove it, a race bug could cause the ns_client to be lost; since the queue would appear empty in that case, a new ns_client would be allocated from memory. This condition occurred very infrequently with UDP queries but much more frequently under high TCP query loads; over time, the number of allocated but misplaced ns_client objects could grow large enough to affect system performance, and could trigger an automatic shutdown of the named process on systems with an "OOM killer" (out of memory killer) mechanism.

Alerts:
openSUSE openSUSE-SU-2013:0666-1 bind 2013-04-11
openSUSE openSUSE-SU-2013:0605-1 bind 2013-04-03
Slackware SSA:2012-341-01 bind 2012-12-06
Gentoo 201209-04 bind 2012-09-23
Mageia MGASA-2012-0258 bind 2012-09-07
Fedora FEDORA-2012-11146 bind 2012-08-09


bugzilla: information leak

Package(s): bugzilla    CVE #(s): CVE-2012-1969
Created: August 13, 2012    Updated: September 5, 2012
Description: From the CVE entry:

The get_attachment_link function in Template.pm in Bugzilla 2.x and 3.x before 3.6.10, 3.7.x and 4.0.x before 4.0.7, 4.1.x and 4.2.x before 4.2.2, and 4.3.x before 4.3.2 does not check whether an attachment is private before presenting the attachment description within a public comment, which allows remote attackers to obtain sensitive description information by reading a comment.

Alerts:
Mageia MGASA-2013-0117 bugzilla 2013-04-18
Mandriva MDVSA-2013:066 bugzilla 2013-04-08
Mageia MGASA-2012-0255 bugzilla 2012-09-04
Fedora FEDORA-2012-11324 bugzilla 2012-08-13
Fedora FEDORA-2012-11364 bugzilla 2012-08-13


calligra: code execution

Package(s): calligra    CVE #(s): CVE-2012-3456
Created: August 10, 2012    Updated: September 25, 2012
Description:

From the Ubuntu advisory:

It was discovered that Calligra incorrectly handled certain malformed MS Word documents. If a user or automated system were tricked into opening a crafted MS Word file, an attacker could cause a denial of service or execute arbitrary code with privileges of the user invoking the program.

Alerts:
Gentoo 201209-10 calligra 2012-09-25
openSUSE openSUSE-SU-2012:1061-1 calligra 2012-08-30
Ubuntu USN-1525-1 calligra 2012-08-09
Fedora FEDORA-2012-11566 calligra-l10n 2012-08-21
Fedora FEDORA-2012-11566 calligra 2012-08-21


chromium: multiple vulnerabilities

Package(s): chromium    CVE #(s): CVE-2012-2846 CVE-2012-2847 CVE-2012-2848 CVE-2012-2849 CVE-2012-2853 CVE-2012-2854 CVE-2012-2857 CVE-2012-2858 CVE-2012-2859 CVE-2012-2860
Created: August 15, 2012    Updated: August 15, 2012
Description: From the CVE entries:

Google Chrome before 21.0.1180.57 on Linux does not properly isolate renderer processes, which allows remote attackers to cause a denial of service (cross-process interference) via unspecified vectors. (CVE-2012-2846)

Google Chrome before 21.0.1180.57 on Mac OS X and Linux, and before 21.0.1180.60 on Windows and Chrome Frame, does not request user confirmation before continuing a large series of downloads, which allows user-assisted remote attackers to cause a denial of service (resource consumption) via a crafted web site. (CVE-2012-2847)

The drag-and-drop implementation in Google Chrome before 21.0.1180.57 on Mac OS X and Linux, and before 21.0.1180.60 on Windows and Chrome Frame, allows user-assisted remote attackers to bypass intended file access restrictions via a crafted web site. (CVE-2012-2848)

Off-by-one error in the GIF decoder in Google Chrome before 21.0.1180.57 on Mac OS X and Linux, and before 21.0.1180.60 on Windows and Chrome Frame, allows remote attackers to cause a denial of service (out-of-bounds read) via a crafted image. (CVE-2012-2849)

The webRequest API in Google Chrome before 21.0.1180.57 on Mac OS X and Linux, and before 21.0.1180.60 on Windows and Chrome Frame, does not properly interact with the Chrome Web Store, which allows remote attackers to cause a denial of service or possibly have unspecified other impact via a crafted web site. (CVE-2012-2853)

Google Chrome before 21.0.1180.57 on Mac OS X and Linux, and before 21.0.1180.60 on Windows and Chrome Frame, allows remote attackers to obtain potentially sensitive information about pointer values by leveraging access to a WebUI renderer process. (CVE-2012-2854)

Use-after-free vulnerability in the Cascading Style Sheets (CSS) DOM implementation in Google Chrome before 21.0.1180.57 on Mac OS X and Linux, and before 21.0.1180.60 on Windows and Chrome Frame, allows remote attackers to cause a denial of service or possibly have unspecified other impact via a crafted document. (CVE-2012-2857)

Buffer overflow in the WebP decoder in Google Chrome before 21.0.1180.57 on Mac OS X and Linux, and before 21.0.1180.60 on Windows and Chrome Frame, allows remote attackers to cause a denial of service or possibly have unspecified other impact via a crafted WebP image. (CVE-2012-2858)

Google Chrome before 21.0.1180.57 on Linux does not properly handle tabs, which allows remote attackers to execute arbitrary code or cause a denial of service (application crash) via unspecified vectors. (CVE-2012-2859)

The date-picker implementation in Google Chrome before 21.0.1180.57 on Mac OS X and Linux, and before 21.0.1180.60 on Windows and Chrome Frame, allows user-assisted remote attackers to cause a denial of service or possibly have unspecified other impact via a crafted web site. (CVE-2012-2860)

Alerts:
Gentoo 201210-07 chromium 2012-10-21
Gentoo 201208-03 chromium 2012-08-14


condor: privilege escalation

Package(s): condor    CVE #(s): CVE-2012-3416
Created: August 15, 2012    Updated: September 4, 2012
Description: From the Red Hat advisory:

Condor installations that rely solely upon host-based authentication were vulnerable to an attacker who controls an IP, its reverse-DNS entry and has knowledge of a target site's security configuration. With this control and knowledge, the attacker could bypass the target site's host-based authentication and be authorized to perform privileged actions (i.e. actions requiring ALLOW_ADMINISTRATOR or ALLOW_WRITE). Condor deployments using host-based authentication that contain no hostnames (IPs or IP globs only) or use authentication stronger than host-based are not vulnerable.

Alerts:
Fedora FEDORA-2012-12127 condor 2012-08-31
Red Hat RHSA-2012:1169-01 condor 2012-08-14
Red Hat RHSA-2012:1168-01 condor 2012-08-14


dokuwiki: cross-site scripting

Package(s): dokuwiki    CVE #(s): CVE-2012-0283
Created: August 13, 2012    Updated: October 30, 2012
Description: From the Mageia advisory:

Cross-site scripting (XSS) vulnerability in the tpl_mediaFileList function in inc/template.php in DokuWiki before 2012-01-25b allows remote attackers to inject arbitrary web script or HTML via the ns parameter in a medialist action to lib/exe/ajax.php

Alerts:
Gentoo 201301-07 dokuwiki 2013-01-09
Fedora FEDORA-2012-16605 dokuwiki 2012-10-30
Fedora FEDORA-2012-16614 dokuwiki 2012-10-30
Mageia MGASA-2012-0207 dokuwiki 2012-08-12


kernel: privilege escalation

Package(s): linux-ti-omap4    CVE #(s): CVE-2012-3364 CVE-2012-3400
Created: August 13, 2012    Updated: March 7, 2013
Description: From the Ubuntu advisory:

Dan Rosenberg discovered flaws in the Linux kernel's NCI (Near Field Communication Controller Interface). A remote attacker could exploit these flaws to crash the system or potentially execute privileged code. (CVE-2012-3364)

Some errors were discovered in the Linux kernel's UDF file system, which is used to mount some CD-ROMs and DVDs. An unprivileged local user could use these flaws to crash the system. (CVE-2012-3400)

Alerts:
SUSE SUSE-SU-2015:0812-1 kernel 2015-04-30
Oracle ELSA-2013-1645 kernel 2013-11-26
Oracle ELSA-2013-0594 kernel 2013-03-07
Oracle ELSA-2013-0594 kernel 2013-03-07
CentOS CESA-2013:0594 kernel 2013-03-06
Scientific Linux SL-kern-20130306 kernel 2013-03-06
Red Hat RHSA-2013:0594-01 kernel 2013-03-05
openSUSE openSUSE-SU-2013:0396-1 kernel 2013-03-05
Oracle ELSA-2013-2507 kernel 2013-02-28
Mageia MGASA-2013-0016 kernel-rt 2013-01-24
Mageia MGASA-2013-0011 kernel-tmb 2013-01-18
Mageia MGASA-2013-0010 kernel 2013-01-18
Mageia MGASA-2013-0012 kernel-vserver 2013-01-18
Mageia MGASA-2013-0009 kernel-linus 2013-01-18
SUSE SUSE-SU-2012:1391-1 Linux kernel 2012-10-24
Oracle ELSA-2012-2043 kernel 2012-11-09
Oracle ELSA-2012-2044 kernel 2012-11-09
Oracle ELSA-2012-2044 kernel 2012-11-09
Red Hat RHSA-2012:1426-01 kernel 2012-11-06
Scientific Linux SL-kern-20121107 kernel 2012-11-07
Red Hat RHSA-2012:1491-01 kernel-rt 2012-12-04
Oracle ELSA-2012-1426 kernel 2012-11-06
CentOS CESA-2012:1426 kernel 2012-11-07
Oracle ELSA-2012-2043 kernel 2012-11-09
Ubuntu USN-1562-1 linux-lts-backport-natty 2012-09-10
Ubuntu USN-1556-1 linux-ec2 2012-09-06
Ubuntu USN-1557-1 linux 2012-09-06
Mageia MGASA-2012-0237 kernel 2012-08-23
Ubuntu USN-1529-1 linux 2012-08-10
Ubuntu USN-1514-1 linux-ti-omap4 2012-08-10
Ubuntu USN-1539-1 linux-lts-backport-oneiric 2012-08-14
Ubuntu USN-1533-1 linux 2012-08-10
Ubuntu USN-1532-1 linux-ti-omap4 2012-08-10


koffice: code execution

Package(s): koffice    CVE #(s): CVE-2012-3455
Created: August 10, 2012    Updated: August 30, 2012
Description:

From the Ubuntu advisory:

It was discovered that KOffice incorrectly handled certain malformed MS Word documents. If a user or automated system were tricked into opening a crafted MS Word file, an attacker could cause a denial of service or execute arbitrary code with privileges of the user invoking the program.

Alerts:
openSUSE openSUSE-SU-2012:1060-1 koffice 2012-08-30
Fedora FEDORA-2012-11546 koffice 2012-08-14
Ubuntu USN-1526-1 koffice 2012-08-09


libotr: code execution

Package(s): libotr    CVE #(s): CVE-2012-3461
Created: August 13, 2012    Updated: September 16, 2013
Description: From the Debian advisory:

Just Ferguson discovered that libotr, an off-the-record (OTR) messaging library, can be forced to perform zero-length allocations for heap buffers that are used in base64 decoding routines. An attacker can exploit this flaw by sending crafted messages to an application that is using libotr to perform denial of service attacks or potentially execute arbitrary code.

Alerts:
Gentoo 201309-07 libotr 2013-09-15
Mandriva MDVSA-2013:097 libotr 2013-04-10
openSUSE openSUSE-SU-2013:0155-1 libotr 2013-01-23
SUSE SUSE-SU-2012:1578-1 libotr 2012-11-28
openSUSE openSUSE-SU-2012:1525-1 libotr 2012-11-22
Fedora FEDORA-2012-11934 libotr 2012-08-25
Fedora FEDORA-2012-11959 libotr 2012-08-25
Mageia MGASA-2012-0223 libotr 2012-08-18
Ubuntu USN-1541-1 libotr 2012-08-16
Mandriva MDVSA-2012:131 libotr 2012-08-13
Debian DSA-2526-1 libotr 2012-08-12


libvirt: remote denial of service

Package(s): libvirt    CVE #(s): CVE-2012-3445
Created: August 15, 2012    Updated: September 5, 2012
Description: From the CVE entry:

The virTypedParameterArrayClear function in libvirt 0.9.13 does not properly handle virDomain* API calls with typed parameters, which might allow remote authenticated users to cause a denial of service (libvirtd crash) via an RPC command with nparams set to zero, which triggers an out-of-bounds read or a free of an invalid pointer.

Alerts:
Fedora FEDORA-2012-12523 libvirt 2012-09-04
Scientific Linux SL-libv-20120823 libvirt 2012-08-23
Oracle ELSA-2012-1202 libvirt 2012-08-23
CentOS CESA-2012:1202 libvirt 2012-08-24
Fedora FEDORA-2012-11843 libvirt 2012-08-22
Red Hat RHSA-2012:1202-01 libvirt 2012-08-23
openSUSE openSUSE-SU-2012:0991-1 libvirt 2012-08-15


openttd: denial of service

Package(s): openttd    CVE #(s): CVE-2012-3436
Created: August 13, 2012    Updated: August 30, 2012
Description: From the Mageia advisory:

This security update fixes CVE-2012-3436 (Denial of service (server) using ships on half tiles and landscaping).

Alerts:
openSUSE openSUSE-SU-2012:1063-1 openttd 2012-08-30
Fedora FEDORA-2012-12198 openttd 2012-08-27
Fedora FEDORA-2012-12208 openttd 2012-08-27
Mageia MGASA-2012-0212 openttd 2012-08-12


perl-RT-Authen-ExternalAuth: privilege escalation

Package(s): perl-RT-Authen-ExternalAuth    CVE #(s): CVE-2012-2770
Created: August 10, 2012    Updated: August 15, 2012
Description:

From the Red Hat advisory:

RT::Authen::ExternalAuth 0.10 and below (for all versions of RT) are vulnerable to an escalation of privilege attack where the URL of a RSS feed of the user can be used to acquire a fully logged-in session as that user.

Alerts:
Fedora FEDORA-2012-11337 perl-RT-Authen-ExternalAuth 2012-08-09
Fedora FEDORA-2012-11360 perl-RT-Authen-ExternalAuth 2012-08-09


php5: denial of service

Package(s): php5    CVE #(s): CVE-2012-3450
Created: August 14, 2012    Updated: August 15, 2012
Description: From the CVE entry:

pdo_sql_parser.re in the PDO extension in PHP before 5.3.14 and 5.4.x before 5.4.4 does not properly determine the end of the query string during parsing of prepared statements, which allows remote attackers to cause a denial of service (out-of-bounds read and application crash) via a crafted parameter value.

Alerts:
Gentoo 201209-03 php 2012-09-23
Ubuntu USN-1569-1 php5 2012-09-17
Debian DSA-2527-1 php5 2012-08-13


rubygem-actionpack: denial of service

Package(s): rubygem-actionpack    CVE #(s): CVE-2012-3424
Created: August 10, 2012    Updated: August 15, 2012
Description:

From the Red Hat advisory:

DoS Vulnerability in authenticate_or_request_with_http_digest

There is a DoS vulnerability in Action Pack digest authentication handling in Rails.

Alerts:
Red Hat RHSA-2013:0582-01 openshift 2013-02-28
openSUSE openSUSE-SU-2012:1066-1 rubygem 2012-08-30
Fedora FEDORA-2012-11363 rubygem-actionpack 2012-08-09
Fedora FEDORA-2012-11353 rubygem-actionpack 2012-08-09


Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel remains 3.6-rc1; Linus, who has been on vacation, has not made a new 3.6-rc release since August 2. There has been a slow flow of fixes into the mainline repository, though; a new release should happen in the near future.

Stable updates: 3.0.40, 3.4.8, and 3.5.1 were released on August 9; 3.2.27 came out on August 11; and 3.0.41, 3.4.9 and 3.5.2 were released on August 15. That final set included, along with the usual fixes, the recent random number generator improvements.


Quotes of the week

I'm trying to appear to be an incompetent maintainer so that someone will offer to take over. It isn't working yet.
Neil Brown

Anyway. This was my attempt to spend a few days doing something more relaxing than secure boot, and all I ended up with was eczema and liver pain. Lesson learned, hardware vendors hate you even more than firmware vendors do.
Matthew Garrett (thanks to Cesar Eduardo Barros)


Making workqueues non-reentrant

By Jonathan Corbet
August 15, 2012
Workqueues are the primary mechanism for deferring work within the Linux kernel. The maintainer of this facility is Tejun Heo, who recently posted a patch series changing an aspect of workqueue behavior that, perhaps, few kernel developers know about. Most workqueues are reentrant, meaning that the same work item can be running on multiple CPUs at the same time. That can result in concurrency that developers are not expecting; it also significantly complicates the various "flush" operations in surprising ways. In summary, Tejun says:

In the end, workqueue is a pain in the ass to get completely correct and the breakages are very subtle. Depending on queueing pattern, assuming non-reentrancy might work fine most of the time. The effect of using flush_work() where flush_work_sync() should be used could be a lot more subtle. flush_work() becoming noop would happen extremely rarely for most users but it definitely is there.

A tool which is used as widely as workqueue shouldn't be this difficult and error-prone. This is just silly.

Tejun's patch set is intended to make the workqueue interface work more like the timer interface, which is rather more careful about allowing concurrent operations on the same timer. All workqueues become non-reentrant, and aspects of the API related to reentrant behavior have been simplified. There is, evidently, a slight performance cost to the change, but Tejun says it is small, and the cost of flush operations should go down. It seems like a worthwhile trade-off, overall, but anybody who maintains code that depends on concurrent work item execution will want to be aware of the change.
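
For those who have never used the interface, here is a minimal sketch of the sort of code affected by the change; it is not taken from the patch set, and my_handler() and kick() are invented names:

    #include <linux/workqueue.h>

    static void my_handler(struct work_struct *work)
    {
        /* On a reentrant workqueue, this function can run on two
         * CPUs at once if the item is requeued while it is still
         * executing; with Tejun's changes, it no longer can. */
    }

    static DECLARE_WORK(my_work, my_handler);

    static void kick(void)
    {
        schedule_work(&my_work);  /* queue on the system workqueue */
        flush_work(&my_work);     /* before the change, waits only for
                                   * the last-queued instance; waiting
                                   * for all of them currently requires
                                   * flush_work_sync() */
    }

With the patch set applied, the flush_work() call above should behave the way most developers always assumed it did.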

Comments (none posted)

Kernel development news

Firmware loading and suspend/resume

By Jonathan Corbet
August 15, 2012
Many devices are unable to function until the host system has loaded them with their operating firmware. Runtime-loadable firmware has some real advantages: the hardware can be a little cheaper to make, and the firmware is easily upgraded after the hardware has been sold. But it also poses some problems, especially when combined with other features. Properly handling firmware loading over suspend/resume cycles has been a challenge for the kernel for some time, but a new set of patches may be poised to make things work better with little or no need for changes to drivers.

The obvious issue with suspend/resume is that any given device may lose its firmware while the system is suspended. The whole point of suspending the system is to reduce its power consumption to a minimum, so that operation may well power down peripheral devices entirely. Loss of firmware during suspend doesn't seem like it should be a big problem; the driver can just load the firmware again at resume time. But firmware tends to live on disk, and the actual firmware loading operation involves the running of a helper process in user space. Neither the disk nor user space are guaranteed to be available at the point in the resume process when a given device wants its firmware back; drivers that attempt to obtain firmware at such times may fail badly. The result is resume failures; they may be of the intermittent, developer-never-sees-it variety that can be so frustrating to track down. So the search has been on for a more robust solution for some time.

In July, Ming Lei tried to address this problem with a patch integrating firmware loading with the deferred driver probing mechanism. In short, if a firmware load fails, the whole driver initialization process would be put on the deferred queue to be retried later on. So, a driver that is unable to load its firmware at resume time will be put on hold and retried at a later point when, hopefully, the resources required to complete the firmware load will be available. That, Ming hoped, would resolve a lot of resume-time failures without requiring changes to lots of drivers.

Linus, however, disagreed:

Sure, for a lot of devices it's fine to load the firmware later. But some devices may be part of the resume sequence in very critical ways, and deferring the firmware loading will just mean that the resume will fail.

Deferring firmware loading in this manner, he thought, would just serve to hide problems from developers but leave them to burn users later on. It is much better, he thought, to force driver writers to deal with the problem explicitly.

The classic way for a driver writer to handle this problem is to just keep the firmware around after it is loaded at system boot time. Permanently cached firmware will always be available when it is needed, so firmware loading at resume time should be robust. The problem with that approach is that the firmware blobs loaded into some devices can be quite large; keeping them around forever can waste a fair amount of kernel-space memory. To make things worse, these blobs are loaded into vmalloc() memory (so that they appear to be contiguous in memory); that memory can be in short supply on 32-bit systems. Permanently caching the firmware is, thus, not an ideal solution, but that is what a number of drivers do now.

After the discussion with Linus, Ming thought for a while and came back with a new proposal: cache firmware blobs, but only during the actual suspend/resume cycle. Drivers can, of course, do that now; they can request a copy of the firmware while suspending their devices, and release that copy once it's no longer needed at resume time. But that is a chunk of boilerplate code that would need to be added to each driver. Ming's patch, instead, makes this process automatic and transparent.
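
For reference, a minimal sketch of that boilerplate as it might look in a driver today; the device structure and firmware name here are hypothetical:

    #include <linux/firmware.h>

    struct mydev {
        struct device *dev;
        const struct firmware *fw;    /* held across suspend/resume */
    };

    static int mydev_suspend(struct mydev *md)
    {
        /* Grab a copy now, while the disk and user space are
         * still available. */
        return request_firmware(&md->fw, "mydev-fw.bin", md->dev);
    }

    static int mydev_resume(struct mydev *md)
    {
        /* ... load md->fw->data (md->fw->size bytes) into the
         * hardware, then drop the cached copy ... */
        release_firmware(md->fw);
        md->fw = NULL;
        return 0;
    }

Ming's patch effectively hoists this pattern into the firmware core, so that no individual driver needs to carry it.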

In particular, request_firmware() is changed to make a note of the name of every firmware blob it is asked to load. This information is reference-counted and tied to the devices that needed the firmware; it can thus be discarded if all such devices disappear. The result is a simple data structure tracking all of the firmware blobs that may be needed by the hardware currently present in the system.

At system suspend time, the code simply goes and loads every piece of firmware that it thinks may be needed. That data then sits in memory while the system is suspended. At resume time, those cached blobs are available to any driver, with no need for filesystem access or user-space involvement, via the usual request_firmware() interface. Once the resume process is complete, the firmware loader will, after a small delay, release all of those cached firmware images, freeing the associated memory and address space for other uses.

The patch seems close to an ideal solution. Firmware loading at resume time becomes more robust, there is no need for drivers to be concerned with how it works, and wasted memory is minimized. Even Linus said "Nothing in this patchset made me go 'Eww'", which, from him, can be seen as reasonably high praise. It doesn't solve every problem; there are, for example, some strange devices that retain firmware over a reboot but not over suspend, so the system may not know that a specific firmware image is needed until resume time, when it's too late. But such hardware is probably best handled as a special case. For the rest, we may be close to a solution that simply works—and that brings an end to the recurring "firmware at resume time" discussions on the mailing lists.

Comments (7 posted)

TCP friends

By Jonathan Corbet
August 15, 2012
One of the many advantages of the TCP network protocol is that the process at one end of a connection need not have any idea of where the other side is. A process could be talking with a peer on the other side of the world, in the same town, or, indeed, on the same machine. That last case may be irrelevant to the processes involved, but it can be important for performance-sensitive users. A new patch from Google seems likely to speed that case up in the near future.

A buffer full of data sent on the network does not travel alone. Instead, the TCP layer must split that buffer into reasonably-sized packets, prepend a set of TCP headers to it, and, possibly, calculate a checksum. The packets are then passed to the IP layer, which throws its own headers onto the beginning of the buffer, finds a suitable network interface, and hands the result off to that interface for transmission. At the receiving end the process is reversed: the IP and TCP headers are stripped, checksums are compared, and the data is merged back into a seamless stream for the receiving process.

It is all a fair amount of work, but it allows the two processes to communicate without having to worry about all that happens in between. But, if the two processes are on the same physical machine, much of that work is not really necessary. The bulk of the overhead in the network stack is there to ensure that packets do not get lost on their way to the destination, that the data does not get corrupted in transit, and that nothing gets forgotten or reordered. Most of these perils do not threaten data that never leaves the originating system, so much of the work done by the networking stack is entirely wasted in this case.

That much has been understood by developers for many years, of course. That is why many programs have been written specifically to use Unix-domain sockets when communicating with local peers. Unix-domain sockets provide the same sort of stream abstraction but, since they never communicate between systems, they avoid all of the overhead added by a full network stack. So faster communication between local processes is possible now, but it must be coded explicitly in any program that wishes to use it.
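
The cost of that explicitness shows up in application code. Here is a hedged sketch of the kind of special case that TCP friends would make unnecessary; the socket path and port number are invented, and error handling is abbreviated:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Choose the fast path by hand: AF_UNIX when the peer is known
     * to be local, TCP otherwise. */
    static int connect_to_server(int peer_is_local)
    {
        int fd;

        if (peer_is_local) {
            struct sockaddr_un addr_un = { .sun_family = AF_UNIX };

            strcpy(addr_un.sun_path, "/tmp/server.sock");
            fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0 || connect(fd, (struct sockaddr *) &addr_un,
                                  sizeof(addr_un)) < 0)
                return -1;
        } else {
            struct sockaddr_in addr_in = { .sin_family = AF_INET };

            addr_in.sin_port = htons(5000);
            addr_in.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
            fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0 || connect(fd, (struct sockaddr *) &addr_in,
                                  sizeof(addr_in)) < 0)
                return -1;
        }
        return fd;
    }

With the friends patch, the AF_INET branch would perform comparably to the AF_UNIX one, and the special case could simply disappear.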

What if local TCP communications could be accelerated to the point that they are competitive with Unix-domain sockets? That is the objective of this patch from Bruce Curtis. The idea is simple enough to explain: when both endpoints of a TCP connection are on the same machine, the two sockets are marked as being "friends" in the kernel. Data written to such a socket will be immediately queued for reading on the friend socket, bypassing the network stack entirely. The TCP, IP, and loopback device layers are simply shorted out. The actual patch, naturally enough, is rather more complicated than this simple description would suggest; friend sockets must still behave like TCP sockets to the point that applications cannot tell the difference, so friend-handling tweaks must be applied to many places in the TCP stack.

One would hope that this approach would yield local networking speeds that are at least close to competitive with those achieved using Unix-domain sockets. Interestingly, Bruce's patch not only achieves that, but it actually does better than Unix-domain sockets in almost every benchmark he ran. "Better" means both higher data transmission rates and lower latencies on round-trip tests. Bruce does not go into why that is; perhaps the amount of attention that has gone into scalability in the networking stack pays off in his 16-core testing environment.

There is one important test for which Bruce posted no results: does the TCP friends patch make things any slower for non-local connections where the stack bypass cannot be used? Some of the network stack hot paths can be sensitive to even small changes, so one can imagine that the networking developers will want some assurance that the non-bypass case will not be penalized if this patch goes in. There are various other little issues that need to be dealt with, but this patch looks like it is on track for merging in the relatively near future.

If it is merged, the result should be faster local communications between processes without the need for special-case code using Unix-domain sockets. It could also be most useful on systems hosting containerized guests where cross-container communications are needed; one suspects that Google's use case looks somewhat like that. In the end, it is hard to argue against a patch that can speed local communications by as much as a factor of five, so chances are this change will go into the mainline before too long.

Comments (22 posted)

Signed overflow optimization hazards in the kernel

August 15, 2012

This article was contributed by Paul McKenney

A recent LWN article described a couple of the hazards that compiler optimizations can pose for multithreaded code. This article takes a different approach, looking at a compiler-optimization hazard that can also strike sequential code. This hazard stems from an annoying aspect of the C standard, namely that signed-integer overflow is undefined (Section 3.4.3).

Overflow is a consequence of the fact that a computer's native arithmetic capability is quite limited. For example, if a C program running on a typical Linux system tries adding one int variable with value 2,147,483,647 to another int with value 1, the result will be -2,147,483,648—which might surprise people who naively expect the mathematically correct value of +2,147,483,648. This deviation from mathematical correctness occurs because the machine cannot represent the correct value of 2,147,483,648 in a 32-bit twos-complement integer. Therefore, any attempt to compute this number without help from software will result in overflow.

Quick Quiz 1: Yecch!!! Why can't CPU designers come up with something better?
Answer

The number -2,147,483,648 is “unusual” in that adding it to itself (again, using twos-complement 32-bit integers) results in zero. Furthermore, it is its own negative: negating -2,147,483,648 results in -2,147,483,648. Therefore, this weird number is (1) equal to half of zero and (2) both positive and negative but (3) not equal to zero. This weirdness earned this number a special mention in the C standard, which says that its handling is implementation-defined (Section 6.2.6.2, Paragraph 2).

Unfortunately, relying on signed integer overflow for both normal and weird values is extremely convenient when working with free-running counters. For example, suppose that our program is dealing with a succession of work items, each designated by an integer. Suppose further that this code might be called upon to report on the past and future 25 work items. This situation will likely require a fast way to distinguish between past, current, and future work. Signed twos-complement integers make this easy. If current is the integer corresponding to the current work item,

    (current - other > 0)

will evaluate to true if the other work item is from the past. This works, even if the counter hits the maximum value and wraps around, due to the circular nature of twos-complement integers: adding one to the largest positive number that can be represented results in the smallest negative number. As a result, there is no need to write special-case code to handle counter overflows.
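
A quick worked example, assuming the 32-bit twos-complement wrap that this pattern has always relied upon; the variable names match the fragment above:

    #include <limits.h>

    int current = INT_MIN;   /* the counter has just wrapped past INT_MAX */
    int other   = INT_MAX;   /* the work item issued immediately before */

    /* On twos-complement hardware, current - other wraps to 1, so
     * (current - other > 0) correctly reports "other" as past work. */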

The simplicity of this approach has caused coders to willfully ignore the undefined nature of C signed-integer overflow since well before the dawn of Linux. For example, I was happily relying on twos-complement semantics from C signed-integer overflow in the early 1980s, and the only reason I wasn't doing so earlier was that I wasn't using C any earlier. Nor am I alone. Here are a couple of representative code fragments from version 3.5 of the Linux kernel:

    if (auth_vnode->acl_order - acl_order > 0) {

    return (int)(tcmp - __raw_readl(timer_base + MX1_2_TCN)) < 0 ?  -ETIME : 0;

The first is from afs_cache_permit() in the AFS filesystem, which is using this pattern to sort out the order of events in a distributed filesystem. The second example is from mx1_2_set_next_event() in the ARM architecture, which is using a variation on this theme to determine whether the requested event time really is in the future. Here the actual subtraction is unsigned, but the result is cast to a signed integer. Because unsigned longs are always positive, the only way that the result can be negative (when interpreted as a signed value) is overflow, which the compiler is permitted to assume never happens. The compiler is therefore within its rights to unconditionally evaluate the test as false and return zero, which might fatally disappoint the caller.

In addition, there used to be several instances of this pattern in the Linux kernel's RCU implementation, where it was used to figure out whether a given request had been implicitly satisfied due to the efforts undertaken to fulfill concurrent requests of the same type. These have since been converted to use unsigned arithmetic using the technique described below.

One might well ask: is there really a problem here? All systems running Linux are twos complement, so we really should not worry about clauses in the C standard designed to handle the wider variety of arithmetic that was available a few decades ago, right?

Unfortunately, wrong. The C compiler can and does make use of undefined behavior when optimizing. To see this, consider the following code:

     1 void long_cmp_opt(const int a, const int b)
     2 {
     3   if (a > 0) {
     4     do_something();
     5     if (b < 0) {
     6       do_something_else();
     7       if ((a - b) > 0)
     8         do_another_thing();
     9     }
    10   }
    11 }

At line 7 the compiler knows that the variable a is positive and the variable b is negative. Therefore, ignoring the possibility of integer overflow, the compiler knows that this “if” condition will always evaluate to true, meaning that the compiler is within its rights to invoke do_another_thing() unconditionally, without actually doing the subtraction and comparison. In contrast, if a is (say) 2,147,483,647 and b is -2,147,483,648, the unoptimized code would avoid invoking do_another_thing(). Therefore, this optimization has significantly changed the program's behavior.

Quick Quiz 2: But just how often is the compiler going to know the sign of both the values???
Answer

Of course, in real life, overflow really can occur. But because the C standard says that signed overflow is undefined, the compiler is permitted to do whatever it wishes in the overflow case. And GCC 4.6.1 really does omit the subtraction and comparison when compiling this example for x86 at optimization levels of -O2 or higher.

Fortunately for the Linux kernel, GCC will generate the subtraction and comparison for -O1 or less. But optimizations can migrate to lower optimization levels over time, and there may come a time when either performance or energy-efficiency considerations motivate the Linux kernel to move to higher optimization levels. If that happens, what can be done?

Quick Quiz 3: First you were talking about overflowing, now about wrapping. Consistent terminology, please?
Answer

One approach is to move to unsigned integers for free-running counters. The C standard defines unsigned integers to use modular arithmetic, so that wrapping the counter is fully defined (Section 6.2.5 Paragraph 9).

Of course, checking for counter wrap must be done differently. For purposes of comparison, here is the (undefined) signed version:

    if ((a - b) < 0)

And here is the corresponding version for unsigned long types:

    if (ULONG_MAX / 2 < a - b)

This version relies on the fact that, bit for bit, twos-complement addition and subtraction are identical to their unsigned counterparts. Now, the bitwise representation of the constant ULONG_MAX / 2 is a zero bit followed by all one-bits, which is the largest value that does not have the most-significant bit set. Therefore, if the result of computing a - b is greater than this constant, we know that this result has its uppermost bit set. Because the uppermost bit is the sign bit for twos-complement numbers, we are guaranteed that the signed and unsigned versions compute identical results.

Of course, the unsigned version is more characters to type, but that is what inline functions and C-preprocessor macros are for. But what about the code that the compiler generates? After all, the Linux kernel absolutely does not need extra instructions loading large constants for each comparison!
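
To that end, here is a minimal sketch of such a wrapper; ulong_before() is an invented name rather than an existing kernel interface (the kernel's time_before() family in <linux/jiffies.h> solves a similar problem for jiffies comparisons):

    #include <limits.h>

    /* Nonzero if counter value a precedes counter value b; uses only
     * well-defined unsigned arithmetic. */
    static inline int ulong_before(unsigned long a, unsigned long b)
    {
        return ULONG_MAX / 2 < a - b;
    }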

The good news is that GCC actually generates exactly the same code for both of the above versions when compiled with -O1 on both x86 and PowerPC:

   /* x86 code. */
      e:	8b 44 24 04          	mov    0x4(%esp),%eax
     12:	2b 44 24 08          	sub    0x8(%esp),%eax
     16:	c1 e8 1f             	shr    $0x1f,%eax
   /* PowerPC code. */
     1c:   7c 64 18 50     subf    r3,r4,r3
     20:   78 63 0f e0     rldicl  r3,r3,1,63

Of course, there will be times when the Linux kernel absolutely must rely on undefined behavior. However, this is not one of those times: As shown above, there are straightforward ways to avoid relying on signed integer overflow. Removing the kernel's reliance on signed integer overflow could avoid our getting burned by increasingly aggressive optimization, and might further allow use of higher optimization levels to improve performance and battery lifetime. So it is not too early to start future-proofing the Linux kernel by removing its reliance on signed integer overflow!

Answers to Quick Quizzes

Quick Quiz 1: Yecch!!! Why can't CPU designers come up with something better?

Answer: CPU designers have come up with a variety of schemes over the decades. However, in my experience, each scheme has its own peculiarities. I have used ones complement and twos complement, and dealing with the peculiarities of twos complement proved easier for me than those of ones complement.

That said, I suspect that the dominance of twos complement was not due to ease of use, but rather due to the fact that it allows a single hardware adder to perform both signed and unsigned computations.

Back to Quick Quiz 1.

Quick Quiz 2: But just how often is the compiler going to know the sign of both the values???

Answer: The more inline functions we add, the higher the probability that the compiler will be able to infer all sorts of things about the values in question, including their sign. And it only takes one unwanted optimization for the Linux kernel to fail.

Back to Quick Quiz 2.

Quick Quiz 3: First you were talking about overflowing, now about wrapping. Consistent terminology, please?

Answer: Interestingly enough, the C standard does not define overflow for unsigned integers. Instead, it defines the unsigned integral types to use modular arithmetic so as to eliminate the possibility of overflow. Aside from things like division by zero, that is. The term “wrap” works regardless.

Back to Quick Quiz 3.

Comments (55 posted)

Patches and updates

Kernel trees

Greg KH Linux 3.5.2
Greg KH Linux 3.5.1
Greg KH Linux 3.4.9
Greg KH Linux 3.4.8
Steven Rostedt 3.4.8-rt16
Ben Hutchings Linux 3.2.27
Steven Rostedt 3.2.27-rt40
Greg KH Linux 3.0.41
Greg KH Linux 3.0.40
Steven Rostedt 3.0.40-rt60

Architecture-specific

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Networking

Security-related

Miscellaneous

Ben Hutchings ethtool 3.5 released

Page editor: Jonathan Corbet

Distributions

A distribution for less-powerful systems: antiX-12

August 15, 2012

This article was contributed by Koen Vervloesem

The antiX Linux distribution started its life as a lightweight version of MEPIS, which is based on Debian stable, but it has diverged from its mother distribution. The current antiX-12 release comes 15 months after antiX-M11 and is based directly on Debian testing, instead of being remastered from MEPIS. The goal of antiX is to provide a fully functional Linux distribution for older computers, which is demonstrated by its modest minimum hardware requirements: a Pentium II 266 MHz processor and 128 MB RAM. To that end, antiX-12 uses a 3.5 kernel optimized for Pentium and AMD K5/K6 processors.

[AntiX-12 installer]

AntiX can be used as a live CD (e.g. as a rescue system), but most users will want to install it. The installation footprint for the full install is 2.6 GB. It uses a modified MEPIS installer, antiX Install, which looks quite old-fashioned but is straightforward and easy to use. A strong point of this installer is that it offers a lot of explanation in the left panel, so inexperienced users are not left out in the cold.

Lightweight window managers

AntiX-12 comes in three variants: core, base and full. The core ISO is 135 MB and doesn't contain any proprietary drivers; it also lacks X and uses a command-line installer. This is the version you want to install on a headless server, or if you want to choose for yourself which minimal X environment to install. But if you just want a minimal X environment without too much configuration, the base variant is better suited: this 356 MB ISO features four lightweight window managers (Fluxbox, JWM, wmii and dwm) and some applications.

The most usable variant is, of course, the full version, which offers a lot of choices for the graphical environment while still providing a complete desktop experience. It uses IceWM as its default window manager, but you can also choose Fluxbox or JWM. Moreover, you can use these three with or without the ROX Desktop. ROX adds launcher icons and the ROX-Filer to your desktop. For more minimalist users, who don't require a desktop environment with launcher icons and so on, antiX also offers the tiling window managers wmii and dwm.

Logging in is handled by the lightweight display manager SLiM. By pressing F1, you can cycle through all available window manager session types. Once you are logged in, you see some system information at the top right of the desktop background, such as the uptime, date, CPU, RAM, swap, and filesystem usage. All this is shown by the highly configurable system monitor application Conky.

Applications

The full version comes with a lot of applications: Iceweasel 10—Debian's rebranded Firefox—as the web browser, Claws-mail as the email client, Pidgin for instant messaging, LibreOffice as the office suite, XMMS as the default audio player, and GNOME MPlayer as the default video player. For file management, the user has ROX-Filer at their disposal when using the ROX Desktop. Otherwise, SpaceFM is available, which uses udevil instead of udisks. AntiX-12 uses the Debian testing repositories by default, so you can install a lot of other applications using Synaptic, which is shipped for graphical package management.

Maybe the most interesting aspect of antiX (especially the full variant) is that it comes with a lot of command-line alternatives to well-known applications. These are shown in a separate submenu "Terminal Apps". For IRC, antiX comes with the good old irssi, for downloading torrents there's rTorrent, for email there's Alpine, and for reading RSS, newsbeuter. For playing audio files from the command line antiX ships moc and for file management, Midnight Commander (mc).

[AntiX-12 sxw]

What's interesting is that antiX-12 includes not only the usual suspects (mentioned above), but also some lesser-known command-line programs. For instance, for ripping CDs there's RipIt, for writing documents, wordgrinder, and for creating presentations, xsw. When you click on Slides in the Terminal Apps -> Office menu, antiX even shows you a presentation written in xsw, explaining how to create a presentation with xsw.

When choosing one of these command-line programs in the application menu, a terminal window is opened and the program is started in it. All in all, antiX-12 is a nice showcase of command-line utilities. My only criticism is that some of these utilities have been unmaintained for a couple of years, so teaching new Linux users how to use these tools is not really future-proof.

AntiX-12 also comes with a lot of useful scripts implementing features normally only available in full-fledged desktop environments. For instance, the program user-management lets you edit and add users, and wallpaper.py lets you change the wallpaper. A problem is that many of these scripts aren't available from the application menu. The user has to read the online documentation to discover their existence.

Precious hardware resources

I tested antiX-12 primarily on an Acer Aspire One, a first-generation netbook which is a fairly underpowered machine by current standards: it has a 1.6 GHz Intel Atom processor, 512 MB RAM and a slow SSD.

At first, I wasn't impressed with antiX-12 on the Aspire One because of some hardware support issues. For instance, once installed, antiX doesn't boot directly to a graphical mode, but shows a message about an undefined video mode. This message is shown for every boot; you can only get rid of it by changing the vga=NNN variable in /boot/grub/menu.lst (antiX still uses GRUB Legacy). In addition, the VGA mode number must be converted from hexadecimal to decimal, something a newcomer won't easily figure out on their own. Another hardware issue that I encountered on my netbook is that the mouse pointer freezes from time to time, which also happened during installation, so I had to use the keyboard to be able to complete installation. When I scroll horizontally on the touchpad while the application menu is open, the menu suddenly detaches itself from the bottom of the screen. None of these problems were present on other (granted, more modern) machines I tested antiX-12 on, such as Dell and Acer laptops, but the Aspire One is well-supported on other Linux distributions, so these issues are surprising.
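
For example, assuming the common VESA mode 0x317 (1024x768 with 16-bit color), the mode number becomes 791 in decimal, and the resulting menu.lst entry might look like this; the kernel image, partition, and root device below are placeholders:

    title antiX-12 (1024x768)
    root (hd0,0)
    kernel /boot/vmlinuz root=/dev/sda1 vga=791 quiet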

Other than these glitches, though, antiX-12 works well on an underpowered machine. This is of course mostly due to the lightweight window managers. I never had the impression that I was working on an old machine, something I did have with other distributions. For instance, Ubuntu 12.04 really doesn't work well on this machine, mostly because of the small amount of RAM. Even Lubuntu 12.04, which is meant for older computers, didn't fare well on the Aspire One: the machine seemed to freeze for a long time during the installation, and when it was finally installed, it felt slower than antiX-12.

AntiX-12 succeeds in reviving that machine you may not have touched in a while. But it's not only because of the window manager: the antiX developers have cleverly chosen applications with your precious hardware resources in mind. You won't find GIMP, for example, but the mtPaint graphic editor. The gentle push to use the command-line applications also helps to use less resources.

Rough edges but helpful documentation

The antiX developers say that Linux newcomers are part of their target audience, but for these users, the distribution is a bit too rough around the edges. For instance, antiX doesn't seem to be targeted at netbook and laptop users (the words "laptop", "notebook" or "netbook" aren't even mentioned on the home page), as it doesn't show a battery indicator. You have to add some lines to ~/.conkyrc if you always want to see the status of your battery. Of course this is only an inconvenience on portable machines: a desktop or a laptop that is always connected to AC won't need such an indicator. I found another oversight in the configuration of the command-line RSS reader newsbeuter: if you start it from the application menu, it fails because the antiX developers didn't set up a default file with RSS feed URLs. As a result, newsbeuter quits directly after it has started and the terminal window is closed before the user even sees what's wrong.
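
The newsbeuter problem, at least, is easy to fix by hand: the program reads its feed list from ~/.newsbeuter/urls, one URL per line. A minimal example (these feeds are just illustrations):

    # ~/.newsbeuter/urls -- one feed URL per line
    http://lwn.net/headlines/rss
    http://planet.debian.org/rss20.xml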

Also, antiX-12 comes with the antiX2usb utility, which promises to write an ISO file to a USB stick, optionally with a persistent home partition. However, according to the antiX web site, this application has some issues, and users should resort to the command-line script new_usb.sh. Many newcomers may not be reading the web site, so it would be better if the antiX2usb utility was removed or showed a big warning when started.

On the other hand, the installer is quite helpful, offering a lot of information for newcomers. The application menu also has a Help submenu with links to various sources of information, such as the antiX FAQ, the ROX manual, documentation about Fluxbox, IceWM and JWM, and even man pages for some of the basic command-line programs. The MEPIS wiki contains some HOWTO articles for antiX and there's also an antiX forum. So, while antiX is not as polished as it should be for a distribution that targets newcomers, it's quite powerful for users who want to revive an old computer and have some time to fiddle with it. They'll probably even learn a couple of interesting command-line applications in the process.

Comments (6 posted)

Brief items

Distribution quotes of the week

For things that someone can go work on by themselves, such as exploring openrc, the most effective approach seems to be to open a discussion on debian-devel if they want some input, read the first couple day's worth of discussion, and then ignore the rest of the thread and just go on and do whatever one feels the right thing is. Almost none of the subsequent discussion after the first few days will be original or worth reading, let alone responding to. Even for things that can't be done by one team, seeking consensus by talking directly to the other teams and groups most affected is probably going to be more productive than participating in a 100-message thread in debian-devel.
-- Russ Allbery

Canek Peláez Valdés wrote:
> So let people make their OpenRC+mdev systems without systemd, and let
> people make their systemd+udev systems without OpenRC. Everybody wins.

I for one expect nothing less of Gentoo.

-- Peter Stuge

Ah, that must be why the community hasn't rallied around upstart yet... we aren't being hostile enough! Thanks for helping me understand, I'll do what I can to make sure Canonical is being appropriately hostile wrt upstart from now on. ;)
-- Steve Langasek

Comments (none posted)

CyanogenMod 9 is stable; 10 is underway

The CyanogenMod project has officially declared CM 9 stable. Updates to the alternative Android ROM are in the process of rolling out to servers. "Tonight’s release is for the majority of our [Ice Cream Sandwich] supported devices, the stragglers will catch up, and we will leave the door open for merging in additional devices from maintainers, external and internal. The team itself, will focus solely on Jelly Bean and maintenance of the CM 7 codebase." The Jelly Bean source code release forms the basis for the ongoing CM 10 work.

Comments (40 posted)

Red Hat Announces Preview Version of Enterprise-Ready OpenStack Distribution

Red Hat has announced the availability of the preview release of Red Hat’s OpenStack distribution. "Red Hat has been working with an early group of customers who have been strong advocates for a commercial release of OpenStack from Red Hat, and who have been instrumental in providing the feedback and testing required to bring this preview release to completion. The company now seeks to work with a wider group of customers to further develop Red Hat’s OpenStack distribution and its usage with other Red Hat products. In addition, Red Hat is working closely with key partners such as Rackspace to provide fully managed Red Hat OpenStack-powered clouds in the future."

Comments (none posted)

Scientific Linux 6.3

Scientific Linux 6.3 has been released. The release notes have the details.

Comments (none posted)

Slackware 14.0 RC1

The August 9 entry in the Slackware-current changelog [x86]; [x86_64] announces the first release candidate for Slackware 14.0. "Good hello, and happy Thursday! Mercury went direct early yesterday morning, and it was like the bugs started to fix themselves. It's almost enough to get me believing in that hocus-pocus nonsense! So, here's a bunch of updates that fix all of the reported issues in the beta, and we'll call this the 14.0 release candidate 1. Still some updates needed for the top-level documentation files, but we're clearly in the home stretch now (finally). Test away, and report any remaining bugs!"

Comments (1 posted)

Distribution News

Debian GNU/Linux

Debian turns 19

Nineteen years ago, on August 16, Ian Murdock posted his original founding announcement for Debian. "This is just to announce the imminent completion of a brand-new Linux release, which I'm calling the Debian Linux Release. This is a release that I have put together basically from scratch; in other words, I didn't simply make some changes to SLS and call it a new release. I was inspired to put together this release after running SLS and generally being dissatisfied with much of it, and after much altering of SLS I decided that it would be easier to start from scratch. The base system is now virtually complete (though I'm still looking around to make sure that I grabbed the most recent sources for everything), and I'd like to get some feedback before I add the "fancy" stuff."

Comments (2 posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

McRae: Are We Removing What Defines Arch Linux?

Allan McRae defends recent and planned changes to the Arch Linux distribution. "The module control is more complex. Right… because 'rc.d start foo' is much simpler than 'systemctl start foo'. I think the real criticism people are trying to express is that the rc.d way of doing things is more traditional Arch. But have you looked at the quality of scripts in /etc/rc.d? It is generally poor. Moving to systemd unit files instead will be an advantage as they are much more simple to write and can be pushed to the upstream packages."

Comments (51 posted)

Page editor: Rebecca Sobol

Development

The Linux digital audio workstation - Part 2

August 15, 2012

This article was contributed by Dave Phillips

This is part 2 of my tour through Linux digital audio workstations (DAWs). Take a peek at part 1 for some background and the first five DAWs. These are good times for Linux as a platform for audio production, and great work is going on in Linux audio development. Let's look at some more of that work.

MusE

[MusE]

In its original design, MusE included the standard suite of tools for recording and editing audio and MIDI data. MusE's MIDI capabilities included piano-roll and event-list editors, along with a page for note entry and editing in standard music notation. Eventually the notation editor was removed, the program's original author moved on to other projects, and MusE development continued as a team project.

On June 30, 2012 the team announced the public availability of MusE version 2.0 (shown above). This release can be considered a big milestone for MusE—the GUI toolkit has advanced to Qt4, the music notation editor has returned, all viable plugin formats are supported, a Python interface has been added for scripted automation control, and so on. Clearly its developers want a MusE for the 21st century.

I tested MusE 2.0, built locally from its SVN sources. I encountered no problems compiling or configuring the program, and as you might expect from a 2.0 release MusE appears to have no stability issues. Some demonstration files are available for study purposes, but they're not very exciting. I loaded a MIDI file of an orchestral piece, invoked MusE's FluidSynth plugin, set the track outputs to the synth, and MusE was rocking to Bartok. Despite my questionable taste in soundfonts, MusE performed like a champion, with no audio glitches or xruns (buffer under or over-runs) reported by JACK.

Two notable projects have been derived from the MusE project. Werner Schweer's MuseScore is a fine standalone music notation program with a UI similar to those of the well-known notation editors for other platforms. Open Octave MIDI is a significant fork of the MusE sequencer, a MIDI-only version with many features added specifically for composers working with the large-scale MIDI resources required for orchestral pieces and full-length movie soundtracks.

The Non DAW

Developer Jonathan Moore Liles was evidently unhappy with the state of the Linux DAW—and much else in the Linux audio world—so he created his Non* software, a set of programs for recording, mixing, and editing audio and MIDI data. The Non DAW is the audio recorder in the set.

[Non DAW]

The developer has specified what he wants from a DAW: "non-linear, non-destructive arrangement of portions of audio clips [and] tempo and time signature mapping, with editing operations being closely aligned to this map". By design, the Non DAW is a track-based audio recorder/arranger that outsources its signal routing, mixing, and plugin support to JACK and JACK-aware applications designed for those purposes, such as the other members of the Non* family (the group includes a MIDI sequencer, a mixer, and a session manager). The family's few dependencies include FLTK for its GUI components, libsndfile for audio file I/O, and liblo for OSC messaging support. All dependencies for the Non* programs are commonly found in the software repositories of the mainstream Linux distributions.

Incidentally, JACK is absolutely required, as there is no support for any other audio or MIDI system. Direct ALSA will not work with the Non* suite, though of course the ALSA system is needed by JACK.

Since the Non DAW is available only in source code, I built and installed it on a laptop running AVLinux 5.0.1. I recorded a few tracks, tested playback control (from the Non DAW and from QJackCtl), and got a superficial view of the program. I must emphasize "superficial"—there's much more to the Non* software than meets the eye. The Non DAW is light on system resources, but it certainly isn't a lightweight. Again, by design, the program has limitations—no soundfile import, no MIDI tracks, and no plugin support. The Non DAW resembles the hard-disk recorder systems of the early 1990s that focused on a single task. The Non DAW and its modular approach may not suit everyone's workflow, but I found it fast and flexible.

Qtractor

Developer Rui Nuno Capela's Qtractor was originally planned to function as a replacement for the handy 4-track tape machines popular with home recordists during the MIDI revolution. The Fostex and Tascam companies pioneered the small-scale portable studio, but by the late 1990s little demand remained for such devices. Nevertheless, it seems that Rui missed the 4-track recorder enough to compel him to create a software alternative. The result of this compulsion is Qtractor, Rui's contribution to the Linux DAW line-up.

[Qtractor]

The screen shot at right illustrates Qtractor's main display with the familiar track-based view and arrangement of recorded material, but in many respects the display is truly its own creature. Qtractor's user interface is based on the most likable features of the portable studio hardware—easy access to controls and operations, presented in a direct uncomplicated interface. Simplicity remains a key concept behind Qtractor's development, but, as is the nature of such things, what began as a limited design has expanded into a richly-featured DAW that compares favorably to any other tool in this article.

Among its many strengths, Qtractor supports every plugin format that can be supported under Linux, including VST/VSTi plugins in native-Windows and native-Linux formats. Of course it also likes LADSPA, LV2, and even DSSI plugins, making it perhaps the most comprehensive Linux host for audio and MIDI plugins.

Rui is an apparently tireless developer. His Q* family of software includes a soundfont synthesizer, a MIDI network control GUI, two LV2 plugins, an editor for the Yamaha XG sound devices, a front-end for the LinuxSampler, and a very popular GUI for controlling JACK. You can check out news and information on the whole Q-crew on Rui's site at rncbc.org.

Renoise

[Renoise]

Renoise might be best described as a nuclear-powered tracker. If you're familiar with the MOD music trackers of the late 80s and early 90s then you'll see some familiar sights in Renoise, such as the old-school tracker interface and its divisions. However, Renoise is in another category altogether. It is, in fact, one of the most sophisticated music-making environments available for Linux or any other platform.

Like Mixbus, Renoise is a cross-platform commercial offering with a reasonable price schedule and a boatload of features. The program retains the historic tracker UI, including its division into Pattern Editor, Song Editor, and Sample Editor. Graphic editors are available for many tasks, and Renoise provides extensive support for external plugins as well as offering its own excellent internal plugins.

If you need to be convinced about Renoise's capabilities, I'll direct you to the music of Modlys, Atte Andre Jensen's project featuring the wonderful singing of Britt Dencker Jensen. If Modlys doesn't do it for you, check out some of the other artists' offerings on the Renoise Web site. The program is very popular, and for years its users have been steadily pumping out music made with Renoise.

Rosegarden

[Rosegarden]

Rosegarden began its long life as a MIDI sequencer with a standard music notation interface, a rare thing for systems running X11 in 1993. Today it is a fully capable DAW, complete with all the expected audio and MIDI features, and it still provides a very good music notation interface. Rosegarden provides multiple data views—along with the notation editor, we find the expected track/arranger waveform display, a piano-roll MIDI sequencer, a rhythm/drum pattern composer, and a MIDI event list editor. All views update one another, so you can switch between the views whenever you like. Alas, Rosegarden has no integrated soundfile editor, but you can configure it to summon your favorite (mine is set to use mhWaveEdit).

Rosegarden includes some nice higher-level features built-in for the composer's assistance. For example, the top menu bar includes the typical headings for File, Edit, View, and so forth. However, the menu bar also includes headings for Composition and Studio menus. The Composition menu manages aspects of time and tempo in your piece, including some cool tempo-setting and beat-fitting operations. The Studio menu provides selections for accessing the audio and MIDI mixers, configuring your audio and MIDI devices (real and virtual), managing your synthesizer plugins, and setting other global MIDI parameters. Unique studio configurations can be saved and reloaded, and you can select your current setup as the default studio. General MIDI support is excellent, and Rosegarden comes prepared with device profiles for a variety of other synthesizer layouts. If your devices aren't already on the list, you can easily add custom profiles for your gear.

Developers may be interested to note that Rosegarden's Changelog reflects the many changes in Linux on the larger scale. The program started out as an X11-based project using the Xaw graphics toolkit. Attempts have been made to move the program's GUI elements to Motif, Tcl/Tk, and gtkmm, but eventually the toolkit selection settled on Qt, where it remains to this day. Over the years its language basis has undergone significant changes. The developers have coded Rosegarden in C, ObjectiveC, and C++ with CORBA. Eventually the CORBA components were eliminated, and Rosegarden is now a pure C++ project with a very handsome Qt4 GUI.

Alas, space is dear here, and Rosegarden has so many features worth describing. I'll leave it by mentioning one of its more unusual attractions: its ability to export your work in Csound score format. Csound has many front-ends, but none with a notation interface for event entry. Like its notation capability, the Csound export facility has been a feature since Rosegarden's earliest releases.

Traverso

[Traverso]

Its Web site claims that with its interface innovations Traverso will let you do "twice the work in half the time". While the statement may not be literally true, speed is definitely the watchword for Traverso. Keyboard and mouse are used separately and together in clever ways that do contribute to faster execution of many operations common to DAWs.

I didn't expect Traverso to be fully production-ready, since I built version 0.49 from Git sources. Some aspects of the program are already polished—the track display GUI seen at left is very cool—while others simply don't work at all. For example, plugin support is a mixed bag at this point in Traverso's development. The program supports only the LV2 format—no LADSPA or VST here—and its support is incomplete. Plugins without GUIs loaded and worked without problems, while any plugin with its own GUI crashed the program. Perhaps it simply needs a more complete implementation of the current LV2 stack, but alas, development of Traverso seems to have halted or slowed to the point of apparent immobility. I hope I'm wrong about that, because there's much to like about Traverso and I'd like to see it evolve.

Bitwig Studio

I've placed Bitwig Studio out of order for the simple reason that I haven't used it yet. That's because, as far as I know, it hasn't been released in any version for Linux. Preliminary reports seem to indicate that the tested version is running only on OS X, but the company has indicated that a proprietary version for Linux is a release target.

So why the big noise over Bitwig? Its designers have come from the development team at Ableton, the company responsible for the popular Ableton Live DAW. Ableton Live redefined the DAW for a new generation of computer-based music makers, and to date there has been nothing like it available for Linux. Bitwig may change that situation.

Ableton Live has been described as a front-end for a huge granular synthesis engine capable of realtime time and pitch compression/expansion. Audio and MIDI material recorded or retrieved into the program can be instantly modified to match the composition's tempo and pitch levels. This ability to perfectly match any material has evolved into a powerful method of realtime composition. From what I've seen so far, Bitwig appears to include at least the central characteristics of Ableton Live, and if it can live up to its advertising Bitwig will surely attract more users to Linux for their sound and music work.

By the way, I've included no screenshot of Bitwig because—surprise—I haven't used it yet. As a matter of personal policy I don't add screenshots of anything I don't run here at Studio DLP.

Outro

I hope you've enjoyed this little tour of Linux DAWs, and I'd be most pleased if you gave some of these programs a trial run or two. I'd be ecstatic if you made some music with one of the DAWs presented here, so let us know if you come up with something we should hear. Finally, I must note that Linux users have other choices beyond the DAWs presented in this article. See the apps Wiki at linuxaudio.org for pointers to more Linux DAWs.


Comments (13 posted)

Brief items

Quotes of the week

Free software didn't start out as competitive with proprietary software. It became so only because a bunch of ethically motivated hackers were willing to "subsidize" the movement with their failed, and successful, attempts at free software and free culture projects and businesses.
Benjamin Mako Hill

The architecture of the system should correspond to the implementation style. Schemaless, heavy on magic, untyped approaches are suited for modular systems where each individual module is of limited complexity, and where the modules are isolated from each other and can recover from failures. The more complex the individual modules (and sometimes the complexity is inherent to the problem), the more benefits from "conservative" techniques.
Gintautas Miliauskas

Comments (1 posted)

Calligra 2.5 released

Version 2.5 of the Calligra suite has been announced. "For the productivity part of the suite (word processor, spreadsheet, and presentation program) the target user of version 2.5 is still the student or academic user. This version has a number of new features that will make it more suitable for these users." Additions include a table editor, a number of spreadsheet improvements, some new input filters, and more.

Comments (31 posted)

TizMee – Tizen compatibility layer for MeeGo

Michael Sheldon writes on his blog about TizMee, a library for MeeGo to implement Tizen's Web API. "This first early release should support general HTML5 apps fairly well (although there are still a few that have issues), some aspects of the Tizen API (this is by no means complete however) and pretty much all of the Cordova/PhoneGap API." Packages are built for Nokia's MeeGo Harmattan (the N9 and N950 devices) and the community-built Nemo.

Comments (42 posted)

HarfBuzz 0.9.2 is available

Behdad Esfahbod has announced a new release of the HarfBuzz OpenType layout engine. The 0.9.2 release is part of an ongoing rewrite, about which he says "I can finally claim that new HarfBuzz is on par with or better than both Pango and old HarfBuzz / Qt", and "this may be a good time for distributions to start putting a harfbuzz package together".

Full Story (comments: none)

Valgrind-3.8.0 is available

A new release of the Valgrind debugging tool is available. Version 3.8.0 adds support for MIPS32/Linux and X86/Android, plus partial support for Mac OS X 10.8. The release announcement also heralds "Intel AVX and AES instructions are now supported, as are POWER DFP instructions", plus "support for recent distros and toolchain components (glibc 2.16, gcc 4.7)" among numerous other improvements.

Full Story (comments: none)

Initial release of Novaprova now available

The first release of the Novaprova unit test framework for C is now available. The project advertises "advanced features previously only available in unit test frameworks for languages such as Java or Perl", and includes support for test parameters, mocking C functions at runtime, detection of C runtime errors, and dynamic test discovery using reflection.

Comments (none posted)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

The open source technology behind Twitter (opensource.com)

In an interview at opensource.com, Twitter's open source manager, Chris Aniszczyk, talks about how the company uses (and contributes to) open source. "We use a lot of open source software. In my opinion, it’s a no-brainer as open source software allows us to customize and tweak code to meet our fast-paced engineering needs as our service and community grows. When we plan new engineering projects at Twitter, we always make sure to measure our requirements against the capabilities of open source offerings, and prefer to consume open source software whenever it makes sense. Through this method, much of Twitter is now built on open source software, and as a result the open source way is now a pervasive part of our culture. On top of that, there is a positive cycle of teaching and learning within open source communities that we benefit from."

Comments (none posted)

Dricot: A freasy future for GNOME

On his blog, GNOME's Lionel Dricot suggests that the project should pursue decentralized online services as its next big goal. "Today, freedom is not only about the code that runs on your computer. It is about all the online services you are connected to, all the servers that host your data." The concept Dricot outlines seems akin to adding ownCloud-like functionality to the existing GNOME stack. "Because we can offer a level of integration never seen before. With technologies such as Telepathy tubes, XMPP, DBus, developing an online application for GNOME would be as easy as writing a desktop application."

Comments (91 posted)

Rigo: Multicore Programming in PyPy and CPython

PyPy developer Armin Rigo describes his vision for parallel programming in higher-level languages. "We often hear about people wanting a version of Python running without the Global Interpreter Lock (GIL): a 'GIL-less Python'. But what we programmers really need is not just a GIL-less Python --- we need a higher-level way to write multithreaded programs than using directly threads and locks. One way is Automatic Mutual Exclusion (AME), which would give us an 'AME Python'."

Comments (8 posted)

Page editor: Nathan Willis

Announcements

Brief items

Digia acquires Qt

Digia has announced that it is acquiring the Qt project from Nokia. "Following the acquisition Digia becomes responsible for all the Qt activities formerly carried out by Nokia. These include product development, as well as the commercial and open source licensing and service business. Following the acquisition, Digia plans to quickly enable Qt on Android, iOS and Windows 8 platforms." Digia has run the Qt licensing business since early 2011.

Comments (60 posted)

X.Org Foundation joins OIN

The X.Org Foundation has joined the Open Invention Network (OIN). "OIN has granted the Foundation a license to use all patents they control or which are covered by agreements with other OIN community members and licensees, in exchange for a pledge from the Foundation to license back any patents which the Foundation may come into possession of. (Currently the Foundation owns no patents, but if we ever do, they will be covered by this agreement.)"

Full Story (comments: none)

Articles of interest

Ada Initiative news July 2012

The July edition of the Ada Initiative news covers AdaCamp DC, the Wikimania keynote, West Coast meetups, and other topics.

Full Story (comments: none)

Aurora: DEFCON: Why conference harassment matters

Valerie Aurora examines the costs of harassment at technical conferences and what can be done about it. "When you say, 'Women shouldn’t go to DEFCON if they don’t like it,' you are saying that women shouldn’t have all of the opportunities that come with attending DEFCON: jobs, education, networking, book contracts, speaking opportunities – or else should be willing to undergo sexual harassment and assault to get access to them. Is that really what you believe?"

Comments (108 posted)

FSF: The Shield Act fails to protect free software from patents

The Free Software Foundation comments on the Saving High-Tech Innovators from Egregious Legal Disputes (SHIELD) Act. "This act is meant to deal with the problem of patent trolls destroying software businesses. The bill would enable victims of patent trolling to have their costs covered if the judge decides that the plaintiff was not likely to succeed on their claims. While many are hailing the bill for fighting against patent trolls, it does not go far enough for us to support it, and it carries some risks that concern us." (Thanks to Davide Del Vento)

Comments (none posted)

Calls for Presentations

FOSDEM calls for devroom organizers and main track speakers

FOSDEM (Free and Open Source software Developers' European Meeting) will be held February 2-3, 2013 in Brussels, Belgium. The call is out for devroom organizers and main track speakers. "The main tracks host high-quality seminars for a broad and technical audience. Every track is organized around a theme (security, kernel, collaboration, ...). They are held in the two biggest auditoria and last 50 minutes. Each of the talks is given by a speaker who gets their travel and accommodation costs reimbursed." Devroom (developer room) organizers will schedule presentations, brainstorming and hacking sessions for their room. "Each year we receive more requests than we can host. To better achieve our goals, preference will be given to *proposals involving multiple, collaborating projects*. Projects with similar goals/domains that make separate requests will be asked to co-organize a devroom under their common theme."

Full Story (comments: none)

Upcoming Events

GStreamer Conference 2012 program now complete

The list of presenters and topics for the GStreamer Conference has been finalized. The conference takes place August 27-28 in San Diego, CA. "The GStreamer Conference 2012 is an annual gathering of GStreamer and Open Source multimedia enthusiasts. This year we have exciting talks about GStreamer 1.0, the GStreamer SDK, GStreamer and Embedded hardware, ALSA, Wayland, OpenGL and Mesa and much more."

Full Story (comments: none)

Python Game Programming Challenge (PyWeek) #15 is coming

PyWeek will run September 9-16, 2012. Entrants will write a game from scratch in one week (in Python, of course) either as an individual or in a team.

Full Story (comments: none)

SCALE 11x set for Feb. 22-24, 2013

The Southern California Linux Expo (SCALE) 11x will take place February 22-24, 2013 in Los Angeles, CA. "Details on SCALE 11x – including a call for papers, instructions to obtain exhibit space, sponsorship opportunities and, of course, registration – will be announced as they are confirmed, and announcements should start later this month."

Full Story (comments: none)

Events: August 16, 2012 to October 15, 2012

The following event listing is taken from the LWN.net Calendar.

Date(s)                  Event                                                    Location
August 18-19             PyCon Australia 2012                                     Hobart, Tasmania
August 20-22             YAPC::Europe 2012 in Frankfurt am Main                   Frankfurt/Main, Germany
August 20-21             Conference for Open Source Coders, Users and Promoters   Taipei, Taiwan
August 25                Debian Day 2012 Costa Rica                               San José, Costa Rica
August 27-28             XenSummit North America 2012                             San Diego, CA, USA
August 27-28             GStreamer conference                                     San Diego, CA, USA
August 27-29             Kernel Summit                                            San Diego, CA, USA
August 28-30             Ubuntu Developer Week                                    IRC
August 29-31             2012 Linux Plumbers Conference                           San Diego, CA, USA
August 29-31             LinuxCon North America                                   San Diego, CA, USA
August 30-31             Linux Security Summit                                    San Diego, CA, USA
August 31-September 2    Electromagnetic Field                                    Milton Keynes, UK
September 1-2            Kiwi PyCon 2012                                          Dunedin, New Zealand
September 1-2            VideoLAN Dev Days 2012                                   Paris, France
September 1              Panel Discussion Indonesia Linux Conference 2012         Malang, Indonesia
September 3-8            DjangoCon US                                             Washington, DC, USA
September 3-4            Foundations of Open Media Standards and Software         Paris, France
September 4-5            Magnolia Conference 2012                                 Basel, Switzerland
September 8-9            Hardening Server Indonesia Linux Conference 2012         Malang, Indonesia
September 10-13          International Conference on Open Source Systems          Hammamet, Tunisia
September 14-16          Debian Bug Squashing Party                               Berlin, Germany
September 14-21          Debian FTPMaster sprint                                  Fulda, Germany
September 14-16          KPLI Meeting Indonesia Linux Conference 2012             Malang, Indonesia
September 15-16          Bitcoin Conference                                       London, UK
September 15-16          PyTexas 2012                                             College Station, TX, USA
September 17-19          Postgres Open                                            Chicago, IL, USA
September 17-20          SNIA Storage Developers' Conference                      Santa Clara, CA, USA
September 18-21          SUSECon                                                  Orlando, Florida, US
September 19-20          Automotive Linux Summit 2012                             Gaydon/Warwickshire, UK
September 19-21          2012 X.Org Developer Conference                          Nürnberg, Germany
September 21             Kernel Recipes                                           Paris, France
September 21-23          openSUSE Summit                                          Orlando, FL, USA
September 24-25          OpenCms Days                                             Cologne, Germany
September 24-27          GNU Radio Conference                                     Atlanta, USA
September 27-29          YAPC::Asia                                               Tokyo, Japan
September 27-28          PuppetConf                                               San Francisco, US
September 28-30          Ohio LinuxFest 2012                                      Columbus, OH, USA
September 28-30          PyCon India 2012                                         Bengaluru, India
September 28-October 1   PyCon UK 2012                                            Coventry, West Midlands, UK
September 28             LPI Forum                                                Warsaw, Poland
October 2-4              Velocity Europe                                          London, England
October 4-5              PyCon South Africa 2012                                  Cape Town, South Africa
October 5-6              T3CON12                                                  Stuttgart, Germany
October 6-8              GNOME Boston Summit 2012                                 Cambridge, MA, USA
October 11-12            Korea Linux Forum 2012                                   Seoul, South Korea
October 12-13            Open Source Developer's Conference / France              Paris, France
October 13-14            Debian BSP in Alcester (Warwickshire, UK)                Alcester, Warwickshire, UK
October 13-14            PyCon Ireland 2012                                       Dublin, Ireland
October 13-15            FUDCon:Paris 2012                                        Paris, France
October 13               2012 Columbus Code Camp                                  Columbus, OH, USA
October 13-14            Debian Bug Squashing Party in Utrecht                    Utrecht, Netherlands

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds