LWN.net Weekly Edition for March 15, 2012

A look in on Apache OpenOffice

By Jake Edge
March 14, 2012

The Apache OpenOffice (AOO) project is in the final stretch toward its first release, AOO 3.4, but there are still some hurdles to clear. The current focus is largely on identifying and fixing the "release blocker" bugs that are being found in various developer snapshots. All of that is pretty normal for a project getting ready for a release, but AOO also needs to handle a few other loose ends. Because it is an Apache incubator project (a "podling" in Apache terms), it must undergo an intellectual property (IP) review and get approval before making the release.

The IP concerns stem from the change in license after Oracle donated the OpenOffice.org code to Apache. All Apache projects must release all of their code under the Apache Software License (ASL); any OpenOffice.org code that came directly from Oracle is easily switched from the LGPL, but code from other projects that has been incorporated into the office suite may not be available under the ASL. That has led AOO to carefully audit all of the code used and to remove or replace any non-ASL pieces. The IP review will then vet those changes to try to ensure that nothing has been missed. The process is well documented on the incubator site.

In fact, there is a truly eye-opening amount of documentation at the Apache incubator site that describes, sometimes in great detail, the life of a podling. It covers such things as how the podling should get set up in terms of organization and infrastructure, how it should prepare for a release and get IP clearance, along with the steps needed to eventually graduate to a full Apache Software Foundation (ASF) project. On one hand, documenting all of these processes is important and useful, but the sheer level of bureaucracy has to be daunting to some.

A podling first needs to get set up in the Apache infrastructure, meaning mailing lists and a Subversion repository for its code, but it must also learn "The Apache Way". From all of the documentation, as well as the gentle prodding from AOO mentors and other longtime Apache members on the ooo-dev mailing list, it is clear that the ASF is quite happy with its policies and procedures—not surprising given its level of success over the years. But all of that "extra" effort has certainly delayed the release of 3.4, to the point where frustration among users and developers is becoming evident.

The last release of OpenOffice.org (3.3) was more than a year ago, in January 2011. Oracle donated the code to the ASF in June of that year, but it has taken the better part of a year to get close to a new release. That's not to say that the project has been idle—far from it, as documented in Rob Weir's timeline—but it is a lot of work to move a project of this size to a new home. In the meantime, though, there hasn't been a lot of time to add new features.

New for 3.4

The most talked-about new feature of AOO 3.4 is native scalable vector graphics (SVG) import. OpenOffice.org had an external filter that used six GPL/LGPL libraries, which needed to be replaced. The new code has an SVG interpreter in the core, which provides better SVG support while also reducing the memory footprint and startup time—not to mention removing non-ASL code. While the feature was available as a filter in OpenOffice.org (and natively in Go-OO-derived versions of the suite, including LibreOffice), it is new for AOO.

As the 3.4 release notes draft points out, there are two classes of updates: those that came from Oracle (the 3.4 beta was in progress at the time of the transfer) and those that have been added by AOO contributors since then. The Oracle contributions are largely incremental improvements to existing functionality, while those created for AOO may be more visible to users. Certainly SVG import fits in there, but there is also a new color picker dialog, a new regular expression engine, support for line caps (i.e. how lines terminate and connect visually), and more.

All of the features and bug fixes are to the good, though they have been a long time in coming. For Linux systems, the AOO 3.4 release is likely to largely be a non-event as most distributions switched to LibreOffice (LO) long ago. Most of the features from the 3.4 beta are already present in LO; should any of the AOO additions be of interest, they can be adopted as well, of course. It is in the Windows world (and to a lesser extent Mac OS X) that any rivalry between AOO and LO will really play out.

Apache OpenOffice and LibreOffice

It's clear that a rivalry does still exist between the projects, and that the bad blood between them has not been cleared up. A recent effort by Simon Phipps to clarify some facts about AOO seems to have run aground at least partly because of the unhappiness between the projects. After posting his query to the mailing list, Phipps was asked to put it into FAQ form on the wiki, which he did, but that doesn't seem to have helped. It could be argued that his wording was insufficiently neutral—many have—but his attempt was meant to answer questions that are commonly asked in various forums, mailing lists, and so on. His biggest mistake, it seems, was mentioning LO as a possible interim solution until the 3.4 release is ready. Eventually, Phipps gave up trying to work on the FAQ after Weir rewrote most of it.

Some users are understandably concerned that no releases of any form of "OpenOffice" have been made for more than a year now. Undoubtedly they are interested in new features, but bugs, particularly security bugs, haven't been addressed in that time either. It may be that there are no security problems in OOo 3.3, but there is reason to believe otherwise. Some suggested the proprietary IBM Lotus Symphony (which is based on the OpenOffice code) as an alternative in the interim, but that doesn't appear in the draft FAQ either at this point.

That conversation, which spreads itself out over at least three threads, is indicative of the tension between the two projects. There seems to be a fair amount of energy being expended in fairly pointless—quite possibly counter-productive—arguments about which project is the rightful owner of the "OpenOffice" brand and community going forward, along with combating things "in the press" and elsewhere that are deemed to be FUD. What's really needed, as is often pointed out, is to focus on the release. Right now, anyone asserting that AOO is superior to other alternatives is missing an important point: there is no AOO currently available and that won't change for a bit.

That is not to say that there aren't provocations from some on the LO side—there are. But at this point, the split has happened and there is no going back, so dwelling on it seems like wasted effort. It's likely that as the projects mature, there will be less sniping; it's rare to see KDE and GNOME engage in that sort of thing these days, for example. Once there is an AOO release, and the project graduates to a full-fledged Apache project, assuming that happens, some of the bad blood may start fading away.

Progress toward graduation

At least two of the podling mentors believe that progress is being made toward graduation. Ross Gardler listed numerous steps the project has taken toward that goal, concluding:

In summary, yes I think the AOO project is well on its way to graduation. A release is a pre-requisite to graduation as that is the point at which the ASF is able to assert that the code is fully license compliant. Once the first release is complete I imagine graduation will not be far behind.

I look forward to seeing AOO code allowing the further adoption of ODF alongside other great ODF related projects.

Joe Schaefer agreed, though he is "concerned about the level of commit activity being on the low-side". He hopes to see that pick up post-release as the project heads toward a 4.0 release. But, both Schaefer and Gardler are concerned about another problem, "learning to play nice with those not fully aligned to 'the one true vision'", as Gardler put it. There is a strong chorus of anti-LO sentiment that pervades the mailing list at times, even when it may not be in the best interest of OpenOffice users. That chorus is often led by Weir, who is one of the prime movers behind AOO and perhaps the most prolific mailing list poster.

As Schaefer pointed out, that is not an "Apache-esque" view of things: "At Apache we aren't in competition with other projects, we provide our work for the public benefit and leave discretion about adoption to the public." But Weir disagrees with that view. Whatever the case, Weir's tone and demeanor seem at times to grate on contributors and potential contributors, as well as on some of the project's mentors.

In the end, though, as many point out, it will come down to the code. Can AOO get a solid release out the door, and then continue that success down the road? That, much more than any branding question, is going to determine the long-term success of the project. At this point, it seems that there are only a handful of release-blocking bugs, and the first release candidate may be imminent. But, so far, an attempt to get the wider Apache community to start looking at the IP issues has drawn no comments, so that may still take some time.

While it is in many ways unfortunate that the LO/AOO split ever occurred, the projects can certainly benefit from competition. Even if code can really only flow one way (and divergence is likely to limit that eventually), good ideas can certainly flow both ways. There is plenty that these two communities can work together on: ODF interoperability and enhancements, security issues in the shared code, promoting free office suite alternatives, and so on. One hopes we will see more of that in the future.

Comments (11 posted)

Vagrant 1.0: Virtual machines at your fingertips

March 14, 2012

This article was contributed by Koen Vervloesem

If you want to get up and running quickly with virtual machines, Vagrant could come in handy. After two years of development, the project has announced Vagrant 1.0, which is the first stable release and the first release for which the developers promise backward compatibility.

Vagrant starts from the idea that many developers do their development and/or testing in virtual machines: to work with different distributions, to avoid reboots, or to avoid polluting their main workstation operating system with conflicting dependencies or simply bad packages. But a collection of installed virtual machines needs to be managed, and each fresh developer VM takes time to install and configure. That's where Vagrant comes in: it's a tool that can automatically set up pre-configured virtual machine instances for development and testing purposes, based on one of many VM templates. According to the 1.0 release announcement, Vagrant is in use by Mozilla, LivingSocial, EventBrite, Yammer, Disqus, and many more organizations.

For the moment, Vagrant is focused on the creation of virtual machines for Oracle's VirtualBox, so you need VirtualBox installed (version 4.0 or higher). The Vagrant web site offers rpm, deb, and Arch Linux packages of version 1.0 for 32- and 64-bit x86 Linux, as well as packages for Mac OS X and Windows. Alternatively, you can install Vagrant with Ruby's package manager RubyGems (gem install vagrant), as Vagrant is written in Ruby.

Getting started

The project has published excellent and up-to-date documentation on its web site, as well as a "Getting Started" guide. Vagrant is controlled through subcommands of the vagrant command, and configuration is done per project (preferably with each project in a separate directory) in a Vagrantfile, which serves a purpose similar to that of a Makefile in a development project. A Vagrantfile is actually a file containing Ruby code that configures the project's virtual machine. Vagrant can create an initial Vagrantfile with the vagrant init command; the generated file documents the most common configuration options in extensive comments.
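
Stripped of those comments, only a few lines remain. A minimal Vagrantfile in the Vagrant 1.0 configuration format might look like this sketch (the box name and URL refer to the base box described below):

    # A Vagrantfile is Ruby code evaluated by Vagrant to configure the VM
    Vagrant::Config.run do |config|
      # the base box (template) to build this project's VM from
      config.vm.box = "lucid32"
      # where to fetch the box if it hasn't been added yet
      config.vm.box_url = "http://files.vagrantup.com/lucid32.box"
    end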

Another important concept is that of "base boxes." Instead of creating a virtual machine instance from scratch, Vagrant bases its instances on templates, called base boxes. A base box is essentially a tarball containing a root file system and a VM configuration with things like RAM and disk size. With the "vagrant box add" command, you can download a base box from an HTTP URI or a local filesystem and copy it into your local Vagrant installation. After that, you can use this base box as a template for any of your projects by specifying its name in the Vagrantfile. The Vagrant web site contains a 32-bit, 259 MB Ubuntu Lucid Lynx base box, as well as a 64-bit variant.
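
For example, fetching that Ubuntu box and creating a new project from it looks something like the following (the "lucid32" name and URL come from the project's getting-started documentation):

    vagrant box add lucid32 http://files.vagrantup.com/lucid32.box
    mkdir myproject && cd myproject
    vagrant init lucid32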

When running vagrant up for the first time in a project, Vagrant creates a virtual machine based on the base box and starts a headless instance using VirtualBox. Now you can do some work with the virtual machine. You can suspend and resume it ("vagrant suspend" and "vagrant resume"), or you can completely halt it with "vagrant halt", which shuts down the VM. If the virtual machine has been shut down, a "vagrant up" doesn't re-create the machine but reboots it instead. Another option is to completely delete the virtual machine with "vagrant destroy", which of course deletes the whole VM image and thus the files included in it. After a virtual machine is deleted, a "vagrant up" command will re-create it based on the configuration in the Vagrantfile.
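
In command form, the complete life cycle of a project's VM thus looks something like:

    vagrant up         # create (or reboot) the VM from the base box
    vagrant suspend    # save the VM's state and stop it
    vagrant resume     # pick up where you left off
    vagrant halt       # shut the VM down
    vagrant destroy    # delete the VM image and the files in it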

An advantage of Vagrant is that these virtual machines are easily shareable, for instance with co-workers: you can package a virtual machine with the "vagrant package" command. Beyond that, you can create your own base boxes for your favorite Linux distribution. That way you can package a complete development environment in a Vagrant box and distribute it to others who can use this reproducible environment with a single command.

Configuration management

If the virtual machines you could create with Vagrant were limited to copies of the base boxes, this wouldn't be so useful, as you would have to create a base box for every configuration you need. Thankfully, Vagrant allows you to provision your virtual machines using the configuration management systems Puppet or Chef. This allows you to use a base box with the very basic functionality that all your virtual machines need, and then add extra packages and configuration changes using a Puppet manifest or a Chef cookbook that you refer to in the Vagrantfile.

Provisioning is done when you enter "vagrant up" or "vagrant reload" (which reloads the VM's complete configuration in the Vagrantfile), but you can also use vagrant provision to reload only the Puppet or Chef configuration after you have changed it. Vagrant can provision your virtual machines even if you don't want to run a Puppet or Chef server: it calls these modes Chef Solo provisioning and Puppet provisioning. The only thing you have to do is add the location of your manifests or cookbooks to the Vagrantfile. Of course Vagrant is also able to provision your virtual machines using an existing Puppet or Chef server.
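
A sketch of what that looks like in the Vagrantfile, assuming a standalone Puppet setup (the paths and manifest name here are illustrative):

    Vagrant::Config.run do |config|
      config.vm.box = "lucid32"
      # Puppet standalone: apply manifests/site.pp inside the VM at boot
      config.vm.provision :puppet do |puppet|
        puppet.manifests_path = "manifests"
        puppet.manifest_file  = "site.pp"
      end
      # Chef users would use :chef_solo and point at a cookbooks path instead
    end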

Talking to the virtual machine

But Vagrant isn't just about creating virtual machines based on templates and provisioning them. Its most powerful idea is that it sets up some channels to communicate with your virtual machines. For instance, it provides SSH access: with the command vagrant ssh, it logs you into the virtual machine so you'll be able to enter commands. X11 forwarding isn't enabled by default, but this can be configured in the Vagrantfile, which could come in handy if you have X installed in the virtual machine and you want to run graphical programs using ssh -X.
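
Enabling that forwarding is a one-line setting; this sketch assumes the forward_x11 option from Vagrant's SSH configuration and X libraries installed in the guest:

    Vagrant::Config.run do |config|
      # forward X11 connections through "vagrant ssh" sessions
      config.ssh.forward_x11 = true
    end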

Moreover, Vagrant automatically configures your project's directory as a VirtualBox shared folder and mounts it in the virtual machine on /vagrant. The virtual machine has both read and write access to this directory, so you can easily use this to exchange files between your host system and the virtual machine. If the performance of the VirtualBox shared folder is not enough (which is typically the case when you have thousands of files), you can also set up NFS shared folders.
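
Additional shared folders, including NFS-backed ones, are declared in the Vagrantfile as well; a sketch with illustrative names and paths:

    Vagrant::Config.run do |config|
      # NFS shared folders require a host-only network to the guest
      config.vm.network :hostonly, "33.33.33.10"
      # export ./data on the host as /vagrant_data in the guest, over NFS
      config.vm.share_folder "v-data", "/vagrant_data", "./data", :nfs => true
    end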

Vagrant also allows you to configure port forwarding in the Vagrantfile. For example, you could fire up a virtual machine with a test web server, forward its port 80 to a port on your host system, and then easily access the web server using a localhost URI, so you don't have to remember the virtual machine's IP address. It's also possible to create a multi-VM environment with multiple virtual machines (for instance a web and a database server) communicating with each other.
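
Here, too, it is just a few lines of configuration; this sketch (with hypothetical VM names) forwards a web server's port and defines a second machine:

    Vagrant::Config.run do |config|
      config.vm.define :web do |web_config|
        web_config.vm.box = "lucid32"
        # the guest's port 80 becomes http://localhost:8080/ on the host
        web_config.vm.forward_port 80, 8080
      end
      config.vm.define :db do |db_config|
        db_config.vm.box = "lucid32"
      end
    end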

Development

Vagrant is open source, distributed under the MIT License. The code is on GitHub, and its README file offers some help on how to contribute to Vagrant. There's the #vagrant IRC channel on Freenode and a mailing list for questions; the project also has an issue tracker for reporting bugs.

Vagrant was started in January 2010 by Mitchell Hashimoto and John Bender. The first release was version 0.1.0 on March 7, 2010, and exactly two years later it saw a 1.0 release. Vagrant development is not backed by any single company, but it's sponsored by Engine Yard and Kiip and has attracted contributions from over a hundred individuals during those two years. One of these outside contributions is the Veewee tool, created by Patrick Debois to make building your own base boxes easier:

Veewee tries to automate this and to share the knowledge and sources you need to create a basebox. Instead of creating custom ISOs from your favorite distribution, it leverages the 'keyboardputscancode' command of Virtualbox to send the actual 'boot prompt' keysequence to boot an existing iso.

Veewee comes with a lot of templates for various Linux distributions, including CentOS, Debian, Fedora, Arch Linux, Gentoo, openSUSE, Ubuntu, and so on, as well as FreeBSD, OpenBSD, OpenIndiana, and even Windows. The best thing about these templates is that you can see how they are made, so you can adapt them to your needs.

Other contributors have created plugins for Vagrant. A simple:

    gem list -r | grep vagrant
command reveals more than a dozen RubyGems for Vagrant plugins. For example, Igor Sobreira has created the vagrant-screenshot plugin to take a screenshot from a running virtual machine to help debug booting issues. And Tyler Croy has integrated Vagrant with the continuous integration tool Jenkins.

The Vagrant project welcomes contributions of any kind: code and documentation, as well as financial aid. There's a rather detailed explanation of how companies can support the project financially: by donating, by sponsoring, and by paying for specific feature implementations or bug fixes. The project is also very open about its current and future costs.

The future

While previous Vagrant releases regularly changed the syntax of the Vagrantfile, which could lead to some frustration for early adopters, the 1.0 release marks the end of this period of experimentation, according to the release announcement:

Equally important is that Vagrant 1.0 is the first release where backwards compatibility for the Vagrantfile will be maintained for the far future. Backwards incompatible changes to the Vagrantfile will no longer happen (exactly how this will be achieved will be revealed in the future, as I've devised a way to do so without compromising innovation).

Currently Vagrant only supports VirtualBox, but the plan is to support additional hypervisors, such as KVM, VMware Fusion, VMware vSphere, and so on. If you need extra functionality, you can add it using Vagrant's plugin system. All in all, the basic idea of distributable boxes, coupled with the extensibility provided by plugins, makes Vagrant a handy tool for development and testing. Add to this the excellent documentation and the ecosystem of Veewee templates, and Vagrant may well be able to save you a lot of time.

Comments (5 posted)

OIN expands its coverage

By Jonathan Corbet
March 13, 2012
The Open Invention Network recently announced the expansion of its "Linux System Definition," meaning that a larger range of software is now covered by the group's patent license agreement. New packages on the list include Git, OpenJDK and WebKit; that list has also been updated to cover current versions of the listed packages. This expansion is welcome, but it also highlights some of the limitations of what an organization like OIN can accomplish.

OIN is meant to be a sort of patent club that reduces the risk of patent litigation for its members. OIN members sign on to the organization's patent license agreement, granting a license to their patents to all other members for use with Linux. There is a set of patents owned by OIN itself; companies gain access to those patents by signing the agreement. But the real value in OIN membership is meant to be protection from other OIN members; no member may assert patent claims against another member (with some exceptions - see below) without risking the loss of its own patent use rights under the agreement. The list of OIN licensees makes it clear that a lot of companies, including Cisco Systems, Collabora, Canonical, Google, HP, IBM, Mozilla, NEC, Novell, Oracle, Philips, Red Hat, Sony, and Twitter, see value in this arrangement.

That said, there are some obvious limitations to the benefits of OIN membership. It is sometimes said that members may use the full set of licensed patents in their defense, but there is nothing in the agreement that allows that use. No OIN member is required to use their patents (or to allow them to be used) in a counterattack against a patent aggressor. Indeed, if one OIN licensee (call it "EvilCorp") sues another ("NiceCorp"), a third licensee (that we'll call "ConcernedCorp") still cannot, by the agreement, withdraw the patent license it granted to EvilCorp - though, interestingly, the license for patents owned by OIN itself can be withdrawn in this situation.

In other words, OIN reduces the chances of being attacked by its other members, along with reducing the chances that such an attack would succeed. It offers no real counterattack capability at all. The agreement also only covers OIN licensees; it says nothing about their customers, who could still be the target of an attack.

The license agreement only applies to the "Linux System," a well-defined list of programs that must be used with the Linux kernel. That list contains almost 1900 programs making up the bulk of what one might expect to find on a typical Linux system, though certain types of applications - mplayer and VLC, for example - are notably missing. The agreement applies to specific versions of these programs; the 3.1.0 kernel is on the latest list, for example. "Successor releases" are also covered with an interesting exception:

to the extent such later release contains modifications to existing functionality for: compatibility (e.g., standards compliance or porting), performance enhancements (e.g., increasing execution speed, code maintainability, security or bug resistance), usability, and localization and internationalization, but to the extent the later release contains new functionality which does not exist in such component, the portion of the later release providing such new functionality is not included...

So just about anything can be tossed in as long as it's a bug fix or a performance or usability enhancement; as soon as it crosses the line into adding "new functionality" the coverage ceases. One can easily imagine a future court case hinging on whether a change is a usability improvement (covered) or a new feature (not covered). To be covered, the code must be distributed by the project's maintainer. Private changes are not covered, but the unchanged code remains covered in private versions.

There are some exceptions, though, even with regard to the exact versions of packages on the list. Anything that implements something that looks like a digital video recorder, DVD player or recorder, or an electronic program guide is excluded. Anything involving codecs is also excluded except for those found on this list; GIF, PNG, and FLAC are all covered, as is "RAW" (whatever that means), but many others, including some intended to be unencumbered, are absent from the list. Codecs remain a patent minefield, and OIN has not attempted to solve that problem.

While Philips and Sony are OIN licensees, they have carved out some additional exceptions for themselves. These include anything having to do with Blu-ray, "receiver functionality," anything related to DRM, or "digital display technology." And those are the small ones. These companies also except anything having to do with wireless networking - including both WiFi and networking through a cellular network. "Camera functionality" - anything capable of capturing an image - is excluded. There is also an exception for "technology for human-computer interaction, including interaction and appearance of applications, and remote control technology." For good measure, Philips also excludes virtualization.

In other words, Philips and Sony want the protection of OIN for everything not directly related to their product areas, but they want the ability to sue for anything else. And OIN is willing to accept them on those terms, evidently thinking that half a license is better than none. It is worth noting that both of those companies are listed as "founding members," a title which, presumably, does not come for free. The fact that no other companies have joined with such conditions suggests that they are expensive indeed; that is probably a good thing.

With all these exceptions, one might well wonder how much benefit actually derives from OIN membership. The fact that both Oracle and Google are members has not prevented Oracle from filing patent suits against Google (albeit relating to code that is not on OIN's list). Outright patent trolls will, of course, not be interested in OIN membership and will not be bound by its license. Similarly, companies like Apple and Microsoft have, thus far, declined the opportunity to be a part of OIN. All told, there is no evidence that the OIN has ever prevented a patent shakedown.

That said, one must recognize that any such evidence would be most difficult to find. No company will announce that it would have asserted its patents against another had it not been for those meddling OIN kids. It will always be difficult to measure the success of an organization like OIN; one can only try to read between the lines when looking at what companies do and don't do. For example, Microsoft's settlement of the TomTom suit, evidently on relatively favorable terms, happened shortly after TomTom joined OIN. Whether there is causality there or merely correlation is only really known to Microsoft's lawyers, but some people have certainly seen a connection.

Legal organizations like OIN are about reducing risk; in that regard OIN, by gathering together a long list of companies that are willing to license their patents for use with Linux, has almost certainly succeeded. It is also important as a very public statement by those companies that the free software commons (or, at least, a significant subset thereof) should be a sort of patent commons as well. OIN is certainly not a solution to the software patent problem, but it is a useful mitigating factor in a world where software patents continue to exist. So the updating and expansion of its list of covered software can only be a good thing.

Comments (8 posted)

Page editor: Jonathan Corbet

Security

CAP_SYS_ADMIN: the new root

March 14, 2012

This article was contributed by Michael Kerrisk.

Capabilities are—at least in theory—a nice idea: divide the privileges of root (user ID 0) into small pieces so that a process can be granted just enough power to perform specific privileged tasks. If the pieces are small enough, and well chosen, then, even if a privileged program is compromised (e.g., by a buffer overrun), the damage that can be done is limited by the set of capabilities that are available to the process. Good examples of the use of such fine-grained privileges are CAP_KILL, which permits sending signals to arbitrary processes, and CAP_SYS_TIME, which permits setting the system clock.
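
A privileged program can shed the pieces it does not need from user space; here is a minimal sketch (not from the article) using libcap, assuming linkage with -lcap, that reduces a process to just CAP_KILL:

    #include <sys/capability.h>   /* libcap; link with -lcap */

    /* Sketch: drop every capability except CAP_KILL. */
    int drop_to_cap_kill(void)
    {
        cap_value_t keep[] = { CAP_KILL };
        cap_t caps = cap_init();  /* all capability sets start out empty */

        if (caps == NULL)
            return -1;
        /* raise CAP_KILL in the permitted and effective sets ... */
        if (cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET) == -1 ||
            cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET) == -1 ||
            cap_set_proc(caps) == -1) {   /* ... and apply to this process */
            cap_free(caps);
            return -1;
        }
        return cap_free(caps);
    }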

As of Linux 3.2, there are 36 capabilities. You can see a list of them, along with some of the main powers they each grant, in the capabilities(7) manual page. Capabilities can (since Linux 2.6.24) be attached to an executable file, to create the capabilities equivalent of a set-user-ID-root program: when the executable is run, the resulting process starts with a limited set of capabilities (instead of the full power of root, as is the case for set-user-ID-root programs).
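
File capabilities are managed with the setcap utility from the libcap package; the classic example grants ping the ability to open raw sockets without being set-user-ID-root:

    setcap cap_net_raw+ep /bin/ping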

The key point from the beginning of this article is small pieces, and it's here that the Linux capabilities implementation has gone astray.

When a kernel developer adds a new feature that should require privilege, what capability should they use, or should they perhaps even create a new capability? Although parceling root privileges into small pieces is useful from a security perspective, we don't want too many pieces, since then the task of administering capabilities would become unwieldy. Thus, it usually makes sense to employ an appropriate existing capability to control access to a new privileged kernel feature.

And this is where the problem begins. First, there is—unsurprisingly, given the Linux development model—no central authority determining how capabilities should be assigned to privileged operations. Second, there is very little guidance on what capability to choose. (Probably the best existing guide is to look at the capabilities(7) man page. By comparing with existing uses in that page, we can get some guidance on choosing the capability that best matches a new use case.)

So in practice, what happens? A kernel developer looks at the list of available capabilities in the kernel include/linux/capability.h header file, and is likely left bewildered wondering which capability to choose. (It appears that the original intent was that this header file would be updated with comments for all of the usages of each capability, so as to give an overview of capability usage, but in practice those comments have been updated only sporadically.) But the developer does know one thing: their feature will likely be administered by system administrators, and, helpfully, there is a capability called CAP_SYS_ADMIN. So, lacking sufficient information for a decision, the developer chooses CAP_SYS_ADMIN for their new feature.

Which brings us to where we are today: of the 1167 uses of capabilities in C files in the Linux 3.2 source code, 451 of those uses are CAP_SYS_ADMIN. That's rather more than a third of all capability checks. We might wonder if CAP_SYS_ADMIN is overrepresented because of duplications of similar operations in the kernel arch/ trees, or because CAP_SYS_ADMIN is commonly assigned as the capability governing administrative functions on device drivers. However, even after eliminating drivers/ and architectures other than x86, CAP_SYS_ADMIN still accounts for 167—about 30%—of the 552 uses of capabilities. (Fuller details about usage of capabilities in current and earlier kernels can be found here.)
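
Counts like these are easy to approximate on a kernel tree; something along the following lines (a rough measure, not necessarily the exact methodology behind the figures above) gives the flavor:

    grep -rw --include='*.c' CAP_SYS_ADMIN . | wc -l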

So, on the one hand, the powers granted by CAP_SYS_ADMIN are so numerous and wide ranging that, armed with that capability, there are several avenues of attack by which a rogue process could gain all of the other capabilities. (As has been summarized by Brad Spengler, the ability to be leveraged for full root privileges is a weakness of many existing capabilities; CAP_SYS_ADMIN is just the most egregious example.) On the other hand, so many privileged operations require CAP_SYS_ADMIN that it is the capability most likely to be assigned to a privileged program.

To summarize: CAP_SYS_ADMIN has become the new root. If the goal of capabilities is to limit the power of privileged programs to be less than root, then once we give a program CAP_SYS_ADMIN the game is more or less over. That is the manifest problem revealed from the above analysis. However, if we look further, there is evidence of an additional problem, one that lies in the Linux development model.

As noted above, if we eliminate drivers/ and architectures other than x86, CAP_SYS_ADMIN accounts for 30% of the uses of capabilities. However, when capabilities were first introduced in Linux 2.2, the corresponding figures were 23 of 147 uses (16%). This supports a hypothesis that when random kernel developers are faced with the question "What capability should I use to govern access to the privileged feature that I'm adding to the kernel?", the answer often goes "I'm not sure… maybe CAP_SYS_ADMIN?". In other words, the Linux kernel development model (where, for example, there is no overall coordination of the use of capabilities) appears not to scale well when multiple developers face questions of this sort. (In retrospect, it also seems clear that the choice of the name CAP_SYS_ADMIN was rather unfortunate. The name conveys no real information about what operations the capability should govern, and it's an easy choice that looks safe to kernel developers who are uncertain of what capability to use.)

What could be done to improve matters? There's no quick and easy way out of the existing situation, but there are some steps that could be taken:

  • Avoid new kinds of uses of CAP_SYS_ADMIN. (As this article was being written, Linux 3.3-rc is adding 13 new uses of capabilities. Most of them are CAP_SYS_ADMIN, and at least some of them may be new kinds of uses of that capability. One such use has been averted, however.)
  • Rename CAP_SYS_ADMIN to CAP_AS_GOOD_AS_ROOT. Well, maybe not. But such a change would help get the point across to kernel developers looking to choose a capability for their new feature.
  • Publish better guidelines on the use of capabilities. Past attempts to do this (the capabilities(7) man page and comments in include/linux/capability.h) have only had limited success (the guidelines are incomplete, and haven't done much to alleviate the problem). However, some more explicit guidelines, coupled with some measurements of the kernel source (see next point), might achieve better results.
  • Regularly publish statistics on the use of capabilities in the kernel source and monitor new uses of capabilities in each kernel release (e.g., employ some scripting to look at capability-related changes in the diff for the current -rc release).
  • Existing uses of CAP_SYS_ADMIN could be divided out into other existing capabilities, and possibly some new capabilities. Those capabilities could then be assigned to privileged programs instead of CAP_SYS_ADMIN. (For application backward-compatibility, the kernel capability checks wouldn't remove CAP_SYS_ADMIN, but rather would check for CAP_SYS_ADMIN or its replacement; a sketch of such a check appears after this list. This would allow old binaries that have the CAP_SYS_ADMIN capability to continue to work, while new binaries would be assigned the replacement capability.) One or two steps in this direction have already been made, for example, with the addition of the CAP_SYSLOG capability in Linux 2.6.37. An obvious first point of focus would be non-generic uses of CAP_SYS_ADMIN in areas other than drivers and the file-system trees. Next points of focus could be generic uses of CAP_SYS_ADMIN in the drivers/ and fs/ trees.
  • Do a similar analysis of other heavily used capabilities, especially CAP_NET_ADMIN, to see whether splitting would be useful for those capabilities. (CAP_NET_ADMIN has 395 uses in Linux 3.2. However, all of those uses are restricted to code in the drivers/net/ and net/ subdirectories. If we remove CAP_NET_ADMIN from the discussion, then there are more uses of CAP_SYS_ADMIN in the kernel source than all of the remaining capabilities combined.)
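
The backward-compatible check mentioned in the list above might look something like this in-kernel sketch, patterned on what was done for CAP_SYSLOG (the function name is invented; capable() is the kernel's existing permission-check helper):

    /* Sketch: accept either the new, narrower capability or the legacy
     * CAP_SYS_ADMIN, so that old binaries keep working. */
    static bool may_do_syslog(void)
    {
        return capable(CAP_SYSLOG) || capable(CAP_SYS_ADMIN);
    }
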
As well as the above, of course the problem outlined by Brad Spengler that many capabilities can be leveraged to gain full root access remains to be addressed. (Ongoing work on namespaces will help improve this situation for some capabilities when used in conjunction with containers.)

In summary, capabilities go some way toward improving application security, but there's still further work needed before they can deliver on their early promise of being a mechanism for providing discrete, non-elevatable privileges to applications. Furthermore, as the example of the ever-widening scope of CAP_SYS_ADMIN shows, some questions requiring coordinated answers are currently not well addressed by the distributed Linux development model.

[Acknowledgment: Thanks to Serge Hallyn for comments on an early draft of this article.]

Comments (45 posted)

Brief items

Security quotes of the week

This led us to ask, if in the worst case users chose multi-word passphrases with a distribution identical to English speech, how secure would this be? Using the large Google n-gram corpus we can answer this question for phrases of up to 5 words. The results are discouraging: by our metrics, even 5-word phrases would be highly insecure against offline attacks, with fewer than 30 bits of work compromising over half of users. The returns appear to rapidly diminish as more words are required. This has potentially serious implications for applications like PGP private keys, which are often encrypted using a passphrase.
-- Joseph Bonneau

Within 48 hours of the system going live, we had gained near-complete control of the election server. We successfully changed every vote and revealed almost every secret ballot. Election officials did not detect our intrusion for nearly two business days — and might have remained unaware for far longer had we not deliberately left a prominent clue.
-- Scott Wolchok, Eric Wustrow, Dawn Isabel, and J. Alex Halderman in Attacking the Washington, D.C. Internet Voting System [PDF]

Comments (20 posted)

New vulnerabilities

flash-player: multiple vulnerabilities

Package(s): flash-player  CVE #(s): CVE-2012-0768 CVE-2012-0769
Created: March 8, 2012  Updated: March 14, 2012
Description: From the CVE entries:

The Matrix3D component in Adobe Flash Player before 10.3.183.16 and 11.x before 11.1.102.63 on Windows, Mac OS X, Linux, and Solaris; before 11.1.111.7 on Android 2.x and 3.x; and before 11.1.115.7 on Android 4.x allows attackers to execute arbitrary code or cause a denial of service (memory corruption) via unspecified vectors. (CVE-2012-0768)

Adobe Flash Player before 10.3.183.16 and 11.x before 11.1.102.63 on Windows, Mac OS X, Linux, and Solaris; before 11.1.111.7 on Android 2.x and 3.x; and before 11.1.115.7 on Android 4.x does not properly handle integers, which allows attackers to obtain sensitive information via unspecified vectors. (CVE-2012-0769)

Alerts:
openSUSE openSUSE-SU-2012:0349-1 flash-player 2012-03-10
SUSE SUSE-SU-2012:0332-2 flash-player 2012-03-08
SUSE SUSE-SU-2012:0332-1 flash-player 2012-03-07

Comments (none posted)

freetype: code execution

Package(s): freetype  CVE #(s): CVE-2012-1133 CVE-2012-1134 CVE-2012-1136 CVE-2012-1142 CVE-2012-1144
Created: March 9, 2012  Updated: March 23, 2012
Description: From the Debian advisory:

Mateusz Jurczyk from the Google Security Team discovered several vulnerabilities in Freetype's parsing of BDF, Type1 and TrueType fonts, which could result in the execution of arbitrary code if a malformed font file is processed.

Alerts:
Slackware SSA:2012-176-01 freetype 2012-06-25
SUSE SUSE-SU-2012:0553-1 freetype2 2012-04-23
SUSE SUSE-SU-2012:0483-2 freetype2 2012-04-23
SUSE SUSE-SU-2012:0521-1 freetype2 2012-04-18
Gentoo 201204-04 freetype 2012-04-17
Oracle ELSA-2012-0467 freetype 2012-04-12
Oracle ELSA-2012-0467 freetype 2012-04-12
SUSE SUSE-SU-2012:0484-1 freetype2 2012-04-11
SUSE SUSE-SU-2012:0483-1 freetype2 2012-04-11
openSUSE openSUSE-SU-2012:0489-1 freetype2 2012-04-12
Mandriva MDVSA-2012:057 freetype2 2012-04-12
Scientific Linux SL-free-20120411 freetype 2012-04-11
CentOS CESA-2012:0467 freetype 2012-04-10
CentOS CESA-2012:0467 freetype 2012-04-10
Red Hat RHSA-2012:0467-01 freetype 2012-04-10
Ubuntu USN-1403-1 freetype 2012-03-22
Debian DSA-2428-1 freetype 2012-03-08

Comments (none posted)

gdm-guest-session: arbitrary file deletion

Package(s): gdm-guest-session  CVE #(s): CVE-2012-0943
Created: March 13, 2012  Updated: March 14, 2012
Description: From the Ubuntu advisory:

Ryan Lortie discovered that gdm-guest-session improperly cleaned out certain guest session files. A local attacker could use this issue to delete arbitrary files.

Alerts:
Ubuntu USN-1399-2 lightdm 2012-03-13
Ubuntu USN-1399-1 gdm-guest-session 2012-03-13

Comments (none posted)

glibc: multiple vulnerabilities

Package(s): eglibc, glibc  CVE #(s): CVE-2011-1658 CVE-2011-2702
Created: March 12, 2012  Updated: March 14, 2012
Description: From the Ubuntu advisory:

It was discovered that the GNU C library loader expanded the $ORIGIN dynamic string token when RPATH is composed entirely of this token. This could allow an attacker to gain privilege via a setuid program that had this RPATH value. (CVE-2011-1658)

It was discovered that the GNU C library implementation of memcpy optimized for Supplemental Streaming SIMD Extensions 3 (SSSE3) contained a possible integer overflow. An attacker could use this to cause a denial of service or possibly execute arbitrary code. This issue only affected Ubuntu 10.04 LTS. (CVE-2011-2702)

Alerts:
Gentoo 201312-01 glibc 2013-12-02
Ubuntu USN-1396-1 eglibc, glibc 2012-03-09

Comments (none posted)

gnutls: information disclosure

Package(s): gnutls  CVE #(s): CVE-2012-0390
Created: March 9, 2012  Updated: August 7, 2012
Description: From the CVE entry:

The DTLS implementation in GnuTLS 3.0.10 and earlier executes certain error-handling code only if there is a specific relationship between a padding length and the ciphertext size, which makes it easier for remote attackers to recover partial plaintext via a timing side-channel attack, a related issue to CVE-2011-4108.

Alerts:
SUSE SUSE-SU-2014:0320-1 gnutls 2014-03-04
Mageia MGASA-2012-0202 gnutls 2012-08-06
openSUSE openSUSE-SU-2012:0344-1 gnutls 2012-03-09

Comments (none posted)

icecast: forged log entries

Package(s): icecast  CVE #(s): CVE-2011-4612
Created: March 8, 2012  Updated: April 10, 2013
Description: From the openSUSE advisory:

Icecast didn't strip newlines from log entries, therefore allowing users to forge log entries.

Alerts:
Mandriva MDVSA-2013:091 icecast 2013-04-09
Fedora FEDORA-2012-16147 icecast 2012-10-24
Mageia MGASA-2012-0211 icecast 2012-08-12
openSUSE openSUSE-SU-2012:0352-1 update 2012-03-10
openSUSE openSUSE-SU-2012:0333-1 icecast 2012-03-08

Comments (none posted)

kernel: null pointer reference on readonly regsets

Package(s): kernel  CVE #(s): CVE-2012-1097
Created: March 12, 2012  Updated: November 5, 2012
Description: From the Red Hat bugzilla:

The regset common infrastructure assumed that regsets would always have .get and .set methods, but not necessarily .active methods. Unfortunately people have since written regsets without .set methods.

Rather than putting in stub functions everywhere, handle regsets with null .get or .set methods explicitly.

Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2012:1439-1 kernel 2012-11-05
Oracle ELSA-2012-0862 kernel 2012-07-02
Oracle ELSA-2012-2022 kernel 2012-07-02
Oracle ELSA-2012-2022 kernel 2012-07-02
openSUSE openSUSE-SU-2012:0799-1 kernel 2012-06-28
Red Hat RHSA-2012:1042-01 kernel 2012-06-26
SUSE SUSE-SU-2012:0616-1 Linux kernel 2012-05-14
Ubuntu USN-1458-1 linux-ti-omap4 2012-05-31
Ubuntu USN-1440-1 linux-lts-backport-natty 2012-05-08
Ubuntu USN-1433-1 linux-lts-backport-oneiric 2012-04-30
Ubuntu USN-1431-1 linux 2012-04-30
SUSE SUSE-SU-2012:0554-2 kernel 2012-04-26
Ubuntu USN-1426-1 linux-ec2 2012-04-24
Ubuntu USN-1425-1 linux 2012-04-24
Oracle ELSA-2012-2008 enterprise kernel 2012-04-23
Oracle ELSA-2012-2008 enterprise kernel 2012-04-23
Oracle ELSA-2012-2007 enterprise kernel 2012-04-23
Oracle ELSA-2012-2007 enterprise kernel 2012-04-23
Oracle ELSA-2012-0481 kernel 2012-04-23
SUSE SUSE-SU-2012:0554-1 Linux kernel 2012-04-23
openSUSE openSUSE-SU-2012:0540-1 kernel 2012-04-20
Scientific Linux SL-kern-20120418 kernel 2012-04-18
CentOS CESA-2012:0481 kernel 2012-04-18
Red Hat RHSA-2012:0481-01 kernel 2012-04-17
Ubuntu USN-1422-1 linux 2012-04-12
Ubuntu USN-1421-1 linux-lts-backport-maverick 2012-04-12
Debian DSA-2443-1 linux-2.6 2012-03-26
Ubuntu USN-1405-1 linux 2012-03-27
Ubuntu USN-1406-1 linux 2012-03-27
Ubuntu USN-1407-1 linux 2012-03-27
Fedora FEDORA-2012-3356 kernel 2012-03-15
Fedora FEDORA-2012-3350 kernel 2012-03-10

Comments (none posted)

ldm: command execution as root

Package(s): ldm  CVE #(s): CVE-2012-1166
Created: March 13, 2012  Updated: March 14, 2012
Description: From the Ubuntu advisory:

Tenho Tuhkala discovered that the LTSP Display Manager (ldm) incorrectly filtered keybindings. An attacker could use the default keybindings to execute arbitrary commands as root at the login screen.

Alerts:
Ubuntu USN-1398-1 ldm 2012-03-12

Comments (none posted)

libdbd-pg-perl: format string vulnerabilities

Package(s): libdbd-pg-perl  CVE #(s): CVE-2012-1151
Created: March 12, 2012  Updated: August 2, 2012
Description: From the Debian advisory:

Niko Tyni discovered two format string vulnerabilities in DBD::Pg, a Perl DBI driver for the PostgreSQL database server, which can be exploited by a rogue database server.

Alerts:
Fedora FEDORA-2012-10892 perl-DBD-Pg 2012-08-01
Fedora FEDORA-2012-10871 perl-DBD-Pg 2012-08-01
Mageia MGASA-2012-0187 perl-DBD-Pg 2012-07-30
Scientific Linux SL-perl-20120725 perl-DBD-Pg 2012-07-25
Oracle ELSA-2012-1116 perl-DBD-Pg 2012-07-25
Oracle ELSA-2012-1116 perl-DBD-Pg 2012-07-25
Mandriva MDVSA-2012:112 perl-DBD-Pg 2012-07-26
CentOS CESA-2012:1116 perl-DBD-Pg 2012-07-25
CentOS CESA-2012:1116 perl-DBD-Pg 2012-07-25
Red Hat RHSA-2012:1116-01 perl-DBD-Pg 2012-07-25
Gentoo 201204-08 DBD-Pg 2012-04-17
openSUSE openSUSE-SU-2012:0422-1 perl-DBD-Pg 2012-03-28
Debian DSA-2431-1 libdbd-pg-perl 2012-03-11

Comments (none posted)

libyaml-libyaml-perl: format string vulnerabilities

Package(s): libyaml-libyaml-perl  CVE #(s): CVE-2012-1152
Created: March 13, 2012  Updated: August 17, 2012
Description: From the Debian advisory:

Dominic Hargreaves and Niko Tyni discovered two format string vulnerabilities in YAML::LibYAML, a Perl interface to the libyaml library.

Alerts:
openSUSE openSUSE-SU-2015:0319-1 perl-YAML-LibYAML 2015-02-18
openSUSE openSUSE-SU-2012:1000-1 perl-YAML-LibYAML 2012-08-17
Fedora FEDORA-2012-4997 perl-YAML-LibYAML 2012-04-08
Fedora FEDORA-2012-5035 perl-YAML-LibYAML 2012-04-08
Debian DSA-2432-1 libyaml-libyaml-perl 2012-03-12

Comments (none posted)

lightdm: arbitrary file deletion

Package(s): lightdm  CVE #(s): CVE-2012-1111
Created: March 13, 2012  Updated: March 14, 2012
Description: lightdm prior to version 1.0.9 allows file descriptors to leak into the session processes.
Alerts:
openSUSE openSUSE-SU-2012:0354-1 lightdm 2012-03-12

Comments (none posted)

Mozilla products: multiple vulnerabilities

Package(s): firefox thunderbird seamonkey  CVE #(s): CVE-2012-0451 CVE-2012-0455 CVE-2012-0456 CVE-2012-0457 CVE-2012-0458 CVE-2012-0459 CVE-2012-0460 CVE-2012-0461 CVE-2012-0462 CVE-2012-0464
Created: March 14, 2012  Updated: July 23, 2012
Description: The Red Hat advisory nicely describes the latest round of Mozilla vulnerabilities, most of which are fixed in the Firefox 11 and Thunderbird 11 releases:

Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2012-0461, CVE-2012-0462, CVE-2012-0464)

Two flaws were found in the way Firefox parsed certain Scalable Vector Graphics (SVG) image files. A web page containing a malicious SVG image file could cause an information leak, or cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2012-0456, CVE-2012-0457)

A flaw could allow a malicious site to bypass intended restrictions, possibly leading to a cross-site scripting (XSS) attack if a user were tricked into dropping a "javascript:" link onto a frame. (CVE-2012-0455)

It was found that the home page could be set to a "javascript:" link. If a user were tricked into setting such a home page by dragging a link to the home button, it could cause Firefox to repeatedly crash, eventually leading to arbitrary code execution with the privileges of the user running Firefox. (CVE-2012-0458)

A flaw was found in the way Firefox parsed certain web content containing "cssText". A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2012-0459)

It was found that by using the DOM fullscreen API, untrusted content could bypass the mozRequestFullscreen security protections. A web page containing malicious web content could exploit this API flaw to cause user interface spoofing. (CVE-2012-0460)

A flaw was found in the way Firefox handled pages with multiple Content Security Policy (CSP) headers. This could lead to a cross-site scripting attack if used in conjunction with a website that has a header injection flaw. (CVE-2012-0451)

Alerts:
openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
Mageia MGASA-2012-0176 iceape 2012-07-21
openSUSE openSUSE-SU-2012:0567-1 firefox, thunderbird, seamonkey, xulrunner 2012-04-27
Ubuntu USN-1400-5 gsettings-desktop-schemas 2012-04-20
Debian DSA-2458-1 iceape 2012-04-24
Mandriva MDVSA-2012:032-1 mozilla 2012-04-17
Ubuntu USN-1400-4 thunderbird 2012-04-03
SUSE SUSE-SU-2012:0424-1 Mozilla Firefox 2012-03-28
SUSE SUSE-SU-2012:0425-1 Mozilla Firefox 2012-03-29
openSUSE openSUSE-SU-2012:0417-1 firefox, thunderbird 2012-03-27
Ubuntu USN-1401-2 thunderbird 2012-03-23
Scientific Linux SL-fire-20120321 firefox 2012-03-21
Scientific Linux SL-thun-20120321 thunderbird 2012-03-21
Ubuntu USN-1400-3 thunderbird 2012-03-21
Debian DSA-2437-1 icedove 2012-03-21
Mandriva MDVSA-2012:032 mozilla 2012-03-20
Ubuntu USN-1401-1 xulrunner-1.9.2 2012-03-19
Fedora FEDORA-2012-3996 nss-softokn 2012-03-17
Fedora FEDORA-2012-3996 nspr 2012-03-17
Fedora FEDORA-2012-3996 nss-util 2012-03-17
Fedora FEDORA-2012-3996 nss 2012-03-17
Fedora FEDORA-2012-3996 xulrunner 2012-03-17
Fedora FEDORA-2012-3996 firefox 2012-03-17
Ubuntu USN-1400-2 ubufox 2012-03-16
Ubuntu USN-1400-1 firefox 2012-03-16
Mandriva MDVSA-2012:031 firefox 2012-03-17
Debian DSA-2433-1 iceweasel 2012-03-15
Oracle ELSA-2012-0388 thunderbird 2012-03-15
Oracle ELSA-2012-0387 firefox 2012-03-15
Oracle ELSA-2012-0387 firefox 2012-03-15
CentOS CESA-2012:0388 thunderbird 2012-03-14
CentOS CESA-2012:0387 firefox 2012-03-14
Red Hat RHSA-2012:0388-01 thunderbird 2012-03-14
CentOS CESA-2012:0388 thunderbird 2012-03-14
CentOS CESA-2012:0387 firefox 2012-03-14
Red Hat RHSA-2012:0387-01 firefox 2012-03-14

Comments (none posted)

python-pam: code execution

Package(s): python-pam  CVE #(s): CVE-2012-1502
Created: March 8, 2012  Updated: July 10, 2015
Description: From the Ubuntu advisory:

Markus Vervier discovered that PyPAM incorrectly handled passwords containing NULL bytes. An attacker could exploit this to cause applications using PyPAM to crash, or possibly execute arbitrary code.

Alerts:
Gentoo 201507-09 pypam 2015-07-09
openSUSE openSUSE-SU-2012:0487-1 python-pam 2012-04-12
Debian DSA-2430-1 python-pam 2012-03-10
Ubuntu USN-1395-1 python-pam 2012-03-08

Comments (none posted)

tremulous: code execution

Package(s): tremulous  CVE #(s): CVE-2011-3012
Created: March 8, 2012  Updated: March 14, 2012
Description: From the CVE entry:

The ioQuake3 engine, as used in World of Padman 1.2 and earlier, Tremulous 1.1.0, and ioUrbanTerror 2007-12-20, does not check for dangerous file extensions before writing to the quake3 directory, which allows remote attackers to execute arbitrary code via a crafted third-party addon that creates a Trojan horse DLL file, a different vulnerability than CVE-2011-2764.

Alerts:
Mageia MGASA-2012-0148 tremulous 2012-07-09
Fedora FEDORA-2012-2405 tremulous 2012-03-08
Fedora FEDORA-2012-2419 tremulous 2012-03-08

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.3-rc7, released on March 10 despite Linus's earlier wish not to do any more 3.3 prepatches. "Now, none of the fixes here are all that scary in themselves, but there were just too many of them, and across various subsystems. Networking, memory management, drivers, you name it. And instead of having fewer commits than in -rc6, we have more of them. So my hope that things would calm down simply just didn't materialize."

Stable updates: the 3.2.10 and 3.0.24 updates were released on March 12; both contain a long list of important fixes. 3.2.11 followed one day later to fix a build problem.

The 2.6.34.11 update, containing almost 200 changes, is in the review process as of this writing.

Comments (none posted)

Quotes of the week

Programming is not just an act of telling a computer what to do: it is also an act of telling other programmers what you wished the computer to do. Both are important, and the latter deserves care.
-- Andrew Morton

Dammit, I'm continually surprised by the *idiots* out there that don't understand that binary compatibility is one of the absolute top priorities. The *only* reason for an OS kernel existing in the first place is to serve user-space. The kernel has no relevance on its own. Breaking existing binaries - and then not acknowledging how horribly bad that was - is just about the *worst* offense any kernel developer can do.
-- Linus Torvalds

Kernel developers have thick heads, in most cases thicker than processor manuals.
-- Borislav Petkov

Comments (9 posted)

Greg KH: The 2.6.32 Linux kernel

Greg Kroah-Hartman discusses the history of the 2.6.32 stable kernel series and why he has stopped supporting it. "With the 2.6.32 kernel being the base of these longterm enterprise distros, it was originally guessed that it would be hanging around for many many years to come. But, my old argument about how moving a kernel forward for an enterprise distro finally sunk in for 2 of the 3 major players. Both Oracle Linux and SLES 11, in their latest releases these few months, have moved to the 3.0 kernel as the base of them, despite leaving almost all other parts of the distro alone. They did this to take advantage of the better support for hardware, newer features, newer filesystems, and the hundreds of thousands of different changes that has happened in the kernel.org releases since way back in 2009."

Comments (none posted)

McKenney: Transactional Memory Everywhere: 2012 Update for HTM

Paul McKenney looks at the state of hardware transactional memory with an eye toward how it might be useful for current software. "Even with forward-progress guarantees, HTM is subject to aborts and rollbacks, which (aside from wasting energy) are failure paths. Failure code paths are in my experience difficult to work with. The possibility of failure is not handled particularly well by human brain cells, which are programmed for optimism. Failure code paths also pose difficulties for validations, particularly in cases where the probability of failure is low or in cases where multiple failures are required to reach a given code path."

Comments (20 posted)

A proposed plan for control groups

By Jonathan Corbet
March 14, 2012
After the late-February discussion on the future of control groups, Tejun Heo has boiled down the comments and come to some conclusions as to where he would like to go with this subsystem. The first of these is that multiple hierarchies are doomed in the long term:

At least to me, nobody seems to have strong enough justification for orthogonal multiple hierarchies, so, yeah, unless something else happens, I'm scheduling multiple hierarchy support for the chopping block. This is a long term thing (think years), so no need to panic right now and as is life plans may change and fail to materialize, but I intend to at least move away from it.

So there will, someday, be a single control group hierarchy. It will not, however, be tied to the process tree; it will be an independent tree of groups allowing processes to be combined in arbitrary ways.

The responses to Tejun's conclusions have mostly focused on details (how to handle controllers that are not fully hierarchical, for example). There does not appear to be any determined opposition to the idea of removing the multiple hierarchy feature at some point when it can be done without breaking systems, so users of control groups should consider the writing to be on the wall.

Comments (3 posted)

Kernel development news

Kernel competition in the enterprise space

By Jonathan Corbet
March 14, 2012
Kernel developers like to grumble about the kernels shipped by enterprise distributions. Those kernels tend to be managed in ways that ignore the best features of the Linux development process; indeed, sometimes they seem to work against that process. But, enterprise kernels and the systems built on them are also the platform on which the money that supports kernel development is made, so developers only push their complaints so far. For years, it has seemed that nothing could change the "enterprise mindset," but recent releases show that there may, indeed, be change brewing in this area.

Consider Red Hat Enterprise Linux 6; its kernel is ostensibly based on the 2.6.32 release. The actual kernel, as shipped by Red Hat, differs from 2.6.32 by around 7,700 patches, though. Many of those are fixes, but others are major new features, often backported from more recent releases. Thus, the RHEL "2.6.32" kernel includes features like per-session group scheduling, receive packet/flow steering, transparent huge pages, pstore, and, of course, support for a wide range of hardware that was not available when 2.6.32 shipped. Throw in a few out-of-tree features (SystemTap, for example), and the end result is a kernel far removed from anything shipped by kernel.org. That is why Red Hat has had no real use for the 2.6.32 stable kernel series for some years.

Red Hat's motivation for creating these kernels is not hard to understand; the company is trying to provide its customers with a combination of the stability that comes from well-aged software and the features, fixes, and performance improvements from the leading edge. This process, when it goes well, can give those customers the best of both worlds. On the other hand, the resulting kernels differ widely from the community's product, have not been tested by the community, and exclude recent features that have not been chosen for backporting. They are also quite expensive to create; behind Red Hat's many high-profile kernel hackers is an army of developers tasked with backporting features and keeping the resulting kernel stable and secure.

When developers grumble about enterprise kernels, what they are really saying is that enterprise distributions might be better served by simply updating to more current kernels. In the process they would get all those features, improvements, and bug fixes from the community, in the form that they were developed and tested by that community. Enterprise distributors shipping current kernels could dispense with much of their support expense and could better benefit from shared maintenance of stable kernel releases. The response that typically comes back is that enterprise customers worry about kernel version bumps (though massive changes hidden behind a minor number change are apparently not a problem) and that new kernels bring new bugs with them. The cost of stabilizing a new kernel release, it is suggested, could exceed that of backporting desired features into an older release.

Given that, it is interesting to see two other enterprise distributors pushing forward with newer kernels. Both SUSE Linux Enterprise Server 11 Service Pack 2 and Oracle's Unbreakable Enterprise Kernel Release 2 feature much more recent kernels - 3.0.10 and 3.0.16, respectively. In each case, the shift to a newer kernel is a clear attempt to create a more attractive distribution; we may be seeing the beginning of a change in the longstanding enterprise mindset.

SUSE seems firmly stuck in a second-place market position relative to Red Hat. As a result, the company will be searching for ways to differentiate its distribution from RHEL. SUSE almost certainly also lacks the kind of resources that Red Hat is able to apply to its enterprise kernels, so it will be looking for cheaper ways to provide a competitive set of features. Taking better advantage of the community's work by shipping more current kernels is one obvious way to do that. By shipping recent releases, SUSE does not have to backport fixes and features, and it is able to take advantage of the long-term stable support planned for the 3.0 kernel. In that context, it is not entirely surprising that SUSE has repeatedly pulled its customers forward, jumping from 2.6.27 to 2.6.32 in the Service Pack 1 release, then to 3.0.

Oracle, too, has a need to differentiate its distribution - even more so, given that said distribution is really just a rebranded RHEL. To that end, Oracle would like to push some of its in-house features like btrfs, which is optimistically labeled "production-ready" in a recent press release. If btrfs is indeed ready for production use, it certainly has only gotten there in very recent releases; moving to the 3.0 kernel allows Oracle to push this feature while minimizing the amount of work required to backport the most recent fixes. Oracle is offering this kernel with releases 5 and 6 of Oracle Linux; had Oracle stuck with Red Hat's RHEL 5 kernel, Oracle Linux 5 users would still be running something based on 2.6.18. For a company trying to provide a more feature-rich distribution on a budget, dropping in a current kernel must seem like a bargain.

What about the down side of new kernels - all those new bugs? Both companies have clearly tried to mitigate that risk by letting 3.0 stabilize for six months or so before shipping it to customers. There have been over 1,500 fixes applied in the 24 updates to 3.0 released so far. The real proof, though, is in users' experience. If SLES or Oracle Linux users experience bugs or performance regressions as a result of the kernel version change, they may soon start looking for alternatives. In the Oracle case, the original Red Hat kernel remains an option for customers; SUSE, instead, seems committed to the newer version.

Between these two distributions there should be enough users to eventually establish whether moving to newer kernels in the middle of an enterprise distribution's support period is a smart move or not. If it works out, SUSE and Oracle may benefit from an influx of customers who are tired of Red Hat's hybrid kernels. If the new kernels prove not to be enterprise-ready, instead, Red Hat's position may become even stronger. Learning which way things will go may take a while. Should Red Hat show up one day with a newer kernel for RHEL customers, though, we'll know that the issue has been decided at last.

Comments (10 posted)

The trouble with stable pages

By Jonathan Corbet
March 13, 2012
Traditionally, the kernel has allowed the modification of pages in memory while those pages are in the process of being written back to persistent storage. If a process writes to a section of a file that is currently under writeback, that specific writeback operation may or may not contain all of the most recently written data. This behavior is not normally a problem; all the data will get to disk eventually, and developers (should) know that if they want to get data to disk at a specific time, they should use the fsync() system call to get it there. That said, there are times when modifying under-writeback pages can create problems; those problems have been addressed, but now it appears that the cure may be as bad as the disease.
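For those wanting that advice spelled out, here is a minimal sketch (an illustration, not code from the discussion) of the write-then-fsync() idiom; a production caller would also handle short writes and EINTR:

    #include <unistd.h>

    /* write a buffer, then force it to stable storage */
    int write_and_sync(int fd, const void *buf, size_t len)
    {
        if (write(fd, buf, len) != (ssize_t)len)
            return -1;              /* error or short write */
        return fsync(fd);           /* 0 on success, -1 on failure */
    }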

Some storage hardware can transmit and store checksums along with data; those checksums can provide assurance that the data written to (or read from) disk matches what the processor thought it was writing. If the data in a page changes after the calculation of the checksum, though, that data will appear to be corrupted when the checksum is verified later on. Volatile data can also create problems on RAID devices and with filesystems implementing advanced features like data compression. For all of these reasons, the stable pages feature was added to ext4 for the 3.0 release (some other filesystems, btrfs included, have had stable pages for some time). With this feature, pages under writeback are marked as not being writable; any process attempting to write to such a page will block until the writeback completes. It is a relatively simple change that makes system behavior more deterministic and predictable.

That was the thought, anyway, and things do work out that way most of the time. But, occasionally, as described by Ted Ts'o, processes performing writes can find themselves blocked for lengthy periods (multiple seconds) of time. Occasional latency spikes are not the sort of deterministic behavior the developers were after; they also leave users unamused.

In a general sense, it is not hard to imagine what may be going on after seeing this kind of problem report. The system in question is very busy, with many processes contending for the available I/O bandwidth. One process is happily minding its own business while appending to its log file. At some point, though, the final page in that log file is submitted for writeback; it then becomes unwritable. As soon as our hapless process tries to add another line to the file, it will be blocked waiting for that writeback to complete. Since the disks are contended and the I/O queues are long, that wait can go on for some time. By the time the process is allowed to proceed, it has suffered an extensive, unexpected period of latency.

Ted's proposed solution was to only implement stable pages if the data integrity features are built into the kernel. That fix is unlikely to be merged in that form for a few reasons. Many distributor kernels are likely to have the feature enabled, but it will actually be used on relatively few systems. As noted above, there are other places where changing data in pages under writeback can create problems. So the real solution may be some sort of runtime switch - perhaps a filesystem mount option - indicating when stable pages are needed.

It is also possible that the real problem is somewhere else. Chris Mason expressed discomfort with the idea of only using stable pages where they are strictly needed:

I'm not against only turning on stable pages when they are needed, but the code that isn't the default tends to be somewhat less used. So it does increase testing burden when we do want stable pages, and it tends to make for awkward bugs that are hard to reproduce because someone neglects to mention it.

According to Chris, writeback latencies simply should not be seen on the scale of multiple seconds; he would like to see some effort put into figuring out why that is happening. Then, perhaps, the real problem could be fixed. But it may be that the real problem is simply that the system's resources are heavily oversubscribed and the I/O queues are long. In that case, a real fix may be hard to come by.

Boaz Harrosh suggested avoiding writeback on the final pages of any files that have been modified in the last few seconds. That might help in the "appending to a log file" case, but will not avoid unpredictable latency resulting from modification of the file at any location other than the end. People have suggested that pages modified while under writeback could be copied, allowing the modification to proceed immediately and not interfere with the writeback. That solution, though, requires more memory (perhaps during a time when the system is desperately trying to free memory) and copying pages is not free. Another option, suggested by Ted, would be to add a callback to be invoked by the block layer just before a page is passed on to the device; that callback could calculate checksums and mark the page unwritable only for the (presumably much shorter) time that it is actually under I/O.

Other solutions certainly exist. The first step, though, would appear to be to get a real handle on the problem so that solutions are written with an understanding of where the latency is actually coming from. Then, perhaps, we can have a stable pages implementation that provides stable data with stable latency in all situations.

Comments (15 posted)

A deep dive into CMA

March 14, 2012

This article was contributed by Michal "mina86" Nazarewicz

The Contiguous Memory Allocator (or CMA), which LWN looked at back in June 2011, has been developed to allow allocation of big, physically-contiguous memory blocks. Simple in principle, it has grown quite complicated, requiring cooperation between many subsystems. Depending on one's perspective, there are different things to be done and watch out for with CMA. In this article, I will describe how to use CMA and how to integrate it with a given platform.

From a device driver author's point of view, nothing should change. CMA is integrated with the DMA subsystem, so the standard DMA API calls (such as dma_alloc_coherent()) will work as they always have. In fact, device drivers should never need to call the CMA API directly, since it operates on pages and page frame numbers (PFNs) rather than bus addresses and kernel mappings, and it provides no mechanism for maintaining cache coherency.

For more information, see Documentation/DMA-API.txt and Documentation/DMA-API-HOWTO.txt; those two documents describe the provided functions and give usage examples.

Architecture integration

Of course, someone has to integrate CMA with the DMA subsystem of a given architecture. This is performed in a few fairly easy steps.

CMA works by reserving memory early at boot time. This memory, called a CMA area or a CMA context, is later returned to the buddy allocator so that it can be used by regular applications. To do the reservation, one needs to call:

    void dma_contiguous_reserve(phys_addr_t limit);

just after the low-level "memblock" allocator is initialized but prior to the buddy allocator setup. On ARM, for example, it is called in arm_memblock_init(), whereas on x86 it is called just after memblock is set up in setup_arch().

The limit argument specifies the physical address above which no memory will be prepared for CMA. The intention is to limit CMA contexts to addresses that DMA can handle. In the case of ARM, the limit is the minimum of arm_dma_limit and arm_lowmem_limit. Passing zero will allow CMA to allocate its context as high as it wants; the only constraint is that the reserved memory must belong to the same zone.
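As an illustration, a call in a hypothetical architecture's setup code might look like the following sketch (the function and limit variable are invented for the example):

    void __init foo_arch_init_memory(void)
    {
        /* ... memblock is initialized above this point ... */

        /* reserve CMA memory below the DMA limit; zero would mean no limit */
        dma_contiguous_reserve(foo_dma_limit);

        /* ... buddy allocator setup follows ... */
    }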

The amount of reserved memory depends on a few Kconfig options and a cma kernel parameter. I will describe them further down in the article.

The dma_contiguous_reserve() function will reserve memory and prepare it to be used with CMA. On some architectures (e.g., ARM) some architecture-specific work needs to be performed as well. To allow that, CMA will call the following function:

    void dma_contiguous_early_fixup(phys_addr_t base, unsigned long size);

It is the architecture's responsibility to provide it along with its declaration in the asm/dma-contiguous.h header file. If a given architecture does not need any special handling, it's enough to provide an empty function definition.

It will be called quite early, so some facilities (kmalloc(), for example) will not yet be available. Furthermore, it may be called several times (since, as described below, several CMA contexts may exist).
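For an architecture that needs no special handling, a stub along these lines suffices (a sketch; the declaration lives in asm/dma-contiguous.h as described above):

    #include <asm/dma-contiguous.h>

    /* no architecture-specific fixup is needed on this architecture */
    void dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
    {
    }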

The second thing to do is to change the architecture's DMA implementation to use the whole machinery. To allocate CMA memory one uses:

    struct page *dma_alloc_from_contiguous(struct device *dev, int count, unsigned int align);

Its first argument is the device that the allocation is performed on behalf of. The second specifies the number of pages (not bytes or order) to allocate. The third argument is the alignment, expressed as a page order; it enables allocation of buffers whose physical addresses are aligned to 2^align pages. To avoid fragmentation, pass zero here if at all possible. It is worth noting that there is a Kconfig option (CONFIG_CMA_ALIGNMENT) which specifies the maximum alignment accepted by the function; its default value is 8, meaning 256-page alignment.

The return value is the first of a sequence of count allocated pages.

To free the allocated buffer, one needs to call:

    bool dma_release_from_contiguous(struct device *dev, struct page *pages, int count);

The dev and count arguments are the same as before, whereas pages is what dma_alloc_from_contiguous() returned. If the region passed to the function did not come from CMA, the function will return false; otherwise, it will return true. This removes the need for higher-level functions to track which allocations were made with CMA and which were made using some other method.
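Putting the two calls together, here is a minimal sketch of how an architecture's DMA code might use them (the function name is invented, and error handling is kept simple):

    /* allocate a 16-page contiguous buffer on behalf of dev, then free it;
     * alignment order 0 is used to minimize fragmentation */
    static int foo_dma_buffer_test(struct device *dev)
    {
        struct page *pages = dma_alloc_from_contiguous(dev, 16, 0);

        if (!pages)
            return -ENOMEM;

        /* ... page_to_pfn(pages) gives the first of 16 consecutive PFNs ... */

        if (!dma_release_from_contiguous(dev, pages, 16))
            dev_warn(dev, "pages were not allocated by CMA\n");
        return 0;
    }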

Beware that dma_alloc_from_contiguous() may not be called from atomic context. It performs some “heavy” operations such as page migration, direct reclaim, etc., which may take a while. Because of that, to make dma_alloc_coherent() and friends work as advertised, the architecture needs to have a different method of allocating memory in atomic context.

The simplest solution is to put aside a bit of memory at boot time and perform atomic allocations from that. This is in fact what ARM is doing. Existing architectures most likely already have a special path for atomic allocations.

Special memory requirements

At this point, most of the drivers should “just work”. They use the DMA API, which calls CMA. Life is beautiful. Except that some devices may have special memory requirements. For instance, Samsung's S5P Multi-format codec requires buffers to be located in different memory banks (which allows reading them through two memory channels, thus increasing memory bandwidth). Furthermore, one may want to separate some devices' allocations from others to limit fragmentation within CMA areas.

CMA operates on contexts. Devices use one global area by default, but private contexts can be used as well. There is a many-to-one mapping between struct devices and a struct cma (i.e., a CMA context). This means that a single device driver needs to have separate struct device objects to use more than one CMA context, while at the same time several struct device objects may point to the same CMA context.

To assign a CMA context to a device, all one needs to do is call:

    int dma_declare_contiguous(struct device *dev, unsigned long size,
			       phys_addr_t base, phys_addr_t limit);

As with dma_contiguous_reserve(), this needs to be called after memblock initializes but before too much memory gets grabbed from it. For ARM platforms, a convenient place to put the call to this function is in the machine's reserve() callback. This won't work for automatically probed devices or those loaded as modules, so some other mechanism will be needed if those kinds of devices require CMA contexts.

The first argument of the function is the device that the new context is to be assigned to. The second specifies the size in bytes (not in pages) to reserve for the area. The third is the physical address of the area or zero. The last one has the same meaning as dma_contiguous_reserve()'s limit argument. The return value is either zero or a negative error code.
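Here is a sketch of what such a call might look like in an ARM machine's reserve() callback (the board and device names are made up for illustration):

    static void __init foo_machine_reserve(void)
    {
        /* a private 32 MiB CMA area for foo_device, anywhere DMA can reach */
        if (dma_declare_contiguous(&foo_device.dev, 32 * SZ_1M, 0, 0))
            pr_warn("foo: unable to reserve CMA area\n");
    }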

There is a limit to how many “private” areas can be declared, namely CONFIG_CMA_AREAS. Its default value is seven, but it can be safely increased if the need arises.

Things get a little bit more complicated if the same non-default CMA context needs to be used by two or more devices. The current API does not provide a trivial way to do that. What can be done is to use dev_get_cma_area() to figure out the CMA area that one device is using, and dev_set_cma_area() to set the same context to another device. This sequence must be called no sooner than in postcore_initcall(). Here is how it might look:

    static int __init foo_set_up_cma_areas(void)
    {
	struct cma *cma;

	/* look up device1's CMA context and share it with device2 */
	cma = dev_get_cma_area(device1);
	dev_set_cma_area(device2, cma);
	return 0;
    }
    postcore_initcall(foo_set_up_cma_areas);

As a matter of fact, there is nothing special about the default context that is created by the dma_contiguous_reserve() function. It is in no way required and the system will work without it. If there is no default context, dma_alloc_from_contiguous() will return NULL for devices without assigned areas. dev_get_cma_area() can be used to distinguish between this situation and allocation failure.

dma_contiguous_reserve() does not take a size as an argument, so how does it know how much memory should be reserved? There are two sources of this information:

There is a set of Kconfig options, which specify the default size of the reservation. All of those options are located under “Device Drivers” » “Generic Driver Options” » “Contiguous Memory Allocator” in the Kconfig menu. They allow choosing from four possibilities: the size can be an absolute value in megabytes, a percentage of total memory, the smaller of the two, or the larger of the two. The default is to reserve 16 MiB.

There is also a cma= kernel command line option. It lets one specify the size of the area at boot time without the need to recompile the kernel. This option specifies the size in bytes and accepts the usual suffixes.
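For example (the size here is arbitrary), booting with:

    cma=64M

would override the Kconfig default and reserve a 64 MiB global CMA area.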

So how does it work?

To understand how CMA works, one needs to know a little about migrate types and pageblocks.

When requesting memory from the buddy allocator, one provides a gfp_mask. Among other things, it specifies the "migrate type" of the requested page(s). One of the migrate types is MIGRATE_MOVABLE. The idea behind it is that data from a movable page can be migrated (or moved, hence the name), which works well for disk caches, process pages, etc.

To keep pages with the same migrate type together, the buddy allocator groups pages into "pageblocks," each having a migrate type assigned to it. The allocator then tries to allocate pages from pageblocks with a type corresponding to the request. If that's not possible, however, it will take pages from different pageblocks and may even change a pageblock's migrate type. This means that a non-movable page can be allocated from a MIGRATE_MOVABLE pageblock, which can also result in that pageblock changing its migrate type. This is undesirable for CMA, so it introduces a MIGRATE_CMA type which has one important property: only movable pages can be allocated from a MIGRATE_CMA pageblock.
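By way of illustration, an ordinary movable allocation of the kind that can be placed in a MIGRATE_CMA pageblock (and migrated away later, should CMA need the space) looks like this:

    /* a movable page, as used for user mappings or the page cache */
    struct page *page = alloc_page(GFP_HIGHUSER_MOVABLE);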

So, at boot time, when the dma_contiguous_reserve() and/or dma_declare_contiguous() functions are called, CMA talks to memblock to reserve a portion of RAM, just to give it back to the buddy system later on with the underlying pageblock's migrate type set to MIGRATE_CMA. The end result is that all the reserved pages end up back in the buddy allocator, so they can be used to satisfy movable page allocations.

During CMA allocation, dma_alloc_from_contiguous() chooses a page range and calls:

     int alloc_contig_range(unsigned long start, unsigned long end,
     	                    unsigned migratetype);

The start and end arguments specify the page frame numbers (or the PFN range) of the target memory. The last argument, migratetype, indicates the migration type of the underlying pageblocks; in the case of CMA, this is MIGRATE_CMA. The first thing this function does is to mark the pageblocks contained within the [start, end) range as MIGRATE_ISOLATE. The buddy allocator will never touch a pageblock with that migrate type. Changing the migrate type does not magically free pages, though; this is why __alloc_contig_migrate_range() is called next. It scans the PFN range and looks for pages that can be migrated away.

Migration is the process of copying a page to some other portion of system memory and updating any references to it. The former is straightforward and the latter is handled by the memory management subsystem. After its data has been migrated, the old page is freed by giving it back to the buddy allocator. This is why the containing pageblocks had to be marked as MIGRATE_ISOLATE beforehand. Had they been given a different migrate type, the buddy allocator would not think twice about using them to fulfill other allocation requests.

Now all of the pages that alloc_contig_range() cares about are (hopefully) free. The function takes them away from the buddy system, then changes the pageblocks' migrate type back to MIGRATE_CMA. Those pages are then returned to the caller.

Freeing memory is a much simpler process. dma_release_from_contiguous() delegates most of its work to:

     void free_contig_range(unsigned long pfn, unsigned nr_pages);

which simply iterates over all the pages and puts them back into the buddy system.

Epilogue

The Contiguous Memory Allocator patch set has come a long way from its first version (and even further from its predecessor – Physical Memory Management – posted almost three years ago). On the way, it lost some of its functionality but got better at what it does now. On complex platforms, it is likely that CMA won't be usable on its own, but will be used in combination with ION and dmabuf.

Even though it is at its 23rd version, CMA is still not perfect and, as always, there's still a lot that can be done to improve it. Hopefully though, getting it finally merged into the -mm tree will get more people working on it to create a solution that benefits everyone.

Comments (none posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Networking

Security-related

Benchmarks and bugs

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Running Android on x86

March 14, 2012

This article was contributed by Nathan Willis

The Android-x86 project released the first release candidate for its version 4.0 on March 1. The release is based on the Ice Cream Sandwich (ICS) release of the Android source code, and introduces some important new features — such as hardware video acceleration for all three major GPU vendors, and a usable live USB image.

The Android-x86 project started off as an individual hobbyist effort, collecting patch sets against the upstream Android source that would allow developers to build the OS for specific hardware devices. As time went on, however, there was more and more consolidation and an ever-increasing pool of contributors. Thus, the project eventually grew into its current form, hosting Git repositories, producing ISO installers, and offering an email and IRC support community.

The project is entirely volunteer-driven and -governed, although AMD is credited as adding support for several of its platforms, and there are individual developers from Intel and other companies credited with contributions in the release notes. Interestingly enough, the very first x86 port of the Android Linux kernel was done by a Google employee. After that, outside developers took up the challenge of getting the Android kernel to run on their own hardware, starting with the ASUS Eee PC; they drew on Moblin's efforts to get the kernel to boot and WiFi drivers to compile for the Eee, and on Canonical's Ubuntu Mobile Internet Device (MID) edition to get the GMA500 graphics chip working. Since roughly 2010, the project has steadily added new hardware platforms with each release.

The ice cream sandwich machine

[App settings]

The hardware supported by Android-x86 is still a subset of the Intel-compatible world. Intel has recently announced plans to bring Atom-based phones to the market in the latter half of 2012, which could boost interest in Android-x86 considerably, but for the moment the primary development targets are tablets and netbook-class portables. Each release is tested against specific devices; the current list includes ASUS Eee netbooks, the Dell Inspiron Mini Duo series, and several Lenovo offerings on the netbook side, plus the Viewsonic Viewpad, Samsung Q1U, and Viliv S5 tablets. All are Atom-based devices, and at the moment no 64-bit CPUs are supported.

The project makes an effort to support as many hardware features as possible on the target platforms, including suspend/resume buttons, battery status, touch screens, cameras, Bluetooth, GPS, and accelerometers. However, most of these hardware features are already found in mobile phones and ARM-based tablets. Providing a usable experience on netbooks involves supporting external monitors, mice, and keyboards, not to mention external storage devices and a wider range of video chipsets.

ISOs of the 4.0-RC1 release are available for five hardware platforms, though more are certainly possible when the final 4.0 release is made. For comparison's sake, the 2.2 release from June 2011 (based on Android Froyo 2.2.2) supported seven separate platforms; subsequently there were test releases based on Gingerbread and Honeycomb, but neither was ever dubbed a stable release. In addition to the platform-specific ISO releases, there are nightly builds for "generic" PC hardware, and there are build instructions for the source itself. The ISO image weighs in at a modest 180 MB or so, depending on the device.

4.0-RC1 is based on the Android 4.0.3 source code release, using kernel 3.0.8 with kernel mode-setting (KMS). New to this release are hardware GPU acceleration for Radeon, NVIDIA, and Intel graphics, the addition of Chrome's V8 JavaScript engine (previous releases had used the JavaScript Compiler (JSC) instead), auto-mounting of hot-pluggable memory cards or USB storage, and experimental support for the Renderscript 3D rendering API. On the down side, wired Ethernet support is not working properly with the Android user space yet.

The ISO images themselves have also been beefed up in some noticeable ways. The filesystem is now compressed with Squashfs to save space, the live USB option runs in "hybrid" mode to permit using available flash space as persistent storage, and there is a text-based installer that supports the ext2, ext3, NTFS, and FAT32 filesystems.

Testing

I ran 4.0-RC1 on a Lenovo S10 netbook; unscientifically, I found it to be much faster than most Android phones I have used. That is no surprise from a hardware standpoint, of course, but even people who don't care for Android on mobile devices may be surprised at the resulting experience on a faster CPU with additional memory. All of the hardware worked, including WiFi, Bluetooth, and the touchscreen (which tends to confuse other distributions). Browsing, synchronizing email and other data, and all of the other essential tasks worked smoothly. It is easy to tell that the OS was originally designed for a phone or tablet, though. In addition to referring to all disk space as "the SD card", there are places where Android-x86 is difficult to use without a touchscreen device.

For one thing, the project has added a software mouse cursor, and I had no trouble with my keyboard, but the unlock screen requires a swipe maneuver that is tricky to pull off on a touch pad. Second, although KMS correctly set the screen size to the S10's native resolution, that still results in large, Playskool-like menus and buttons. While they don't impede your ability to work, I cannot imagine liking it on a 24-inch desktop display where touch-sensitivity is not part of the arrangement.

[Apps]

However, neither of those nitpicks is an indictment of the quality of the software. Surprisingly enough, despite warnings to the contrary on the Android-x86 wiki, the new release has full access to the Android Market and can install most third-party apps as well. The tricky part is that Android-x86 will only work with pure Dalvik apps — almost all of the Android apps that use the Native Development Kit (NDK) are built for ARM. Unfortunately (or fortunately, depending on who you ask), the architecture difference also rules out Adobe Flash and applications that employ it.

However, as I discovered on the mailing list, there are beginning to be Android Market apps released specifically for the x86 architecture. Most are media players, which arguably have a greater need to optimize for speed, but their mere existence is intriguing.

More to come

Android-x86 does not have a formal roadmap, apart from the general convention of following the upstream Android source code releases. However, the site lists a few specific projects targeted for the near future. There are always new hardware platforms and chipsets to worry about, such as the AMD Brazos line, and unsupported features popular in netbook designs, such as the CrystalHD video decoder chip.

From the software side, Android-x86 does not yet have support for Android's native multitouch layer, although it is listed under "what we are working on now," and there is an effort by project maintainer Chih-Wei Huang to port the GStreamer multimedia framework to the OS. The GStreamer port would presumably replace Android's libstagefright as the multimedia abstraction library. In 2011, Collabora (home to many GStreamer developers) worked with ST Ericsson on an Android build of GStreamer, and that code remains available, although it seems highly unlikely to ever receive an official blessing from Google.

On March 11, developer Stefan Seidel sent a proposal to the Android-x86 mailing list with two suggestions for streamlining the development process itself. The first was an attempt to move from the separate-trees-for-each-target-device model currently being used to a unified tree. In addition to reducing duplication of effort and simplifying QA, he said, unifying the main tree would allow the project to produce a "base image" with each release that could be further customized by adding commercial or binary add-ons (such as the Google app suite) via an overlay filesystem like UnionFS or AuFS.

The second suggestion was partly an attempt to find a way around the difficulties posed by the Android sensor framework. The framework is what enables applications to access accelerometer or ambient-light sensor data. These sensors are far less common on Atom-based devices than in mobile phones, so many Android-x86 trees use a dummy library instead, which allows the user to simulate readings manually. Seidel proposes either overlaying real libraries over the dummy libraries for systems that support the sensors, or else adding a fail-over between the real and dummy libraries; in either case the end result would be that fewer hardware differences would exist to force developers to work on device-specific branches.

The others on the list seem supportive of the suggestions, and on board with the underlying goal of unifying the trees. In the resulting thread, many other suggestions for where the project should head next cropped up, including localization, FUSE support, and 64-bit support, which seems indicative of a motivated community.

What is not clear is whether or not there are very many people who use Android-x86 as a daily-use operating system. The question is interesting in light of how disruptive Android has been in the mobile phone market. If it takes off as a netbook offering, it would make for an unpredictable competitor for Google's other portable OS, ChromeOS, not to mention various other lightweight distributions.

This is not to say that the project needs 24/7 users in order to be valuable, of course. Evidently plenty of people do find it useful; the project blog even noted back in April 2011 that Amazon.com was using Android-x86 as the guest OS on EC2 in order to provide its application "test drive" service. Still, a test drive is by its very nature short. Users that run an OS day-in and day-out uncover different bugs and usability woes.

Should the traditional desktop Linux distributions be scared that Android will start eating into their market share? Probably not. But even as a volunteer-driven project with limited hardware targets, Android-x86 has produced a well-polished and usable release in very little time — one that comes with a pre-existing application ecosystem. Given that, the distributions would not be wise to ignore the possibility entirely.

Comments (3 posted)

Brief items

Distribution quotes of the week

I think that the general point here is that in projects people need to be able to ask for help, and people must be willing to offer help. Again, in general, good management is about bringing these things together, whereas bad management (as many of us are unfortunately all too aware) often involves someone thinking that their job is merely to tell you what to do, frequently not contributing to the activity in any way other than to indicate that things must be done more quickly, as opposed to helping you get your work done by giving you what you need.
-- Paul Boddie

When the totem law of Kbanga declares that displaying any words with two consonant clusters is illegal on Fridays, the rest of the world doesn't suffer. Being able to pop in a DVD and play it is something an average person takes for granted. If oppressive laws in a single country stop a good part of multimedia functionality, why should that functionality be taken away from everyone else?
-- Adam Borowski

In talking with visitors to FOSDEM (that I am a co-organizer of, now) and with customers, I has become clear to me over the years that Debian has a reputation of being somewhat oldfashioned and stale. That if you want to run the latest technologies, you should use something else. This reputation may have been deserved when we were having trouble releasing sarge, over half a decade ago, but it's entirely undeserved today, and I think it's well past time that we do something about that.
-- Wouter Verhelst

I've seen that work, had stones thrown at me, didn't mind. I've seen others do it, worked out nicely in the end.

However, this doesn't always work, as this is best done when the discussion can be taken private, to discourage others from throwing yet more fuel onto the fire.

On the other hand, I do not believe in a flame-war-free world, either. We do need heated arguments from time to time, and I see nothing wrong with that, as long as it remains civilised and does not resort to name-calling and an insult duel (unless it's in monkey island style ;).

-- Gergely Nagy

On the first part of your question --- assuming bad faith --- I think little can be done to avoid that. It's something quite personal: some are more prone to assume bad faith while other are more prone to assume good faith. What we need to encourage on that front is a culture that allow to change your mind once people discover their initial assumptions were wrong and to publicly say so. There is nothing wrong in being wrong. And there is a lot to gain from a community where people state publicly "sorry, I was wrong" and other people do not think bad of them because of that.
-- Stefano Zacchiroli

Comments (1 posted)

Arch Linux turns 10

Here's a brief note on the Arch Linux tenth anniversary. "If you follow Arch Planet, you may have already heard the news that we are celebrating a decade of existence, with the release of 0.1 Homer on March 11, 2002. If you haven't already, grab some birthday cake and head over to Arch Planet to read several developers chronologies and wonderful words of praise for Arch Linux."

Comments (none posted)

Release for CentOS-5.8 i386 and x86_64

The CentOS project has released CentOS 5.8. "CentOS-5.8 is based on the upstream EL 5.8 release and includes packages from all variants including Server, Client, Virtualization, and Clustering. All upstream repositories have been combined into one to make it easier for end users to work with." See the release notes for details. The project has also published a list of updates included in this release.

Full Story (comments: none)

Updated Debian 5.0: 5.0.10 released

The tenth and final update of the Debian oldstable distribution has been released. Debian 5.0.10 (lenny) mainly adds corrections for security problems. "The alpha and ia64 packages from DSA 1769 are not included in this point release for technical reasons. All other security updates released during the lifetime of `lenny' that have not previously been part of a point release are included in this update. Please note that the security support for the oldstable distribution ended in February 2012 and no updates have been released since that point."

Full Story (comments: none)

"Squeeze" based Debian Edu version released

The Debian Edu Team has announced the release of Debian Edu "Squeeze" 6.0.4+r0. Debian Edu (aka "Skolelinux") is a Debian Pure Blend targeted at schools and educational institutions. "It covers PXE installation, PXE booting for diskless machines, and setup for a school server, for stationary workstations, and for workstations that can be taken away from the school network. Several educational applications like Celestia, Dr. Geo, GCompris, GeoGebra, Kalzium, KGeography and Solfege are included in the default desktop setup."

Full Story (comments: none)

Ubuntu 10.10 (Maverick Meerkat) reaches end-of-life

Ubuntu has announced the end of life for 10.10 (Maverick Meerkat). No updates will be available after April 10, 2012, eighteen months after its release on October 10, 2010. The supported upgrade path from Ubuntu 10.10 is via Ubuntu 11.04.

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Debian Project Leader Elections 2012: Candidates

There are three candidates in this year's Debian Project Leader election: Wouter Verhelst, Gergely Nagy, and Stefano Zacchiroli.

Full Story (comments: none)

Debian "sid" users beware of the dpkg 1.16.2 upload

The dpkg 1.16.2 update to unstable may cause some headaches for users of Debian's unstable branch. "The previous multiarch in-core db layout was bogus, resulting in a possible inconsistent or broken on-disk db. If you are running any dpkg derived from code that has never been in the main git repo (this includes dpkg from the jenkins test builds [T], dpkg from experimental, dpkg from Ubuntu, one of the personal pu/ branches, etc), any of the following might affect you."

Full Story (comments: 17)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Dream Studio 11.10: Upgrade or Hands Off? (Linux.com)

Carla Schroder reviews Dream Studio, a distribution aimed at multimedia creation. "Dream Studio installs with a vast array of audio, movie, photography, and graphics applications. It's a great showcase for the richness of multimedia production software on Linux. Audio is probably the biggest pain in the behind, as the Linux audio subsystem can be a real joy* to get sorted out. One of the best things Dream Studio does is box it all up sanely, and on most systems you just fire up your audio apps and get to work. It comes with a low-latency kernel, the JACK (JACK Audio Connection Kit) low-latency sound server and device router, and the pulseaudio-module-jack for integrating PulseAudio with JACK."

Comments (none posted)

Shuttleworth: Ubuntu vs RHEL in enterprise computing

Mark Shuttleworth claims that Ubuntu deployments now exceed RHEL deployments for "large-scale enterprise workloads." "The key driver of this has been that we added quality as a top-level goal across the teams that build Ubuntu – both Canonical’s and the community’s. We also have retained the focus on keeping the up-to-date tools available on Ubuntu for developers, and on delivering a great experience in the cloud, where computing is headed."

Comments (35 posted)

Page editor: Rebecca Sobol

Development

OpenSSL and IPv6

March 14, 2012

This article was contributed by Nathan Willis

OpenSSL is one of the most popular implementations of Transport Layer Security (TLS), as well as one of the leading free software libraries for general-purpose cryptography, but, as an online debate recently highlighted, it still lags behind on IPv6 support in its tools. Admittedly, "lagging behind on IPv6" is a charge that could be leveled at most of the Internet, but with OpenSSL the feature requests — and the patches — have been idling in limbo for several years, which appears to be generating frustration among some developers.

Michael Stapelberg raised the issue in a March 6 post on Google Plus:

Nearly every time I use the OpenSSL command line tools I get angry. It’s 2012 and OpenSSL’s s_client still doesn’t work with IPv6. Every time, I go to the Debian bugtracker first. Every time, I see http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=589520 and apply that patch.

I’ve been doing that so often, that I had enough of it and went to the OpenSSL request tracker: http://rt.openssl.org/index.html?q=ipv6 (user/pass: guest/guest). Turns out they actually have several patches lying around for that. The oldest one is 5 years old!

The command line tools Stapelberg refers to are s_client and s_server, both of which are standard commands for the openssl tool, and are designed to help users test their code and SSL/TLS applications. S_client implements a simple SSL/TLS client application that attempts to open a connection to a remote host. S_server implements a simple SSL/TLS server, which listens for connections and (if desired) can emulate a web server.

In practice, you might use s_client to try to connect to a new server with a command of the form:

    openssl s_client -connect somehost:someportnumber

That command could be followed by flags to test out the particular settings of interest (e.g., certificate options, support for specific ciphers, or simply printing out session information for debugging). But with an unpatched OpenSSL, the command does not work if the host requested is only reachable via IPv6. Similarly, s_server can listen on any port, but if the connection requests originate from an IPv6 address, it fails.
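For instance, one might test a server's certificate chain and cipher support, or stand up a minimal test server, with commands like these (the host, port, and certificate file are placeholders):

    openssl s_client -connect somehost:someportnumber -showcerts -cipher AES256-SHA
    openssl s_server -accept someportnumber -cert server.pem -www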

Patches and other options

Stapelberg linked to the OpenSSL request tracker (which requires logging in with a username and password; guest/guest is acceptable), showing seven open requests that match "ipv6." Of those, three are requests to support IPv6 addresses in s_client or s_server, and all three include patches. The most recent is RT 2051, which was originally opened by Michael Tuexen in 2009, and which has received regular patch updates as new OpenSSL releases have come out. The latest update is from December 28, 2011.

Questions have also come up on the OpenSSL mailing lists about IPv6 support. What seems to frustrate the question-askers is that the library supports IPv6 addresses in most core routines and internal data structures. In addition to the s_client and s_server testing tools, however, there are places where the API is unaware of IPv6 addressing. Namely, an application cannot use the OpenSSL library to create a new socket to an IPv6-addressed host — but the application can create an IPv6-addressed socket separately (or using a different library), then hand that socket over to OpenSSL.
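To make that workaround concrete, here is a minimal sketch (not from any of the pending patches; error handling is abbreviated) of creating an IPv6 TCP connection with the ordinary sockets API and handing the descriptor to OpenSSL via SSL_set_fd():

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <openssl/ssl.h>

    SSL *tls_connect6(SSL_CTX *ctx, const char *host, const char *port)
    {
        struct addrinfo hints, *res;
        SSL *ssl;
        int fd;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET6;        /* AF_UNSPEC would allow IPv4 too */
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return NULL;

        /* the socket is created and connected without OpenSSL's help */
        fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
            if (fd >= 0)
                close(fd);
            freeaddrinfo(res);
            return NULL;
        }
        freeaddrinfo(res);

        /* OpenSSL then runs TLS over the existing descriptor */
        ssl = SSL_new(ctx);
        if (ssl == NULL) {
            close(fd);
            return NULL;
        }
        SSL_set_fd(ssl, fd);
        if (SSL_connect(ssl) != 1) {
            SSL_free(ssl);
            close(fd);
            return NULL;
        }
        return ssl;
    }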

In the comments on Stapelberg's Google Plus post, Florian Foster asked why anyone would use OpenSSL, which is hardly the only game in town, particularly when GnuTLS fully supports IPv6 addresses both in its API and in its command-line tool gnutls-cli. Watson Ladd concurred, and also noted that OpenSSL still has big holes in its documentation. Gregory P. Smith commented that the bulk of the SSL/TLS traffic on open source systems probably comes from the Network Security Services (NSS) library used by Mozilla applications and Google Chrome (as well as by many mail user agents and server products from Red Hat and Sun).

There is an argument to be made for each of those positions. GnuTLS does support IPv6 pervasively, OpenSSL's official documentation does list four out of its six man pages as "STILL INCOMPLETE," and NSS probably does handle the lion's share of SSL/TLS bits in open source software — at least on the client side. But even when taken collectively, the three arguments do not justify throwing up one's hands in disgust and uninstalling OpenSSL.

First, GnuTLS was started in order to provide a GPL-compatible alternative to OpenSSL, which is dual-licensed under the Apache 1.0 license and the "old-style-BSD-like" SSLeay license. As a result, GnuTLS is used heavily by official GNU projects, as well as by large projects like GNOME and CUPS. But OpenSSL remains more popular on the server side, in web frameworks, virtual private network (VPN) tools, and mail servers, plus system utilities like cryptmount and wpasupplicant. The need to maintain license compatibility can restrict a project's options regarding which library to use — such as commercial vendors wishing to avoid GPL-licensed code. Second, even though the two projects are roughly on par, they do differ in the details when it comes to protocol and cipher support, which could make the difference for other users.

So who cares about IP addresses anyway?

It is a little unclear why it has taken so long for the IPv6 patches to get merged. In 2009, Arkadiusz Miskiewicz objected to the style in Tuexen's patch, but Tuexen responded, and subsequently the patch was met with approval. OpenSSL core developer Stephen Henson chalked it up to a simple matter of time. "There has been a fair bit of activity lately related to the FIPS 140-2 validation work and the upcoming release of 1.0.1," he said, after which the team "can look at getting several patches including IPv6 support in place."

The only real ongoing objection to explicitly supporting IPv6 has been that IP addresses are not a fundamental SSL/TLS concern to begin with. The argument goes that TLS runs on top of TCP (and DTLS on top of UDP), so an application requiring TLS or DTLS must deal with the TCP or UDP connection, regardless of the state of the Internet layer beneath it. That is true to an extent; SSL/TLS is agnostic about what lies beneath the transport layer — but it clearly does not apply to the command-line testing tools that ship with OpenSSL. They are supposed to emulate an SSL/TLS client and server, after all, all the way up to the application layer, and not patching them means application developers must turn elsewhere.

Tuexen and others have updated the patch for both the current development version of OpenSSL and for the older releases. As Stapelberg mentioned in his Google Plus post, Debian adds the patch downstream (which is then picked up by Ubuntu and derivatives); so too does Red Hat (including Fedora).

Regardless of whether IPv6 support is conceptually important to the package, then, the interest in IPv6 tools among developers and sysadmins seems clear. The risk to the project is that by letting the patch languish for years, potential new developers may head towards GnuTLS or another competitor rather than wait. At 13 years of age, OpenSSL is a mature project, and it certainly deals with subjects that demand a lot of domain expertise, such as cryptography. Consequently it may find it difficult to recruit new contributors.

But two years is still a long time for an actively-updated patch to remain in limbo. As is the case with the still-incomplete man pages, OpenSSL probably gets more leeway than other projects thanks to its solid reputation for robustness. But it is also a gamble — in the comments on Stapelberg's post, there is frustration with the project on several fronts, and one never knows when goodwill is going to run out.

Comments (9 posted)

Brief items

Quotes of the week

In what most people would think of as counter-intuitive, copyleft licences are more predominant amongst vendor-led open source projects. The reason for this is that some vendors choose to run a dual licensing business model where they put the code out under a restrictive copyleft license and ship a commercial license themselves. They usually combine the licensing regime with a contributor agreement. This means that the intellectual property is aggregated and owned by the sponsoring vendor. This provides the sponsoring vendor with the unique advantage of being able to distribute and package the code as they see fit under a commercial licensing regime. This is exactly the business model that Sun used with OpenOffice and, as I mentioned previously, the reason that the LibreOffice could only fork the code under a copyleft license.
-- Douglas Heintzman on the IBM Software Blog

Tridge,

With Samba well on its way to a third decade as of January this year, we wanted to thank you personally for your mentorship, guidance and leadership of the Samba project over the past twenty years. For the past decade, we have personally witnessed the strength of your technical innovations, and your passionate commitment to free software. The Samba Team and project is immeasurably stronger not only because of your amazing technical skill, but also by your dedication to the cause in the legal arena as well.

-- Andrew Bartlett and Jelmer Vernooij

Comments (none posted)

bzr 2.5.0 released

Version 2.5.0 of the bzr version control system is out. "This is a bugfix and polish release over the 2.4 series, with a large number of bugs fixed (~170 for the 2.5 series alone). The 2.5 series provides a faster smart protocol implementation for many operations, basic support for colocated branches." Also new to this release is a set of translations for over 20 languages.

Full Story (comments: none)

Firefox 11 and Thunderbird 11 released

The Firefox 11 and Thunderbird 11 releases are out. They contain the usual pile of fixes to scary security-related bugs and a number of new features. New goodies in Firefox include the ability to import information from Google Chrome, synchronization of add-ons, CSS improvements, a CSS style editor, Mozilla Tilt, and more. The list of Thunderbird improvements is shorter but includes a new user interface with tabs placed above the main menu.

Comments (7 posted)

Gnuplot 4.6 released

Version 4.6 of the gnuplot plotting utility is out; this is the first major release in two years. New features include a new flow control syntax, user-definable line types in plots, statistical summary calculation, some new terminal drivers, better multi-byte encoding support, and more.

Full Story (comments: 20)

Laborejo Release 0.1

A new project called Laborejo has announced its existence with a 0.1 release. "It is a Lilypond GUI frontend, a MIDI creator and finally a tool collection to inspire and help you compose. It works by reducing music-redundancy and by seperating layout and data. Don't worry about the layout, just concentrate on the music."

Full Story (comments: none)

OpenSSL 1.0.1 released

The OpenSSL 1.0.1 release is out. The version number notwithstanding, this release contains a number of new features, including SCTP support, TLS/DTLS heartbeat support, and more.

Full Story (comments: 1)

2012 Language Summit Report (Python Insider)

The Python Insider site has a report from the 2012 Python Language Summit, held March 7 in Santa Clara, California. "One thing that seemed to have broad agreement was that shortening the standard library turnaround time would be a good thing in terms of new contributors. Few people are interested in writing new features that might not be released for over a year -- it's just not fun. Even with bug fixes, sometimes the duration can be seen as too long, to the point where users may end up just fixing our problems from within their own code if possible."

Comments (none posted)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Brewtarget: Hop into Beer Brewing with Open Source (Linux.com)

Linux.com has an interview with Philip Lee about his Brewtarget project. Brewtarget helps homebrewers create and manage their recipes. "Right after I got into homebrewing in 2008, I was looking for open source beer tools for Linux, and I found QBrew, but after looking at its implementation and contemplating whether to extend it or start from scratch, I decided I could do better by starting from scratch. I made some simple attempts early in 2008, but didn't get very far, and resorted to calculating recipes by hand. I'm actually glad I did this, because after doing this for about a year, I learned all the math I would need to make a piece of software, plus some extra. The serious work started in December 2008, when I was sitting at home over the holidays – I was, and still am, a grad student – and had some free time to kill."

Comments (1 posted)

Firefox in 2011 – Firefox plans for 2012

The Firefox team looks back at its 2011 accomplishments and discusses its plans for this year. "With fullscreen support in web browsers, the next step is improve the gaming and interaction experience for building more advanced web sites with key input in fullscreen mode and also being able to use the mouse as a controller instead of as a pointer."

Comments (20 posted)

Idealism vs. pragmatism: Mozilla debates supporting H.264 video playback (ars technica)

Ars technica covers the discussion (the very long discussion) in the Mozilla community about relaxing its stand on patent-encumbered codecs. "Andreas Gal, Mozilla's director of research, announced on a public mailing list today that he wants to proceed with a plan that would enable H.264 decoding on Mozilla's Boot2Gecko (B2G) mobile operating system. The proposed change would allow the video element in Mozilla's HTML rendering engine to rely on codecs that are supplied by the underlying operating system or dedicated video hardware." (Thanks to Paul Wise).

Comments (85 posted)

Weir: Where did the time go?

Rob Weir has posted a timeline of work done on Apache OpenOffice and some associated commentary. "As the timeline shows, most of our attention on the project has been spent on community building and infrastructure migration efforts. We're not engaging in a race to see how fast we can come out with a release, or to show how quickly we can crank out minor releases. A huge portion of our effort has been to ensure continuity for the many millions of users of OpenOffice.org, by far the most popular open source productivity suite."

This response from Michael Meeks may also be worth a look.

Comments (19 posted)

Page editor: Jonathan Corbet

Announcements

Articles of interest

EFF: Ubuntu 12.04 will bring OS-level privacy options

The EFF (Electronic Frontier Foundation) blog has an article on new privacy features in the upcoming Ubuntu release. "Retrofitting operating systems to support privacy against local attackers is a worthy objective, but not an easy one. We hope that Ubuntu and other projects will be in this for the long haul. The first step is probably defining clear API and mechanisms to enable non-GNOME applications to be told about the user's preferences for logging, and opening a lot of bug reports to get them respected. For now, you can delete your GNOME activity log from the past hour, day, week, a specific date range, or everything stored on your computer."

Comments (9 posted)

Linux gets a bigger shield against patent attacks (InfoWorld)

Simon Phipps comments on the expansion of the Open Invention Network's patent protection umbrella. "Too bad there's a sinister underbelly to this good news: what's omitted. Most notably, Android -- which is based on the Linux kernel -- is missing from the list altogether, along with its Dalvik language interpreter. Moreover, the definition is now so broad that two of the founders, Sony and Philips, are concerned their products will be affected and have effectively reserved the right to sue the Linux community."

Comments (11 posted)

New Books

Programming Perl, 4th Edition--New from O'Reilly

O'Reilly Media has released "Programming Perl, 4th Edition" by Tom Christiansen, brian d foy, Larry Wall, and Jon Orwant.

Full Story (comments: none)

Calls for Presentations

CFP: Third International Computer Art Congress (CAC.3)

CAC.3 - Post Digital Art - will take place in Paris, France, November 26-28, 2012. The call for papers deadline is June 4, 2012. "We encourage researchers, artists, engineers and thought leaders to present a paper on subjects including - but not limited to: Algorithmic Art, ASCII Art, Bio Art, Computer Graphics, Connected Creation, Computer Music, Demo-Scene, Digital Illustration and Paintings, Education, Fractal and Generative Art or Music, Interactive and Media Art, Motion Graphics, Sound Visualization, Software Art, Internet Art, Tradigital Art, Video Games..."

Full Story (comments: none)

Upcoming Events

Events: March 15, 2012 to May 14, 2012

The following event listing is taken from the LWN.net Calendar.

March 7–15: PyCon 2012, Santa Clara, CA, USA
March 16–17: Clojure/West, San Jose, CA, USA
March 17–18: Chemnitz Linux Days, Chemnitz, Germany
March 23–24: Cascadia IT Conference (LOPSA regional conference), Seattle, WA, USA
March 24–25: LibrePlanet 2012, Boston, MA, USA
March 26–April 1: Wireless Battle of the Mesh (V5), Athens, Greece
March 26–29: EclipseCon 2012, Washington D.C., USA
March 28: PGDay Austin 2012, Austin, TX, USA
March 28–29: Palmetto Open Source Software Conference 2012, Columbia, SC, USA
March 29: Program your own open source system-on-a-chip (OpenRISC), London, UK
March 30: PGDay DC 2012, Sterling, VA, USA
April 2: PGDay NYC 2012, New York, NY, USA
April 3–5: LF Collaboration Summit, San Francisco, CA, USA
April 5–6: Android Open, San Francisco, CA, USA
April 10–12: Percona Live: MySQL Conference and Expo 2012, Santa Clara, CA, USA
April 12–19: SuperCollider Symposium, London, UK
April 12–13: European LLVM Conference, London, UK
April 12–15: Linux Audio Conference 2012, Stanford, CA, USA
April 13: Drizzle Day, Santa Clara, CA, USA
April 16–18: OpenStack "Folsom" Design Summit, San Francisco, CA, USA
April 17–19: Workshop on Real-time, Embedded and Enterprise-Scale Time-Critical Systems, Paris, France
April 19–20: OpenStack Conference, San Francisco, CA, USA
April 21: international Openmobility conference 2012, Prague, Czech Republic
April 23–25: Lustre User Group, Austin, TX, USA
April 25–28: Evergreen International Conference 2012, Indianapolis, IN, USA
April 27–29: Penguicon, Dearborn, MI, USA
April 28–29: LinuxFest Northwest 2012, Bellingham, WA, USA
April 28: Linuxdays Graz 2012, Graz, Austria
May 2–5: Libre Graphics Meeting 2012, Vienna, Austria
May 3–5: Utah Open Source Conference, Orem, UT, USA
May 7–9: Tizen Developer Conference, San Francisco, CA, USA
May 7–11: Ubuntu Developer Summit - Q, Oakland, CA, USA
May 8–11: samba eXPerience 2012, Göttingen, Germany
May 11–12: Professional IT Community Conference 2012, New Brunswick, NJ, USA
May 11–13: Debian BSP in York, York, UK
May 13–18: C++ Now!, Aspen, CO, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds