Weekly Edition for November 10, 2011

Good fences make good projects?

By Jonathan Corbet
November 9, 2011
Back in August, there was a big fight over whether the user-space "native Linux KVM tool" should be merged into the mainline kernel repository. One development cycle later, we've had the same fight with many of the same arguments and roughly the same result. Sequels are rarely as good as the original; that applies to flame wars as well as to more creative works. But there is a core issue here that has relevance well beyond the kernel community: does the separation of projects help the Linux community more than it hurts it?

The proponents of merging the tool into the kernel make a number of points. Having the projects in the same repository makes development that crosses the boundary between the two easier; in particular, it helps in the creation of APIs that will stand the test of time. The project's overall standards help to keep the quality of the tools high and the release cycle predictable. Reuse of code between user-space and kernel projects gets easier. All told, they say, having the "perf" tool in the kernel tree has greatly helped its development; see this message from Ingo Molnar for a detailed description of the perceived advantages of this mode of development. Artificial separation of projects, instead, is said to have high costs; Ingo went so far as to claim that Linux lost the desktop market as the result of an ill-advised separation of projects.

Opponents, instead, say that putting the kernel and the tools in the same tree makes it easier to create API regressions for out-of-tree tools. The reason that perf has a relatively good record on this front, Ted Ts'o said, has more to do with the competence of the developers involved than its presence in the kernel tree. Adding user-space tools bloats the kernel source distribution, puts competing out-of-tree projects at a disadvantage, and, Ted said, creates a number of difficulties for distributors.

The one concrete end result of the discussion was that the pull request for the KVM tool was passed over by Linus who, feeling that he had enough stuff for this development cycle already, did not want to wander into this particular disagreement. It is not hard to imagine that he will get another chance in a future development cycle; it does not seem that any minds have been changed by the discussion so far.

In the middle of this discussion, it was asked whether it would make sense to bring other projects into the kernel - GNOME, for example. It was pointed out that BSD-based systems tend to be developed in this mode - an existence proof that operating system development can work that way. Ted responded (in the message linked above) as follows:

[T]here has been speculation that this was one of many contributions to why they lost out in the popularity and adoption competition with Linux. (Specifically, the reasoning goes that the need to package up the kernel plus userspace meant that we had distributions in the Linux ecosystem, and the competition kept everyone honest. If one distribution started making insane decisions, whether it's forcing Unity on everyone, or forcing GNOME 3 on everyone, it's always possible to switch to another distribution. The *BSD systems didn't have that safety valve....)

One could note that BSD does have one safety valve: to fork the entire system. That has happened a number of times in the history of BSD; pointing this out, though, only serves to reinforce Ted's point.

Distributors play a crucial role in the Linux ecosystem; they function as the middleman between most development projects and their users. Most of us, most of the time, do not obtain the software we run directly from those who wrote it; it comes, instead, nicely packaged from our distributor. As they ponder each package, distributors (the successful ones, at least) will be keeping their users' needs in mind. If the package has obnoxious anti-social features or security problems, the distributors will either fix it or leave the package out altogether. The recent Calibre mess is a prime example; aware distributors had already eliminated the worst problems before they were generally known.

Distributors make it possible to change the source of your operating system without having to stop running Linux. Anybody who has been working with Linux long enough has almost certainly switched distributions at least once during that time; the process is not without its disruptions, but the amount of pain is usually surprisingly low. The lack of lock-in in the Linux world has improved life for users and, at the same time, given distributors an incentive to improve the Linux experience for everybody.

The role of the distributors is made possible by the boundaries between the projects. If the entire system were integrated into a single source tree, there would be little space for the distributors to do their own integration work. The lack of independent *BSD distributions makes this point clear. That suggests that too much integration at the project level might not be a good thing for Linux.

So one could make an argument that bringing GNOME into the kernel source tree is probably a bad idea for this reason alone; Linux as a whole may be better served by having the kernel and the desktop environments be separate components that can be combined (or not) at will. That makes it clear (if it wasn't before - your editor can be slow at times, please bear with him) that there is a line to be drawn somewhere; bringing some projects into the kernel source tree may be harmful for Linux even without considering the effects on the kernel itself. But separating the kernel from some user-space projects may have costs that are just as high. There is no consensus, currently, on what those costs are or where the line should be drawn.

All of this implies that the debate over the inclusion of the KVM tool has an importance that goes beyond the fate of that one project. Does (as some allege) the integration between perf and the kernel impede the development of alternatives and hurt the performance tooling ecosystem as a whole? Would the integration of the KVM tool put QEMU at the mercy of a fast-changing, regression-prone API over which its developers have no control? Are we better served by a fence between the kernel and user space that is as well defined at the project level as it is at the API level? Or, on the other hand, does keeping the KVM tool out of the kernel repository slow its growth and hurt the capability and usability of Linux tooling as a whole? And, importantly, what does the reasoning that leads to an answer to these questions tell us about which other projects should - or should not - find a home in the kernel tree?

These issues arise at a number of levels; some distributors, for example, are increasingly taking control of parts of the system through tightly-controlled in-house projects. Android is an extreme example of this approach, but it can be found in more traditional distributions as well. There are clear advantages to doing things that way, but it is worth asking whether that behavior is good for Linux in the long term and just where the line should be drawn. The fences between our projects may have played an important role in both the successes and failures of Linux; decisions on whether to strengthen them or tear them down need some serious thought.

Comments (17 posted)

Xiph.Org's "Monty" on codecs and patents

By Jake Edge
November 9, 2011

While the talks at the 2011 GStreamer conference mostly focused on the multimedia framework itself—not surprising—there were also some that looked at the wider multimedia ecosystem. One of those was Christopher "Monty" Montgomery's presentation about Xiph.Org and its work to promote free and open source multimedia. Xiph is known for its work on the Ogg container format (and the Vorbis and Theora codecs), but the organization has worked on much more than just those. In addition, Montgomery outlined a new strategy that Xiph is trying out to combat one of the biggest problems in the free multimedia world: codec patents.

[Christopher 'Monty' Montgomery]

Xiph was founded in 1994, originally as a for-profit company (Xiphophorus) that was set up to sell codecs. These days, it is a non-profit that consists of various "loosely grouped" codec projects. All of the members are volunteers, and various FOSS companies pay the salaries of some of the members as donations to Xiph.Org. For example, Red Hat pays Montgomery's salary to allow him to work on Xiph projects. The organization is "like a coffee shop where skilled codec developers hang out", Montgomery said.

Beyond Ogg, Vorbis, and Theora, there are a number of different projects under the Xiph umbrella, Montgomery said. The cdparanoia compact disc ripper program and library was something he wrote as a student that is now part of Xiph. The Icecast streaming media server is another Xiph project, he said, as are various codecs including Speex, FLAC, the new Opus audio codec, and "a whole bunch of codecs that no one remembers".

Xiph does hold "intellectual property", Montgomery said, and that is one of the reasons it exists. Non-profits have an advantage when it comes to patents because the board gets to decide what happens to the patents if the organization goes out of business. That's different from for-profit companies that go bankrupt, he said, because whoever buys the assets gets the patents free of any promises or other entanglements (at least those that aren't legally binding, like licenses). If the original company promised not to assert some patents (e.g. for free software implementations or to implement a standard), a new owner may not be bound by that promise. A non-profit's board can ensure that any patents end up with a like-minded organization, he said.

Codec news

The biggest Xiph news in the recent past is that Google chose Vorbis as the audio codec for WebM. Montgomery said that he is very happy to see Vorbis included into WebM, but is also glad to see that Google is stepping up to help the cause of free codecs. Xiph has been trying to "hold the line on free codecs", mostly by themselves, he said. He is hopeful that Google picking up some of that will allow Xiph to "go back to what we are actually good at", which is codec development.

Xiph will be continuing to do more codec development because the members enjoy doing so, Montgomery said. Revising the Ogg container format is one thing that's on the plate now. That is not something that Xiph wanted to do while Ogg was part of its effort to hold the free codec line. With the advent of WebM, which uses the Matroska container format, some of the "legitimate complaints" about Ogg can now be addressed.

FLAC is now finished, he said. It is stable and mature with good penetration; it is essentially the standard for lossless audio codecs, and one that Apple has been unable to overturn, Montgomery said. He also noted that there were plans for a Theora 1.2 release that never happened, partly because "everyone went to work on VP8 and Opus". He believes that the release will still happen at some point, but that the pressure is off because of the existence of WebM.

Opus is a new audio codec that incorporates pieces from Xiph's CELT codec and Skype's SILK codec. Opus is designed for streaming voice or other audio over the internet, and is the subject of an IETF Internet-draft. As is usual for such documents, Intellectual Property Rights (IPR) disclosures were made by various parties who believed they had IP (e.g. patents) that are required to implement the proposed standard. Qualcomm has filed such a disclosure for Opus, but, unlike the other disclosing organizations, Qualcomm has not offered its patents under a royalty-free license.

Patent strategy

Montgomery was clear that he wasn't singling out Qualcomm in his talk, because what it has done is "business as usual" in the industry, and Qualcomm is "not in any sense alone" in making these kinds of claims. But it has led Xiph to spend almost as much time on patent strategy as it has in writing code recently. Part of the problem is that these IPR disclosures are immediately assumed to be valid by everyone, whether they know something about patents in that space or not. The presumption is that Qualcomm would never have made the claims without doing a great deal of research.

But Montgomery is not convinced that there is much of substance to Qualcomm's claims. The patent game is essentially a protection racket, he said, and those who are trying to do things royalty-free are messing things up for those who want to collect tolls. "The industry is pissed at Google because they won't play the protection racket game", he said. Qualcomm and others just list some patents that look like they could plausibly read on a royalty-free codec, because it doesn't cost them anything.

That leaves Xiph with few options, though. There is the "thermonuclear option" of going to court and getting a declaratory judgement, but there are some major downsides to pursuing that strategy. It will take a lot of time and money to do so and "no one will use it while the litigation is going on". Montgomery's original inclination was to pursue a declaratory judgement, to "bash in some teeth" and "show that Xiph.Org is not to be trifled with". But even if Xiph won, it would only impact those few patents listed by Qualcomm. What is needed is a way to "change 'business as usual'", he said.

Companies "have figured out how to fight 'free'", Montgomery said, by making it illegal. In order to fight back through the courts, there would be an endless series of cases that would have to be won, and each of those wins would not hurt the companies at all. There is a "presumption of credibility" when a patent holder makes a claim of infringement, and the press "plays along with that", he said. But Eben Moglen has pointed out that an accusation of infringement has no legal weight, so there is no real downside to making such a claim.

One way to combat that is to document why the patents don't apply. Basically, Xiph did enough research to show why the Qualcomm patents don't apply to Opus and it is planning to release that information. It is a dangerous strategy at some level because it gives away some of the defense strategy, he said, but Xiph has to try something. By publishing the results of the research, Xiph will be "giving away detailed knowledge of the patents" and may be called to testify if those patents ever do get litigated, but it should counter the belief that the Qualcomm patents cover Opus.

Qualcomm could respond to the research in several different ways. It could ignore it, respond to it, or come back with more patents. It could also formally abandon the claim. If Qualcomm doesn't respond, Montgomery said, that does have some legal weight. One advantage of this approach is that regardless of how Qualcomm responds, Xiph has something concrete (i.e. the research) for the money that it has spent, which is not really the case when taking the declaratory judgement route.

New codecs

Montgomery called Opus a "best in class codec" that Xiph would like to see widely used. Hardware implementations of Opus have been considered, but have not been done yet, he said. Finishing the Opus rollout and "responding to patent claims" have been higher on the list, but they will get to it eventually.

He mentioned two other codecs that Xiph will be working on, including Ghost, which splits audio into two components: strong tones and everything else. Each of the components will be processed separately, much like what the ears do, he said. Both can be represented compactly, but the same transforms don't work on them, so representing them separately may make sense. There was a need to "invent some amount of math for all of this", he said. In addition, Xiph will be working on a new video codec that is being done as part of a "friendly rivalry with On2" (makers of the VP8 codec in WebM).

Montgomery painted a picture of an organization that is doing a great deal to further the cause of free multimedia formats. There are lots of technical and political battles to fight, but Xiph.Org seems to be up to the task. It will be interesting to see how Qualcomm responds to the Opus research, and generally how the codec patent landscape plays out over the next few years. The battle is truly just beginning ...

[ I'd like to thank the Linux Foundation for helping with travel expenses so that I could attend the GStreamer conference. ]

Comments (18 posted)

Authenticating Git pull requests

By Jake Edge
November 9, 2011

One of the outcomes from the kernel.org compromise is the increased use of GPG among kernel developers. GPG keys are now required to get write access to the kernel.org Git repositories, and folks are starting to think about how to use those keys for other things. Authenticating pull requests made by kernel hackers to Linus Torvalds is one possible use. But, as the discussion on the linux-kernel mailing list shows, there are a few different use-cases that might benefit from cryptographic signing.

Most of the code that flows into the kernel these days comes from Git trees that various lieutenants or maintainers manage. During the merge window (and at other times), Torvalds is asked to "pull" changes from these trees via an email from the maintainer. In the past, Torvalds has used some ad hoc heuristics to determine whether to trust that the request (and the tree) are valid, but, these days, stronger assurances are needed. That's where GPG signing commits and tags may be able to help.

Conceptually the idea is simple: the basic information required to do a pull (location and branch of the Git tree along with the commit ID of its head) could be signed by the developer requesting the pull. Torvalds could then use GPG with his keyring of kernel developer public keys to verify that the signature is valid for the person who sent the request. That would ensure that the pull request is valid. It could all be done manually, of course, but it could also be automated by making some changes to Git.

The discussion on how to do that automation started after a signed pull request for libata updates was posted by Jeff Garzik. The entire pull request mail (some 3200+ lines including the diffs and diffstat) was GPG signed, which mangled the diff output as Garzik noted. Beyond that, though, it is unwieldy for Torvalds to check the signature, partly because he uses the Gmail web interface. In order to check it, he has to cut and paste the entire message and feed it to GPG, which is labor-intensive and prone to mangling—white space or other changes—that would lead to a false-negative signature verification. As Torvalds noted: "We need to automate this some sane way, both for the sender and for the recipient."

The initial goal is just to find a way to ensure that Torvalds knows who the pull request is coming from and where to get it, all of which could be handled outside of Git. Rather than signing the entire pull request email, just a small, fixed-format piece of that mail could be signed. In fact, Torvalds posted a patch to git-request-pull to do just that. It still leaves the integrator (either Torvalds or a maintainer who is getting a pull request from another developer) doing a cut-and-paste into GPG for verification, however.
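That fixed-format idea can be sketched in a few lines of Python. The payload layout, the repository URL, and the commit ID below are purely illustrative assumptions, not the actual format produced by Torvalds's git-request-pull patch:

```python
import subprocess

def pull_request_payload(url, branch, head_commit):
    """Build a small, canonical, fixed-format block describing a pull
    request: repository location, branch, and the commit ID at its
    head.  (Illustrative layout, not the real git-request-pull one.)"""
    return (f"repo: {url}\n"
            f"branch: {branch}\n"
            f"head: {head_commit}\n")

def sign_payload(payload):
    """Clearsign only the small payload with the requester's GPG key,
    leaving the diffstat and diffs outside the signed region so they
    cannot be mangled into a false-negative verification.  (Requires a
    working gpg setup, so it is defined but not invoked here.)"""
    return subprocess.run(["gpg", "--clearsign"],
                          input=payload.encode(),
                          capture_output=True, check=True).stdout

# Hypothetical repository and commit ID, for illustration only.
payload = pull_request_payload(
    "git://git.kernel.org/pub/scm/linux/kernel/git/example/libata-dev.git",
    "upstream-linus",
    "b4bc0c7fa3ae160da0f8f80f2b04f4f1e8db85d4")
print(payload)
```

The integrator's side would then feed the clearsigned block to `gpg --verify` against a keyring of kernel-developer public keys, which is exactly the step the proposals aim to automate.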

There are others who have an interest in a permanent trail of signatures that could be audited if the provenance of a particular part of the kernel needs to be traced. That would require storing the signatures inside the Git tree somehow, so that anyone with a copy of Torvalds's tree could see any of the commits that had been signed, either by Torvalds or by some other kernel hacker. But, as Torvalds pointed out, that information is only rarely useful:

Having thought about it, I'm also not convinced I really want to pollute the "git log" output with information that realistically almost nobody cares about. The primary use is just for the person who pulls things to verify it, after that the information is largely stale and almost certain to never be interesting to anybody ever again. It's *theoretically* useful if somebody wants to go back and re-verify, but at the same time that really isn't expected to be the common case.

Torvalds's idea is that the generation of the pull request is the proper time for a developer to sign something, rather than having it tied to a specific commit. His example is that a developer or maintainer may wish to push the tree out for testing (or to linux-next), which requires that it be committed, but then request a pull for that same commit if it passes the tests. Signing before testing has been done is likely to be a waste of time, but signing the commit later requires amending the commit or adding a new empty commit on top, neither of which is very palatable. Git maintainer Junio C Hamano is not convinced that ephemeral signatures (i.e. those that only exist for the pull-request) are the right way to go, though: "But my gut feeling is that 'usually hidden not to disturb normal users, but is cast in stone in the history and cannot be lost' strikes the right balance."

The conversation then turned toward tags, which can already be signed with a GPG key. One of the problems is that creating a separate tag for each commit that gets signed rapidly becomes a logistical nightmare. If you just consider the number of trees that Torvalds pulls in a normal merge window (hundreds), the growth in the number of signed tags becomes unwieldy quickly. If you start considering all of the sub-trees that get pulled into the trees that Torvalds pulls, it becomes a combinatorial explosion of tags.

What's needed is an automated method of creating tag-like entries that live in a different namespace. That's more or less what Hamano proposed by adding a refs/audit hierarchy into the .git directory data structures. The audit objects would act much like tags, but instead carry along information about the signature verification status of the merges that result from pulls. In other words, a git-pull would verify the signature associated with the remote tag (which is often a name like "for-linus" that gets reused over and over) and create an entry in the local audit hierarchy recording that verification. Since the audit objects wouldn't pollute the tag namespace, and would be pulled and created automatically, they would have much less of an impact on users and existing tools. In addition, the audit objects could then be pushed into Torvalds's public tree so that audits could be done.

So far, Hamano has posted a patch set that implements parts of his proposed solution. In particular, it allows for signing commits, verifying the signatures, and for pulling signed tags. Other pieces of the problem are still being worked on.

As is often the case in our communities, adversity results in pretty rapid improvements. For the kernel, the SCO case brought about the Developer's Certificate of Origin, the relicensing of BitKeeper gave us Git, the kernel.org break-in brought about closer scrutiny of security practices, and the adoption of GPG keys because of that break-in will likely lead to even better assurances of the provenance of kernel code. While we certainly don't want to court adversity, we do take advantage of it when it happens.

Comments (12 posted)

Page editor: Jonathan Corbet


A Periodic Table of password managers

November 9, 2011

This article was contributed by Nathan Willis

As was mentioned in the context of the Fedora Project's new password-selection rules, keeping track of the glut of "low-value" passwords that accumulate in daily web usage prompts many users to look into password-management applications. In theory, a password list saved to a file encrypted by a suitably strong algorithm beats a desk covered in sticky-notes or a single, re-used-everywhere password — provided that you remember the password that unlocks the password vault file itself. Not all such utilities are created equal, however, especially when you consider factors like usability and cross-platform compatibility.

Although this tour of password managers is limited just to those with a desktop Linux build, it is important to consider whether or not versions of the application exist for other OSes, so that you can have access to web site passwords when away from home base. These days, after all, the list of non-native OSes includes not just Windows and OS X, but mobile platforms like Android as well. It is also important to distinguish between the classes of secret information you need to store — some applications provide a simple scratchpad on which you can jot any username/password combination in plain text, while others attempt to manage OpenPGP and SSH keys as well, complete with key-signing, key lookup, and other related functionality.

The available options also vary in security-related features. Some provide a mechanism to create and manage multiple "password safes" at once, while others associate just a single safe with the active user account. The encryption algorithms used to lock the password safe are well-known and reliable, but some applications go out of their way to provide additional security through key strengthening techniques, such as hashing the original passphrase through multiple rounds (typically thousands of iterations, known as "key stretching") and/or applying a salt. Those techniques can make attacks against the password using rainbow tables or brute force more difficult or impossible. A few applications also make a point of using locked (with mlock()) memory, which prevents the kernel from swapping pages containing cleartext passwords out to disk where those passwords could be recovered by an attacker.
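The two key-strengthening techniques mentioned above — salting and key stretching — can be sketched with Python's standard-library PBKDF2. The iteration count and key size here are illustrative, not what any particular password manager uses, and a real implementation would also want the mlock()-style memory protection described above, which plain Python cannot provide:

```python
import hashlib
import os

def derive_vault_key(passphrase, salt=None, iterations=100_000):
    """Derive an encryption key from a vault passphrase.

    The random salt defeats precomputed (rainbow-table) attacks, since
    the same passphrase yields a different key for every vault; the
    iteration count ("key stretching") multiplies the cost of every
    brute-force guess by the number of hash rounds."""
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)
    return salt, key

salt, key = derive_vault_key("correct horse battery staple")
print(len(key))  # 32 bytes -- enough key material for AES-256
```

Raising the iteration count is the knob that trades a slightly slower vault unlock for a proportionally more expensive attack, which is why "thousands of iterations" is the typical floor rather than the ceiling.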

The noble desktop-environment natives

GNOME and KDE both provide an "official" GUI application for managing keys and passwords, each of which is a front-end to the environment's built-in key-management service. GNOME's offering is Seahorse, which serves as a front-end to GNOME Keyring, and KDE's is KWalletManager, a front-end for KWallet. Naturally, each inherits core functionality like the vault-encryption algorithm from its respective back-end service.


Seahorse and GNOME Keyring use AES-128 to encrypt the password safe, with a salt and multiple hash iterations applied to the password, and use locked memory. Seahorse separates your managed "secrets" into three tabs: one for passwords, one for your personal OpenPGP and SSH keys, and one for the public keys you have collected for others. You can create multiple "password keyrings" (as Seahorse calls them) while in the password tab, though Seahorse will continue to collect automatically-saved passwords (such as those used by online services integrated with GNOME) in the default password keyring. There is not a facility to export a password keyring to an external file, and Seahorse can only import raw keys (as opposed to encrypted files produced by other applications).

KWallet and KWalletManager use the Blowfish algorithm to encrypt the password safe. The safe's password is put through multiple hash rounds, although I have not found a clear description of either salting or locked memory usage. KWallet's approach to managing your secrets collection is different — whereas GNOME Keyring allows you to create separate "password keyrings" that are distinct from the collection of encryption keys, KWallet allows you to create separate "wallets," each of which can contain several types of credentials (passwords included). It, too, does not include functionality for exporting a password safe to an external file or importing the password safes of other applications.

The Schneier-ides

Security guru Bruce Schneier developed his own password safe application — for Windows only — called simply Password Safe, which currently sits at version 3.26. The Windows-only nature of the project has prompted several independent attempts to duplicate its functionality (with file-format compatibility) on other OSes.


MyPasswordSafe is a Qt-based Password Safe work-alike designed to run on Linux desktops. The last formal release was in 2004; however, the project has migrated to GitHub, and there have been sporadic commits to the code as recently as early 2011. MyPasswordSafe uses Blowfish to encrypt the password safe, but the FAQ makes a point of playing down any other security features (including explicit mention that locked memory is unsupported). On the other hand it does provide a feature to copy passwords to the clipboard, and then automatically clear the clipboard after the password has been pasted. The application supports the creation of multiple safes. Like the original Password Safe, it implements password storage only, but allows you to associate each saved password with a title, username, and text notes.

Password Gorilla is another clone of Schneier's application, which uses Tcl/Tk for its GUI, and is still in active development. It supports Linux, Windows, and Mac OS X, and claims to maintain compatibility with the current 3.2-series of Password Safe, something that might be problematic for the older MyPasswordSafe. Multiple password safes are supported, encrypted by the Twofish algorithm, and protected by key stretching. As is the case with MyPasswordSafe, only password storage is implemented, and using the same schema. Password Gorilla can export a password safe as a plain (unencrypted) text file, and can open safes created in Password Safe or MyPasswordSafe.

There are several projects implementing Password Safe-compatible functions for the major mobile device OSes, some of which are open source. Passwd Safe is an Android application, and pwSafe is an app for iOS. Both support multiple password safes, and are under active development. pwSafe uses Twofish to encrypt the password safes, and salts and stretches the key.

The KeePass series

KeePass is another password manager that originated on Windows. Like Schneier's work, it was open source. However, when the project undertook a rewrite for version 2.0, it switched to Microsoft's .NET application framework, adopted several Windows APIs, and changed its file format. The project has continued to release updates for both the 1.x and 2.x series. Although it is possible to make KeePass 2.x run using the Mono implementation of .NET — with some effort — the rewrite has largely isolated the Windows code base from other platforms.

A friendly (at least, friendly enough to be linked to from the KeePass site) fork of the code called KeePassX has continued development from the 1.x branch, simultaneously supporting Linux, OS X, and Windows. KeePassX sports more flexibility than many of the other password managers; it can use either AES or Twofish to encrypt password safes, and can incorporate other authentication mechanisms, such as the presence of a "key file" in addition to a password. The original KeePass application used protected memory, password salting, and key stretching; KeePassX forum users routinely point those asking questions to the KeePass documentation, which suggests that those features have not faded away, though KeePassX does not make any representations to that effect. For file format compatibility, KeePassX would need to preserve the same password-hashing scheme, of course, but locked memory (particularly on non-Windows OSes) is another story.
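The composite-key idea — requiring a key file in addition to the passphrase — can be sketched as follows. This is a generic illustration of combining authentication factors, not KeePassX's actual on-disk key-derivation format:

```python
import hashlib

def composite_key(passphrase, key_file_bytes=b""):
    """Combine independent authentication factors by hashing each one
    and then hashing the concatenation: an attacker now needs both the
    passphrase and a copy of the key file to reconstruct the vault key.
    (Generic sketch; KeePass's real scheme also stretches the result
    through many additional rounds.)"""
    parts = hashlib.sha256(passphrase).digest()
    if key_file_bytes:
        parts += hashlib.sha256(key_file_bytes).digest()
    return hashlib.sha256(parts).digest()

k = composite_key(b"vault passphrase", b"random-key-file-contents")
print(len(k))  # 32-byte composite key
```

The appeal of the design is that the key file can live on removable media, so even a passphrase captured by a keylogger is useless without physical access to that file.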


Feature-wise, KeePassX supports multiple password safes, and within each safe allows you to create named groups of saved passwords. Two are provided by default with new safes, "Internet" and "Email." Each password entry comes with several associated fields: Title, Username, URL, the password itself, Comments, an optional expiration date, an icon, and optional file attachments. KeePassX can import password safes from most other password managers, including the Schneier Password Safe and its clones, and KWallet's internal XML format. Individuals have posted instructions on the forums for converting other password managers' files. KeePassX can export its safes to plain text or unencrypted XML.

There are also unofficial KeePass "ports" to popular mobile platforms, including Android and iOS. The Android application KeePassDroid is open source, as is one of the iOS apps, iKeePass.

The rest

Several password managers are still available through the major distributions' repositories, even though they are no longer actively developed. Of note are Revelation and Figaro's Password Manager (FPM), both written for GNOME.

Revelation focused on password safes, but could open other encrypted files, including those encrypted with LUKS. It could import password safes from several other applications, including Password Safe, and could export safes to many of the same formats in addition to unencrypted XML. It used AES-256 to encrypt the safe, with the password salted and iteratively hashed. Within each password safe, it supported ten specific "secret" types, each of which had its own combination of database fields: phone, credit card, cryptographic key, shell account, FTP account, email, web site, database, door lock code, and generic. You could create folders within each safe to further group your passwords. Revelation ceased development in 2007.

In addition to the standard password safe feature set, FPM added the ability to launch applications by clicking on an entry in the password list — primarily a web browser, but user-configurable for any executable, on a per-password basis. It also supported copying saved passwords to either the system clipboard or to the X primary selection (so that they could be pasted with a middle-click). FPM protected the password safe with Blowfish, and used locked memory. It supported multiple safes, and could import safes from several other applications of the same age.

Although FPM's last release was in 2003, another developer independently started a fork called FPM2, which is still undergoing active development. The basic feature set is the same, but it adds several enhancements. First, it encrypts the safe with AES-256, and adds key stretching for additional protection. It also allows you to assign a "category" text label to each saved password, and extends the "launcher" concept. FPM2 launchers can be configured to pass other arguments (such as hostname or username) from each saved entry to the launched application. It can also launch a URL in the browser, and at the same time copy the associated username to the clipboard and the password to the primary selection.

Pick your poison

These days, all of the actively-maintained password managers offer rough parity in the security of stored password safes — at least on the Linux desktop. A bigger question is whether the existence of compatible applications for your mobile device is important, since, depending on the device, you may not be able to assess the security risks inherent in that platform. Using a mobile client also supposes that the password safe is retrievable, so it must either be stored in a location accessible from the Internet, or be periodically synchronized between the PC and device.

For a casual user, the built-in password managers supplied by GNOME and KDE are probably sufficient, considering that they are already used to manage OpenPGP, SSH, and other credentials. The Schneier and KeePass families primarily offer better cross-OS support and usability niceties (such as extended data fields for each password entry and import/export for other formats). Whether or not you can make use of those features, of course, depends largely on the number of passwords you are required to juggle and how many machines you need to use.

Comments (45 posted)

Brief items

Security quotes of the week

I keep trying to leave this bug report but I keep getting dragged in. It's worse than Twitter.
-- Dan Rosenberg

They went out of their way to let researchers in, and now they're kicking me out for doing research. I didn't have to report this bug. Some bad guy could have found it instead and developed real malware.
-- Charlie Miller in Forbes after finding an iOS flaw and getting banned from Apple's developer program for reporting it

The RIAA's political strategy in the war on piracy has been alternately to oppose and support government regulation of the Internet, depending on what's expedient. I wonder if rights owners and the trade groups that represent them experience any sense of cognitive dissonance when they advocate against something at one moment and for it a little while later—to the same audience, on the same issue.
-- Annemarie Bridy in the Freedom to Tinker blog

Given a sentence to give password advice on a billboard, I'd instead say:
A really strong password is one that nobody else has ever used.

That's all you need. More complicated advice about password length or using numbers and punctuation just leads to 'Password1!' if its not motivated by finding something unusual enough to be globally unique.

-- Joseph Bonneau comments on Google's password advice billboards

Comments (none posted)

New vulnerabilities

acroread: be afraid

Package(s):acroread CVE #(s):CVE-2011-2424 CVE-2011-2431 CVE-2011-2432 CVE-2011-2433 CVE-2011-2434 CVE-2011-2435 CVE-2011-2436 CVE-2011-2437 CVE-2011-2438 CVE-2011-2439 CVE-2011-2440 CVE-2011-2442
Created:November 8, 2011 Updated:November 21, 2011
Description: The proprietary acroread tool has a whole long list of vulnerabilities leading to code execution when a PDF file has a specially-crafted SWF file embedded within it.
Gentoo 201201-19 acroread 2012-01-30
SUSE SUSE-SU-2011:1239-1 Acrobat Reader 2011-11-15
openSUSE openSUSE-SU-2011:1238-1 acroread 2011-11-15
Red Hat RHSA-2011:1434-01 acroread 2011-11-08
SUSE SUSE-SA:2011:044 acroread 2011-11-16

Comments (1 posted)

ffmpeg: code execution

Package(s):ffmpeg CVE #(s):CVE-2011-3973 CVE-2011-3974 CVE-2011-3504
Created:November 8, 2011 Updated:August 30, 2012
Description: The Chinese AVS video decoder in ffmpeg suffers from multiple memory corruption and application crash errors (CVE-2011-3973/CVE-2011-3974). There is also a vulnerability in the Matroska decoder (CVE-2011-3504) that can enable code execution via a malicious media file.
Gentoo 201310-12 ffmpeg 2013-10-25
Mandriva MDVSA-2012:148 ffmpeg 2012-08-30
Mandriva MDVSA-2012:074-1 ffmpeg 2012-08-30
Mandriva MDVSA-2012:076 ffmpeg 2012-05-15
Mandriva MDVSA-2012:075 ffmpeg 2012-05-15
Mandriva MDVSA-2012:074 ffmpeg 2012-05-14
Ubuntu USN-1333-1 libav 2012-01-17
Ubuntu USN-1320-1 ffmpeg 2012-01-05
Debian DSA-2336-1 ffmpeg 2011-11-07

Comments (none posted)

firefox, seamonkey: cross-site scripting

Package(s):seamonkey firefox CVE #(s):CVE-2011-3648
Created:November 9, 2011 Updated:July 23, 2012
Description: A flaw in firefox's and seamonkey's handling of multibyte character sets can lead to a cross-site scripting vulnerability.
openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
Mageia MGASA-2012-0176 iceape 2012-07-21
Ubuntu USN-1254-1 thunderbird 2011-12-22
openSUSE openSUSE-SU-2011:1290-1 Seamonkey 2011-12-01
Ubuntu USN-1282-1 thunderbird 2011-11-28
Ubuntu USN-1277-2 mozvoikko, ubufox 2011-11-23
Ubuntu USN-1277-1 firefox 2011-11-23
openSUSE openSUSE-SU-2011:1243-1 MozillaFirefox 2011-11-15
openSUSE openSUSE-SU-2011:1242-1 MozillaFirefox 2011-11-15
Debian DSA-2345-1 icedove 2011-11-11
Oracle ELSA-2011-1440 seamonkey 2011-11-09
Oracle ELSA-2011-1438 thunderbird 2011-11-09
Oracle ELSA-2011-1437 firefox 2011-11-09
CentOS CESA-2011:1440 seamonkey 2011-11-09
CentOS CESA-2011:1438 thunderbird 2011-11-09
CentOS CESA-2011:1437 firefox 2011-11-09
Red Hat RHSA-2011:1438-01 thunderbird 2011-11-08
SUSE SUSE-SU-2011:1266-1 MozillaFirefox 2011-11-21
SUSE SUSE-SU-2011:1256-2 mozilla-nss 2011-11-21
SUSE SUSE-SU-2011:1256-1 Mozilla Firefox 2011-11-18
Ubuntu USN-1251-1 firefox, xulrunner-1.9.2 2011-11-10
Oracle ELSA-2011-1439 thunderbird 2011-11-09
Mandriva MDVSA-2011:169 mozilla 2011-11-09
Scientific Linux SL-fire-20111108 firefox 2011-11-08
Scientific Linux SL-seam-20111108 seamonkey 2011-11-08
Debian DSA-2341-1 iceweasel 2011-11-09
Debian DSA-2342-1 iceape 2011-11-09
Red Hat RHSA-2011:1440-01 seamonkey 2011-11-08
Scientific Linux SL-thun-20111108 thunderbird 2011-11-08
Red Hat RHSA-2011:1437-01 firefox 2011-11-08
Red Hat RHSA-2011:1439-01 thunderbird 2011-11-08

Comments (none posted)

firefox, seamonkey: privilege escalation

Package(s):iceape seamonkey firefox CVE #(s):CVE-2011-3647 CVE-2011-3650
Created:November 9, 2011 Updated:July 23, 2012
Description: Firefox's and Seamonkey's addon-handling code contains an unspecified privilege escalation vulnerability (CVE-2011-3647), and JavaScript profiling can lead to memory corruption (CVE-2011-3650).
openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
Mageia MGASA-2012-0176 iceape 2012-07-21
Ubuntu USN-1254-1 thunderbird 2011-12-22
openSUSE openSUSE-SU-2011:1290-1 Seamonkey 2011-12-01
Ubuntu USN-1282-1 thunderbird 2011-11-28
Ubuntu USN-1277-2 mozvoikko, ubufox 2011-11-23
Ubuntu USN-1277-1 firefox 2011-11-23
openSUSE openSUSE-SU-2011:1243-1 MozillaFirefox 2011-11-15
openSUSE openSUSE-SU-2011:1242-1 MozillaFirefox 2011-11-15
Oracle ELSA-2011-1439 thunderbird 2011-11-09
Oracle ELSA-2011-1437 firefox 2011-11-09
CentOS CESA-2011:1437 firefox 2011-11-09
SUSE SUSE-SU-2011:1266-1 MozillaFirefox 2011-11-21
SUSE SUSE-SU-2011:1256-2 mozilla-nss 2011-11-21
SUSE SUSE-SU-2011:1256-1 Mozilla Firefox 2011-11-18
Debian DSA-2345-1 icedove 2011-11-11
Ubuntu USN-1251-1 firefox, xulrunner-1.9.2 2011-11-10
Mandriva MDVSA-2011:169 mozilla 2011-11-09
Scientific Linux SL-fire-20111108 firefox 2011-11-08
Debian DSA-2341-1 iceweasel 2011-11-09
Debian DSA-2342-1 iceape 2011-11-09
Scientific Linux SL-thun-20111108 thunderbird 2011-11-08
Red Hat RHSA-2011:1437-01 firefox 2011-11-08
Red Hat RHSA-2011:1439-01 thunderbird 2011-11-08

Comments (none posted)

icedtea-web: sandboxing failure

Package(s):icedtea-web CVE #(s):CVE-2011-3377
Created:November 9, 2011 Updated:March 14, 2012
Description: A flaw in the same-origin policy implementation in the icedtea-web browser plugin can enable malicious JavaScript code to connect to sites other than the originating host.
openSUSE openSUSE-SU-2012:0371-1 icedtea-web 2012-03-14
Debian DSA-2420-1 openjdk-6 2012-02-28
Ubuntu USN-1263-1 icedtea-web, openjdk-6, openjdk-6b18 2011-11-16
Fedora FEDORA-2011-15691 icedtea-web 2011-11-10
Red Hat RHSA-2011:1441-01 icedtea-web 2011-11-08
openSUSE openSUSE-SU-2011:1251-1 icedtea-web 2011-11-16
Mandriva MDVSA-2011:170 java-1.6.0-openjdk 2011-11-11
Oracle ELSA-2011-1441 icedtea-web 2011-11-09
Fedora FEDORA-2011-15673 icedtea-web 2011-11-10
Scientific Linux SL-iced-20111108 icedtea-web 2011-11-08

Comments (2 posted)

kernel: multiple vulnerabilities

Package(s):kernel CVE #(s):CVE-2011-4081 CVE-2011-4077
Created:November 7, 2011 Updated:December 20, 2011

From the Red Hat bugzilla entries [1, 2]:

CVE-2011-4081: The ghash_update function passes a pointer to gf128mul_4k_lle which will be NULL if ghash_setkey is not called or if the most recent call to ghash_setkey failed to allocate memory. This causes an oops. Fix this up by returning an error code in the null case.

This is trivially triggered from unprivileged userspace through the AF_ALG interface by simply writing to the socket without setting a key.

The ghash_final function has a similar issue, but triggering it requires a memory allocation failure in ghash_setkey _after_ at least one successful call to ghash_update.

CVE-2011-4077: A flaw was found in the way Linux kernel's XFS filesystem implementation handled links with pathname larger than MAXPATHLEN. When CONFIG_XFS_DEBUG configuration option was not enabled when compiling Linux kernel, an attacker able to mount malicious XFS image could use this flaw to crash the system, or potentially, elevate his privileges on that system.

Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2012:1439-1 kernel 2012-11-05
Oracle ELSA-2012-0862 kernel 2012-07-02
openSUSE openSUSE-SU-2012:0799-1 kernel 2012-06-28
SUSE SUSE-SU-2012:0736-1 Linux kernel 2012-06-14
openSUSE openSUSE-SU-2012:0540-1 kernel 2012-04-20
SUSE SUSE-SU-2012:0364-1 Real Time Linux Kernel 2012-03-14
Oracle ELSA-2012-0350 kernel 2012-03-12
Oracle ELSA-2012-2003 kernel-uek 2012-03-12
Scientific Linux SL-kern-20120308 kernel 2012-03-08
Oracle ELSA-2012-0150 kernel 2012-03-07
CentOS CESA-2012:0350 kernel 2012-03-07
Red Hat RHSA-2012:0350-01 kernel 2012-03-06
Red Hat RHSA-2012:0333-01 kernel-rt 2012-02-23
openSUSE openSUSE-SU-2012:0236-1 kernel 2012-02-09
openSUSE openSUSE-SU-2012:0206-1 kernel 2012-02-09
SUSE SUSE-SU-2012:0153-2 Linux kernel 2012-02-06
SUSE SUSE-SU-2012:0153-1 kernel 2012-02-06
Ubuntu USN-1340-1 linux-lts-backport-oneiric 2012-01-23
Debian DSA-2389-1 linux-2.6 2012-01-15
Ubuntu USN-1330-1 linux-ti-omap4 2012-01-13
Oracle ELSA-2012-0007 kernel 2012-01-12
Scientific Linux SL-kern-20120112 kernel 2012-01-12
CentOS CESA-2012:0007 kernel 2012-01-11
Red Hat RHSA-2012:0010-01 kernel-rt 2012-01-10
Red Hat RHSA-2012:0007-01 kernel 2012-01-10
Ubuntu USN-1322-1 linux 2012-01-09
Ubuntu USN-1313-1 linux-lts-backport-oneiric 2011-12-19
Ubuntu USN-1312-1 linux 2011-12-19
Ubuntu USN-1311-1 linux 2011-12-19
Ubuntu USN-1304-1 linux-ti-omap4 2011-12-13
Ubuntu USN-1303-1 linux-mvl-dove 2011-12-13
Ubuntu USN-1302-1 linux-ti-omap4 2011-12-13
Ubuntu USN-1301-1 linux-lts-backport-natty 2011-12-13
Ubuntu USN-1300-1 linux-fsl-imx51 2011-12-13
Ubuntu USN-1299-1 linux-ec2 2011-12-13
Ubuntu USN-1294-1 linux-lts-backport-oneiric 2011-12-08
Ubuntu USN-1293-1 linux 2011-12-08
Ubuntu USN-1292-1 linux-lts-backport-maverick 2011-12-08
Ubuntu USN-1291-1 linux 2011-12-08
Ubuntu USN-1286-1 linux 2011-12-03
Ubuntu USN-1287-1 linux-ti-omap4 2011-12-05
Fedora FEDORA-2011-15856 kernel 2011-11-13
Fedora FEDORA-2011-15241 kernel 2011-11-02

Comments (none posted)

kernel: information disclosure

Package(s):kernel linux CVE #(s):CVE-2011-2494
Created:November 9, 2011 Updated:October 24, 2012
Description: The taskstats interface fails to enforce access restrictions, allowing hostile processes to obtain more information than is called for.
SUSE SUSE-SU-2014:0536-1 Linux kernel 2014-04-16
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
SUSE SUSE-SU-2012:1391-1 Linux kernel 2012-10-24
SUSE SUSE-SU-2012:0554-2 kernel 2012-04-26
SUSE SUSE-SU-2012:0554-1 Linux kernel 2012-04-23
Oracle ELSA-2012-0150 kernel 2012-03-07
SUSE SUSE-SU-2012:0153-2 Linux kernel 2012-02-06
SUSE SUSE-SU-2012:0153-1 kernel 2012-02-06
Red Hat RHSA-2012:0010-01 kernel-rt 2012-01-10
Ubuntu USN-1294-1 linux-lts-backport-oneiric 2011-12-08
Scientific Linux SL-kern-20111129 kernel 2011-11-29
CentOS CESA-2011:1479 kernel 2011-11-30
Oracle ELSA-2011-1479 kernel 2011-11-30
Ubuntu USN-1285-1 linux 2011-11-29
Red Hat RHSA-2011:1479-01 kernel 2011-11-29
Oracle ELSA-2011-1465 kernel 2011-11-28
Oracle ELSA-2011-2033 unbreakable kernel 2011-11-28
Ubuntu USN-1281-1 linux-ti-omap4 2011-11-24
Ubuntu USN-1279-1 linux-lts-backport-natty 2011-11-24
Scientific Linux SL-kern-20111122 kernel 2011-11-22
Red Hat RHSA-2011:1465-01 kernel 2011-11-22
Ubuntu USN-1275-1 linux 2011-11-21
Ubuntu USN-1256-1 linux-lts-backport-natty 2011-11-09
Ubuntu USN-1260-1 linux-ti-omap4 2011-11-14
Ubuntu USN-1253-1 linux 2011-11-08

Comments (none posted)

mahara: multiple vulnerabilities

Package(s):mahara CVE #(s):CVE-2011-2771 CVE-2011-2772 CVE-2011-2773
Created:November 7, 2011 Updated:November 9, 2011

From the Debian advisory:

CVE-2011-2771: Teemu Vesala discovered that missing input sanitising of RSS feeds could lead to cross-site scripting.

CVE-2011-2772: Richard Mansfield discovered that insufficient upload restrictions allowed denial of service.

CVE-2011-2773: Richard Mansfield discovered that the management of institutions was prone to cross-site request forgery.

(no CVE ID available yet): Andrew Nichols discovered a privilege escalation vulnerability in MNet handling.

Debian DSA-2334-1 mahara 2011-11-04

Comments (none posted)

man2html: cross-site scripting

Package(s):man2html CVE #(s):CVE-2011-2770
Created:November 7, 2011 Updated:November 9, 2011

From the Debian advisory:

Tim Starling discovered that the Debian-native CGI wrapper for man2html, a program to convert UNIX man pages to HTML, is not properly escaping user-supplied input when displaying various error messages. A remote attacker can exploit this flaw to conduct cross-site scripting (XSS) attacks.

Debian DSA-2335-1 man2html 2011-11-05

Comments (none posted)

moodle: multiple vulnerabilities

Package(s):moodle CVE #(s):
Created:November 7, 2011 Updated:November 9, 2011

From the Debian advisory:

Several cross-site scripting and information disclosure issues have been fixed in Moodle, a course management system for online learning:

  • MSA-11-0020 Continue links in error messages can lead offsite
  • MSA-11-0024 Recaptcha images were being authenticated from an older server
  • MSA-11-0025 Group names in user upload CSV not escaped
  • MSA-11-0026 Fields in user upload CSV not escaped
  • MSA-11-0031 Forms API constant issue
  • MSA-11-0032 MNET SSL validation issue
  • MSA-11-0036 Messaging refresh vulnerability
  • MSA-11-0037 Course section editing injection vulnerability
  • MSA-11-0038 Database injection protection strengthened
Debian DSA-2338-1 moodle 2011-11-07

Comments (none posted)

nss: insecure pkcs11.txt load path (possible code execution)

Package(s):nss CVE #(s):CVE-2011-3640
Created:November 7, 2011 Updated:January 5, 2012

From the CVE entry:

** DISPUTED ** Untrusted search path vulnerability in Mozilla Network Security Services (NSS), as used in Google Chrome before 17 on Windows and Mac OS X, might allow local users to gain privileges via a Trojan horse pkcs11.txt file in a top-level directory. NOTE: the vendor's response was "Strange behavior, but we're not treating this as a security bug."

Gentoo 201301-01 firefox 2013-01-07
openSUSE openSUSE-SU-2012:0030-1 mozilla-nss 2012-01-05
openSUSE openSUSE-SU-2011:1290-1 Seamonkey 2011-12-01
Mandriva MDVSA-2011:169 mozilla 2011-11-09
openSUSE openSUSE-SU-2011:1241-1 mozilla-nss 2011-11-15
Debian DSA-2339-1 nss 2011-11-07

Comments (none posted)

openswan: denial of service

Package(s):openswan CVE #(s):CVE-2011-4073
Created:November 3, 2011 Updated:September 12, 2013

From the Red Hat advisory:

A use-after-free flaw was found in the way Openswan's pluto IKE daemon used cryptographic helpers. A remote, authenticated attacker could send a specially-crafted IKE packet that would crash the pluto daemon. This issue only affected SMP (symmetric multiprocessing) systems that have the cryptographic helpers enabled. The helpers are disabled by default on Red Hat Enterprise Linux 5, but enabled by default on Red Hat Enterprise Linux 6. (CVE-2011-4073)

Mandriva MDVSA-2013:231 openswan 2013-09-12
Mageia MGASA-2012-0300 openswan 2012-10-20
Gentoo 201203-13 openswan 2012-03-16
Debian DSA-2374-1 openswan 2011-12-26
Fedora FEDORA-2011-15127 openswan 2011-10-29
Fedora FEDORA-2011-15077 openswan 2011-10-29
Fedora FEDORA-2011-15196 openswan 2011-11-01
Oracle ELSA-2011-1422 openswan 2011-11-03
Scientific Linux SL-open-20111102 openswan 2011-11-02
CentOS CESA-2011:1422 openswan 2011-11-03
Red Hat RHSA-2011:1422-01 openswan 2011-11-02

Comments (none posted)

perl: multiple vulnerabilities

Package(s):perl CVE #(s):CVE-2011-3597 CVE-2011-2939
Created:November 3, 2011 Updated:January 29, 2014

From the Red Hat bugzilla entries [1, 2]:

CVE-2011-3597: A flaw was reported in perl Digest module's "Digest->new()" function, which did not properly sanitize input before using it in an eval() call, which could possibly be exploited to inject and execute arbitrary perl code.

CVE-2011-2939: Perl bundles 'Encode' module that contains 'Unicode.xs' file where a heap overflow bug has been fixed recently.

Gentoo 201401-33 digest-base 2014-01-29
Gentoo 201401-11 perl 2014-01-19
Ubuntu USN-1643-1 perl 2012-11-29
Mandriva MDVSA-2012:009 perl 2012-01-18
Mandriva MDVSA-2012:008 perl 2012-01-18
Oracle ELSA-2011-1797 perl 2011-12-08
Scientific Linux SL-perl-20111208 perl 2011-12-08
CentOS CESA-2011:1797 perl 2011-12-09
Red Hat RHSA-2011:1797-01 perl 2011-12-08
openSUSE openSUSE-SU-2011:1278-1 perl 2011-11-24
Oracle ELSA-2011-1424 perl 2011-11-03
Scientific Linux SL-perl-20111103 perl 2011-11-03
Red Hat RHSA-2011:1424-01 perl 2011-11-03
Fedora FEDORA-2011-13874 perl 2011-10-05

Comments (none posted)

xen: code execution

Package(s):xen CVE #(s):CVE-2011-3262
Created:November 7, 2011 Updated:November 9, 2011

From the Debian advisory:

CVE-2011-3262: Local users can cause a denial of service and possibly execute arbitrary code via a crafted paravirtualised guest kernel image.

Gentoo 201309-24 xen 2013-09-27
Debian DSA-2337-1 xen 2011-11-06

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.2-rc1, released on November 7. "Have fun, give it a good testing. There shouldn't be anything hugely scary in there, but there *is* a lot of stuff. The fact that 3.1 dragged out did mean that this ended up being one of the bigger merge windows, but I'm not feeling *too* nervous about it." There's a new code name - Saber-toothed squirrel - to go with it.

Stable updates: two stable updates were released on November 7; both contain a long list of important fixes. Another followed on November 8 to fix some build problems. 2.6.33 users should note that the latest 2.6.33 update is the final planned update for that kernel.

Comments (none posted)

Quotes of the week

Crash test dummy folds.
KVM mafia wins.
Innovation cries.
-- Dan Magenheimer

Seriously, if someone gave me a tools/term/ tool that has rudimentary xterm functionality with tabbing support, written in pure libdri and starting off a basic fbcon console and taking over the full screen, i'd switch to it within about 0.5 nanoseconds and would do most of my daily coding there and would help out with extending it to more apps (starting with a sane mail client perhaps).
-- Ingo Molnar

Comments (1 posted)

Quotas for tmpfs

By Jonathan Corbet
November 9, 2011
The second version of the plumber's wish list for Linux included a request for support for usage quotas on the tmpfs filesystem. Current kernels have no such support, making it easy for local users to execute denial-of-service attacks by filling up /tmp or /dev/shm. Davidlohr Bueso answered that call with a patch providing that support. But it turns out that there is a disagreement over how tmpfs use limits should be managed.

Davidlohr's patch does not actually implement quotas; instead, it adds a new resource limit (RLIMIT_TMPFSQUOTA) controlling how much space a user can occupy on all mounted tmpfs systems. This is the approach requested in the wish list; it has some appeal because tmpfs is not a persistent filesystem. Normal filesystem implementations store quotas on the filesystem itself, but tmpfs cannot do that. So use of quotas would require that user space, in some fashion, reload the quota database on every boot (or, depending on the implementation, for every tmpfs mount). Resource limits look like a simpler situation.
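
The mechanism being extended here is the ordinary per-process resource limit, visible from user space through Python's resource module. Since RLIMIT_TMPFSQUOTA exists only in the proposed patch, this sketch uses the existing RLIMIT_NOFILE as a stand-in to show how a limit is queried and lowered:

```python
import resource

# Every rlimit is a (soft, hard) pair; the proposed RLIMIT_TMPFSQUOTA
# would have capped a user's total tmpfs usage in the same way.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# An unprivileged process may lower its soft limit at will, and raise
# it again only up to the hard limit.
new_soft = 64 if hard == resource.RLIM_INFINITY else min(64, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
assert resource.getrlimit(resource.RLIMIT_NOFILE)[0] == new_soft

# Put the original limit back.
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```

Because limits travel with the process rather than living on the filesystem, nothing needs to be reloaded at boot — the appeal cited in the wish list.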

Even so, there is opposition to the resource limit approach. Developers would rather see tmpfs behave like other filesystems. More to the point, perhaps, users and applications have some clue, some of the time, on how to respond to "quota exceeded" errors. Blown resource limits, instead, are on less solid ground. As Alan Cox pointed out, loading the quotas need not be a big problem; it could be as simple as a mount option setting a default quota for all users.

In the end, it seems unlikely that an implementation based on anything other than disk quotas will be merged, so this patch will need to be reworked.

Comments (5 posted)

Kernel development news

The second half of the 3.2 merge window

By Jonathan Corbet
November 8, 2011
Linus announced the 3.2-rc1 release and closed the merge window on November 7. During the two-week window, some 10,214 non-merge changesets were pulled into the mainline kernel. That is the most active merge window ever, edging past the previous record holder (2.6.30, at 9,603 changesets) by a fair amount. The delay in the start of this development cycle will certainly have caused more work to pile up, but there was also, clearly, just a lot of work going on.

User-visible changes merged since last week's summary include:

  • The device mapper has a new "thin provisioning" capability which, among other things, offers improved snapshot support. This feature is considered experimental in 3.2. See Documentation/device-mapper/thin-provisioning.txt for information on how it works. Also added to the device mapper is a "bufio" module that adds another layer of buffering between the system and a block device; the thin provisioning code is the main user of this feature.

  • There is a new memory-mapped virtio device intended to allow virtualized guests to use virtio-based block and network devices in the absence of PCI support.

  • It is now possible for a process to use poll() on files under /proc/sys; the result is the ability to get a notification when a specific sysctl parameter changes.

  • The btrfs filesystem now records a number of previous tree roots which can be useful in recovering damaged filesystems; see this article for more information. Btrfs has also gained improved readahead support.

  • The I/O-less dirty throttling patch set has been merged; that should improve writeback performance for a number of workloads.

  • New drivers include:

    • Processors and systems: Freescale P3060 QDS boards and non-virtualized PowerPC systems.

    • Block: M-Systems Disk-On-Chip G3 MTD controllers.

    • Media: MaxLinear MXL111SF DVB-T demodulators, Abilis AS102 DVB receivers, and Samsung S5K6AAFX sensors.

    • Miscellaneous: Intel Sandybridge integrated memory controllers, Intel Medfield MSIC (audio/battery/GPIO...) controllers, IDT Tsi721 PCI Express SRIO (RapidIO) controllers, GPIO-based pulse-per-second clients, and STE hardware semaphores.

    • Graduations: the Conexant cx25821 V4L2 driver has moved from staging into the mainline.
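
Among the changes above, the poll() support for /proc/sys can be tried directly from user space. This sketch (using Python's select module, Linux-only) watches /proc/sys/kernel/hostname, one of the entries wired up for notification; entries without such support simply never report the event:

```python
import os
import select

def watch_sysctl(path: str, timeout_ms: int):
    """Poll a /proc/sys entry for a change notification (Linux-only).

    The kernel reports POLLERR|POLLPRI on entries wired up for
    notification when the value changes relative to the last read,
    so the file must be read once to arm the event.
    """
    with open(path, "rb") as f:
        f.read()                                # arm the notification
        p = select.poll()
        p.register(f.fileno(), select.POLLPRI)  # POLLERR cannot be masked out anyway
        # [] on timeout; [(fd, POLLERR | POLLPRI)] after a change
        return p.poll(timeout_ms)

if os.path.exists("/proc/sys/kernel/hostname"):
    # Zero timeout: returns [] unless the hostname changed between
    # the read and the poll.
    print(watch_sysctl("/proc/sys/kernel/hostname", timeout_ms=0))
```

A daemon would call this with a long timeout and re-read the file when an event arrives, instead of re-reading the sysctl on a timer.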

Changes visible to kernel developers include:

  • The new GENHD_FL_NO_PART_SCAN device flag suppresses the normal partition scan when a new block device is added to the system.

  • The venerable block layer function __make_request() has been renamed to blk_queue_bio() and exported to modules.

  • The TAINT_OOT_MODULE taint flag is now set when out-of-tree modules are inserted into the kernel. Naturally, the module itself tells the kernel about its provenance, so this mechanism can be circumvented, but anybody trying to do that would certainly be caught and publicly shamed sooner or later.

  • A few macros (EXPORT_SYMBOL_* and THIS_MODULE) have been split out of <linux/module.h> and placed in <linux/export.h>. Code that only needs to export symbols can now use the latter include file; the result is a reduction in kernel compile time.

Despite the size of this development cycle, a number of trees ended up not being pulled. Linus explicitly avoided those that were controversial (FrontSwap and the KVM tool, for example); others seem to have simply been passed over. Some may slip in for -rc2, but, for the most part, the time has come to stabilize all of this code. If the usual pattern holds, the 3.2 release can be expected sometime around mid-January.

Comments (3 posted)

Better device power management for 3.2

By Jonathan Corbet
November 8, 2011
The Linux kernel has long had the ability to regulate the CPU's voltage and frequency for optimal behavior, where "optimal" is a function of both performance and power consumption. But a system is more than just a CPU, and there are many other components which are able to run at multiple performance levels. It is unsurprising that a proper infrastructure for managing device operating points has lagged that for the CPU, since the amount of power to be saved is usually smaller. But now that CPU power behavior is fairly well optimized, the power infrastructure is growing to encompass the rest of the system. The 3.2 kernel will have a new set of APIs intended to allow drivers to let the system find the best operating level for the devices they manage.

There are three separate pieces to the dynamic voltage and frequency scaling (DVFS) API, the first of which was actually merged for the 2.6.37 release. The "operating power points" module simply tracks the various operating levels available to a given device; the API is declared in <linux/opp.h>. Briefly, operating points are managed with:

    int opp_add(struct device *dev, unsigned long freq, unsigned long u_volt);
    int opp_enable(struct device *dev, unsigned long freq);
    int opp_disable(struct device *dev, unsigned long freq);

Operating points are enabled by default; a driver may disable specific points to reflect temperature or performance concerns. There is a set of functions for retrieving operating points above or below a given frequency, useful for moving up or down the power/performance scale.

A driver wanting to support DVFS on a specific device would start by filling in one of these structures (declared, along with the rest of the API, in <linux/devfreq.h>):

    struct devfreq_dev_profile {
	unsigned long initial_freq;
	unsigned int polling_ms;

	int (*target)(struct device *dev, unsigned long *freq);
	int (*get_dev_status)(struct device *dev,
			      struct devfreq_dev_status *stat);
	void (*exit)(struct device *dev);
    };

Here initial_freq is, unsurprisingly, the original operating frequency of the device. Almost everything else in this structure is there to help frequency governors do their jobs. If polling_ms is non-zero, it tells the governor how often to poll the device to get its usage information; that polling will take the form of a call to get_dev_status(). That function should fill the stat structure with the relevant information:

    struct devfreq_dev_status {
	/* both since the last measure */
	unsigned long total_time;
	unsigned long busy_time;
	unsigned long current_frequency;
	void *private_data;
    };

The governor will use this information to decide whether the current operating frequency should be changed or not. Should a change be needed, the target() callback will be called to change the operating point accordingly. This function should pick a frequency at least as high as the passed in *freq, then update *freq to reflect the actual frequency chosen. The exit() callback gives the driver a chance to clean things up if the DVFS layer decides to forget about the device.

Once the devfreq_dev_profile structure is filled in, the driver registers it with:

    struct devfreq *devfreq_add_device(struct device *dev,
				       struct devfreq_dev_profile *profile,
				       const struct devfreq_governor *governor,
				       void *data);

If need be, a driver can supply its own governor to manage frequencies, but the kernel supplies a few of its own: devfreq_powersave (keeps the frequency as low as possible), devfreq_performance (keeps the frequency as high as possible), devfreq_userspace (allows control of the frequency through sysfs), and devfreq_simple_ondemand (tries to strike a balance between performance and power consumption).

The notifier mechanism built into the operating power points code can be used to automatically invoke the governor should the set of available power points change. There are a number of ways in which that change could come about; one of those is a change in expectations regarding how quickly the device can respond. For this case, 3.2 also gained an enhancement to the quality-of-service (pm_qos) code to handle per-device QOS requirements. Kernel code can express its QOS expectations for a device using these functions (all from <linux/pm_qos.h>):

    int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
			       s32 value);
    int dev_pm_qos_update_request(struct dev_pm_qos_request *req, s32 new_value);
    int dev_pm_qos_remove_request(struct dev_pm_qos_request *req);

The dev_pm_qos_request structure is used as a handle for managing requests, but calling code does not need to access its internals. The passed value describes the desired quality of service; the documentation is surprisingly vague on just what the units of value are. It would appear to describe the desired latency, but the desired precision is unclear.

On the driver side, the notifier interface is used:

    int dev_pm_qos_add_notifier(struct device *dev,
				struct notifier_block *notifier);
    int dev_pm_qos_remove_notifier(struct device *dev,
				   struct notifier_block *notifier);

When a device's quality-of-service requirements are changed, the notifier will be called with the new value. The driver can then adjust the available operating performance points, disabling any that would render the device unable to meet the specified QOS requirement.

It is worth noting that none of the new code has any in-tree users as of this writing. That suggests that the interface might be more than usually volatile; once developers try to make use of this facility, they are likely to find things that can be improved. But, then, internal interfaces are always subject to change; regardless of any evolution here, the underlying capability should prove useful.

Comments (2 posted)

Fast interprocess communication revisited

November 9, 2011

This article was contributed by Neil Brown

Slightly over a year ago, LWN reported on a couple of different kernel patches aimed at providing fast, or at least faster, interprocess communication (IPC): Cross Memory Attach (CMA) and kernel-dbus (kdbus). In one of the related email threads on the linux-kernel list, a third (pre-existing) kernel patch called KNEM was discussed. Meanwhile, yet another kernel module - "binder", used by the Android platform - is in use in millions of devices worldwide to provide fast IPC, and Linus recently observed that code that actually is used is the code that is actually worth something, so maybe more of the Android code should be merged despite objections from some corners. Binder wasn't explicitly mentioned in that discussion but could reasonably be assumed to be included.

This article is not about whether any of these should be merged or not. That is largely an engineering and political decision in which this author claims no expertise, and in any case one of them - CMA - has already been merged. Rather we start with the observation that this many attempts to solve essentially the same problem suggests that something is lacking in Linux. There is, in other words, a real need for fast IPC that Linux doesn't address. The current approaches to filling this gap seem to be piecemeal attempts: Each patchset is clearly addressing the needs of a specific IPC model without obvious consideration for others. While this may solve current problems, it may not solve future problems, and one of the strengths of the design of Unix and hence Linux is the full exploitation of a few key ideas rather than the ad hoc accumulation of many distinct (though related) ideas.

So, motivated by that observation we will explore these various implementations to try to discover and describe the commonality they share and to highlight the key design decisions each one makes. Hopefully this will lead to a greater understanding of both the problem space and the solution space. Such understanding may be our best weapon against chaos in the kernel.

What's your address?

One of the interesting differences between the different IPC schemes is their mechanism for specifying the destination for a message.

CMA uses a process ID (PID) combined with offsets in the address space of that process - a message is simply copied to that location. This has the advantage of being very simple and efficient. PIDs are already managed by the kernel, and piggy-backing on that facility is certainly attractive. The obvious disadvantage is that there is no room for any sophistication in access control, so messages can only be sent to processes with exactly the same credentials. This will not suit every context, but it is not a problem for the target area (the MPI message passing interface), which is aimed at massively parallel implementations in which all the processes are working together on one task. In that case having uniform credentials is an obvious choice.

KNEM uses a "cookie" which is a byte string provided by the kernel and which can be copied between processes. One process registers a region of memory with KNEM and receives a cookie in return. It can then pass this cookie to other processes as a simple byte string; the recipients can then copy to or from the registered region using that cookie as an address. Here again there is an assumption that the processes are co-operating and not a threat to each other (KNEM is also used for MPI). KNEM does not actually check process credentials directly, so any process that registers a region with KNEM is effectively allowing any other process that is able to use KNEM (i.e. able to open a specific character device file) to freely access that memory.

Kdbus follows the model of D-Bus and uses simple strings to direct messages. It monitors all D-Bus traffic to find out which endpoints own which names; when it sees a message sent to a particular name, it routes the message accordingly rather than letting it go through the D-Bus daemon for routing.

Binder takes a very different approach from the other three. Rather than using names that appear the same to all processes, binder uses a kernel-internal object for which different processes see different object descriptors: small integers much like file descriptors. Each object is owned by a particular process (which can create new objects quite cheaply) and a message sent to an object is routed to the owning process. As each process is likely to have a different descriptor (or none at all) for the one object, descriptors cannot be passed as byte strings. However they can be passed along with binder messages much like file descriptors can be passed using Unix-domain sockets.

The main reason for using descriptors rather than names appears to involve reference counting. Binder is designed to work in an object-oriented system which (unsurprisingly) involves passing messages to objects, where the messages can contain references to other objects. This is exactly the pattern seen in the kernel module. Any such system needs some way of determining when an object is no longer referenced, the typical approaches being garbage collection and reference counting. Garbage collection across multiple different processes is unlikely to be practical, so reference counting is the natural choice. As binder allows communication between mutually suspicious processes, there needs to be some degree of enforcement: a process should not be able to send a message when it doesn't own a reference to the target, and when a process dies, all its references should be released. To ensure these rules are met it is hard to come up with any scheme much simpler than the one used by binder.

Possibly the most interesting observation here is that two addressing schemes used widely in Linux are completely missing in these implementations: file descriptors and socket addresses (struct sockaddr).

File descriptors are used for pipes (the original UNIX IPC), for socket pairs and other connected sockets, for talking to devices, and much more. It is not hard to imagine them being used by CMA, and by binder too. They are appealing as they can be used with simple read() and write() calls and similar standard interfaces. The most likely reason that they are regularly avoided is their cost - they are not exactly lightweight. On an x86_64 system a struct file - the minimum needed for each file descriptor - is 288 bytes. Of these, maybe 64 are relevant to many novel use cases; the rest is dead weight. This weight could possibly be reduced by a more object-oriented approach to struct file, but such a change would be very intrusive and is unlikely to happen. So finding other approaches is likely to become common. We see that already in the inotify subsystem, which has "watch descriptors"; we see it here in binder too.

The avoidance of socket addresses does not seem to admit such a neat answer. In the cases of CMA, kdbus, and binder it doesn't seem to fit the need for various different reasons. For KNEM it seems best explained as an arbitrary choice. The developer chose to write a new character device rather than a new networking domain (aka address family) and so used ioctl() and ad hoc addresses rather than sendmsg()/recvmsg() and socket addresses.

The conclusion here seems to be that there is a constant tension between protection and performance. Every step we take to control what one process can do to another by building meaning into an address adds extra setup cost and management cost. Possibly the practical approach is not to try to choose between them but to unify them and allow each client to choose. So a client could register itself with an external address that any other process can use if it knows it, or with an internal address (like the binder objects) which can only be used by a process that has explicitly been given it. Further, a registered address may only accept explicit messages, or may be bound to a memory region that other processes can read and write directly. If such addresses and messages could be used interchangeably in the one domain it might allow a lot more flexibility for innovation.

Publish and subscribe

One area where kdbus stands out from the rest is in support for a publish/subscribe interface. Each of the higher-level IPC services (MPI, Binder, D-Bus) has some sort of multicast or broadcast facility, but only kdbus tries to bring it into the kernel. This could simply reflect the fact that multicast does not need to be optimized and can be adequately handled in user space. Alternatively, it could mean that implementing it in the kernel is so hard that few people try.

There are two ways we can think about implementing a publish/subscribe mechanism. The first follows the example of IP multicast where a certain class of addresses is defined to be multicast addresses and sockets can request to receive multicasts to selected addresses. Binder does actually have a very limited form of this. Any binder client can ask to be notified when a particular object dies; when a client closes its handle on the binder (e.g. when it exits) all the objects it owns die and messages are accordingly published for all clients who have subscribed to that object. It would be tempting to turn this into a more general publish/subscribe scheme.

The second way to implement publish/subscribe is through a mechanism like the Berkeley packet filter that the networking layer provides. This allows a socket to request to receive all messages, but the filter removes some of them based on content following an almost arbitrary program (which can now be JIT compiled). This is more in line with the approach that kdbus uses. D-Bus allows clients to present "match" rules such that they receive all messages with content that matches the rules. kdbus extracts those rules by monitoring D-Bus traffic and uses them to perform multicast routing in the kernel.

Alban Crequy, the author of kdbus, appears to have been exploring both of these approaches. It would be well worth considering this effort in any new fast-IPC mechanism introduced into Linux to ensure it meets all use cases well.

Single copy

A recurring goal in many efforts at improving communication speed is to reduce the number of times that message data is copied in transit. "Zero-copy" is sometimes seen as the holy grail and, while it is usually impractical to reach that, single-copy can be attained; three of our four examples do achieve it. The fourth, kdbus, doesn't really try to achieve single-copy. The standard D-Bus mechanism is four copies - sender to kernel to daemon to kernel to receiver. Kdbus reduces this to two copies (and more particularly reduces context switches to one), which is quite an improvement. The others all aim for single-copy operation.

CMA and KNEM achieve single-copy performance by providing a system call which simply copies from one address space to the other with various restrictions as we have already seen. This is simple, but not secure in a hostile environment. Binder is, again, quite different. With binder, part of the address space of each process is managed by the binder module through the process calling mmap() on the binder file descriptor. Binder then allocates pages and places them in the address space as required.

This mapped memory is read-only to the process; all writing is performed by the kernel. When a message is sent from one process to another, the kernel allocates some space in the destination process's mapped area, copies the message directly from the sending process, and then queues a short message to the receiving process telling it where the received message is. The recipient can then access that message directly and will ultimately tell the binder module that it is finished with the message and that the memory can be reused.

While this approach may seem a little complex - having the kernel effectively provide a malloc() implementation (best fit as it happens) for the receiving process - it has the particular benefit that it requires no synchronization between the sender and the recipient. The copy happens immediately for the sender and it can then move on assuming it is complete. The receiver doesn't need to know anything about the message until it is all there ready and waiting (much better to have the message waiting than the processes waiting).

This asynchronous behavior is common to all the single-copy mechanisms, which makes one wonder if using Linux's AIO (Asynchronous Input/Output) subsystem might provide another possible approach. The sender could submit an asynchronous write, the recipient an asynchronous read, and when the second of the two arrives the copy is performed and each is notified. One unfortunate, though probably minor, issue with this approach is that, while Linux AIO can submit multiple read and write requests in a single system call and can receive multiple completion notifications in another system call, it cannot do both in one. This contrasts with binder, which has a WRITE_READ ioctl() command that sends messages and then waits for the reply, allowing an entire transaction to happen in a single system call. As we have seen with the addition of recvmmsg() and, more recently, sendmmsg(), doing multiple things in a single system call has real advantages. As Dave Miller once observed:

The old adage about syscalls being cheap no longer holds when we're talking about traversing all the way into the protocol stack socket code every call, taking the socket lock every time, etc.

Tracking transactions

All of the high-level APIs for IPC make a distinction between requests and replies, connecting them in some way to form a single transaction. Most of the in-kernel support for messaging doesn't preserve this distinction with any real clarity. Messages are just messages and it is up to user space to determine how they are interpreted. The binder module is again an exception; understanding why helps expose an important aspect of the binder approach.

Though the code and the API do not present it exactly like this, the easiest way to think about the transaction tracking in binder is to imagine that each message has a "transaction ID" label. A request and its reply will have the same label. Further, if the recipient of the message finds that it needs to make another IPC before it can generate a final reply, it will use the same label on this intermediate IPC, and will obviously expect it on the intermediate reply.

With this labeling in place, Binder allows (and in fact requires) a thread which has sent a message, and which is waiting for a reply to that message, to receive only further messages with the same transaction ID. This rule allows a thread to respond to recursive calls, thus allowing that thread's own original request to progress, but causes it to ignore any new calls until the current one is complete. If a process is multithreaded, each thread can work on independent transactions separately, but a single thread is tied to one complex transaction at a time.

Apart from possibly simplifying the user-space programming model, this allows the transaction as a whole to have a single CPU scheduling priority inherited from the originating process. Binder presents a model that there is just one thread of control involved in a method call, but that thread may wander from one address space to another to carry out different parts of the task. This migration of process priority allows that model to be more fully honored.

While many of the things that binder does are "a bit different", this is probably the most unusual. Having the same open file descriptor behave differently in different threads is not what most of us would expect. Yet it seems to be a very effective way to implement an apparently useful feature. Whether this feature is truly generally useful, and whether or not there is a more idiomatic way to provide it in Linux, are difficult questions. However, they are questions that need to be addressed if we want the best possible high-speed IPC in our kernel of choice.

Inter-Programmer Communication

There is certainly no shortage of interesting problems to solve in the Linux kernel, and equally no shortage of people with innovative and creative solutions. Here we have seen four quite different approaches to one particular problem and how each brings value of one sort or another. However each could probably be improved by incorporating ideas and approaches from one of the others, or by addressing needs that others present.

My hope is that by exposing and contrasting the different solutions and the problems they address, we can take a step closer to finding unifying solutions that address both today's needs and the needs for our grandchildren.

Comments (16 posted)

Patches and updates

Page editor: Jonathan Corbet


Two flavors of GNOME for Linux Mint 12

November 9, 2011

This article was contributed by Joe 'Zonker' Brockmeier.

Earlier this year, Linux Mint seemed to have two choices: Stay close to Ubuntu and take on the Unity desktop, or move to GNOME 3.0. Rather than choose between two immature desktops, Mint chose to stand pat on GNOME 2.32. This time around, Mint is taking a different approach, taming GNOME 3.2 for Mint users and planning to offer a legacy version of GNOME as well.

Mint project lead Clement Lefebvre has been fairly quiet about desktop plans for releases after Mint 11. On Friday November 4th, Lefebvre finally took the wraps off the plans for Linux Mint 12. Despite the Ubuntu-based heritage of Mint, it looks like the team is sticking with GNOME over Unity.


Lefebvre said that Mint would like to keep GNOME 2.32 "a little longer," but "we need to look forward and embrace new technologies." He said that GNOME 3.x is "a fantastic desktop" that's getting better with each release. Eventually, Lefebvre said, "we'll be able to do much more with it than was possible with the traditional desktop." Eventually, but not today.

In the meantime, Lefebvre's plan is to ship GNOME 3.2 and MATE, which is a continuation of GNOME 2.32 that is currently packaged for Arch Linux. One problem, though, is that MATE has naming conflicts with GNOME 3.x; Lefebvre said the Mint team is "working hard in collaboration with the MATE developers to identify and fix these conflicts so that we can have both Gnome 3 and MATE installed by default on the DVD edition of Linux Mint 12." Unfortunately, there's precious little information online about MATE, but you can find Debian-ized packages for MATE on GitHub. In the comments to the post about Mint 12, Lefebvre also directs interested parties to the #MATE channel on Freenode.

So, if all goes well, users will have a familiar GNOME 2.32-ish desktop to use. More adventurous users, though, can opt for Mint's take on GNOME 3.2. This GNOME is not what you'd see with Fedora 16 or openSUSE 12.1, though. Lefebvre said that they've put together a "desktop layer" that hammers GNOME 3.2 into a traditional desktop if users want that:

We've been using application menus, window lists and other traditional desktop features for as far as I can remember. It looked different in KDE, Xfce, or even Windows and Mac OS, but it was similar. Gnome 3 is changing all that and is developing a better way for us to interact with our computer. From our point of view here at Linux Mint, we're not sure they're right, and we're not sure they're wrong either. What we're sure of, is that if people aren't given the choice they will be frustrated and our vision of an Operating System is that your computer should work for you and make you feel comfortable. So with this in mind, Gnome 3 in Linux Mint 12 needs to let you interact with your computer in two different ways: the traditional way, and the new way, and it's up to you to decide which way you want to use.

For this, we developed "MGSE" (Mint Gnome Shell Extensions), which is a desktop layer on top of Gnome 3 that makes it possible for you to use Gnome 3 in a traditional way. You can disable all components within MGSE to get a pure Gnome 3 experience, or you can enable all of them to get a Gnome 3 desktop that is similar to what you've been using before. Of course you can also pick and only enable the components you like to design your own desktop.

Unfortunately, there's not a lot that Mint can do about GNOME 3's other major drawback: its requirement for 3D acceleration. Lefebvre said that Mint 12 will allow running GNOME 3.2 in VirtualBox if video acceleration is enabled, but otherwise users are stuck with the fallback mode. Users can also choose the MATE desktop.

So, in the end, users should have three options with Mint's main release: GNOME 3.2, GNOME 3.2 with Mint's extensions, or MATE as a GNOME 2.x replacement.

Ubuntu users' pain is Mint's gain

According to Lefebvre, Mint saw a "40% increase in a single month" and he claims that Mint is quickly catching up with Ubuntu for the top spot in the Linux desktop market and fourth overall for desktop operating systems.

There's little doubt that Mint saw a big jump in users following the Fedora 15 and Ubuntu 11.04 releases, as Linux users around the world made their unhappiness with GNOME 3.0 and Unity widely known. For those who consider DistroWatch's rankings accurate, Linux Mint currently holds the top spot (in the six-month ranking), a position from which Ubuntu had not been displaced in years. Unfortunately, like Ubuntu, Mint doesn't actually publish hard numbers. Occasionally Canonical cites a figure (most recently 20 million), but doesn't provide anything the public can verify. (Unlike Fedora and openSUSE, which provide statistics based on the number of unique IPs that connect to their update servers.) So Mint appears to be doing well lately, but how well we don't really know.

With so many users, though, Mint may want to do a bit more to publicize its security fixes and explain its security policy. Trying to find a coherent policy about security updates on the Linux Mint website is an exercise in futility. In addition, Mint doesn't have mailing lists, so no security list exists. Since many of Mint's packages are taken directly from Ubuntu, and use Ubuntu's repositories, users will get security updates when Ubuntu's users do for those packages. But for Mint-specific packages, it's unclear what the policies are.

Search revenue

Another interesting development with this release is Lefebvre's announcement that Mint will be trying to go beyond user donations and extract revenue out of searches.

Mint has always shipped an add-on that "enhances" search results given by Google in Firefox. With any luck that's going away, since the default pages produced by Mint were, shall we say, less than optimal. However, Mint may be limiting user choice when it comes to search engines out of the box. Lefebvre said:

Our goal is to give users a good search experience while funding ourselves by receiving a share of [search] income. Search engines who do not share the income generated by our users, are removed from Linux Mint and might get their ads blocked.

Exactly how Mint will be blocking ads is not explained — and Lefebvre hasn't yet responded to our questions about the plans to block ads in Mint 12 — or whether this might influence the browsers shipped with Mint 12. A preview of Mint's partnerships with browser vendors might be found in the updated Opera 11.52 package for Linux Mint, which seems to be aimed at demonstrating the size of Mint's user base.

It's not entirely surprising that Mint is looking to go beyond what users contribute directly. Lefebvre writes that "we're in a difficult situation financially" because the project is only generating income via donors. Despite having "millions" of users, the September stats show the project raising about $5,600 from 316 donors.


If the timeline put forward by Lefebvre holds, then Mint should ship its first RC for Mint 12 by November 11th and a final release around November 20th. Lefebvre said that the GNOME 3.x stuff is "fully ready and fully functional" with just a few minor bugs. The MATE packages may need more work, though, and negotiations with browser vendors may mean some search engines are not included in the RC.

As a project that was caught between the GNOME Shell and Unity conflicts, Mint seems to have not only weathered the desktop turbulence but emerged better for it. By catering to what the existing audience wants, Mint has grown its user base considerably. Whether the project can now turn that into a reliable source of revenue and continue that growth is another question entirely.

Comments (7 posted)

Brief items

Distribution quotes of the week

The magic happening in Android, and I hate to admit but iOS too, is they've gone back to the bazaar model where anyone can share any app they like. Sure, most of it is crap. In fact they probably have an app for crap. Part of it is driven by developer greed, which is counter to what Debian stands for, but most of it is just hackers enjoying their new found freedom to share. Sure, the base is solid, and carefully crafted and built at Google. You can't just write any old crud and expect it to ship installed on every phone by default. You need the default code base to "just work". However, anyone is enabled to share whatever crap they like as an app in the market. That freedom to share is missing in Debian.
-- Bill Cox (Thanks to Paul Wise.)

Significant accommodations were made by Banshee upstream in order to make life easier for Canonical to integrate Banshee into their OS. For one thing, that's why the Ubuntu One Music Store support is a core Banshee feature, not part of the third-party community extensions package. If Banshee was being considered for replacement due to unresolved technical issues, then perhaps it would have been polite to, I don't know, inform upstream that it was on the cards? Or, if Canonical felt that problems specific to their own itches required scratching, then is it completely beyond the realm of possibility to imagine they might have spent developer resources on bug fixing their OS and sending those fixes upstream? Or even - and call me crazy - providing access for upstream to specialized hardware such as a $174 Pandaboard to empower upstream to isolate and fix unreproducible bugs specific to Canonical's target hardware?
-- Jo Shields is unhappy about Banshee possibly being removed as an Ubuntu 12.04 default application

As such, while Ubuntu has always shipped a huge archive of available software, today the visibility on that software and the gems inside is better than ever. I think it would be a disservice for us to obsess too much on what is included on the default installation when there is a wealth of content available in the Ubuntu Software Center. Default apps are important (particularly for those in non-networked environments), but let's not forget about the wider commons that in only a click away and all the value it offers.
-- Jono Bacon

Doing btrfs development makes sense, but inflicting it by default on users who really have no need for it isn't quite the same discussion. For performance it's not showing any signs of being better than ext3/4 - in fact on some media its massively underperforming them currently. The funky feature set really isn't relevant to most users while their data still being available most definitely *is*.
-- Alan Cox

I admire and respect the fact that you can make free software do exactly what you want - that's precisely what I set out to support in founding Ubuntu. What I did not set out to found was a project which pandered to the needs of a few, at the cost to the many. Especially when the few can perfectly well help themselves, and the many cannot.
-- Mark Shuttleworth

Comments (none posted)

Distribution News


Fedora 16 released

The Fedora 16 release is now available. There's a lot of new stuff in this release, of course, including GNOME 3.2, KDE 4.7, a new document manager, GRUB2, and more; see the feature list for details. The Fedora project has also announced that the Fedora 14 release will be unsupported as of December 8.

Comments (none posted)

Rawhide gets GNOME Shell for all display types

One of the big complaints about GNOME Shell is that it requires 3D acceleration to function. The Fedora Rawhide distribution is about to get an update, though, that removes that requirement, enabling GNOME Shell to work on all displays, including those on virtualized systems. This work should find its way into the Fedora 17 release due sometime around April.

While people are happy to see this change, there is concern about whether it will bring about an end to support for fallback mode. In that thread, Adam Williamson rather confirmed those fears: "But based on what they've said in the past, I expect that once most hardware that previously needed the fallback mode is covered, fallback mode will die. AIUI, fallback mode isn't meant to be a GNOME 2-by-stealth for Shell refuseniks, it's purely an attempt to accommodate hardware which doesn't support Shell." That is not quite the message that "refuseniks" have been given in the past; expect complaints.

Full Story (comments: 203)

Ask Fedora launches

Ask Fedora is a new forum site run by the Fedora project. "The goal of Ask Fedora is to be the best place for community support in Fedora and integrate tightly with the rest of the Fedora infrastructure." The project is looking for both users to post (and answer) questions and developers to help make the site better.

Full Story (comments: none)


openSUSE 12.1 RC2

The final openSUSE 12.1 release candidate is now available. "All of development is now frozen except for the most urgent bugfixes. If you find any new, grave problems, please report it as soon as possible!"

Full Story (comments: none)

Other distributions

A Linux Mint 12 preview

The Linux Mint Blog looks forward to the Linux Mint 12 release. "Going forward, we won't be using a custom search engine anymore. Linux Mint is the 4th most popular desktop OS in the World, with millions of users, and possibly outgrowing Ubuntu this year. The revenue Mint users generate when they see and click on ads within search engines is quite significant. So far this revenue's entirely gone towards search engines and browsers. Our goal is to give users a good search experience while funding ourselves by receiving a share of this income. Search engines who do not share the income generated by our users, are removed from Linux Mint and might get their ads blocked."

Comments (28 posted)

Newsletters and articles of interest

Distribution newsletters from the last week

Comments (none posted)

Page editor: Jake Edge


A preview of GIMP 2.8

November 9, 2011

This article was contributed by Nathan Willis

The venerable GIMP image editor is nearing its next release, version 2.8, but as a decidedly "release-when-ready" project, there is no pre-determined drop date to circle on one's calendar. Judging by builds from the unstable 2.7 branch, however, the next release will have goodies to share with several different types of GIMP user: photo editors, web designers, high-end professionals, and casual users alike.

At the moment, the newest code is version 2.7.3, although the project's Git changelog suggests that there will be at least one more release (2.7.4) between now and a stable 2.8.0. Official development releases are made as source tarballs only, but Linux binaries are available through various third-party sites, such as Matt Walker's personal package archive (PPA). The 2.7 series does introduce some API changes, so exercise caution when testing it out. It will "migrate" settings from ~/.gimp-2.6/ to ~/.gimp-2.7/, but third-party plug-ins in particular are not likely to work without modification. If in doubt, make backups.

Headline features

Each new release of a GUI application is expected to include at least a few immediately-usable improvements, either in the form of UI changes, or new tools and functionality. GIMP 2.8 introduces four new "big toys" in this category.

[Cage tool]

The first is the Cage tool, an entirely new image transformation tool that originated in a Google Summer of Code (GSoC) 2010 project. The cage tool allows you to draw arbitrary polygonal outlines around part of an image, then twist and distort the image within by manipulating the corners of the outline. The effect is something like making a marionette move — with the major difference being that you can stretch and distort the marionette in addition to twisting and turning it. Drawing the right cage is critical to being able to manipulate the image the way you want; fortunately you can adjust the cage on-canvas even after you start distorting it. The tool is also a bit like the "Liquify" effect in Photoshop, in the sense that it allows smooth, "hands-on" manipulation of the image, in a manner that is far more interactive (and thus more intuitive) than traditional distort and transform tools.

The second feature is support for layer groups. As the name suggests, layer grouping allows you to nest image layers into sets, which you can then make adjustments to collectively. Although you cannot paint simultaneously into multiple layers in a layer group, you can move, transform, and hide groups, as well as change their opacity, blending mode, and other settings. GIMP 2.8 also introduces the ability to lock the pixels and alpha channels of individual layers (with "Lock" toggles on the layer dialog box) to prevent accidental changes; this feature can also be used to lock the contents of entire layer groups.

[Text editing]

On-canvas text editing is the third new feature, and it is one that has been in development since GSoC 2008. The new text editor pops up a miniature editing toolbar that hovers on the canvas over the cursor. From there, you can change font, text size, text color, and text decorations at will, just as you would in any text editor. This editing toolbar allows you to select any text in the layer and change it, so you can incorporate multiple colors and font settings in a single text layer — something that required multiple text layers (and fine layer-alignment skills to boot) in previous releases.

The final new "bullet point" feature is an optional single window editing mode, which you can toggle in and out of from the "Windows" menu. In single window mode, the tool palette and dialog dock (which usually holds the layers dialog) snap onto the sides of the image window. When you open multiple images at once, they are placed into tabs across the top of the window, with a thumbnail for each image.

[Single window]

For a long time there has been a highly-vocal subset of people who swore that GIMP was downright unusable without a single window mode. I have difficulty processing this criticism, because I have never seen anyone articulate something that works better (or is easier to do) in single window mode, or more difficult to do in floating-palette mode. It strikes me as an essentially personal preference; nevertheless, now that single window mode is available, I suppose it will make quite a few people happy. But it remains an option only; for those who find the floating palettes easier to use, the new mode will have no effect.

The fun stuff you might overlook

A second group of new features consists of those changes that either introduce new functionality or provide better usability, but are not big enough to make most lists of "flashy new improvements."

For example, there are several small UI changes that provide incremental improvements. One is an on-canvas progress meter (using a circular, stopwatch-like animation), which replaces the flat, bar-shaped progress meter at the bottom edge of the image window. It appears closer to the mouse cursor, so it is a better visual cue that GIMP is working on completing a task. There are also composition guides (such as rule-of-thirds grid lines) overlaid when you use a transformation tool; these simply help you eyeball the transformations that you want to make.

[Spladers]

But the most interesting is a new widget for setting tool options. It combines a "spinbox," a text label, and a slider into a single unit (several examples are shown at left). You can adjust its value with finesse using the spinbox +/- buttons, edit it precisely with the keyboard, or drag and slide it with the cursor (which is intended to be a particular boon for tablet users). There does not appear to be a name for the new widget (I am secretly hoping for "splader"), but in the code it is implemented as a GimpSpinScale.

Working with brush dynamics (the manner in which brush tools respond to pressure-sensitive tablet usage) is also improved in the unstable GIMP builds. The GUI has been completely redesigned (a GSoC 2009 project), you can edit custom response curves for each metric, and you can change the rotation and aspect ratio of any brush. There is still not quite as much control over brush behaviors in GIMP as there is in a painting-centric application like Krita or MyPaint, but this set of changes offers a noticeable improvement of benefit to tablet users.

Two other changes are also reminiscent of the direction taken in recent Krita builds. First, you can save (and name) your customized brush dynamics presets, and switch between them quickly in the tool palette. There is a more general tool preset system as well, which allows you to save the current settings of any tool (including the foreground and background color selections) to a dock-able "Tool Presets" dialog, and access it in the future with one click. That aids in reproducing tricky settings, particularly where destructive editing is concerned and extended trial-and-error would consume too much time.

Second, a close cousin to the tool preset and brush dynamic preset functionality is the ability to assign keyword tags to resources, and to filter through them by tag. "Resources" in this sense refers to image editing components like colors, gradients, textures, or brushes. GIMP can share resources with a large number of applications (both free and proprietary), and among designers collecting them is a hobby. It is all too easy to amass a large, unwieldy collection if you work with GIMP all the time, so tagging allows you to whittle down the excess. Finally, you can export color palettes used in GIMP into a variety of forms suitable for consumption in other applications — notably CSS stylesheets, PHP or Python dictionaries, and Java color maps.

The truly under-the-hood work

A final category of improvements includes those non-interactive features and utilities that benefit you even if you do not notice them during the typical editing session.

This includes support for new file formats — GIMP 2.8 can now import and export to the OpenRaster interchange format, can import JPEG2000 files, and can export images directly to PDF. It also includes cleanup work like adding new texture brushes and clearing out old and accidentally-duplicated brushes, adding IPv6 support to the networking code that GIMP and its plug-ins use to retrieve objects from the web, and adding support for right-to-left languages in the interface. The project has also moved to GPLv3+ and LGPLv3+ (the latter for the library components of the code, naturally), a decision that may not immediately affect users, but is noteworthy for developers.

I also appreciate a few of the minor enhancements simply because they are useful to me personally. For example, GIMP can now print crop marks automatically when printing an image — the checkbox is found in the "Image Settings" tab of the Print dialog. I can see a number of projects where that will prove valuable. Another example is the ability to assign custom function mappings to arbitrary buttons on your input devices. GIMP has allowed this for graphics tablets before, but in 2.8 you can do the same thing for the spare buttons on your mouse, trackball, or jog/shuttle controller. I have been experimenting with mapping Undo and Redo to the extra buttons on my trackball; I am not yet sure if they are here to stay, or if horizontal scrolling would be more useful.

Several of the new features mentioned above are only possible because of the ongoing work to port GIMP internals to new toolkits. For example, the Cage tool is implemented entirely in GEGL operations (the next-generation image processing core being incrementally merged into the application), and the GIMP team has dictated that all new tools be written for GEGL from the ground up, a decision that will affect some work in the next development cycle. But the Cage tool also makes use of GIMP's port to Cairo as the rendering engine, replacing older GDK bits. As a result, the on-canvas controls are smoother-looking and just-translucent enough to let you see some of the pixels underneath them.

The GEGL and Cairo porting work continues, and will not end with the release of 2.8. This release adds layer scaling, layer modes, floating selections, and projection (which is GIMP terminology for compositing layers onto the canvas view) to the list of subsystems ported to GEGL. There is also new selection code, new save/export code, and new APIs for plug-in writers. Documentation (particularly for the plug-in authors) is still forthcoming, although there is activity in the Git repository.

The GIMPs of the future

Speaking of ongoing work, the last major update to the stable GIMP was 2.6.0, back in 2008. The project has made it clear in recent months that it wants to shift to a faster update cycle, and to develop new features on feature-branches to be merged back in once complete — changes which are no simple task given the size of the code base and the small development team. The project's pace has always been a popular target for detractors, but it seems to have staked out a definitive roadmap that covers the completion of long-standing major tasks (such as rewriting the internals to use GEGL) and ongoing feature development.

According to Alexandre Prokoudine at Libre Graphics World, the plan is still to release 2.8 by the end of 2011, a decision that forced the team to delay one or two key features that had previously been slated for the 2.8 release. The next major milestone is now expected to be 2.10, which will integrate the last of the still-unadopted GSoC work: a Warp tool, Seamless Clone tool, a new widget for changing image and layer sizes, functional masks for layer groups, and the porting of all image filters to GEGL. It is not clear whether that milestone also includes the GSoC work to add a GPU back-end to GEGL via the OpenCL framework.

The plan then calls for version 3.0, which will be a "port" of 2.10's functionality to GTK+ 3, and will finalize the transition of internal functions to GEGL buffers. That release will mark the debut of the most-requested feature in recent years, support for editing images in 16-bit-per-channel — and higher — bit depths. The roadmap does extend beyond 3.0, and includes other major enhancements like non-destructive editing, which was previewed as far back as Libre Graphics Meeting 2010, and marks the start of a new development direction — requiring new interface conventions and file formats at the very least.

There are no dates associated with any of the future milestones, but in practice they would not be too useful more than one release cycle out anyway. GIMP is developed without financial underwriting from a major distribution or other open source software company, a fact that its critics tend to overlook when lamenting floating tool palettes or other pain points. Nevertheless, it advances year after year, and the 2.8 release cycle holds a great deal of new functionality for end users. Hopefully it will not be another three years before 2.10; accelerating the development cycle would probably help draw in new users, plug-in writers, and perhaps even core developers. In the meantime, however, there is enough new in the next stable release to keep most people busy for a while.

Comments (8 posted)

Brief items

Quotes of the week

Thanks to the contribution of Michael Bauer, a volunteer who took the long-time-abandoned Scottish Gaelic translation and produced a complete UI localization in just a few months, LibreOffice 3.4.4 adds yet another native-language version, bringing the total to 105. This shows the unparalleled value of copyleft licenses for end user software, as LibreOffice is now the most-important office suite when it comes to protecting cultural heritage worldwide, especially when the number of native speakers is not sufficiently attractive for large corporations to devote localization resources to.
-- Andras Timar

PEP: 405
Title: Python 2.8 Release Schedule


This document describes the development and release schedule for Python 2.8.

Release Schedule

The current schedule is:

- 2.8 final Never

Official pronouncement

There will never be an official Python 2.8 release.

Upgrade path

The official upgrade path from Python 2.7 is to Python 3.

-- Barry Warsaw

Comments (none posted)

Transactional memory for GCC 4.7.0

Following a last-minute request, the "transactional memory" GCC branch has been merged into the trunk for the 4.7.0 release. Transactional memory is specified in a draft standard [PDF] for C and C++; the idea is to provide a relatively simple way for developers to execute code with atomic "all or nothing" semantics.

A transaction looks something like this:

    __transaction {
	/* Stuff done here is atomic */
    }

Anything done within the __transaction block will either be visible to other threads in its entirety or not at all. Most exits from a transaction (return, goto, or break, for example) will cause it to close normally. There is a __transaction_cancel statement that can abort (and roll back) a transaction.

There are some constraints, naturally. If changes to a specific variable are protected by a transaction in one place, all accesses to that variable must be done within transactions. Transactions can only be rolled back if they consist of exclusively "safe" statements; performing I/O is an obvious way to lose transactional semantics. Exception handling gets more complicated. All this leads to a certain amount of complexity, with developers needing to mark functions as being "safe," add Java-like declarations of exceptions that a function may raise, and so on.

Details on the specific implementation are scarce; it appears that, in the current patch set, transactions will be implemented using a global lock. GCC developers debated for a bit over whether this code was ready for merging or not. In the end, though, the possibility of being the first to support an interesting new feature seemed to win out. Current plans are to release 4.7.0 sometime around next April.

Comments (8 posted)

Firefox 8 released

The announcement of a new Firefox release on the Mozilla blog does not mention a version number anywhere, but the actual release calls itself "Firefox 8." The headline feature appears (sadly) to be a Twitter search option. Beyond that, this release disables add-ons by third-party programs and adds a "load tabs on demand" preference option along with improved performance and a number of security fixes; see the release notes for details.

Comments (87 posted)

recutils 1.4 released

Recutils is "a set of tools and libraries to access human-editable, text-based databases called recfiles. The data is stored as a sequence of records, each record containing an arbitrary number of named fields." New features in the just-announced 1.4 release include support for encryption and sorting, an improved manual, and more.
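The format itself is about as simple as a database format can be: "Field: value" lines, with records separated by blank lines. A minimal recfile might look like the following sketch (the record type and field names here are invented for illustration):

```
# A hypothetical contacts database in recfile format.
%rec: Contact

Name: Ada Lovelace
Email: ada@example.org
Phone: 555-0100

Name: Charles Babbage
Email: charles@example.org
```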

Full Story (comments: none)

Spyder v2.1 released

Spyder is an interactive development environment for the Python language; see this page for a list of features and screen shots. The 2.1 release is out; key improvements include a lot of performance work, PySide support, a new profiler plugin, and more. Details can be found in the changelog.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Trinity Project keeping 3.5 alive (KDE.News)

KDE.News looks at the 3.5.13 release from the Trinity Desktop Project. "For people who prefer the KDE 3.5-style desktop, a new version of the Trinity Desktop Environment (TDE) has been released. Trinity is a continuation of the KDE 3.5 Desktop Environment with ongoing updates and new features." The release has a new control panel for monitors, a new compositor, and a number of new applications.

Comments (37 posted)

Why GNOME refugees love Xfce (Register)

The Register has a review of XFCE. "Perhaps more important to GNOME 3 refugees, Xfce isn't planning to try 'revolutionising' the desktop experience. Development is historically very slow — the recently released Xfce 4.8 was two years in the making — and the Xfce project tends to pride itself on the lack of new features in each release. The focus is generally improving existing features, polishing rough edges and fixing bugs rather than trying to out whiz-bang the competitors."

Comments (214 posted)

Page editor: Jonathan Corbet


Articles of interest

The sods must be crazy: OLPC to drop tablets from helicopters to isolated villages (ars technica)

Ars technica is reporting on an odd plan for how to distribute the One Laptop Per Child's XO-3 touchscreen tablet: "'We'll take tablets and drop them out of helicopters into villages that have no electricity and school, then go [back] a year later and see if the kids can read,' [OLPC founder Nicholas] Negroponte told The Register. He reportedly cited Professor Sugata Mitra's Hole in the Wall experiment as the basis for his belief that dropping the tablets will encourage self-directed literacy. [...] Among the major challenges that the OLPC project was never able to fully overcome during its laptop days were supporting the hardware in the field and providing teachers with the proper training and educational material. In light of the cost and difficulty of tackling those issues, it’s not hard to see why the eccentric stealth drop approach looks appealing to Negroponte."

Comments (66 posted)

New Books

The Art of Readable Code

O'Reilly Media has announced the publication of The Art of Readable Code by Dustin Boswell and Trevor Foucher. "After reading this book you'll be able to look at your own code and realize why it might be unreadable. More importantly, you'll be armed with a lot of principles and techniques that will let you comb through your code to make it better."

Full Story (comments: 3)

The Tangled Web

The Tangled Web is a new book from No Starch Press, written by Michal Zalewski. "Michal Zalewski, one of the world's top security experts and author of Google's Browser Security Handbook, explains how browsers work and why they're fundamentally insecure. Rather than simply list known vulnerabilities, Zalewski examines the entire browser security model, revealing weak points and providing crucial information for shoring up web application security."

Full Story (comments: none)

Event Reports

Videos from the 2011 Embedded Linux Conference Europe

The folks at Free Electrons have released videos from the Embedded Linux Conference Europe that was held in Prague, October 26-28. All of the videos are in WebM format, in two different sizes, and have been posted much more quickly than has happened in the past, thanks, no doubt, to lots of hard work from Free Electrons. "Below, you'll find 51 videos, in both a 1920×1080 HD format and a reduced 800×450 format. In total, it represents 28 GB of video, for a duration of 2214 minutes, that is more of 36 hours of video. We hope that you will enjoy those videos and that these will be useful to those who couldn't attend the conference."

Comments (18 posted)

Upcoming Events

Paul Fenwick to keynote at LCA

The organizers have announced that the event's third keynote will be given by Paul Fenwick, who will be talking about "known bugs and exploits - in your brain." "Humans - as a species, we suck! The only real evolutionary advantage we have is our brains, and by using them we've become the dominant species on the planet. Our brains are superbly adapted for our survival and success in the environment in which they evolved - the African savanna 200,000 years ago. Our brains are not-at-all suited for modern life, and are plagued by a raft of bugs and unwanted features that we've been unable to remove."

Full Story (comments: none)

Events: November 17, 2011 to January 16, 2012

The following event listing is taken from the Calendar.

November 14-17: SC11, Seattle, WA, USA
November 14-18: Open Source Developers Conference 2011, Canberra, Australia
November 17-18: LinuxCon Brazil 2011, São Paulo, Brazil
November 18: LLVM Developers' Meeting, San Jose, CA, USA
November 18-20: Foswiki Camp and General Assembly, Geneva, Switzerland
November 19-20: MediaWiki India Hackathon 2011 - Mumbai, Mumbai, India
November 20-22: Open Source India Days 2011, Bangalore, India
November 24: verinice.XP, Berlin, Germany
November 28: Automotive Linux Summit 2011, Yokohama, Japan
December 2-4: Debian Hildesheim Bug Squashing Party, Hildesheim, Germany
December 2-4: Open Hard- and Software Workshop, Munich, Germany
December 4-9: LISA ’11: 25th Large Installation System Administration Conference, Boston, MA, USA
December 4-7: 2011, Mumbai, India
December 27-30: 28th Chaos Communication Congress, Berlin, Germany
January 12-13: Open Source World Conference 2012, Granada, Spain
January 13-15: Fedora User and Developer Conference, North America, Blacksburg, VA, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds