Weekly Edition for April 22, 2010

Some notes from the Collaboration Summit

By Jonathan Corbet
April 19, 2010
Your editor has just returned from the Linux Foundation's annual Collaboration Summit, held in San Francisco. LFCS is a unique event; despite becoming more developer-heavy over the years, it still pulls together an interesting combination of people from the wider Linux ecosystem. The following article is not intended to be a comprehensive report from the summit; it is, instead, a look at a few of the more interesting thoughts that came from there.


As has seemingly become traditional, your editor moderated a panel of kernel developers (James Bottomley, Christoph Hellwig, Greg Kroah-Hartman, and Andrew Morton, this year). We discussed a wide range of topics, but the subject that caught the attention of the wider world was the "graybeards" question. As a result, we've been treated to a number of lengthy discussions on just why the kernel is no longer attracting energetic young developers.

The only problem is: the kernel doesn't really have that problem. Every three-month development cycle involves well over 1000 developers, a substantial portion of whom are first-time contributors. Nearly 20% of these contributors are working on their own time, because they want to. There does not appear to be a problem attracting developers to the kernel project; indeed, if anything, the problem could be the reverse: the kernel is such an attractive project that it gets an order of magnitude more developers than just about anything else. Your editor has heard it said in the past that Linux as a whole might be better off if some of those developers were to devote a bit more time to user-space projects, most of which would be delighted to have them.

The question the panel discussed, instead, had to do with the graying of the ranks of the kernel's primary subsystem maintainers. Many of those developers wandered into kernel development in the 1990's, when there were opportunities everywhere. A decade or more later, those developers are still there; James Bottomley suggested they may stay in place until they start dying off. Entrenched graybeards at high levels could, someday, start to act as a brake on kernel development as a whole. That does not appear to be a problem now; it's just something to watch out for in the future.

Andrew Morton raised an interesting related issue: as the core developers gain experience and confidence, they are happy to put code of increasing complexity into the kernel. That could indeed make it harder for new developers to come up to speed; it might also not bode well for the long-term maintainability of the kernel in general.

Companies and Linux 1

Dan Frye's keynote reflecting on IBM's 10+ years of experience with Linux was easily one of the best of the day. IBM's experience has certainly not been 100% smooth sailing; there were a lot of mistakes made along the way. As Dan put it, it is relatively easy for a company to form a community around itself, but it's much harder - and more valuable - to join an established community under somebody else's control.

A number of lessons learned were offered, starting with an encouragement to get projects out into the community early and to avoid closed-door communications. IBM discovered the hard way that dumping large blocks of completed code into the kernel community was not going to be successful. The community must be involved earlier than that. To help in that direction, IBM prohibited the use of internal communications for many projects, forcing developers to have their discussions in public forums.

Beyond that, companies need to demonstrate an ongoing commitment to the community - no drive-by submissions. Developers need to remain immersed in the community in order to build the expertise and respect needed to get things done. Companies, Dan says, should manage their developers, but they should not attempt to manage the maintainers those developers are working with. In general, influence in the project should be earned through expertise, skills, and a history of engagement - and not through attempts to control things directly.

One interesting final point is that what matters is results, not whose code gets merged. To that end, IBM has reworked things internally to reward developers who have managed to push things forward, even if somebody else's code gets merged in the end. This is an important attitude which should become more widely adopted; it certainly leads to a more team-oriented and productive kernel community in the long term.

Companies and Linux 2

Chris DiBona's talk on Google and the community was always going to be a bit of a hard sell; Greg Kroah-Hartman's discussion of Android and the community had happened just two days before. Additionally, Chris was scheduled last in the day, immediately after Josh Berkus gave a high-energy version of his how to destroy your community talk. So perhaps Chris cannot be blamed for his decision to dump his own slides and, instead, give a talk to Josh's slides - running backward.

Chris did eventually move on to his real talk. There was discussion of Google's contributions to the community, including 915 projects which have been released so far and something like 200 people creating patches at any given time. There are over 300,000 projects on now. Google has also supported the community by providing infrastructure to sites like and, of course, the Summer of Code program.

In short: Chris doesn't think that Google has much to apologize for; indeed, the contrary is true. This extends to the Android situation, which, Chris says, he's not really unhappy with. The targeted users for Android are different, and, in any case, it always takes a long time to get a real community going. That said, more of the Android code should indeed get into the mainline, and Google should be doing a better job. Part of the problem, it seems, is finding "masochists" who are willing to do the work of trying to upstream the code.

The truth of the matter is that Chris's talk failed to satisfy a lot of people; much of it came off as "Google is doing good stuff, we know best, we're successful, and we're going to continue doing things the same way." Starting with his playing with Josh's slides, Chris gave the impression that he wasn't taking the community's concerns entirely seriously.

On the other hand, the announcement at the end of his talk that Google was giving a Nexus One phone to every attendee almost certainly served to mute the criticism somewhat.

Outside of the sessions, your editor had the opportunity to talk with some of Google's Android kernel developers; the folks actually doing the work have a bit of a different take on things. They are working flat-out to create some very nice Linux-based products, and they are being successful at it, but they are in the bind that is familiar to so many embedded systems developers: the product cycles are so short and the deadlines are so tight that there just isn't time to spend on getting code upstream. That said, they are trying and intend to try harder. We are starting to see some results; for example, the NVIDIA Tegra architecture code has just been posted for review and merging.

The Android developers seem to feel that they have been singled out for a level of criticism which few other embedded vendors - including those demonstrating much worse community behavior - have to deal with. It can be a dismaying and demotivating thing. Your editor would like to suggest that, at this point, the Android developers have heard and understood the message that the community has tried to communicate to them. There is a good chance that things will start to get better in the near future. Perhaps it's time to back off a little bit and focus on helping them to get their code merged when they get a chance to submit it.

In summary

As promised, this article has barely scratched the surface of what happened at the 2010 Collaboration Summit. In particular, the large presence of MeeGo (which had a separate, day-long session dedicated to it) has been passed over, though some of that will be made up for in a separate article. In general, it was a good collection of people who do not always get a chance to mingle in the same hallway, all backed by the Linux Foundation's seamless "it all just works" organization. Improved collaboration within our community should indeed be the result of this event.


MeeGo: open development and upstream involvement

By Jake Edge
April 21, 2010

The Linux Foundation's Collaboration Summit (LFCS) is focused on, well, collaboration, so it is no surprise that a recent high-profile collaborative effort in the Linux world, MeeGo, had a strong presence at the conference. Both sides of the merger of Moblin and Maemo, Intel and Nokia, had representatives giving keynote speeches about MeeGo and how it intends to interact with the community. Since the project is hosted by the foundation, it makes sense that it would devote a good portion of LFCS—a day-long MeeGo track in addition to the keynotes—to the mobile distribution. The focus of both speakers was on developing MeeGo in the open and working closely with upstream projects, rather than targeting MeeGo itself.

Ari Jaaksi, Nokia's VP for Maemo devices and MeeGo operations, spoke first, which he saw as an advantage because Intel's Imad Sousou would be sure to correct anything he said "wrong". The goal of the MeeGo project is to "provide industry with an open platform" for various kinds of devices. Both companies have been working on mobile distributions, which means that they "integrate the same components multiple times", and that is "stupid", Jaaksi said. That is one of the main ideas behind the merger.

Nokia has been working with Linux and free software for a number of years, since 2002 or 2003, and it has been a "learning exercise", Jaaksi said. The company "made a lot of mistakes" but tried to work within the community by participating with many different projects. Its early realization that it needed to be part of the community, and not "just use the code", was important.

But the integration process, where the various components that made up Maemo were built and collected into a release, was not open to community involvement. That is something that will change for MeeGo: "We are going to build the MeeGo platform in the open". It is a "huge change" that is going on "right now, as I speak". The idea of doing that "may seem trivial" to LFCS attendees, he said, "but it is a big deal with us".

Sousou, who is the director of Intel's Open Source Technology Center, echoed that idea. Working in the open will make collaboration easier, but "you will see the messes, and we are OK with that". One of the keys to making that work will be to focus on the upstream projects, he said. It took Intel "some time to figure it out", but downstream projects must "contribute and use the open source model".

There are "hundreds" of Intel engineers working on MeeGo, Sousou said, but most of the work is not actually in MeeGo itself. "It's happening upstream", at,, and others. He doesn't want to see kernel patches, for example, submitted to MeeGo, "submit it upstream". It's all about "working with upstream and contributing upstream — there is nothing more".

Both speakers talked about governing MeeGo in the open, with steering committee meetings on IRC. Jaaksi notes that there is still some adjustment that Nokia needs to make. He gets email from other employees about seeing MeeGo roadmap plans on the Internet; they are worried about competitors getting that kind of formerly secret information. He tells them: "Yes, that's how we do things".

Jaaksi notes that Palm had gotten products out earlier than Nokia, "with our code", and that was "not their fault, [but] our fault" by being too slow to market. Google has also used Maemo code, and "we hope to use theirs". A concern is that MeeGo will give competitors an advantage, but he believes that it is the companies which participate in the project that will see the most benefit. That concern may not be relevant for most of the people in attendance, he said, but within Nokia, there is a question on how to differentiate itself.

Sousou listed oFono and ConnMan as two projects where the two companies had already worked together. For MeeGo, they complement each other well, Jaaksi said. Nokia brings experience working with mobile handsets, ARM, and the phone companies, while Intel has "so much knowledge about the Linux kernel". Both have good teams that "know how to work in open source and combine open source with business". Choosing the "best of breed" components from Moblin and Maemo—or elsewhere—for MeeGo is something that both speakers stressed—there is a general sense that the project is trying to avoid "turf wars".

But it's not just Intel and Nokia, as the MeeGo project is looking for more contributors, which is another thing that both speakers emphasized. Because of their close working relationship, it was relatively straightforward to merge Maemo and Moblin into one project. They didn't bring in other companies at the start, because, in Jaaksi's opinion, "it would have taken too much time" to get agreement with more companies. He said that MeeGo wants to "demonstrate that it is an open source project" that others can participate in. He listed multiple companies that have become involved since the MeeGo announcement, including hardware vendors, Linux distributors, embedded Linux vendors, and so on.

The two main participants have decided on a blueprint of the architecture, which includes Qt as a platform for application developers, but the design of the system is an "ongoing process that we invite people to participate in", Jaaksi said. "Now is the time to join MeeGo", he said, and there is much to be done, but there is little risk for others because they have made a commitment to do things in the open. Both stressed that there is a simple, open governance model.

But, as an audience member pointed out, there is a veto power that Intel and Nokia have over the project. The audience member wondered if a community can still be built around that veto. Jaaksi responded that things "will be fixed if we need to fix them" and that changes will be made for anything that becomes an obstacle. Currently, MeeGo is focused on getting more people involved and having "simple governance". Earlier in his talk, he said that the governance needed to stay out of the way to maintain the speed of development.

There are 200-300 participants in the IRC meetings, Sousou said, and anyone can get involved. "If you contribute, you can help make decisions", Jaaksi said, but MeeGo will "make some mistakes going forward".

The veto power for Nokia and Intel was one of the things that LFCS participants grumbled about in the "hallway track". There is concern that community input will be ignored. One area that is particularly sensitive is the choice of RPM as the mandatory package format for MeeGo-branded devices. Debian/Ubuntu-oriented developers felt slighted by that requirement, and there seems to be no room to change that decision, which gave rise to concerns about governance. Assuming it wants only one package format, there is no "good" choice for the project, as either plausible choice would irritate part of the community.

Beyond that, attendees seemed interested, some even excited, by the prospects of MeeGo. Some consolidation in the mobile Linux arena is to be welcomed. In the end, though, it will be the MeeGo devices that will largely decide its fate. While Moblin and Maemo are available, neither has gained the widespread device availability that Android is starting to enjoy. Most participants seemed to be taking a "wait and see" approach, with the sense that many will be watching developments fairly closely.


Linux and branding

April 19, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

Marketing isn't the first word that one associates with the Linux community, but it is a necessary activity for those who wish to bring new users into the fold (and perhaps make a buck at the same time). The recent Ubuntu rebrand, and the subsequent media attention surrounding it, provides an opportunity to consider the larger question of branding for Linux distributions and open source projects.

The brand of a project includes its name, theming, artwork, fonts, logos, and even audio cues. Community projects, especially those with a primary sponsor, have a unique set of challenges and considerations when it comes to branding. Companies promoting proprietary projects can pour millions into building cohesive brands, and have the ability to dictate branding decisions from the top down. No one at Microsoft need worry about whether branding decisions will affect the community distributions of Microsoft Office, because there aren't any. Nor do the decision makers in Redmond need to worry about offending a community art team when imposing brand changes, or worry about a lack of talent when it comes time to develop and implement branding.

Not so for community distributions and open projects. The tradeoff for strong community involvement is a diverse set of stakeholders that will want to weigh in on branding, but come with considerably less money and talent on tap to develop branding.

Corporate and community branding

Canonical has created a somewhat confusing situation with the Ubuntu brand, because it is the corporate and community brand simultaneously. The company has been criticized for using its community brand, specifically the Ubuntu trademark, for proprietary offerings as well as its community distribution. The shared brand has also been a source of problems in the past, when the community team didn't produce work that lived up to Mark Shuttleworth's standards.

This time around, Canonical brought members of different sub-projects together to work on the rebrand effort for 10.04. This hasn't satisfied everyone, and the decision to re-arrange the window buttons has generated quite a lot of grumbling. Overall, though, Canonical's handling of the rebrand seems to have gone well with its community.

While Canonical pursues a unified brand for Ubuntu, other companies attempt an approach of using a derivative of the corporate brand for the community project, such as was done with openSUSE and with OpenSolaris. This strategy has its merits, but can lead to a number of problems. Coordinating with a corporate marketing communications team can be a bit challenging for a community project. As with a single brand, it means that the commercial and community efforts will occasionally generate friction when the community wishes to go in one direction and the company wishes to go in another.

Coordinating community-generated materials with a corporate art department also leads to unexpected challenges. Community members tend to use, not surprisingly, free tools to generate art. Tools like Inkscape, GIMP, and Evince have come a long way and are capable of generating and displaying professional-looking artwork. They still, however, pose challenges when trying to send artwork to a professional art department that works with proprietary tools — and vice-versa. While Evince, for example, is perfectly capable of displaying most PDFs with reasonable fidelity, it does not handle all output perfectly. When developing art for the openSUSE retail box sets and t-shirts, for example, I still needed to rely on Adobe's reader to ensure the output matched what would be sent to the printer.

Further, tying the community brand and name to the corporate brand has side-effects. The upside is that a successful and healthy project will reflect well on the corporate brand. If the community project is well-known and popular, it has a "halo" effect on the commercial brand and has a marketing benefit for the corporate parent. The downside is that the association runs both ways, and if either the corporate parent or community project takes a drubbing, the other suffers as well.

It also means that the primary sponsor has a vested interest in the community's handling of the brand, in particular trademarks and logos that are derivative of the primary brand. As a result, a community may wind up with a slightly more restrictive trademark policy, and requirements to coordinate with the marketing communications team rather than leaving decisions to the community. This is also true of a shared brand, of course. Even when the corporate backer has been relatively hands-off with the community branding, as was the case when I worked with openSUSE, it still introduced some minor problems. For example, SUSE chose a proprietary font (Cholla) for its logo and associated marks, a practice that continued under Novell.

This meant that community members interested in producing art for shows, swag, and derivative projects were out of step with the "official" openSUSE brand. Ultimately it was necessary to create a look-alike called FifthLeg, which has worked out relatively well.

The other end of the spectrum is the Debian Project. With no primary sponsor, Debian's community can develop the brand as it sees fit without interference. But Debian has done relatively little with this freedom. It has an art team and has created a site for community contributed artwork, as well as official and unofficial logos with clear guidelines on how they may be used. But there's little in the way of an organized effort to produce a coherent Debian "brand" beyond logos and related artwork.

A more practical and community friendly approach for sponsored projects may be the one taken by Red Hat with Fedora. By providing a separate brand, but providing at least some corporate support for the development of the branding in conjunction with the community teams, it's possible to avoid the conflicts between commercial and community brands. Fedora has a relatively free hand in creating artwork for the distribution, and most of the guidelines are oriented around practical issues rather than sponsor issues — excepting the prohibition on artwork including hats.

Red Hat's trademark guidelines around the Fedora mark, however, have generated some friction with the community. The company has shown interest in trying to satisfy reasonable complaints with the trademark guidelines, but it is rarely possible to please everyone with a trademark licensing agreement.

Distributions: How branding affects downstreams

When a project rebrands, it also has an impact on all the affiliated downstream projects that make use of its branding. All of the major distributions have one or more derivative projects or "respins" that make use of the core distribution but include significant changes from the standard offering.

This means that when an upstream, like Ubuntu, makes changes, they roll downhill to Kubuntu, Xubuntu, and other assorted Ubuntu derivatives. Fedora has its respins, and openSUSE now has quite a few offshoots like the Education project or custom spins of openSUSE generated via SUSE Studio.

Derivatives have technical and legal issues to consider. When a distribution actively seeks spinoffs, it has to ensure that it's reasonably easy to rebrand the distribution. This means providing clear guidelines — as Fedora has — on what is allowed (and what's not) with regards to using trademarks and branding for remixed distributions.

Distributions also need to provide tools and instructions on rebranding. For instance, Fedora provides guidelines on replacing the branding packages for Fedora to ensure that Fedora and Red Hat trademarks are stripped from Spins. The openSUSE Project provides instructions and a tool called Rembrand to create derivatives — though prospective respinners may find it easier to use SUSE Studio to create rebranded openSUSE derivatives.

Another issue crops up for LUGs and advocacy projects like Spread Ubuntu. With the Ubuntu refresh, all of the Loco groups and downstream advocacy projects are left with outdated materials. Projects should tread carefully, and slowly, to reduce issues here.

Upstream and downstream cooperation

Distributions not only have to consider the effect of branding on derivative projects, but also have to consider the presentation of their upstream projects. For example, the branding decisions made by distributions have a direct impact on the presentation of GNOME or KDE.

The KDE Project's Aaron Seigo has taken some exception to distributions rebranding KDE and creating "microbrands:"

[...] we have made it hard for people to take notice of what we are doing with the Linux Desktop since none of the brands are identifiable as "belonging to the same thing". Instead we end up with microbrands that nearly no one outside of the server room or the hardcore F/OSS community recognizes.

Many (most?) operating systems bearing KDE packages come with their own logo as the application launcher button, many ship their own icon sets or their own (for branding purposes) customizations of the default icon set, many ship their own wallpapers, many change the default window borders or widget themes.

This is even more unfortunate because there is a scarcity of quality artists in the F/OSS world, and when each distribution or project gobbles up one of them to work exclusively on their own mojo ... we just divide and conquer ourselves.

While this may make their individual believers-in-the-cause users happy and may make corporate management feel they are getting some good corporate marketing out there with that happy little logo in the bottom left hand corner ... this is obviously inspired by the "old way" of doing things that is centered around corporate balkanization of the consumer space: flags or fruits, right? (Microsoft and Apple .. :) We need to rise above that and consider the long term benefits of the entire ecosystem because each of our projects thrives or diminishes in step with it.

What KDE has done in this case is to propose a service to work with distributions to customize some elements to achieve a shared visual identity. For instance, the openSUSE 11.2 release included work from Nuno Pinheiro of KDE to provide a shared theme.

Does it matter?

Ultimately, one has to wonder how much branding actually affects open source adoption. While Ubuntu received a great deal of media attention surrounding its rebranding for the 10.04 release, it was only because Ubuntu is already popular and well-received — thus making the rebrand effort noteworthy even in mainstream IT press.

One might argue that some small amount of Ubuntu's success stems from previous branding efforts, but very little. What has proven more effective for Ubuntu is the distribution itself and a marketing effort that goes beyond branding: ShipIt put the distribution in the hands of, literally, millions of users, and a motivated community of advocates has provided word-of-mouth marketing for the distribution.

This isn't to say the look and feel of projects is unimportant. Some have argued, persuasively, that pretty is a feature. Clearly if a project is gunning for mainstream acceptance it has to be attractive enough to stand side-by-side with proprietary software. But it would be a mistake to assume that a distribution's brand is the key to success.

But it can be a key to failure. As Seigo and others have pointed out, ugly software or "micro" branding makes gaining traction with a wider audience unlikely. Projects that are looking to succeed with mainstream audiences will need to pay attention to branding and find ways to harness the relatively small number of talented designers who are interested in working with open projects, or find new ways to attract them.


Reminder: LWN reader survey

We would like to remind some of you and inform the others that we are conducting a reader survey. The intent is to create a "media kit", which will help us sell better advertising. The hope is for the ads to be higher quality, both in terms of appearance and revenue, so that we can run fewer ads but bring in more money. To do that, we need to be able to describe LWN and its readers to advertisers in terms they understand. We would really appreciate you taking a few minutes to fill out the survey. There is some more information on the media kit project there as well. Please take the time to tell us a bit about yourself; doing so will help us provide a more pleasant and successful LWN for everybody.

The LWN site code does not allow comments directly on surveys, but we noticed that some of you had thoughts about it. Thanks for the email; you can now also comment below. We are aware that some of the survey questions can have multiple interpretations or don't cover every situation, and for that we apologize.


Page editor: Jonathan Corbet


OSSEC for host-based intrusion detection

By Jake Edge
April 21, 2010

A free software entrant into the host-based intrusion detection system (HIDS) arena, OSSEC, released version 2.4 earlier this month, with a number of upgrades and bug fixes. OSSEC may not be as well-known as other free software HIDS, like Samhain, AIDE, Osiris, or Open Source Tripwire, but they are all trying to do a similar job: detect changes to a running system that may have been caused by malicious activity. The techniques used by HIDS vary considerably, from simply hashing file contents and comparing them periodically to more sophisticated log file and behavioral analysis.

Conceptually, a HIDS should monitor everything about the system's state, such that it can detect changes in behavior that stem from some kind of host intrusion. Unlike network intrusion detection systems (NIDS), which look at the network traffic to try to detect intrusion attempts, a HIDS will only see problems after the fact. It is, in some sense, a second line of defense that is generally deployed behind a NIDS, at least in those installations with high security needs.

Most HIDS implementations only bite off some portion of the job. The simplest look for changes to system files and binaries by using hashes of their contents. Taking that a step further, and storing the hashes of "important" files on a separate system or read-only media provides defense against an intrusion that targets the files which store the hashes. OSSEC takes that idea even further by moving most of the monitoring and analysis to separate, presumably strongly hardened systems.
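The hash-and-compare approach described above is simple enough to sketch. The following is a minimal Python illustration of the idea, not OSSEC's actual code, and without any of the hardening a real HIDS needs:

```python
import hashlib
import os

def hash_file(path, algo="sha256"):
    """Return the hex digest of a file's contents, read in chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record a digest for each monitored file."""
    return {path: hash_file(path) for path in paths}

def check_baseline(baseline):
    """Return the monitored files that are missing or have changed."""
    changed = []
    for path, digest in baseline.items():
        if not os.path.exists(path) or hash_file(path) != digest:
            changed.append(path)
    return changed
```

In a real deployment, the baseline itself would be kept off-host or on read-only media, for exactly the reasons given above.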

The basic architecture is intended to be client-server, with a "manager" running on a central server and "agents" running on each of the systems to be monitored. The agent is a small program that runs with low privileges and forwards information to the manager. There is also a "logcollector" process that runs as root on a client, and does just what its name would imply. Configuration information is mostly stored by the manager with some being locally cached. For obvious reasons, that configuration cache is monitored and changes to it will cause an alert.

OSSEC can be run in standalone mode, where the analysis and gathering are on the same host. The manager can also gather information from various devices, such as routers, firewalls, and other IDS systems without using an agent. There are agentless solutions for some devices, while others can use remote syslog to send their log information to the manager system. OSSEC is cross-platform, running on most major Unix systems as well as various flavors of Windows.

There are four main features to OSSEC, the first being file integrity monitoring. The second is log monitoring and analysis; the rules there are fairly extensive, covering a wide range of free and proprietary applications like apache, asterisk, Cisco IOS, McAfee anti-virus, MySQL, PostgreSQL, and so on. Much of what OSSEC does with log files is similar to what logwatch or syslog-ng can do, but the analysis can be done site-wide, and actions can be performed based on what OSSEC finds. New rules can be added for additional services or site-specific logging using an XML rule syntax.
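To give a feel for the flavor of that XML syntax, a local rule escalating repeated sshd login failures might look roughly like the following. The rule ID, referenced SID, and thresholds here are illustrative, and should be checked against the current OSSEC rule documentation rather than taken as authoritative:

```xml
<!-- Illustrative local rule: fire when a base sshd-failure rule has
     matched 8 times from one source within 120 seconds -->
<group name="local,syslog">
  <rule id="100100" level="10" frequency="8" timeframe="120">
    <if_matched_sid>5716</if_matched_sid>
    <description>Multiple sshd authentication failures</description>
  </rule>
</group>
```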

As would be expected, system administrators can be alerted by email when some class of problem is detected. In addition, OSSEC can perform "active responses" to certain kinds of attacks. It comes with a handful of pre-defined responses for things like adding an IP address to /etc/hosts.deny or to various firewalls' deny lists. New active responses are added by creating one XML chunk that specifies what to run and another that describes when to run it.
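As a sketch of what that pairing looks like, OSSEC's configuration couples a command definition with an active-response entry that says when to fire it. The values below are illustrative (the level threshold and timeout are made up for the example); firewall-drop.sh is one of the response scripts shipped with OSSEC:

```xml
<command>
  <name>firewall-drop</name>
  <executable>firewall-drop.sh</executable>
  <expect>srcip</expect>
</command>

<active-response>
  <command>firewall-drop</command>
  <location>local</location>
  <level>6</level>
  <timeout>600</timeout>
</active-response>
```

Here any alert at level 6 or above would run the script on the host that reported the event, blocking the offending source address for ten minutes.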

The fourth main feature of OSSEC is rootkit detection that runs periodically on client systems. For Windows clients, there is an additional feature that checks the registry for changes, and alerts the administrator of any it finds.

OSSEC was originally written by Daniel Cid and released as free software in 2004. Since that time, the code has been acquired twice, most recently by Trend Micro, which offers commercial support for OSSEC. It is licensed under the GPLv3, and is available as a tarball (along with SHA1/MD5 hashes for verification) from the installation page.

As with any HIDS solution, it will require some tweaking for specific environments to reduce false positives to a manageable level. OSSEC has a number of useful features and looks to be a solution that is growing in popularity. It would seem to be a good candidate for one or more distributions to pick up and configure for their specific needs, which would make it easier for their users to start monitoring with OSSEC. For anyone considering HIDS for security at their site, OSSEC is worth a look.

Comments (6 posted)

New vulnerabilities

apache-mod_auth_shadow: restriction bypass

Package(s): apache-mod_auth_shadow   CVE #(s): CVE-2010-1151
Created: April 19, 2010   Updated: May 28, 2010
Description: From the Mandriva advisory:

A race condition was found in the way mod_auth_shadow used an external helper binary to validate user credentials (username / password pairs). A remote attacker could use this flaw to bypass intended access restrictions, resulting in ability to view and potentially alter resources, which should be otherwise protected by authentication

Fedora FEDORA-2010-6290 mod_auth_shadow 2010-04-09
Fedora FEDORA-2010-6359 mod_auth_shadow 2010-04-10
Fedora FEDORA-2010-6323 mod_auth_shadow 2010-04-10
Mandriva MDVSA-2010:081 apache-mod_auth_shadow 2010-04-18

Comments (none posted)

clamav: denial of service

Package(s): clamav   CVE #(s): CVE-2010-1311
Created: April 19, 2010   Updated: September 8, 2010
Description: From the Mandriva advisory:

The qtm_decompress function in libclamav/mspack.c in ClamAV before 0.96 allows remote attackers to cause a denial of service (memory corruption and application crash) via a crafted CAB archive that uses the Quantum (aka .Q) compression format. NOTE: some of these details are obtained from third party information.

Gentoo 201009-06 clamav 2010-09-07
Mandriva MDVSA-2010:082-1 clamav 2010-05-20
SuSE SUSE-SR:2010:010 krb5, clamav, systemtap, apache2, glib2, mediawiki, apache 2010-04-27
Pardus 2010-55 clamav 2010-04-20
Mandriva MDVSA-2010:082 clamav 2010-04-18

Comments (none posted)

gource: predictable temporary filename

Package(s): gource   CVE #(s):
Created: April 20, 2010   Updated: April 21, 2010
Description: From the Red Hat bugzilla:

A Debian bug report notes that Gource creates its log file with a predictable name (/tmp/gource-$(UID).tmp), which a malicious user could use to overwrite arbitrary files via a symlink attack, with the privileges of the user running Gource.

Fedora FEDORA-2010-6766 gource 2010-04-16

Comments (none posted)

irssi: multiple vulnerabilities

Package(s): irssi   CVE #(s): CVE-2010-1155 CVE-2010-1156
Created: April 16, 2010   Updated: June 21, 2010
Description: From the Ubuntu advisory:

It was discovered that irssi did not perform certificate host validation when using SSL connections. An attacker could exploit this to perform a man in the middle attack to view sensitive information or alter encrypted communications. (CVE-2010-1155)

Aurelien Delaitre discovered that irssi could be made to dereference a NULL pointer when a user left the channel. A remote attacker could cause a denial of service via application crash. (CVE-2010-1156)

Fedora FEDORA-2010-6618 irssi 2010-04-15
Fedora FEDORA-2010-6612 irssi 2010-04-15
Fedora FEDORA-2010-6629 irssi 2010-04-15
SuSE SUSE-SR:2010:011 dovecot12, cacti, java-1_6_0-openjdk, irssi, tar, fuse, apache2, libmysqlclient-devel, cpio, moodle, libmikmod, libicecore, evolution-data-server, libpng/libpng-devel, libesmtp 2010-05-10
Slackware SSA:2010-116-01 irssi 2010-04-26
Ubuntu USN-929-2 irssi 2010-04-20
Mandriva MDVSA-2010:079 irssi 2010-04-17
Ubuntu USN-929-1 irssi 2010-04-16

Comments (none posted)

java: information disclosure

Package(s): java-1.6.0-sun   CVE #(s): CVE-2010-0886 CVE-2010-0887
Created: April 20, 2010   Updated: July 21, 2010
Description: From the Oracle advisory:

This Security Alert addresses security issues CVE-2010-0886 and CVE-2010-0887, which are vulnerabilities in desktop Java running in web browsers only; these vulnerabilities are not present in Java running on servers or standalone Java desktop applications and do not impact any Oracle server based software. The desktop vulnerabilities are in the Java Deployment Toolkit and the new Java Plug-in that are included in various Oracle Java SE and Java for Business releases. They only affect Java when running in a 32-bit web browser. These vulnerabilities may be remotely exploitable without authentication, i.e., they may be exploited over a network without the need for a username and password. For a successful exploit, a user running an affected release in their browser will need to visit a malicious web page that exploits this vulnerability. Successful exploits can impact the availability, integrity, and confidentiality of the user's system.

Red Hat RHSA-2010:0356-02 java-1.6.0-sun 2010-04-19
Red Hat RHSA-2010:0549-01 java-1.6.0-ibm 2010-07-21
Gentoo 201006-18 sun-jre-bin 2010-06-04

Comments (none posted)

libnids: denial of service

Package(s): libnids   CVE #(s): CVE-2010-0751
Created: April 20, 2010   Updated: April 21, 2010
Description: From the Pardus advisory:

The ip_evictor function in ip_fragment.c in libnids, as used in dsniff and possibly other products, allows remote attackers to cause a denial of service (NULL pointer dereference and crash) via crafted fragmented packets.

Pardus 2010-56 libnids 2010-04-20

Comments (none posted)

memcached: denial of service

Package(s): memcached   CVE #(s): CVE-2010-1152
Created: April 20, 2010   Updated: June 14, 2010
Description: From the Pardus advisory:

memcached.c in memcached allows remote attackers to cause a denial of service (daemon hang or crash) via a long line that triggers excessive memory allocation.

SuSE SUSE-SR:2010:012 evolution-data-server, python/libpython2_6-1_0, mozilla-nss, memcached, texlive/te_ams, mono/bytefx-data-mysql, libpng-devel, apache2-mod_php5, ncpfs, pango, libcmpiutil 2010-05-25
SuSE SUSE-SR:2010:013 apache2-mod_php5/php5, bytefx-data-mysql/mono, flash-player, fuse, java-1_4_2-ibm, krb5, libcmpiutil/libvirt, libmozhelper-1_0-0/mozilla-xulrunner190, libopenssl-devel, libpng12-0, libpython2_6-1_0, libtheora, memcached, ncpfs, pango, puppet, python, seamonkey, te_ams, texlive 2010-06-14
Pardus 2010-52 memcached 2010-04-20

Comments (none posted)

scsi-target-utils: format string vulnerability

Package(s): scsi-target-utils   CVE #(s): CVE-2010-0743
Created: April 20, 2010   Updated: January 23, 2012
Description: From the Red Hat advisory:

A format string flaw was found in scsi-target-utils' tgtd daemon. A remote attacker could trigger this flaw by sending a carefully-crafted Internet Storage Name Service (iSNS) request, causing the tgtd daemon to crash.

Gentoo 201201-06 iscsitarget 2012-01-23
SUSE SUSE-SR:2010:017 java-1_4_2-ibm, sudo, libpng, php5, tgt, iscsitarget, aria2, pcsc-lite, tomcat5, tomcat6, lvm2, libvirt, rpm, libtiff, dovecot12 2010-09-21
openSUSE openSUSE-SU-2010:0608-1 iscsitarget/tgt 2010-09-14
openSUSE openSUSE-SU-2010:0604-1 iscsitarget/tgt 2010-09-13
CentOS CESA-2010:0362 scsi-target-utils 2010-05-28
Mandriva MDVSA-2010:131 iscsitarget 2010-07-12
Debian DSA-2042-1 iscsitarget 2010-05-05
Red Hat RHSA-2010:0362-01 scsi-target-utils 2010-04-20

Comments (none posted)

sudo: arbitrary command execution

Package(s): sudo   CVE #(s): CVE-2010-1163
Created: April 19, 2010   Updated: January 25, 2011
Description: From the Mandriva advisory:

The command matching functionality in sudo 1.6.8 through 1.7.2p5 does not properly handle when a file in the current working directory has the same name as a pseudo-command in the sudoers file and the PATH contains an entry for ., which allows local users to execute arbitrary commands via a Trojan horse executable, as demonstrated using sudoedit, a different vulnerability than CVE-2010-0426.

SUSE SUSE-SR:2011:002 ed, evince, hplip, libopensc2/opensc, libsmi, libwebkit, perl, python, sssd, sudo, wireshark 2011-01-25
openSUSE openSUSE-SU-2011:0050-1 sudo 2011-01-19
rPath rPSA-2010-0075-1 sudo 2010-10-27
Gentoo 201006-09 sudo 2010-06-01
CentOS CESA-2010:0361 sudo 2010-05-28
Mandriva MDVSA-2010:078-1 sudo 2010-04-28
Ubuntu USN-928-1 sudo 2010-04-15
Slackware SSA:2010-110-01 sudo 2010-04-21
Red Hat RHSA-2010:0361-01 sudo 2010-04-20
Mandriva MDVSA-2010:078 sudo 2010-04-17

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.34-rc5, released on April 19. "Random fixes all around. The most noticeable (for people who got hit by it) may be the fix for bootup problems that some people had (ACPI dividing by zero: kernel bugzilla 15749), but there's stuff all over. The shortlog gives some idea." Said shortlog is in the announcement, or see the full changelog for all the details. For added fun, this release has a new code name: "Sheep on meth."

There have been no stable updates since April 1.

Comments (none posted)

Quote of the week

vi fs/direct-reclaim-helper.c, it has a few placeholders for where the real code needs to go....just look for the ~ marks.
-- Chris Mason

Comments (3 posted)

Ceph: The Distributed File System Creature from the Object Lagoon (Linux Mag)

Linux Magazine has a detailed look at the Ceph filesystem which was merged for 2.6.34. "However, probably the most fundamental core assumption in the design of Ceph is that large-scale storage systems are dynamic and there are guaranteed to be failures. The first part of the assumption, assuming storage systems are dynamic, means that storage hardware is added and removed and the workloads on the system are changing. Included in this assumption is that it is presumed there will be hardware failures and the file system needs to be adaptable and resilient."

Comments (none posted)

Fixing the ondemand governor

By Jonathan Corbet
April 20, 2010
The "cpufreq" subsystem is charged with adjusting the CPU clock frequency for optimal performance. Definitions of "optimal" can vary, so there's more than one governor - and, thus, more than one policy - available. The "performance" governor prioritizes throughput above all else, while the "powersave" governor tries to keep power consumption to a minimum. The most commonly-used governor, though, is "ondemand," which attempts to perform a balancing act between power usage and throughput.

In a simplified form, ondemand works like this: every so often the governor wakes up and looks at how busy the CPU is. If the idle time falls below a threshold, the CPU frequency will be bumped up; if, instead, there is too much idle time, the frequency will be reduced. By default, on a system with high-resolution timers, the minimum idle percentage is 5%; CPU frequency will be reduced if idle time goes above 15%. The minimum percentage can be adjusted in sysfs (under /sys/devices/system/cpu/cpuN/cpufreq/); the maximum is wired at 10% above the minimum.
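That decision rule can be modeled in a few lines of Python. This is a toy model only, not the kernel's code: the real governor scales within the hardware's frequency table and works from per-CPU load statistics, and the frequency values here are invented for illustration:

```python
FREQS = [800, 1600, 2400]   # available frequencies in MHz (illustrative)

def ondemand_step(freq, idle_pct, min_idle=5):
    """One sampling period of a simplified ondemand-style rule.

    Too little idle time -> jump straight to the top frequency;
    too much -> step down one level; otherwise hold steady.
    """
    max_idle = min_idle + 10          # ceiling wired at 10% above the floor
    i = FREQS.index(freq)
    if idle_pct < min_idle:
        return FREQS[-1]              # busy: go right to the highest speed
    if idle_pct > max_idle and i > 0:
        return FREQS[i - 1]           # mostly idle: back off one step
    return freq                       # in the dead band: leave it alone
```

The asymmetry - jump up fast, ramp down slowly - is deliberate: latency matters more when the CPU suddenly becomes busy than when it goes idle.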

This governor has been in use for some time, but, as it turns out, it can create performance difficulties in certain situations. Whenever the system workload alternates quickly between CPU-intensive and I/O-intensive phases, things slow down. That's because the governor, on seeing the system go idle, drops the frequency down to the minimum. After the CPU gets busy again, it runs for a while at low speed until the governor figures out that the situation has changed. Then things go idle and the cycle starts over. As it happens, this kind of workload is fairly common; "git grep" and the startup of a large program are a couple of examples.

Arjan van de Ven has come up with a fix for this governor which is quite simple in concept. The accounting of "idle time" is changed so that time spent waiting for disk I/O no longer counts. If a processor is dominated by a program alternating between processing and waiting for disk operations, that processor will appear to be busy all the time. So it will remain at a higher frequency and perform better. That makes the immediate problem go away without, says Arjan, significantly increasing power consumption.
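The effect of the accounting change can be seen with a trivial model (illustrative numbers, not kernel code): the only difference between the old and new behavior is whether I/O-wait time is counted on the idle side of the ledger.

```python
def governor_idle_pct(idle, iowait, busy, iowait_is_idle=True):
    """Idle percentage as the governor sees it over one sample window.

    Arjan's change is equivalent to flipping iowait_is_idle to False:
    time spent waiting for disk I/O no longer counts as idle time.
    """
    total = idle + iowait + busy
    seen_idle = idle + (iowait if iowait_is_idle else 0)
    return 100.0 * seen_idle / total

# A workload alternating compute with disk waits looks half idle under
# the old accounting, but fully busy under the new one:
old_view = governor_idle_pct(0, 50, 50, iowait_is_idle=True)
new_view = governor_idle_pct(0, 50, 50, iowait_is_idle=False)
```

Under the old view the governor would throttle down during every I/O phase; under the new one it holds the higher frequency through the whole cycle.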

But, Arjan says, "there are many things wrong with ondemand, and I'm writing a new governor to fix the more fundamental issues with it." That code has not yet been posted, so it's not clear what sort of heuristics it will contain. Stay tuned; the demand for ondemand may soon be reduced significantly.

Comments (8 posted)

DM and MD come a little closer

By Jonathan Corbet
April 20, 2010
The management of RAID arrays in the kernel is a complicated task - and one upon which the fate of much data relies. Given that, it would make sense to have a single set of RAID routines which is improved by all. What the Linux kernel has, instead, is three different RAID implementations: in the multiple device (MD) subsystem, in the device mapper (DM) code, and in the Btrfs filesystem. It has often been said that unifying these implementations would be a good thing, but that is not easy and thus far, it has not happened.

MD maintainer Neil Brown has now taken a step in this direction with the posting of his dm-raid456 module, a RAID implementation for the device mapper which is built on the MD code. This patch set has the potential to eliminate a bunch of duplicated code, which can only be a good thing. It also brings some nice features, including RAID6 support, multiple-target support, and more, to the device mapper layer.

This is early work which, probably, is not destined for the next merge window. The response from the device mapper side has been reasonably positive, though. So, with luck, we'll someday have both subsystems using the same RAID code.

Comments (4 posted)

Kernel development news

ELC: Using LTTng

By Jake Edge
April 21, 2010

Several tracing presentations were in evidence at this year's Embedded Linux Conference, including Mathieu Desnoyers's "hands-on tutorial" on the Linux Trace Toolkit next generation (LTTng). Desnoyers showed how to use LTTng to solve real-world performance and latency problems, while giving a good overview of the process of Linux kernel problem solving. The target was embedded developers, but the presentation was useful for anyone interested in finding and fixing problems in Linux itself, or applications running atop it.

Desnoyers has been hacking on the kernel for many years, and was recently awarded his Ph.D. in computer engineering based largely on his work with LTTng. Since then, he has started his own company, EfficiOS, to do consulting work on LTTng as well as to help various customers diagnose their performance problems. In addition to LTTng itself, he also developed the Userspace RCU library, which allows user-space applications to use the Read-Copy-Update (RCU) data synchronization technique that is used by the kernel.

LTTng consists of three separate components: patches to the Linux kernel to add tracepoints along with the LTTng infrastructure and ring buffer, the LTT control user-space application, and the LTTV GUI interface for viewing trace output. Each is available on the LTTng web site and there are versions for kernels going back to 2.6.12. There is extensive documentation as well.

Lockless trace clock

The lockless trace clock is "one major piece of LTTng that is architecture dependent". This clock is a high-precision timestamp derived from the processor's cycle counter that is placed on each trace event. The timestamp is coordinated between separate processors allowing system-wide correlation of events. On processors with frequency scaling and non-synchronized cycle counters, like some x86 systems, the trace clock can get confused when the processor operating frequency changes, so that feature needs to be disabled before tracing. Desnoyers noted that Nokia had funded work for LTTng to properly handle frequency scaling and power management features of the ARM OMAP3 processor, but that it hasn't yet been done for x86.

Tracing strategy

A portion of the talk was about the tradeoffs of various tracing strategies. Desnoyers described the factors that need to be considered when deciding how to trace a problem, including what kind of bug it is, how reproducible it is, how much tracing overhead the system can tolerate, the availability of a system to reproduce it on, whether it occurs only on production systems, and so on. Each of these things "impact the number of tracing iterations available". It may be that, because the bug occurs infrequently or only on a third-party production system, one can get just a single tracing run, "or you may have the luxury of multiple trace runs".

Based on those factors, there are different kinds of tracing to choose from in LTTng. At the top level, you can use producer-consumer tracing, where all of the enabled events are recorded to a filesystem; "flight recorder" mode, where the events are stored in fixed-size memory buffers and newer events overwrite the old; or both modes at once. There are advantages and disadvantages to each, of course, starting with higher overhead for producer-consumer traces. But the amount of data which can be stored for those traces is generally much higher than for flight recorder traces.
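The overwrite behavior of flight-recorder mode is simply that of a bounded ring buffer, easily sketched in Python (an illustration of the mode, not LTTng's per-channel buffer code):

```python
from collections import deque

class FlightRecorder:
    """Toy flight-recorder buffer: a fixed-size ring where new events
    silently overwrite the oldest once the buffer is full."""

    def __init__(self, size):
        self.buf = deque(maxlen=size)

    def record(self, event):
        self.buf.append(event)     # oldest event is dropped when full

    def dump(self):
        return list(self.buf)      # what a post-mortem analysis would see
```

Recording is cheap and never blocks on storage, which is why the mode suits production systems; the price is that only the most recent window of events survives to be analyzed.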

Because there is generally an enormous amount of data in a trace, Desnoyers described several techniques to help home in on the problem area. In LTTng, events are grouped by subsystem into "channels" and each channel can have a different buffer size for flight recorder mode. That allows more backlog for certain kinds of events, while limiting others. In addition, instrumentation (trace events, kernel markers) can be disabled to reduce the amount of trace data that is generated.

Another technique is to use "anchors": specific events in the trace that serve as the starting point for analysis, often by working backward in the trace from that point. The anchors can either come from instrumentation, like the LTTng user-space tracer or kernel trace events, or they can be generated by the analysis itself. The longest timer interrupt jitter (i.e., how far an interrupt occurs from its nominal time) is an example he gave of an analysis-generated anchor.

A related idea is "triggers", which are a kind of instrumentation with a side-effect. By using ltt_trace_stop("name"), a trigger can stop the tracing when a particular condition occurs in the kernel. Using lttctl from user space is another way to stop a trace. Triggers are particularly helpful for flight recorder traces, he said.

Live demo

Desnoyers also gave a demonstration of LTTng on two separate problems, both involving the scheduler. One was to try to identify sources of audio latency by running a program with a periodic timer that expired every 10ms. The code was written to put an anchor into the trace data every time the timer missed by more than 5ms. He ran the program, then moved a window around on the screen, which caused delays to the timer of up to 60ms.


Using that data, he brought up the LTTV GUI to look at what was going on. It was a bit hard to follow exactly what he was doing, but eventually he narrowed it down to the scheduling of the server. He then instrumented the kernel scheduler with a kernel marker to get more information on how it was making its decisions. Kernel markers were proposed for upstream inclusion a ways back, but were roundly criticized for cluttering up the kernel with things that looked like a debug printk(). Markers are good for ad hoc tracepoints, but "don't try to push it upstream", he warned.

The other demo was to look at a buffer underrun in ALSA's aplay utility. The same scheduler marker was used to investigate that problem as well.

The status of LTTng

Perhaps the hardest question was saved to the end of the presentation: what is the status of LTTng getting into the mainline? Desnoyers seemed upbeat about the prospects of that happening, partly because there has been so much progress made in the kernel tracing area. Most of the instrumentation side has already gone in, and the tracer itself is ongoing work that he now has time to do. He presented a "status of LTTng" presentation at the Linux Foundation Collaboration Summit (LFCS), which was held just after ELC, and there was agreement among some of the tracing developers to work together on getting more of LTTng into the kernel.

Desnoyers does not see a problem with having multiple tracers in the kernel, so long as they use common infrastructure and target different audiences. Ftrace is "more oriented toward kernel developers", he said, and it has different tracers for specific purposes. LTTng on the other hand is geared toward users and user-space programmers who need a look into the kernel to diagnose their problems. With Ftrace developer Steven Rostedt participating in both the ELC and LFCS talks—and agreeing with many of Desnoyers's ideas—the prospects look good for at least parts of LTTng to make their way into the mainline over the next year.

Comments (3 posted)

When writeback goes wrong

By Jonathan Corbet
April 20, 2010
Like any other performance-conscious kernel, Linux does not immediately flush data written to files back to the underlying storage. Caching that data in memory can help optimize filesystem layout and seek times; it also eliminates duplicate writes should the same blocks be written multiple times in succession. Sooner or later (preferably sooner), that data must find its way to persistent storage; the process of getting it there is called "writeback." Unfortunately, as some recent discussions demonstrate, all is not well in the Linux writeback code at the moment.

There are two distinct ways in which writeback is done in contemporary kernels. A series of kernel threads handles writeback to specific block devices, attempting to keep each device busy as much of the time as possible. But writeback also happens in the form of "direct reclaim," and that, it seems, is where much of the trouble is. Direct reclaim happens when the core memory allocator is short of memory; rather than cause memory allocations to fail, the memory management subsystem will go casting around for pages to free. Once a sufficient amount of memory is freed, the allocator will look again, hoping that nobody else has swiped the pages it worked so hard to free in the meantime.
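The retry loop at the heart of direct reclaim can be caricatured in a few lines of Python. This is a deliberately crude model (the real allocator's watermarks, retry logic, and locking are far more involved), but it shows the shape of the problem: reclaim runs on whatever call stack happened to request memory.

```python
def allocate(free_pages, want, reclaim):
    """Toy model of allocation with direct reclaim (not kernel code).

    When the free list is short, the allocator frees pages itself and
    then looks again - hoping nobody swiped them in the meantime.
    """
    for _ in range(3):                    # bounded retries
        if len(free_pages) >= want:
            return [free_pages.pop() for _ in range(want)]
        free_pages.extend(reclaim())      # direct reclaim: find pages to free
    return None                           # still short: head for the OOM path
```

The reclaim() callback stands in for the deep, filesystem-and-storage call chain described below; in the kernel that chain runs on the same stack as the original allocation, which is where the trouble starts.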

Dave Chinner recently encountered a problem involving direct reclaim which manifested itself as a kernel stack overflow. Direct reclaim can happen as a result of almost any memory allocation call, meaning that it can be tacked onto the end of a call chain of nearly arbitrary length. So, by the time that direct reclaim is entered, a large amount of kernel stack space may have already been used. Kernel stacks are small - usually no larger than 8KB and often only 4KB - so there is not a lot of space to spare in the best of conditions. Direct reclaim, being invoked from random places in the kernel, cannot count on finding the best of conditions.

The problem is that direct reclaim, itself, can invoke code paths of great complexity. At best, reclaim of dirty pages involves a call into filesystem code, which is complex enough in its own right. But if that filesystem is part of a union mount which sits on top of a RAID device which, in turn, is made up of iSCSI drives distributed over the network, the resulting call chain may be deep indeed. This is not a task that one wants to undertake with stack space already depleted.

Dave ran into stack overflows - with an 8K stack - while working with XFS. The XFS filesystem is not known for its minimalist approach to stack use, but that hardly matters; in the case he describes, over 3K of stack space was already used before XFS got a chance to take its share. This is clearly a situation where things can easily go wrong. Dave's answer was a patch which disables the use of writeback in direct reclaim. Instead, the direct reclaim path must content itself with kicking off the flusher threads and grabbing any clean pages which it may find.

There is another advantage to avoiding writeback in direct reclaim. The per-device flusher threads can accumulate adjacent disk blocks and attempt to write data in a way which minimizes seeks, thus maximizing I/O throughput. Direct reclaim, instead, takes pages from the least-recently-used (LRU) list with an eye toward freeing pages in a specific zone. As a result, pages flushed by direct reclaim tend to be scattered more widely across the storage devices, causing higher seek rates and worse performance. So disabling writeback in direct reclaim looks like a winning strategy.

Except, of course, we're talking about virtual memory management code, and nothing is quite that simple. As Mel Gorman pointed out, no longer waiting for writeback in direct reclaim may well increase the frequency with which direct reclaim fails. That, in turn, can throw the system into the out-of-memory state, which is rarely a fun experience for anybody involved. This is not just a theoretical concern; it has been observed at Google and elsewhere.

Direct reclaim is also where lumpy reclaim is done. The lumpy reclaim algorithm attempts to free pages in physically-contiguous (in RAM) chunks, minimizing memory fragmentation and increasing the reliability of larger allocations. There is, unfortunately, a tradeoff to be made here: the nature of virtual memory is such that pages which are physically contiguous in RAM are likely to be widely dispersed on the backing storage device. So lumpy reclaim, by its nature, is likely to create seeky I/O patterns, but skipping lumpy reclaim increases the likelihood of higher-order allocation failures.

So various other solutions have been contemplated. One of those is simply putting the kernel on a new stack-usage diet in the hope of avoiding stack overflows in the future. Dave's stack trace, for example, shows that the select() system call grabs 1600 bytes of stack before actually doing any work. Once again, though, there is a tradeoff here: select() behaves that way in order to reduce allocations (and improve performance) for the common case where the number of file descriptors is relatively small. Constraining its stack use would make an often performance-critical system call slower.

Beyond that, reducing stack usage - while a worthy activity in its own right - is seen as a temporary fix at best. Stack fixes can make a specific call chain work but, as long as arbitrarily-complex writeback paths can be invoked with an arbitrary amount of stack space already used, problems will pop up in other places. So a more definitive kind of fix is required; stack diets may buy time but will not really solve the problem.

One common suggestion is to move direct reclaim into a separate kernel thread. That would put reclaim (and writeback) onto its own stack where there will be no contention with system calls or other kernel code. The memory allocation paths could poke this thread when its services are needed and, if necessary, block until the reclaim thread has made some pages available. Eventually, the lumpy reclaim code could perhaps be made smarter so that it produces less seeky I/O patterns.

Another possibility is simply to increase the size of the kernel stack. But, given that overflows are being seen with 8K stacks, an expansion to 16K would be required. The increase in memory use would not be welcome, and the increase in larger allocations required to provide those stacks would put more pressure on the lumpy reclaim code. Still, such an expansion may well be in the cards at some point.

According to Andrew Morton, though, the real problem is to be found elsewhere:

The poor IO patterns thing is a regression. Some time several years ago (around 2.6.16, perhaps), page reclaim started to do a LOT more dirty-page writeback than it used to. AFAIK nobody attempted to work out why, nor attempted to try to fix it.

In other words, the problem is not how direct reclaim is behaving; it is, instead, the fact that direct reclaim is happening as often as it is. If there were less need to invoke direct reclaim, the problems it causes would be less pressing.

So, if Andrew gets his way, the focus of this work will shift to figuring out why the memory management code's behavior changed and fixing it. To that end, Dave has posted a set of tracepoints which should give some visibility into how the writeback code is making its decisions. Those tracepoints have already revealed some bugs, which have been duly fixed. The main issue remains unresolved, though. It has already been named as a discussion topic for the upcoming filesystems, storage, and memory management workshop (happening with LinuxCon in August), but many of the people involved are hoping that this particular issue will be long-solved by then.

Comments (2 posted)

Using the TRACE_EVENT() macro (Part 3)

April 21, 2010

This article was contributed by Steven Rostedt

Tracepoints within the kernel facilitate the analysis of how the kernel performs. The flow of critical information can be followed and examined in order to debug a latency problem, or to simply figure out better ways to tune the system. The core kernel tracepoints, like the scheduler and interrupt tracepoints, let the user see when and how events take place inside the kernel. Module developers can also take advantage of tracepoints; if their users or customers have problems, the developer can have them enable the tracepoints and analyze the situation. This article will explain how to add tracepoints in modules that are outside of the core kernel code.

In Part 1, the process of creating a tracepoint in the core kernel was explained. Part 2 explained how to consolidate tracepoints with the use of DECLARE_EVENT_CLASS() and DEFINE_EVENT(), and went over the field macros of TP_STRUCT__entry and the function helpers of TP_printk(). This article looks at how to add tracepoints outside of the core kernel, for use by modules or architecture-specific code; it also takes a brief look at some of the magic behind the TRACE_EVENT() macro, and offers a few more examples to get your feet wet with using tracepoints.

Defining a trace header outside of include/trace/events

For tracepoints in modules or in architecture-specific directories, having trace header files in the global include/trace/events directory may clutter it. The result would be to put files like mips_cpu.h or arm_cpu.h, which are not needed by the core kernel, into that directory; it would end up something like the old include/asm-*/ setup. Also, if tracepoints went into staging drivers, putting staging header files into the core kernel tree would be bad design.

Because trace header files are handled very differently than other header files, the best solution is to have the header files placed at the location where they are used. For example, the XFS tracepoints are located in the XFS subdirectory in fs/xfs/xfs_trace.h. But, some of the magic of define_trace.h is that it must be able to include the trace file that included it (the reason for TRACE_HEADER_MULTI_READ). As explained in Part 1, the trace header files start with the cpp conditional:

   #if !defined(_TRACE_SCHED_H) || defined(TRACE_HEADER_MULTI_READ)
   #define _TRACE_SCHED_H

Part 1 explained that one and only one of the C files that include a particular trace header will define CREATE_TRACE_POINTS before including the trace header. That activates the define_trace.h that the trace header file includes. The define_trace.h file will include the header again, but will first define TRACE_HEADER_MULTI_READ. As the cpp condition shows, this define will allow the contents of the trace header to be read again.

For define_trace.h to include the trace header file, it must be able to find it. This requires a change to the Makefile of the directory where the trace header is used, and the trace header itself must tell define_trace.h not to look for it in the default location (include/trace/events).

To tell define_trace.h where to find the trace header, the Makefile must define the path to the location of the trace file. One method is to extend CFLAGS to include the path:

    EXTRA_CFLAGS = -I$(src)

But that affects the CFLAGS for every file that the Makefile builds. To modify the CFLAGS only for the C file that defines CREATE_TRACE_POINTS, use the method found in net/mac80211/Makefile:

   CFLAGS_driver-trace.o = -I$(src)

The driver-trace.c file contains the CREATE_TRACE_POINTS define and the include of driver-trace.h that contains the TRACE_EVENT() macros for the mac80211 tracepoints.

To demonstrate how to add tracepoints to a module, I wrote a simple module, called sillymod, which just creates a thread that wakes up every second and performs a printk and records the number of times that it has done so. I will look at the relevant portions of the files, but the full file contents are also available: module, Makefile, the module with tracepoint, and the trace header file.

The first step is to create the desired tracepoints. The trace header file is created the same way as the core trace headers described in Part 1, with a few more additions. The header must start by defining the system where all tracepoints within the file will belong to:

   #undef TRACE_SYSTEM
   #define TRACE_SYSTEM silly

This module creates a trace system called silly. Then the special cpp condition is included:

   #if !defined(_SILLY_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
   #define _SILLY_TRACE_H

The linux/tracepoint.h file is included, and finally the TRACE_EVENT() macros; one in this example:

   #include <linux/tracepoint.h>

   TRACE_EVENT(me_silly,

	TP_PROTO(unsigned long time, unsigned long count),

	TP_ARGS(time, count),

	TP_STRUCT__entry(
		__field(	unsigned long,	time	)
		__field(	unsigned long,	count	)
	),

	TP_fast_assign(
		__entry->time = jiffies;
		__entry->count = count;
	),

	TP_printk("time=%lu count=%lu", __entry->time, __entry->count)
   );

   #endif /* _SILLY_TRACE_H */

The above is the same as what was described in Part 1 for core kernel tracepoints. After the #endif, things become a bit different; before including the define_trace.h file, the following is added:

   /* This part must be outside protection */
   #undef TRACE_INCLUDE_PATH
   #define TRACE_INCLUDE_PATH .
   #define TRACE_INCLUDE_FILE silly-trace
   #include <trace/define_trace.h>

Defining TRACE_INCLUDE_PATH as '.' tells define_trace.h not to look in the default location (include/trace/events) for the trace header, but to look in the include search path instead. By default, define_trace.h will include a file named after TRACE_SYSTEM; defining TRACE_INCLUDE_FILE tells it that the trace header is instead called silly-trace.h (the .h is automatically appended to the value of TRACE_INCLUDE_FILE).

To add the tracepoint to the module, the module now includes the trace header. Before including the trace header it must also define CREATE_TRACE_POINTS:

   #define CREATE_TRACE_POINTS
   #include "silly-trace.h"

The tracepoint can now be added to the code.

    printk("hello! %lu\n", count);
    trace_me_silly(jiffies, count);

Finally, the Makefile must adjust the CFLAGS so that the include search path contains the local directory where the silly-trace.h file resides:

   CFLAGS_sillymod-event.o = -I$(src)
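Putting the pieces together, a minimal out-of-tree Makefile for the example module might look like the sketch below. The obj-m line and the invocation of the kernel build system are standard kbuild boilerplate; the exact kernel source path is an assumption about the build machine:

```makefile
# kbuild part: build sillymod-event.ko from sillymod-event.c
obj-m += sillymod-event.o

# let define_trace.h find silly-trace.h in this directory
CFLAGS_sillymod-event.o = -I$(src)

# convenience target: build against the running kernel (path assumed)
all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
```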

One might believe that the following would also work without modifying the Makefile, if the module resided in the kernel tree:

   #define TRACE_INCLUDE_PATH ../../path/to/trace/header

But using a path other than '.' in TRACE_INCLUDE_PATH runs the risk that the path contains a macro. For example, if XFS defined TRACE_INCLUDE_PATH as ../../fs/xfs/linux-2.6, it would fail: the Linux build defines the macro linux as 1, which would turn the path into ../../fs/xfs/1-2.6.

Now the trace event is available.

   [mod] # insmod sillymod-event.ko
   [mod] # cd /sys/kernel/debug/tracing
   [tracing] # ls events
   block   ext4    header_event  i915  jbd2  module  sched  skb       timer
   enable  ftrace  header_page   irq   kmem  power   silly  syscalls  workqueue
   [tracing] # ls events/silly
   enable  filter  me_silly
   [tracing] # echo 1 > events/silly/me_silly/enable
   [tracing] # cat trace
   # tracer: nop
   #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
   #              | |       |          |         |
       silly-thread-5377  [000]  1802.845581: me_silly: time=4304842209 count=10
       silly-thread-5377  [000]  1803.845008: me_silly: time=4304843209 count=11
       silly-thread-5377  [000]  1804.844451: me_silly: time=4304844209 count=12
       silly-thread-5377  [000]  1805.843886: me_silly: time=4304845209 count=13

Once define_trace.h can safely locate the trace header, the module's tracepoints can be created. To understand why all this manipulation is needed, a look at how define_trace.h is implemented may clarify things a bit.

A look inside the magic of TRACE_EVENT()

For those who dare to jump into the mystical world of the C preprocessor, take a look at include/trace/ftrace.h. But be warned: what you find there may leave you a bit loony, or at least convinced that the people who wrote that code were a bit loony (in which case, you may be right). The include/trace/define_trace.h file does some basic setup for the TRACE_EVENT() macro, but for a tracer to take advantage of it, it must hook its own header file into define_trace.h to do the real work (as both Ftrace and perf do).

cpp tricks and treats

While I was working on my Master's, a professor showed me a trick with cpp that lets one map strings to enums using the same data:

   #define DOGS { C(JACK_RUSSELL), C(BULL_TERRIER), C(ITALIAN_GREYHOUND) }

   #undef C
   #define C(a) ENUM_##a
   enum dog_enums DOGS;

   #undef C
   #define C(a) #a
   char *dog_strings[] = DOGS;

   char *dog_to_string(enum dog_enums dog)
   {
           return dog_strings[dog];
   }

The trick is that the macro DOGS contains a sub-macro C() that we can redefine and change the behavior of DOGS. This concept is key to how the TRACE_EVENT() macro works. All the sub-macros within TRACE_EVENT() can be redefined and cause the TRACE_EVENT() to be used to create different code that uses the same information. Part 1 described the requirements needed to create a tracepoint. One set of data (in the TRACE_EVENT() definition) must be able to do several things. Using this cpp trick, it is able to accomplish just that.

The tracepoint code created by Mathieu Desnoyers required that DECLARE_TRACE(name, proto, args) be used in a header file, and that DEFINE_TRACE(name) appear in some C file. TRACE_EVENT() now does both jobs. In include/linux/tracepoint.h:

   #define TRACE_EVENT(name, proto, args, struct, assign, print)	\
        DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))

The PARAMS() macro lets proto and args contain commas without being mistaken for multiple parameters of DECLARE_TRACE(). Since linux/tracepoint.h must be included by all trace headers, this makes the TRACE_EVENT() macro fulfill the first part of tracepoint creation. When a C file defines CREATE_TRACE_POINTS before including a trace header, define_trace.h becomes active and performs:

   #undef TRACE_EVENT
   #define TRACE_EVENT(name, proto, args, tstruct, assign, print)	\
	DEFINE_TRACE(name)

That is not enough, however, because define_trace.h is included after the TRACE_EVENT() macros are used. For this redefinition to have an impact, the TRACE_EVENT() macros must be processed again. The define_trace.h file does some nasty C preprocessor obfuscation to be able to include the file that just included it:

   #define TRACE_HEADER_MULTI_READ
   #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
Defining TRACE_HEADER_MULTI_READ lets the trace header be read again (this is why it exists in the first place). TRACE_INCLUDE(TRACE_INCLUDE_FILE) uses more cpp macro tricks to include the file that included define_trace.h. As explained above, this macro will include TRACE_SYSTEM.h, or TRACE_INCLUDE_FILE.h if that macro is defined, and will look in include/trace/events/ unless TRACE_INCLUDE_PATH is defined. I'll spare the reader the ugliness of the macros that accomplish this; the more masochistic reader can look at include/trace/define_trace.h directly. When the file is included again, the TRACE_EVENT() macros are processed again, but with their new meaning.

The above explains how tracepoints are created, but creating the tracepoint itself does nothing to hook it into a tracer's infrastructure. For Ftrace, this is where the ftrace.h file included by define_trace.h comes into play. (Warning: the ftrace.h file is even more bizarre than define_trace.h.) The macros in ftrace.h create the files and directories found in tracing/events. ftrace.h uses the same tricks explained earlier, redefining the macros within the TRACE_EVENT() macro as well as TRACE_EVENT() itself. How ftrace.h works is beyond the scope of this article, but feel free to read it directly, if you have no allergies to backslashes.

Playing with trace events

If you change directories to the debugfs filesystem mount point (usually /sys/kernel/debug) and take a look inside tracing/events you will see all of the trace event systems defined in your kernel (i.e. the trace headers that defined TRACE_SYSTEM).

   [tracing] # ls events
   block  enable  ftrace  header_event  header_page  irq       kmem  module
   power  sched   skb     syscalls      timer        workqueue

As mentioned in Part 2, the enable files are used to enable a tracepoint. The enable file in the events directory can enable or disable all events in the system, the enable file in one of the system's directories can enable or disable all events within the system, and the enable file within the specific event directory can enable or disable that event.

Note: writing a '1' into any of the enable files will enable all events within that directory and below, while writing a '0' will disable them all.

One nice feature about events is that they also show up in the Ftrace tracers. If an event is enabled while a tracer is running, those events will show up in the trace. Enabling events can make the function tracer even more informative:

   [tracing] # echo 1 > events/sched/enable
   [tracing] # echo function > current_tracer
   [tracing] # head -15 trace
   # tracer: function
   #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
   #              | |       |          |         |
               Xorg-1608  [001]  1695.236400: task_of <-update_curr
               Xorg-1608  [001]  1695.236401: sched_stat_runtime: task: Xorg:1608 runtime: 402851 [ns], vruntime: 153144994503 [ns]
               Xorg-1608  [001]  1695.236402: account_group_exec_runtime <-update_curr
               Xorg-1608  [001]  1695.236402: list_add <-enqueue_entity
               Xorg-1608  [001]  1695.236403: place_entity <-enqueue_entity
               Xorg-1608  [001]  1695.236403: task_of <-enqueue_entity
               Xorg-1608  [001]  1695.236404: sched_stat_sleep: task: gnome-terminal:1864 sleep: 639071 [ns]
               Xorg-1608  [001]  1695.236405: __enqueue_entity <-enqueue_entity
               Xorg-1608  [001]  1695.236406: hrtick_start_fair <-enqueue_task_fair
               Xorg-1608  [001]  1695.236407: sched_wakeup: task gnome-terminal:1864 [120] success=1 [001]
               Xorg-1608  [001]  1695.236408: check_preempt_curr <-try_to_wake_up

Combining the events with tricks from the function graph tracer, we can find interrupt latencies, and which interrupts are responsible for long latencies.

   [tracing] # echo do_IRQ > set_ftrace_filter
   [tracing] # echo 1 > events/irq/irq_handler_entry/enable
   [tracing] # echo function_graph > current_tracer
   [tracing] # cat trace
   # tracer: function_graph
   # CPU  DURATION                  FUNCTION CALLS
   # |     |   |                     |   |   |   |
    0)   ==========> |
    0)               |  do_IRQ() {
    0)               |  /* irq_handler_entry: irq=30 handler=iwl3945 */
    0)   ==========> |
    0)               |    do_IRQ() {
    0)               |      /* irq_handler_entry: irq=30 handler=iwl3945 */
    0) + 22.965 us   |    }
    0)   <========== |
    0) ! 148.135 us  |  }
    0)   <========== |
    0)   ==========> |
    0)               |  do_IRQ() {
    0)               |  /* irq_handler_entry: irq=1 handler=i8042 */
    0) + 45.347 us   |  }
    0)   <========== |

Writing do_IRQ into set_ftrace_filter makes the function tracer trace only the do_IRQ() function. Then the irq_handler_entry tracepoint is activated and the function_graph tracer is selected. Since the function graph tracer shows the time a function executed, we can see how long the interrupts ran. The function graph tracer alone shows only that do_IRQ() ran, not which interrupt it handled; by enabling the irq_handler_entry event, we can also see which interrupt was running. The output above shows that my laptop's iwl3945 interrupt, which handles the wireless communication, caused a 148-microsecond latency!


Tracepoints are a very powerful tool, but to make them useful, they must be flexible and trivial to add. Adding TRACE_EVENT() macros is quite easy, and they are popping up all over the kernel; the 2.6.34-rc3 kernel has 341 TRACE_EVENT() macros defined as of this writing.

The code that implements trace events uses lots of cpp tricks to accomplish its task, but the complexity of the implementation is what makes tracepoints simple to use. The rule of thumb in creating the TRACE_EVENT() macro was: make the use of the macro as simple as possible, even if that makes its implementation extremely complex. Often, the easier something is to use, the more complexity lies behind it. A developer does not need to know how the TRACE_EVENT() macro works; she or he only needs to know that the work has been done for them. Adding TRACE_EVENT()s is easy, and any developer can now take advantage of them.

Comments (4 posted)

Patches and updates

Kernel trees


Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management


Benchmarks and bugs


Page editor: Jonathan Corbet


News and Editorials

DragonFly BSD 2.6: towards a free clustering operating system

April 21, 2010

This article was contributed by Koen Vervloesem

There aren't many BSDs compared to Linux distributions: most people know only FreeBSD, OpenBSD, and NetBSD. Of course there are more than just the "big three" BSD operating systems, but most of the others are only marginally used or (in the case of Mac OS X) proprietary. However, two operating systems are becoming more and more popular in the BSD world: one is PC-BSD, a KDE-based FreeBSD derivative that strives to be the Ubuntu of the BSDs, and the other is DragonFly BSD, a FreeBSD fork that aims to provide single-system-image clustering in the long term.

[DragonFly screen shot]

In 2003, FreeBSD developer Matthew Dillon created DragonFly BSD as a fork of FreeBSD 4.8. He did this because he didn't agree with the direction FreeBSD 5 was going in the domains of threading and symmetric multiprocessing. Since then, the DragonFly BSD kernel diverged significantly from its mother kernel, for example by adding a Light Weight Kernel Threads (LWKT) implementation and a virtual kernel similar to User Mode Linux. However, there is still a close collaboration between DragonFly BSD and FreeBSD, and FreeBSD device drivers are regularly imported into DragonFly BSD. The operating system has also ported some functionality from NetBSD and OpenBSD.

The ultimate goal of DragonFly BSD is to allow programs to run across multiple machines as if they are running on one system. The operating system is still far from that goal, but Dillon has done a great deal of rewriting in nearly every subsystem of the kernel to lay the foundations for future work. Much of the rationale behind the design goals is explained on the project's web site. It's an interesting read, because it shows how they want to tackle an ambitious vision with a realistic plan:

First and foremost among all of our goals is a desire to be able to implement them in small bite-sized chunks, while at the same time maintaining good stability for the system as a whole.


After some preliminary work in version 1.12, DragonFly BSD added a new clustering filesystem in version 2.0, which was finally considered production-ready in version 2.2: HAMMER. There are also ports of the filesystem to Linux and Mac OS X, both using FUSE. HAMMER is a modern filesystem with fine-grained snapshots, integrity checking, instant crash recovery, and networked mirroring. It's no coincidence that this sounds a lot like ZFS: Dillon investigated ZFS and for a while it looked like he would port it to DragonFly BSD, but in the end he wasn't satisfied with the design and wrote his own, more cluster-oriented filesystem. The main reason for this was simple: DragonFly BSD's goal is transparent clustering, which needs a multi-master replicated environment. In this type of environment, ZFS doesn't quite fit the bill, as Dillon explained on the DragonFly kernel mailing list:

The problem ZFS has is that it is TOO redundant. You just don't need that scale of redundancy if you intend to operate in a multi-master replicated environment because you not only have wholely independant (logical) copies of the filesystem, they can also all be live and online at the same time.

HAMMER's approach to redundancy is logical replication of the entire filesystem. That is, wholely independant copies operating on different machines in different locations.

HAMMER is the default filesystem now, but it's not recommended for storage media smaller than 50 G; DragonFly BSD uses UFS for small media. HAMMER supports filesystems up to 1 exabyte. Each HAMMER filesystem can span up to 256 disks, which can be added to an existing filesystem to let it grow. Users don't need to manually take snapshots: the system automatically writes historical data during each filesystem sync, which is every 30 to 60 seconds. Prior versions of files and directories are then accessible by appending @@ and a 64-bit hexadecimal transaction ID to the file name. In this way, users can even cd into a prior version of a directory. The system administrator can choose a history retention policy to prevent the filesystem from filling up too quickly, and explicit snapshots can also be made. Although HAMMER is considered production-ready by its developers, it's still a relatively young filesystem with the occasional serious bug. For example, soon after the 2.6 release a serious HAMMER corruption issue came up.

New features

New releases of the operating system occur approximately twice a year. The latest release is DragonFly BSD 2.6.1. Three of the most interesting new features are swapcache, tmpfs, and POSIX message queues. The first is a mechanism that allows the operating system to use a fast SSD to cache data and/or metadata for hard drive filesystems, which should improve disk performance dramatically. Swapcache works on all filesystems (e.g. HAMMER, UFS, or NFS) and is a simple turn-on-and-forget type of feature. The man page of swapcache has an extensive description of the new functionality and adds an analysis with some real-life examples.

The memory filesystem tmpfs is a port from NetBSD. After loading the tmpfs driver as a kernel module at boot time, the user can create tmpfs filesystems. The data is backed by swap space when there's not enough free memory, but the metadata is stored in kernel memory. The tmpfs man page recommends that a modern DragonFly BSD platform reserve a large amount of swap space to accommodate tmpfs and other subsystems. The DragonFly BSD developers also ported the POSIX message queues API from NetBSD 5, which allows processes to exchange data in the form of messages.

Live CD

DragonFly BSD is distributed as a live CD ISO or USB image that lets users check their system for hardware compatibility before installation. For now, there are only versions for i386 and x86_64 architectures. The web site also talks about a DVD ISO that is able to show a full live X environment, but in the 2.6 release this has been replaced by a GUI USB image that boots into the desktop. At the time of this writing, the GUI USB image was not available yet due to some problems.

DragonFly BSD uses BSD Installer, a console based system installation and configuration tool that is more user-friendly than FreeBSD's sysinstall. BSD Installer started its life as DragonFly BSD's installer, but it has since been ported to FreeSBIE and pfSense. The installation is straightforward, although for non-US users it's slightly annoying that the choice of keyboard layout is presented only after choosing the root password and adding a user. After installation and configuration, the system reboots into a minimal console-based BSD system.

Instead of the FreeBSD ports system, DragonFly BSD uses NetBSD's pkgsrc as its official package management system. This frees the DragonFly BSD developers from having to maintain a large collection of third-party software, and pkgsrc is designed with portability in mind. Users can install over 9000 binary packages on DragonFly BSD (with the pkg_radd command) or build them from source. For users who want to run software that exists for Linux but not for BSD, DragonFly BSD has a Linux emulation layer; the Linuxulator in the 2.6 release even runs Java and Flash, at least on the i386 architecture.


The project's web site has a lot of documentation, both for users and developers, including some specific howtos. There's also an extensive but slightly out-of-date handbook based on the FreeBSD handbook, and a less extensive but more up-to-date new handbook. The handbook guides the user through the installation, but it also has chapters about "UNIX Basics" (which is a mix of DragonFly BSD basics and general UNIX basics), the pkgsrc packaging system, configuring X, and more advanced topics like jails, security, the kernel, and virtual kernels.

Readers that want to keep an eye on the project but don't have the time to read the mailing lists can read The DragonFly BSD Digest by Justin Sherrill. Some Gource visualizations of the DragonFly BSD development nicely show what the small but active group of developers is doing. We looked at Gource and other visualization programs earlier this month.


For a relatively small (45 committers) and lesser-known BSD operating system, DragonFly BSD is surprisingly good and development of new features happens remarkably fast. Maybe this small scale is the reason why the project is so innovative. The fact that the BSD Certification Group has added knowledge about the operating system to the requirements of the BSD Associate certification is another sign that DragonFly BSD is here to stay. But ultimately, the road to its goal is still paved with a lot of work.

Comments (2 posted)

New Releases

RHEL 6 beta version available

Red Hat has announced the availability of a beta version of Red Hat Enterprise Linux 6. "Red Hat engineers have played key roles in the upstream development of a wide range of kernel performance enhancements that we plan to feature in Red Hat Enterprise Linux 6. This includes a complete rewrite of the process scheduler so that it more fairly shares compute cycles among processes and provides more determinism by enabling higher-priority processes to run with minimal interference from lower-priority processes." (Thanks to Greg Bailey).

Comments (67 posted)

PCLinuxOS 2010 Edition arrives in six variants (The H)

The H covers the release of PCLinuxOS 2010. "The PCLinuxOS developers have announced the availability of the final version of the 2010 edition of their Linux distribution. The distribution's goal is to be "radically simple" and easy to use. The latest stable release includes a number of updates, new features and a choice of six desktop environments."

Comments (none posted)

Distribution News

Debian GNU/Linux

Debian Project Leader Election 2010 Results

Debian Project Secretary Kurt Roeckx has announced the results of this year's Debian Project Leader election. Stefano Zacchiroli is the winner; his term begins April 17, 2010.

Full Story (comments: 15)

bits from the (newbie) DPL

Stefano Zacchiroli presents his first bits as the newly elected Debian Project Leader. "48 hours into this new DPL term, here are my first ever "bits from the DPL" with kudos, directions, and some practical information."

Full Story (comments: none)


Request for Comments: Fedora Project Contributor Agreement Draft

Tom "spot" Callaway has announced that a draft version of the Fedora Project Contributor Agreement (FPCA) is available for comments. "Please, take a moment and read the FPCA and the FAQ. It is not a long, or overly complicated document, as legal documents go, but it is important that all Fedora Contributors read it over and make sure they understand it and like it (or can at least agree to it)."

Full Story (comments: none)

F14 Naming: Constantine -> Goddard -> ???

Fedora 13 will be released soon, so it's time to start finding a name for Fedora 14. Click below for instructions and guidelines. Suggestions will be collected until Tuesday, April 27, 2010.

Full Story (comments: none)

Fedora Board Recap 2010-04-15

Click below for a recap of the April 15, 2010 meeting of the Fedora Advisory Board. Topics include SWG close out and Beta Release.

Full Story (comments: none)

Ubuntu family

Shuttleworth: Ubuntu's Indicator Menus

For those who are interested in where Ubuntu plans to go with its user interface innovations: Mark Shuttleworth has posted a writeup on indicator menus, which will provide persistent state information to users. "We will place all indicators at the top right of the screen. We'll place them in a particular order, too, with the 'most fundamental' indicator, which controls the overall session, in the top right. The order will not be random, but predictable between sessions and screen sizes. There will be no GUI support for users to reorder the indicators." The current plan is to do (and ship) the work, then try to get GNOME and KDE to accept it.

Comments (36 posted)

Jonathan Carter: What's been happening with Edubuntu?

Jonathan Carter looks at Edubuntu 10.04 and beyond. "Edubuntu 9.10 was our first release that returned from being an add-on CD to a full installation disc. It had a big problem though, it was almost double the size what it needed to be. The alternate installation that shipped with the disc required for LTSP installation meant that every program and its files were shipped twice on the image, resulting in a very bloated disc. It was unavoidable at the time though but for Lucid we have managed to integrate everything that's required for a full Edubuntu setup into the desktop LiveCD, so no more alternate installation is required."

Comments (none posted)

New Distributions

Peppermint: A New Linux OS for the Cloud (ReadWriteWeb)

ReadWriteWeb introduces Peppermint. "Peppermint, a new Linux-based operating system with a focus on cloud computing and Web applications, is launching into a private beta this week to a limited number of participants, and will open up later next month to even more. The OS is a fork of Lubuntu and uses some of Linux Mint's configuration files, hence the name "Peppermint." Unlike desktop-focused Linux distributions, running applications on Peppermint won't require "installing countless numbers of software packages and reading wikis all Saturday afternoon," reads the product homepage. Instead, users will run Web apps in their own windows via Mozilla's Prism technology."

Comments (none posted)

Distribution Newsletters

DistroWatch Weekly, Issue 350

The DistroWatch Weekly for April 19, 2010 is out. "Operating systems come in many different shapes and sizes - some are enormous and created by huge multinational software companies, others are tiny and represent the result of a few curious individuals. HelenOS belongs to the latter camp. Although it was started as a research project and is presently still just a tool to learn about system internals, the project has ambition to become a usable, general-purpose operating system. Read on for our interview with founder Jakub Jermar and a first-look review of the latest version. In the news section, Fedora continues its march towards its next stable release, Debian developers elect a new project leader and MOPSLinux faces uncertain future following a sponsor's departure. Also not to be missed: a link to an excellent article on migrating to GRUB 2, and our questions and answers feature which explains the intricacies of the "nice" and "renice" commands. Happy reading!"

Comments (none posted)

Fedora Weekly News 221

The Fedora Weekly News for April 14, 2010 is out. "This week's issue starts off with announcements, including notice of an upcoming wiki freeze on April 19 for release notes, more detail on Fedora Summer of Code, and the release of Fedora 13 beta. News from the Fedora Planet is next, including tips on using NetworkManager for servers, an event report from FLOSS HCI Workshop at CHI 2010 Atlanta, and a discussion why DocBarX in Fedora would be a Good Thing. In news from the Marketing team, reports on recent work in helping Fedora Ambassadors, work on a new Fedora flyer, an update on activities with students at Allegheny College, and pointers to weekly meeting notes. Our irregular 'In the News' beat features several stories about Fedora 13 in the press since yesterday's release. Ambassadors this week features an event report from the Texas LinuxFest. In QA news, details from last week's test day on virtualization, and three upcoming test days around graphics drivers, as well as a wrap up report of Fedora 13 Beta validation testing, and several other items. Translation includes reports of activity around Fedora 13 release notes, upcoming tasks, new modules and new team members. In Design team news, an update on Fedora 13 wallpaper and a discussion around user experience (UX) groups and the design team. This issue wraps up with an overview of the security advisories issued this past week for Fedora 11, 12 and 13. Enjoy FWN 221!"

Full Story (comments: none)

openSUSE Weekly News/119

The openSUSE Weekly News for April 17, 2010 is out. "In this issue you can find a new exclusive Kernel Review with openSUSE Flavor, and the new Milestone 5 of openSUSE 11.3 is out. Feel free to test it. Now we wish you many joy by reading the new issue..."

Comments (none posted)

Ubuntu Weekly Newsletter #189

The Ubuntu Weekly Newsletter for April 17, 2010 is out. "In this issue we cover: Archive frozen for preparation of Ubuntu 10.04 LTS, Ubuntu Open Week, New Loco Council Members Announced, New operators appointed on #ubuntu, #ubuntu-offtopic and #kubuntu, Reminder: Regional Membership Boards - Restaffing, 1st Annual Ubuntu Women World Play Day Competition Announced, New Ubuntu Member, Lucid Parties, Hungarian Loco Team shares Release Party Badges, Lucid Release parties in Norway, Ubuntu-ni presentation at American College, Ubuntu Honduras Visited UNAH-VS, Minor Team Reporting Change, Feature Friday: project announcements, Links round-up 16th April, Facebook app for Lucid countdown banners, Free Software and Linux Days 2010 in Istanbul, Quickly 4.0 available in Lucid!, Out of beta: 40 Ubuntu-based TurnKey virtual appliances, Full Circle Podcast #4: It's Everyone Else's Fault, Ubuntu-UK podcast: Hear Em Rave and much, much more!"

Full Story (comments: none)

Distribution meetings

Mini DebConf in Berlin, Germany, June 10th/11th

There will be a mini DebConf during LinuxTag. "Every year the LinuxTag (Linux day) is a four-day event in Berlin, Germany, likely comparable to other OpenSource events. It provides talks, discussion and a lot of booths of various Free Software projects. This year's LinuxTag will be on June 9th till 12th. Since it's held on a huge fairground, we were able to get quite some piece of it for our own use. We will have two areas, one for talks, the other as a hack lab and use right the middle of the LinuxTag for the Mini DebConf: June 10th and 11th -- with the hack lab open all through the night of course."

Full Story (comments: none)

Newsletters and articles of interest

The lost world of the Xandros desktop (ITPro)

There is a lengthy article on the history and status of Xandros on ITPro. "Xandros tried to imprint a traditional software sales model on GNU/Linux, which hasn't really worked, despite the cost and efficiency advantages of GNU/Linux for commercial users, so the company has looked instead towards the mobile and netbook market, where it has found some success."

Comments (1 posted)

Page editor: Rebecca Sobol


On bootstrapping a community-run FOSS event

April 21, 2010

This article was contributed by Nathan Willis

On Saturday, April 10th, I was in Austin, Texas for the inaugural Texas Linux Fest (TXLF), a community-run FLOSS conference. The idea of staging the show arose last August during OSCON and picked up steam in the fall; in the end a little under 400 people turned out — including speakers and volunteers — which most considered a success for a first-year event.

[TXLF attendees]

The fact that it worked demonstrates that the developer and end-user open source communities are eager to get together. But success was not automatic; along the way the TXLF planning team met challenges that anyone thinking about launching their own regional show could learn from, as well as opportunities where the open source community could build tools useful for a wide range of all-volunteer projects.

Brief background

The genesis of TXLF was a series of independent conversations along the lines of "there should be a regional community Linux show in Austin," held mostly by Matt Ray of Zenoss and myself with various other people. Eventually both Ray and I had that conversation with Ilan Rabinovitch of SCALE, who told us to start talking to each other. Gathering all of the interested parties in one place was the first challenge. There is little you can do other than put the word out in every conceivable medium and see what happens — the group contacted individual free software hackers, business contacts, and every regional LUG and developers' group with an active presence on the Internet.

As the collection of interested parties expanded, mapping out the tasks involved in putting such an event together became the next hurdle. Many on the team had personal experience with at least some aspect of behind-the-scenes conference work — as an exhibitor, a speaker, or a volunteer — but as no one had the freedom to work full-time on the task list, organizing it became an ad-hoc affair. I eventually took on the role of keeping the geographically dispersed team coordinated on the to-do list, and worked on raising sponsorships, marketing, working with exhibitors, and helping to develop the program.

Inertia and chicken-and-egg problems

At the practical level, the biggest obstacle a community-run event faces is inertia, and there are at least three types. First, all of the volunteers are likely to be enthusiastic about the idea of the event, but individuals generally cannot devote their time to working on it until they personally cross the "I'm sure it's going to happen" threshold. For some, particularly those with behind-the-scenes conference experience, that threshold is lower than for others. The challenge is for the team to keep moving forward in the early stages and thus grow the pool of volunteers who can make time to pitch in. For each individual, though, the inertia is not a lack of interest in the project; it is simply human nature to put tasks on the back burner until the project becomes real enough to move up in priority.

The second type of inertia is a simple lack of group structure. In the case of TXLF, very few of the volunteers knew each other well and none had worked together before; as a result it was at times difficult to reach consensus on nuts-and-bolts decisions such as "should there be a free ticket option, or should all tickets be priced?" or "whom should we invite as a keynote speaker?" In most cases there is no right or wrong answer, and often no strong opinions, so in a democratic group the best option is often to simply vote and live with the results.

The third type of inertia, though, is the biggest challenge: the seemingly intractable problem of having three or four mission-critical decisions, each of which must be made first. In the case of TXLF, those were the date, the venue, and the sponsors, a chicken-and-egg-and-rooster problem, if you will. Selecting the date first risks eliminating key sponsors or venues; selecting the venue first limits the choice of dates and means gambling on the ability to raise enough sponsorship to rent the venue; gathering sponsors first is impossible because, without a date and a venue, they do not know their availability and may doubt whether the event will genuinely come to pass. Other events or volunteer projects may face a different combination; in any case there is no simple answer. TXLF selected the date first, attempting to minimize interference with other FOSS and local events — even so, the date selected ended up conflicting with COSSFEST.

Knowledge capture

Once the planning process is underway, however, the challenges become practical ones. Arguably the biggest impediment to planning a new event is that there is no definitive guide to the process. There is a generally-accepted list of the "large scale" components — asking sponsors for support, putting out a call-for-participation, opening registration, arranging for audio/video at the venue, setting up the network, publicizing the event, and so on — but nothing in the way of a step-by-step guide that breaks the tasks down for easier group consumption.

[TXLF organizers]

Fortunately, most of the existing regional open source conferences do much of their own planning in public (from mailing lists to wikis), which makes for good reference material at the early stages. Even better, all of the organizers are dyed-in-the-wool enthusiasts who will offer their assistance to answer questions, refer questions to others, and in many cases actually pitch in. TXLF received a tremendous amount of help from the SCALE organizers (some of whom even volunteered in-person), as well as from the teams behind LinuxFest Northwest, the Florida Linux Show, Southeast LinuxFest, and Ohio LinuxFest.

Speaking to someone who has organized an event before gives a new team more specific insight into the process of working with contractors and volunteers: knowing that certain companies will not agree to sponsor an event until there is a public call-for-participation, learning how to negotiate counting concession sales against the venue rental price, or finding out what sort of wording needs to go into the exhibitor agreement for an expo booth.

The TXLF group came by most of this knowledge through person-to-person conversation and mailing list or IRC discussion. A considerably better approach for the future is a newly launched site, which SCALE's Gareth Greenaway discussed in a session at TXLF. Sponsored by the Peer-Directed Projects Center (best known for Freenode), it hopes to serve as a focal point for community-run free and open source software events, whether conferences, hackfests, or informal meet-ups.

Plans for the site include several services, such as a central location where speakers willing to present at FOSS events can register their availability and contact information, but one of its high-priority tasks is building a wiki-style guide to the event-planning process. The site already provides a searchable calendar of such events, which is itself a valuable resource. A number of people in the audience for Greenaway's presentation raised their hands when asked if they were planning an event of their own, so the service is certainly needed.

At present, the TXLF organizers and on-site volunteers are collecting and processing their own observations and feedback on the event, both to build institutional knowledge and to share it with other groups. The challenge, as always, is finding the time, but it also stems from the organizational software tools themselves.

Tools, and the lack thereof


One of the biggest surprises of coordinating the first-year event was seeing firsthand where free and non-free software fit the bill. In some areas, there was no surprise — the networking team built the wired and wireless network at the venue with open source, as one would expect. All of the design and pre-press work for the fliers, ads, shirts, and program guide was done with Inkscape, Gimp, and especially Scribus, which evidently surprises some who are not as familiar with those applications. There are even a few open source conference-management packages to handle tasks like registration, call-for-participation, and scheduling. ConMan from the Utah Open Source Conference is one such package that is still being developed; TXLF used SCALE's SCALEreg specifically for its registration, on-site check-in, and badge-printing capabilities.

On the other hand, at multiple points the group found itself using closed-source solutions — particularly for collaboration — solely because there was (or at the very least, appeared to be) no viable open source alternative. This started at the very beginning, when the initial organizers needed a mailing list. Setting up a Google Groups list is free and fast; sadly the same cannot be said of any open source list service. If you have a server and can set up an instance of Mailman, you can create as many lists as you want — however, that is of no help before you have a domain name and a server. GNOME, KDE, GNU, and the various Linux distributions all host free mailing lists for their constituent projects, but at none of them can an interested party simply walk up and start their own list. Commercial services like Google and Yahoo offer this to any user; free software services like Mozilla Messaging, or perhaps Mailman itself with its highly-desirable domain, are way behind.

Similarly, when it comes to collaborative work on documents, there is not yet an open source offering to compete with Google Docs. There are several collaborative text editors, but no spreadsheets — a vital need for budget tracking and program development. The TXLF team set up a MediaWiki installation (as is a common first step in any site launch), but wikis make for terrible collaboration tools. Wiki markup is, at its best, a weaker and ill-defined substitute for basic HTML, but more importantly, wikis are too often pressed into service as an amalgam of public content management system (CMS), team task-tracker, and personal notebook. Yet they lack the access control, hierarchy, and editorial features of a proper CMS, and the real project-planning capabilities of a task management application.


There are several open source project management suites that a team could use to keep track of deadlines, deliverables, important documents (such as contracts), and contacts. Here again, though, most are behind the curve when compared to free software-as-a-service offerings, and in most cases the projects do not offer a free hosted solution at all. TXLF, like most volunteer projects, had to run its site on what it was given: a donated virtual host with only part-time support provided by a volunteer, and no choice over the application server or frameworks made available. It is easy to say "install it on your server," but without money and a systems administrator, that is rarely an option. Eventually, the TXLF group moved to tracking some multi-person tasks on Zoho, which offered the best compromise of features and limitations.

Perhaps these examples highlight something that the open source community rarely talks about: building tools for non-development tasks. If you intend to start a software project, you can sign up for free project hosting at a wide variety of services (from completely free software to those that are free of charge, but with closed code); you can get mailing lists, web forums, issue tracking, and release management. On the other hand, you do not get a customer (or "constituent") relationship management (CRM) tool, shared iCalendar service, or collaborative document editing. Moreover, if you want to start a project that does not involve code — say, design, documentation, or translation — you may not be eligible for an account at all.

Before working with TXLF, there were categories of software I had only a tenuous awareness of; since the conference, my appreciation for them has grown. At the start, I managed all of my personal to-do items for the event the way I do for writing assignments and other personal projects: with VTODO feeds organized within Thunderbird/Lightning. But that, of course, does not scale to multiple people, nor does it expose task dependencies or provide other tools for keeping larger projects on deadline. While it is easy to keep in touch with a group on IRC, large-scale projects eventually need collaborative document sharing and editing. Finally, while it seems simple enough for an individual to keep track of one year's sponsorship discussions via IMAP folders, that approach lacks the flexibility of a CRM system, which multiple users can contribute to and draw on to assist the project. The TXLF team did not find a document management or CRM system to use during the 2010 planning cycle; although Zoho worked well enough for multi-user task tracking, it offered neither of the other features. Finding a free solution encompassing both is on the to-do list for next year.
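For reference, the VTODO items mentioned above are part of the standard iCalendar format (RFC 5545), which Thunderbird/Lightning consumes. A minimal task entry looks roughly like this (all field values here are invented for illustration):

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//txlf todo//EN
BEGIN:VTODO
UID:sponsor-followup-001@example.org
DTSTART:20100301T090000Z
DUE:20100315T170000Z
SUMMARY:Follow up with prospective sponsor
STATUS:NEEDS-ACTION
END:VTODO
END:VCALENDAR
```

A feed of such entries is easy for one person to publish and subscribe to, but, as noted above, the format itself carries no notion of multi-user assignment or task dependencies.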

Closing thoughts

Anyone considering starting an open source or Linux conference in their local area should take the plunge and do so; so long as you are comfortable scaling the size of the event to the number of volunteers and potential attendees in the area, it is within reach. Quite a few people I have met while wearing the "Press" badge at other Linux conferences have shared the opinion that these weekend-based, community-driven events are the wave of the future.

Unlike large-scale, corporate-run conferences, they tend to be very low-cost and draw on the ever-growing numbers of home Linux users and home-based telecommuters. The odds are that if there is not already a regional Linux show within comfortable driving distance, there are other people in your area who wish there were one. Hopefully the project will make it easier to start from scratch, assess the viability of such an effort, and make cost- and time-appropriate plans; if nothing else, you know it has been done many times before and the community is willing to share its knowledge.

Progress on the tools front is probably a longer-term goal. I am aware that several of the larger-scale non-profit software projects are themselves grappling with CRM and document-management application selection, so the community recognizes the need. Building truly free web services is a topic getting increasing coverage in the press, blog world, and conference circuit, but it has not yet reached critical mass. Nevertheless, if we do not hang a shingle outside and offer to tell our neighbors about open source software, who will?

Comments (16 posted)

Brief items

Quote of the week

I've recently begun an effort to enable myself to work on the Mercurial project full-time. This should help speed up development in many areas of Mercurial, especially in the areas that require deep expertise to make progress.... If your company is a serious user of Mercurial, please consider accelerating Mercurial's development by sponsoring me.
-- Matt Mackall

Comments (none posted)

ALSA 1.0.23 released

The Advanced Linux Sound Architecture project has released ALSA packages version 1.0.23. See the changelog for details.

Comments (4 posted)

"Binary analysis" license compliance tool released

Armijn Hemel and Shane Coughlan, who have written about license compliance on LWN, have released a version of the tool they use to look for GPL-licensed software in device firmware. The Binary Analysis Tool digs through binary blobs and attempts to identify the software contained therein. "One advanced feature of the tool is users can build a customized knowledgebase. This can contain information about products and/or code like upstream suppliers, chip-sets, offsets, file systems and application strings. The tool can read the knowledgebase, open compiled code, and check if the specified data is included."
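As a rough illustration of the technique — not the Binary Analysis Tool's actual implementation — one can extract printable strings from a binary blob, much like the Unix strings utility does, and match them against a small knowledgebase of marker strings. The knowledgebase entries below are hypothetical examples:

```python
import re

# A toy "knowledgebase": strings that suggest a particular upstream
# project is present in a firmware image. (Illustrative only; the real
# Binary Analysis Tool uses a far richer, user-extensible database.)
KNOWLEDGEBASE = {
    "BusyBox": [b"BusyBox v"],
    "Linux kernel": [b"Linux version "],
    "zlib": [b"deflate 1.", b"inflate 1."],
}

def printable_strings(blob, min_len=4):
    """Extract runs of printable ASCII from a binary blob, like 'strings'."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, blob)

def identify(blob):
    """Return the knowledgebase entries whose markers appear in the blob."""
    found = set()
    for name, markers in KNOWLEDGEBASE.items():
        if any(marker in blob for marker in markers):
            found.add(name)
    return found

# A fabricated firmware fragment for demonstration purposes.
firmware = b"\x7fELF...\x00BusyBox v1.15.3 (2010-04-01)\x00Linux version 2.6.32\x00"
print(sorted(identify(firmware)))  # ['BusyBox', 'Linux kernel']
```

Real firmware analysis also has to unpack file systems and compressed sections before string matching becomes useful, which is where most of the tool's complexity lies.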

Comments (1 posted)

GCC 4.5.0 released

GCC 4.5.0 has been released. The changes page has the details. Support has been added or enhanced for a number of architectures, including ARM, IA-32/x86-64, M68K/ColdFire, MeP, MIPS, RS/6000 (POWER/PowerPC), and RX. There are plenty of language-specific improvements and general optimizer improvements as well.

Comments (20 posted)

Giggle 0.5 released

Giggle is a GTK-based graphical front end to the git source code management system. The 0.5 release - said to be stable - is now available. A quick test run by your editor suggests that there might be some interesting features there, but that it can be slow on kernel-size repositories. Giggle may not be ready to replace gitk for most users, but it's worth keeping an eye on.

Full Story (comments: 1)

OpenSSH 5.5 released

OpenSSH 5.5 is out. There's not much in the way of exciting new features in this bugfix release, but, with a tool like OpenSSH, it's generally a good idea to stay with the current version.

Full Story (comments: none)

Subversion 1.6.11 released

Version 1.6.11 of the subversion source code management system is out. This release includes a number of useful fixes and improvements; see the release notes for details.

Full Story (comments: 2)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Goerzen: Moral obligations of Free Software authors?

John Goerzen, creator of OfflineIMAP and hpodder, worries about what to do when he's no longer interested in working on a package. "None of these are features that I care about, and I don't have much time to devote to OfflineIMAP these days. It is not an interesting problem to me anymore as, well, I've solved it already. Yet I'll be honest and say I feel guilty about the bug reports that are stacking up in the OfflineIMAP bug tracking system. OfflineIMAP is used by people that have an expectation for improvement. My efforts to hand over maintainership of OfflineIMAP have failed (the people have gone AWOL shortly after agreeing to maintain it)."

Comments (23 posted)

Page editor: Jonathan Corbet


Non-Commercial announcements

Fedora Summer Coding 2010 - now with more time

The Fedora Project has extended the deadlines for Summer Coding 2010. "We have pushed back the first part of the Summer Coding 2010 schedule. There wasn't enough time to find sponsors. Now there is more time for mentors and students to generate ideas and write up good proposals, while people like you and me look for more sponsors." The deadlines have been extended by one month.

Full Story (comments: none)

FSF recommends CiviCRM

The Free Software Foundation (FSF) announced that CiviCRM has earned its recommendation as a fully featured donor and contact management system for nonprofits. "The FSF had highlighted the need for a free software solution in this area as part of its High Priority Projects campaign. With this announcement, the FSF will also be adopting CiviCRM for its own use, and actively encouraging other nonprofit organizations to do the same."

Full Story (comments: none)

Commercial announcements

Synaptics Gesture Suite Linux for TouchPads

Synaptics has announced Synaptics Gesture Suite Linux (SGS-L), which offers multi-touch support. "SGS-L was developed from analyzing the most common workflows, from entertainment activities such as viewing photos and listening to music, to productivity activities such as accessing emails and presentations. The result is an enhanced usability model that makes it intuitive for consumers to easily understand and discover features, resulting in a better user experience."

Comments (1 posted)

Articles of interest

The SEC to require filings in Python?

As seen in this ITWorld article, the US Securities and Exchange Commission (SEC) is currently contemplating new rules for disclosure around the offering of asset-backed securities. The proposal, a monster PDF file, reads: "We are proposing to require the filing of a computer program (the 'waterfall computer program,' as defined in the proposed rule) of the contractual cash flow provisions of the securities in the form of downloadable source code in Python, a commonly used computer programming language that is open source and interpretive." There are all kinds of interesting implications, including effects on the language, security, and more.
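A "waterfall" program describes how incoming cash is distributed to security holders in strict priority order. The following is a deliberately simplified sketch of the idea, not the SEC's proposed format; real deal documents add triggers, reserve accounts, and pro-rata steps, and the tranche names and amounts here are invented:

```python
def run_waterfall(available_cash, tranches):
    """Distribute available cash to tranches in strict priority order.

    tranches: list of (name, amount_due) pairs, highest priority first.
    Returns the payments made per tranche and any leftover (residual) cash.
    """
    payments = {}
    for name, due in tranches:
        paid = min(available_cash, due)  # pay as much as cash allows
        payments[name] = paid
        available_cash -= paid
    return payments, available_cash

# Hypothetical deal: $100 of collections against these obligations.
payments, residual = run_waterfall(
    100.0,
    [("senior fees", 5.0),
     ("class A interest", 40.0),
     ("class B interest", 30.0),
     ("class A principal", 50.0)],
)
print(payments, residual)
```

In this example the class A principal tranche is only partially paid (25.0 of the 50.0 due) because the higher-priority tranches exhaust the cash first, which is exactly the behavior such a program is meant to make explicit to investors.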

Comments (46 posted)

Linux Graybeards? Yes, But Also A Wisdom Circle (InformationWeek)

InformationWeek reports from the kernel panel discussion at the Linux Foundation Collaboration Summit. "Andrew Morton, a key aide to Linux lead developer Linus Torvalds and often referred to as the 'Colonel of the kernel,' put the issue equally bluntly: 'Yes, we're getting older, and we're getting more tired. I don't see people jumping with enthusiasm to work on things the way that I used to.' But he added that meant the developers in the kernel process had gained deep knowledge of the code they're working with and are willing to tackle greater complexity in making additions."

Comments (11 posted)

Aaron Seigo on the Future of KDE (Datamation)

Bruce Byfield talks with Aaron Seigo about the future of KDE. "According to Seigo, the large-scale changes that began two years ago with the release of KDE 4.0 are mostly complete now. "We've reached the stage with the 4.4 release that happened in January where we've got this nice feature set on the desktop and we have applications available for it and some nice refinements in the look and feel. That's where we are. But where are we going? That's always the difficult question. Once you've arrived at a place, what are you going to aim for?" Seigo's answer to his own question is that KDE is currently moving in three directions: adding functionality to the desktop in both small features and within specific applications, extending the concept of the social desktop, and the introduction of KDE on to every possible hardware platform. Each is a small story in itself."

Comments (74 posted)

Oracle presents "much faster" MySQL beta (The H)

The H looks at Oracle's plans for MySQL. "Oracle presented a beta of what it called a "much faster" MySQL at the O'Reilly MySQL Conference and insists it will be continuing to invest in the open source database. Oracle's Chief Corporate Architect, Edward Screven, presented the beta version of MySQL 5.5 which will now use InnoDB as its default storage engine, saying that the switch offers a 200% performance improvement and over ten times faster recovery times. He assured the audience that despite the switch to Oracle's InnoDB, Oracle will be maintaining the pluggable storage engine architecture and that the company would continue to ship the same code base in the community and enterprise editions." (Thanks to Raji Ramsharma)

Comments (48 posted)

You Can't Control Linux (CIO Update)

Sean Michael Kerner covers a Collaboration Summit keynote by Dan Frye, vice president of open system development at IBM. "For IBM, one of the hardest lessons it had to learn was one about control. Mainly, there is none. "There is nothing that we can do to control individuals or communities, and if you try, you make things worse," Frye told the audience. "What you need is influence. It goes back to the most important lesson, which is to give back to the community and develop expertise. You'll find that if your developers are working with a community, that over time they'll develop influence and that influence will allow you to get things done.""

Comments (1 posted)

Python support in GNOME gets a boost from hackfest (ars technica)

Ryan Paul reports on the Python GNOME hackfest. "Some GNOME developers have gathered in Boston for a Python GNOME hackfest that is hosted by the One Laptop Per Child (OLPC) project. The primary goals behind the hackfest include establishing a strategy for delivering Python 3.0 compatibility for the GNOME platform and advancing the Python GObject introspection project."

Comments (none posted)

Where's the Summer of Documentation? (OStatic)

Joe "Zonker" Brockmeier looks at the missing elements from Summer of Code. "I bring up documentation, but really the problem that I see is that the Summer programs are simply too code- and developer-centric. Projects and companies in this space should also be thinking about involving translators, user interface designers, artists, and other disciplines in their projects. Not only because it would help these projects be more well-rounded and address areas outside of just developing code, but because it would also provide a wonderful opportunity for cross-pollination. Students who are pursuing other fields of study would provide an opportunity to inform and enthuse ambassadors for open source who move in different circles. It would do open source projects worlds of good to have articulate and interested participants who could carry open source ideals to their peers in other disciplines."

Comments (12 posted)

Contests and Awards

Announcing the 2010 We're Linux Video Winners

The winners of this year's We're Linux Video Contest have been announced. First place: Go Linux; second place: Create Something Unique; third place: Linux: Free Your Computer.

Comments (none posted)

Best Object Databases Lecture Notes Award 2010

ODBMS.ORG has announced that it will issue the "Best Object Databases Lecture Notes" Award 2010, "for the most complete and up to date lecture notes on Object Databases, that have been, or have strong potential to be, instrumental to the teaching of theory and practice in the field of object database systems." Submissions will be accepted until June 4, 2010.

Full Story (comments: none)

Calls for Presentations

Libre Graphics Meeting 2010

Libre Graphics Meeting (LGM) takes place May 27-30, 2010 in Brussels, Belgium. The deadline for submissions is May 1, 2010.

Full Story (comments: none)

Call for Presentations for the Flash Memory Summit

The Flash Memory Summit takes place August 17-19, 2010 in Santa Clara, California. The call for proposals is open until May 7, 2010.

Full Story (comments: none)

KVM Forum 2010: Call for Papers

This year's KVM Forum takes place August 9-10, 2010 (colocated with LinuxCon) in Boston, Massachusetts. Abstracts are due by May 14, 2010.

Full Story (comments: none)

Upcoming Events

LAC 2010 program is online

The program for Linux Audio Conference 2010 has been posted. LAC 2010 takes place May 1-4, 2010 in Utrecht, the Netherlands.

Full Story (comments: none)

LAC 2010 live stream coverage

The Linux Audio Conference 2010 will have live streaming coverage of all the paper presentations and selected workshops. "for remote participants, there is an IRC channel called #lac2010 on, which serves as a backchannel for your questions and comments, hangout for conference chatter, and helpdesk for any streaming troubles you might encounter."

Full Story (comments: none)

Events: April 29, 2010 to June 28, 2010

The following event listing is taken from the Calendar.

April 25-29: Interop Las Vegas (Las Vegas, NV, USA)
April 28-29: Xen Summit North America at AMD (Sunnyvale, CA, USA)
April 29: Patents and Free and Open Source Software (Boulder, CO, USA)
May 1-2: OggCamp (Liverpool, England)
May 1-4: Linux Audio Conference (Utrecht, NL)
May 1-2: Devops Down Under (Sydney, Australia)
May 3-7: SambaXP 2010 (Göttingen, Germany)
May 3-6: Web 2.0 Expo San Francisco (San Francisco, CA, USA)
May 6: NLUUG spring conference: System Administration (Ede, The Netherlands)
May 7-9: Pycon Italy (Firenze, Italy)
May 7-8: Professional IT Community Conference (New Brunswick, NJ, USA)
May 10-14: Ubuntu Developer Summit (Brussels, Belgium)
May 17-21: Fourth African Conference on FOSS and the Digital Commons (Accra, Ghana)
May 18-21: PostgreSQL Conference for Users and Developers (Ottawa, Ontario, Canada)
May 24-25: Netbook Summit (San Francisco, CA, USA)
May 24-30: Plone Symposium East 2010 (State College, PA, USA)
May 24-26: DjangoCon Europe (Berlin, Germany)
May 27-30: Libre Graphics Meeting (Brussels, Belgium)
June 1-4: Open Source Bridge (Portland, Oregon, USA)
June 3-4: Athens IT Security Conference (Athens, Greece)
June 7-10: RailsConf 2010 (Baltimore, MD, USA)
June 7-9: German Perl Workshop 2010 (Schorndorf, Germany)
June 9-12: LinuxTag (Berlin, Germany)
June 9-11: PyCon Asia Pacific 2010 (Singapore, Singapore)
June 10-11: Mini-DebConf at LinuxTag 2010 (Berlin, Germany)
June 12-13: SouthEast Linux Fest (Spartanburg, SC, USA)
June 15-16: Middle East and Africa Open Source Software Technology Forum (Cairo, Egypt)
June 19: FOSSCon (Rochester, New York, USA)
June 21-25: Semantic Technology Conference 2010 (San Francisco, CA, USA)
June 22-25: Red Hat Summit (Boston, USA)
June 23-24: Open Source Data Center Conference 2010 (Nuremberg, Germany)
June 26-27: PyCon Australia (Sydney, Australia)

If your event does not appear here, please tell us about it.

Audio and Video programs

Film: "Patent absurdity"

For anybody who is not yet convinced about the fundamental nature of software patents: Patent Absurdity is a 30-minute film containing interviews with Karen Sandler, Richard Stallman, Eben Moglen, and others. It is available in Ogg Theora format, naturally.

Comments (64 posted)

Page editor: Rebecca Sobol

Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds