Weekly Edition for July 29, 2010

GUADEC: Luis Villa points GNOME at the web

By Jake Edge
July 28, 2010

Longtime GNOME developer and community member Luis Villa kicked off the GNOME users' and developers' European conference (GUADEC) with a challenge to the project: "embrace the web" in order to remain relevant. The web has won the battle to produce a "robust, libre platform" over various desktop efforts like GNOME, but there is still time for the project to find a seat at that table. It is a "big scary step" to take, Villa said, but one that he thinks is ultimately the right direction for the project.

While he is currently working for Mozilla, which might have colored his thinking some, Villa made clear (in true lawyerly fashion) that he was representing no one's views but his own. He was taking vacation time to attend the conference and wore a shirt from a company (Ximian) that "no one can be pissed at any more". He was there because "I love GNOME", he said.

Villa was speaking from "the other side", referring back to a talk he gave at GUADEC in 2006 when he was "vanishing into the bowels of law school" and told the project members that he would see them on the other side. That historical perspective was a major element of Villa's talk; one theme revolved around a picture of a party on a Paris boat at the first GUADEC in 2000. He considered what one would tell the folks in that picture about the progress that has been made in the ten years since.

Today there is a free and open platform that runs on all PCs and laptops, but which also runs on phones and televisions, a fact which would likely surprise the crowd from 2000. Most people using that platform also use Linux every day; the licensing of the platform is generally LGPL or more permissive. Even Microsoft has an implementation. There are some 400 million users. High school kids are learning to program for this platform partially by using a "View Source" button. Unfortunately Villa would have to tell those folks that this platform isn't GNOME, it is, instead, the web.

So the question is: what should GNOME do about that? Villa described "one possible answer", which is for GNOME to join forces with the web development community and bring its strengths, in terms of technical ability, culture, user focus, and hacker mentality, to that party. GNOME should figure out how to deliver the best combination of desktop and web applications to users.

Basically, the web won because it "co-opted our message", he said. He pointed to the famous Gandhi quote ("First they ignore you ...") but noted that things don't always work out that way. "Sometimes your ideas win without you", he said.

But, the web didn't win because it is perfect for either developers or users. There are problems with proprietary applications as well as control and privacy issues. It delivers sophisticated, powerful applications, though, which are run by someone else, freeing users from that burden. It's not a fad, and not going away, as it will only get better, he said. He also said that he had pointed the audience to an EtherPad site as a way to send questions, rather than to a Gobby instance, because he could be sure that all the attendees had web browsers while many would not have Gobby installed.

He noted that Apple and others brag about a thousand new "apps" this week, but said that there are a thousand new web applications every hour. Developers have already embraced the web in a big way; GNOME needs to get on board. It is extremely easy to develop a web application by putting some HTML and JavaScript up on a site somewhere; GNOME needs to be thinking about making development that easy for the GNOME platform. His suggestion was to start with "baby steps" by reimplementing the web's ideas for the desktop.
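Villa's point about the web's low barrier to entry can be illustrated with a sketch (hypothetical, not anything Villa showed): a complete web "application" can be a single page of HTML with inline JavaScript, served by a few lines of stock Python. The page content and handler class here are invented for illustration.

```python
# Hypothetical sketch: the entire "application" is one HTML page with
# inline JavaScript; a stock Python server is enough to put it on the web.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<!doctype html>
<html><body>
<p id="out"></p>
<script>document.getElementById('out').textContent = 'Hello, web';</script>
</body></html>"""

class App(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request gets the same page; the logic runs client-side,
        # in plain view of anyone who hits "View Source".
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

# To run: HTTPServer(("", 8000), App).serve_forever()
```

That is the whole deployment story; there is no toolkit to install and no packaging step, which is the ease of development Villa wants GNOME to match.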

The web should be treated as a first-class object and various desktop applications should integrate with web services, he said. He pointed to the GNOME background image chooser which now allows picking images from Flickr or other web photo sites as an example. Though he noted that Zeitgeist hadn't made the cut for GNOME 3.0, he saw that as a step in the right direction because it treats the web as just another object.

Beyond that, the project should be thinking about even bolder strategies that would not just copy what the web is doing. It would be a bigger and harder step, but he suggested that GNOME start writing code for the browsers to provide any needed functionality. "Bring our ideas, bring our code" to fix areas that don't work for GNOME. As a concrete proposal, he thought the Desktop Summit being planned for next year (combining GUADEC and KDE's Akademy conference) should be renamed the "Free User Software Summit" and include browser developers from Mozilla and Google.

Further out, GNOME should "burn the boats" by writing all of its applications in HTML and JavaScript first. Only when that doesn't work should there be a fallback to GTK. According to Villa, GNOME needs to start thinking that way because "that's how our users and developers are thinking". Instead of pointing developers at C and GTK or PyGTK, GNOME should provide a first-class toolkit for HTML and JavaScript. It should also be made easier to run the same code on the desktop or the web, he said.

He allowed that this would be a major upheaval; "I told you this would be hard." While it is going to require lots of new code, and potentially abandoning lots of old code, it is still an embodiment of "our old culture". Bringing that culture of freedom and user-focus to the web is Villa's prescription.

For his part, Villa "welcomes skepticism". Maybe folks think the web isn't free enough or they hate JavaScript, but if so, they need a counter-narrative: "Maybe my answer isn't right, but what is?" Maybe there are those that think the web is a fad, but they need an argument to back that up.

He is optimistic about the future because of the people that make up GNOME. "We are the right people" to do this job, but need the right code. The clear indication from the talk is that he's convinced that the GNOME project's current direction isn't right and that a radical shift in focus is needed. "Whether you agree or disagree or think I'm crazy", the challenge is to identify the right direction and "go out and do it". Villa has presented his idea of what that direction should be, and he clearly thinks others should do the same.

Comments (35 posted)

WordPress, themes, and derivative works

July 28, 2010

This article was contributed by Nathan Willis

The WordPress community witnessed the end of a high-profile war of words last week when the distributor of a popular commercial theme for the blogging platform agreed to license some of his work under the GPL. Prior to last week, Chris Pearson had argued fiercely that WordPress themes are not derivative works of WordPress itself — as the project has long claimed — and thus he was free to sell his Thesis theme under his own restrictive licensing terms.

The stand-off erupted into a live argument between Pearson and WordPress founder Matt Mullenweg on the Mixergy podcast, one that ended with Pearson essentially challenging Mullenweg to bring a lawsuit against him. With Pearson's change of heart, the issue appears to be resolved. Pearson announced on his Twitter account that Thesis was now available under a "split" license, with the GPL applying to the executable portions, and a separate license covering the images, CSS rules, and client-side JavaScript — the formula insisted upon by the WordPress project.

What's in a theme?

The disagreement hinged on a question that will sound familiar to free software enthusiasts: what constitutes a derivative work under the GPL? The WordPress project has long taken the position that both plugins and themes are derivatives of the WordPress application itself, and thus must inherit its license, the GPL v2.

Pearson disagreed, claiming that Thesis was his creation and that he could select a license for it at will. As he said during the interview:

I think the license, the GPL, is at odds with how I want to distribute my software and what I want it to be. I don't think that it necessarily should inherit WordPress' license when over 99% of the code within Thesis is Thesis code based on the actual process of building a website. Certain processes that occur in nature can be [described] mathematically by code. I am trying to describe it with code. I am describing a process that exists separate from WordPress or from any piece of software that deals with website development for that matter. It's its own thing.

Many commenters on the blog coverage of the fight seemed to be of the same mind, asserting that the WordPress license was irrelevant to "original work" written by a theme creator. Underlying that position, however, seems to be the belief that a WordPress theme is a layer "above" the WordPress application, which happens to call APIs exposed by WordPress.

Considering that belief, perhaps WordPress's use of the term theme is itself misleading, because it suggests something cosmetic, like a static template or a set of look-and-feel rules implemented in HTML and CSS. But that is not what WordPress themes are. Rather, themes in WordPress are a collection of PHP scripts that implement the entire outward-facing user interface of the site (the dashboard functionality is implemented elsewhere).

WordPress themes are executables that create all of the elements displayed in the browser: pulling the content of posts, comments, user information, category, tag, archive, and navigation links, even search functionality, all by calling WordPress functions. To put it another way, a WordPress theme is the interface component of the application; without a theme installed, WordPress does not serve up any pages, and when not installed in a WordPress site, a theme cannot even execute.

The debate over the GPL inheritance of themes and plugins has been around for several years, prompting Mullenweg to seek legal analysis. According to the Mixergy interview, he first consulted with Mozilla's attorney Heather Meeker, but it is the Software Freedom Law Center's (SFLC) official opinion that he refers to as conclusive proof.

The SFLC analysis states that "the PHP elements, taken together, are clearly derivative of WordPress code," citing the fact that they are loaded into the WordPress PHP application with include(), combined in memory with the rest of the WordPress code, and executed by PHP as part of a single executable program. On the other hand, SFLC noted, some elements of a theme, such as CSS rules, image files, and JavaScript, reside on the system only to be served by the web server as data delivered to the client. These elements could be distributed under the same license, but because they are not combined with WordPress itself, do not have to inherit the GPL.

This reading of the situation is essentially the same as the Free Software Foundation's (FSF) take on the licensing requirements for plugins. The GPL FAQ states:

If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins.
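The "single program" reasoning that the SFLC and the FSF apply here can be sketched in a small example. This is a hypothetical illustration in Python, not WordPress code: a host application loads a "theme" at runtime, the theme calls back into the host's API, and both run combined in one process, which is the pattern the analysis says makes the theme derivative. All names (`HOST_POSTS`, `load_theme`, `render`) are invented for the sketch.

```python
# Hypothetical illustration (not WordPress code): a host that loads a
# "theme" module at runtime and calls into it, while the theme calls back
# into host functions and uses the host's data structures. Loading the
# theme's source into the running program is comparable in spirit to
# PHP's include(): the two execute as a single combined program.
import types

HOST_POSTS = [{"title": "Hello", "body": "First post"}]  # shared data

def get_posts():
    """Host API exposed to themes (analogous to WordPress template tags)."""
    return HOST_POSTS

THEME_SOURCE = """
def render(get_posts):
    # The theme produces the whole user-facing page by calling back
    # into the host's API -- it cannot run on its own.
    return "".join("<h1>%s</h1><p>%s</p>" % (p["title"], p["body"])
                   for p in get_posts())
"""

def load_theme(source):
    # Combine the theme's code with the host in one process.
    module = types.ModuleType("theme")
    exec(source, module.__dict__)
    return module

theme = load_theme(THEME_SOURCE)
page = theme.render(get_posts)
```

The theme's `render()` is useless without the host's `get_posts()`, and the host serves no pages without a theme, mirroring the mutual dependence described above.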

The status of JavaScript components was not explicitly addressed in the SFLC's analysis. The lone discussion of JavaScript in the GPL FAQ deals with web page templates, by which it seems to mean static templates that "assemble" a page, in contrast to the executable definition of "plugin" explored above.

Code reuse and other considerations

During the Mixergy debate, Pearson referenced a 2009 blog post by Florida attorney Michael Wasylik, who asserted that WordPress themes did not inherit the GPL from the WordPress application, based largely on the "running on top of" WordPress argument. Mullenweg and others observed that Wasylik is a real estate, not a copyright, attorney, and that the court cases he references in his blog post are about hardware devices, not software. But Wasylik also said that "actual incorporation of code" makes the work "probably derivative, and the GPL probably applies." Drew Blas subsequently analyzed the Thesis source code and concluded that the theme incorporates code lifted from WordPress itself.

Furthermore, WordPress core developer Mark Jaquith, in a longer analysis of the problem, observed that a former Thesis developer openly admitted that code from WordPress was copied into Thesis, and Andrew Nacin noted that the Thesis documentation commented on such inclusions: "This function is mostly copy pasta from WP (wp-includes/media.php), but with minor alteration to play more nicely with our styling."

Perhaps it was in the face of this evidence that Pearson changed his mind and switched over to a "split" license for Thesis — his only public comments on the decision have been made through his Twitter account.

Whatever the reasoning, Mullenweg seemed relieved to hear the news. During the podcast debate, Mullenweg repeatedly told Pearson that switching to the GPL would help, not hurt, his sales, observing that there are many other commercial theme developers who sell their works while complying with the requirements of WordPress's license. He said that, should Pearson come into compliance, he would add Thesis to the list of commercially-available themes promoted on the official WordPress site (although the addition does not appear to have happened yet).

It is always better for the community surrounding a free software project when disputes such as these reach an amicable solution. In another sense, though, Pearson's decision to relicense Thesis without comment leaves open — in some people's minds — the original question over when themes and plugins are rightfully considered derivative works.

WordPress is not alone in its position; the Drupal project also states that plugins and themes must inherit the GPL from Drupal. Joomla makes the same claim about Joomla extensions, although it admits that it may also be possible to create Joomla extensions that are not derivative works.

There may never be a simple black-and-white test to determine unambiguously when a theme is a derivative of the application that it themes. Fortunately, for the determined professional themer, it makes little difference. As the list maintained at the WordPress site demonstrates, there are quite a few talented individuals who can make a living producing and selling GPL-licensed themes.

Comments (44 posted)

OSCON: That "open phone" is not so open

July 28, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

Think that your Android smartphone is fully open? Aaron Williamson delivered some bad news to the audience at OSCON with his presentation Your Smartphone May Not Be as Open as You Think. Williamson, counsel for the Software Freedom Law Center, explained to the audience what components were still proprietary, and the problems with replacing those with open source components. Unfortunately, it doesn't look like a fully open phone is likely in the very near future.

Many LWN readers are already aware that Android phones contain proprietary components. However, the larger open source community, and certainly the consumer public that is not well-informed about goings-on in open source development, are usually not aware of how much proprietary software Android phones depend on.

So what's open and what's not? Everything that's shipped by the Android Project is fine, but Williamson pointed out that manufacturers ship more than just Android with their phones. The phone manufacturers, companies like HTC, Motorola, and Samsung, produce the software that melds Android to the hardware it's shipping on. So it's not possible to ship a completely open source Android distribution that will work on any specific phone.

Some packagers do ship Android distributions, but they're not likely to have permission to ship all of the software that they include. For instance, there's CyanogenMod, which adds features not found in Android, but it's hard to ship such a distribution and stay on the right side of all the proprietary licenses. As a result, a typical CyanogenMod installation requires saving the proprietary code shipped with the phone to the side at the beginning, then reinstalling that software as one of the final steps.

What do you get if you remove most of the proprietary software? Williamson has done the research and managed to compile Android for an HTC Dream with as little proprietary software as possible. He kept three components necessary for making phone calls, and left the rest out. Without the proprietary components, the HTC Dream isn't quite a brick, but it might as well be. It's unable to take pictures or record video, connect to WiFi, connect to Bluetooth devices, or use GPS. This also leaves out the accelerometer, so the landscape mode doesn't work.

Of course that leaves plenty of functionality as well, but the phone is hardly as functional without the software as with. Unless a user is deeply committed to software freedom, they're unlikely to go to that extreme. So the goal should be to convince companies to open the software as much as possible.

Why They're Closed

Williamson pointed out that this problem is unlikely to be specific to Android, and when MeeGo or open source Symbian devices ship, they're likely to have the same problems. He also gave Google credit for working with the manufacturers and trying to get as much software available as open source as possible.

For the most part, Williamson says, mobile component manufacturers give the same reasons for proprietary licensing that PC component manufacturers once gave for not providing free drivers for video cards, sound cards, and the like. The manufacturers are concerned that they'll lose their edge against competitors or give away intellectual property, and they see little competitive value in being open. They also don't want to use licenses (like the GPLv3) that would harm their ability to pursue patent infringement suits.

There's also the issue of regulatory agencies and their influence on radio components for Bluetooth, GSM, and WiFi. Whether that's a legitimate issue is debatable, but it does seem to concern quite a few parties. The result of these regulatory concerns isn't debatable, however: You're unlikely to find open source drivers for most of the radio components of phones, which makes it difficult to operate a phone with 100% open source software.

Williamson also said he didn't see it as likely that the community could keep up with maintaining open source drivers without the cooperation of the hardware manufacturers. The pace of device updates, and the skills required to develop and maintain the drivers without assistance, make it unlikely that the community would be able to maintain a 100% free Android system with drivers. Of course, Linux developers, who have managed to keep up with a lot of fast-changing hardware over the years, might just disagree.

What to Do?

For users who are concerned with software freedom, what can be done to acquire fully (or more) open phones or inspire vendors to sell them? Williamson said that it requires educating the vendors and, more or less, walking through the same process that the community went through with Intel, ATI, and other hardware vendors that have come a long way towards supporting software freedom.

He pointed out that the community can reward vendors that are relatively open. For instance, he suggested that enthusiasts avoid Motorola phones as long as the company continues trying to block modifications, as it does with the Droid X. Aside from that, Williamson says there's not much for end users to do. The good news is that Williamson thinks we can move faster than with PC hardware, because we've been down this road before and the community knows how to talk to vendors.

When I spoke to Williamson after OSCON, he indicated that tablets are likely to have the same problems as handsets, along with some additional issues. Because most of the tablet manufacturers to date are not working directly with Google or as part of the Android community, they are not only shipping a lot of proprietary software, but are also likely to produce lower-quality products and violate licenses. The last is almost certainly true, as shipping tablets are rarely found to be in compliance with the GPL. Even though most of Android's licensing doesn't require much in the way of compliance, few vendors seem to be living up to the requirements of the GPL-licensed components.

For now, a truly open smartphone seems elusive, but the prospects over time look positive. Until then, users have to decide between seriously crippled devices or devices that are only largely free.

Comments (12 posted)

Page editor: Jonathan Corbet


On comment spam

By Jonathan Corbet
July 28, 2010
There are both good and bad things that come from LWN's use of its own content management system; one strong "good" point has always been our relative freedom from comment spam problems. Many comment spammers seem to rely on automated tools written for commonly-used publication platforms; these tools don't work on LWN, so spammers have to do their work by hand. That said, some readers may have noticed that spammers have been making occasional appearances here.

The biggest offender appears to be associated with a shady-looking apparel store. Even though it's shady-looking, though, we know it's a legitimate business, because the site's FAQ tells us so:

Is this a legit website? Yes.We are selling the items displayed on our website. We have sent many packages to different countries.This is James,a real Person,working for you now,not machine.Thank you.

However, we would like it to be known that even businesses as proper, upstanding, and trustworthy as this one are not welcome to post their spam on LWN. We have spent years building this site and even convincing people that it is something worth paying for. How these people might think that we would allow them to destroy it is beyond imagining. Comment spam, for us, is truly a security issue.

Our recent discovery that nearly 3,000 LWN accounts had been created from a single site known as the origin of much comment spam has also helped to focus our minds on this issue. We don't know what the intended use of all those accounts was, but we doubt it was anything good.

Thus far, we have responded to spam by deleting it immediately on discovery and blocking the accounts and site it came from. The problem appears to be growing, though, to the point that the manual deletion approach will eventually run into scalability problems. Besides, we would rather be writing useful stuff than scrubbing graffiti from the site. But options for dealing with comment spam appear to be somewhat limited.

We could, of course, moderate all comments, but that approach, too, scales poorly; it also delays and distorts conversations. Full-scale moderation is just not a business we want to get into. There are blacklists out there which identify known sources of spam, but they are far from complete. One could try content-based filtering approaches, but they have their own hazards.

What we are likely to do, in the plausible scenario that this problem persists, is to impose some sort of moderation on comments from new accounts. After a legitimate comment or two, the moderation block would be removed and comments would be posted immediately; existing accounts would not be affected. We might also automatically remove the block if a subscription is purchased - spammers have shown a surprising reluctance to support LWN, for some reason.
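The policy described above reduces to a small predicate. Here is a minimal sketch (hypothetical, not LWN's actual code; the threshold value and field names are assumptions) of the decision for a newly posted comment:

```python
# Hypothetical sketch of the proposed policy: comments from new accounts
# are held for moderation until the account has a couple of approved
# comments or an active subscription. The threshold and the account
# fields are invented for illustration.
APPROVED_THRESHOLD = 2  # assumed; the article says "a comment or two"

def needs_moderation(account):
    """account: dict with 'approved_comments' (int) and 'is_subscriber' (bool)."""
    if account.get("is_subscriber"):
        return False  # purchasing a subscription lifts the block
    # New accounts stay moderated until enough comments are approved.
    return account.get("approved_comments", 0) < APPROVED_THRESHOLD
```

For example, a brand-new non-subscriber account would be moderated, while an account with two approved comments, or any subscriber, would post immediately.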

Nothing is decided yet, so plans could change. We'd be more than interested in any ideas that readers might have; please post them as (non-spam) comments on this article. One thing that won't change, though, is our absolute determination that we will not allow LWN to be used as a platform for the spamming of our readers.

Comments (51 posted)

Brief items

Quotes of the week

FWIW, security by obscurity has a bad rep in some circles, but it is an essential component of any serious security policy. It just should never be the *only* component.
-- Guido van Rossum

[I]t appears to be a packet of pork product, combined with a big sign saying something like: "Warning. If you blow up a bomb right here, you'll get pork stuff all over you before you die -- which might be suboptimal from a religious point of view."

This appears to not be a joke.

-- Bruce Schneier

Comments (7 posted)

New vulnerabilities

bind: denial of service

Package(s):bind CVE #(s):CVE-2010-0213
Created:July 23, 2010 Updated:November 3, 2010
Description: From the Internet Systems Consortium advisory:

If a query is made explicitly for a record of type 'RRSIG' to a validating recursive server running BIND 9.7.1 or 9.7.1-P1, and the server has one or more trust anchors configured statically and/or via DLV, then if the answer is not already in cache, the server enters a loop which repeatedly generates queries for RRSIGs to the authoritative servers for the zone containing the queried name. This rarely occurs in normal operation, since RRSIGs are already included in responses to queries for the RR types they cover, when DNSSEC is enabled and the records exist.

SUSE SUSE-SR:2010:020 NetworkManager, bind, clamav, dovecot12, festival, gpg2, libfreebl3, php5-pear-mail, postgresql 2010-11-03
openSUSE openSUSE-SU-2010:0917-1 bind 2010-10-28
Fedora FEDORA-2010-11344 bind 2010-07-23

Comments (none posted)

bogofilter: denial of service

Package(s):bogofilter CVE #(s):CVE-2010-2494
Created:July 27, 2010 Updated:January 23, 2013
Description: From the CVE entry:

Multiple buffer underflows in the base64 decoder in base64.c in (1) bogofilter and (2) bogolexer in bogofilter before 1.2.2 allow remote attackers to cause a denial of service (heap memory corruption and application crash) via an e-mail message with invalid base64 data that begins with an = (equals) character.

openSUSE openSUSE-SU-2013:0166-1 bogofilter 2013-01-23
openSUSE openSUSE-SU-2012:1650-1 bogofilter 2012-12-17
openSUSE openSUSE-SU-2012:1648-1 bogofilter 2012-12-17
Ubuntu USN-980-1 bogofilter 2010-08-31
Fedora FEDORA-2010-13154 bogofilter 2010-08-20
Fedora FEDORA-2010-13139 bogofilter 2010-08-20
SUSE SUSE-SR:2010:014 OpenOffice_org, apache2-slms, aria2, bogofilter, cifs-mount/samba, clamav, exim, ghostscript-devel, gnutls, krb5, kvirc, lftp, libpython2_6-1_0, libtiff, libvorbis, lxsession, mono-addon-bytefx-data-mysql/bytefx-data-mysql, moodle, openldap2, opera, otrs, popt, postgresql, python-mako, squidGuard, vte, w3m, xmlrpc-c, XFree86/xorg-x11, yast2-webclient 2010-08-02
Pardus 2010-99 bogofilter 2010-08-02
openSUSE openSUSE-SU-2010:0439-1 bogofilter 2010-07-27

Comments (none posted)

firefox: arbitrary code execution

Package(s):firefox CVE #(s):CVE-2010-2755
Created:July 26, 2010 Updated:August 17, 2010
Description: From the Red Hat advisory:

An invalid free flaw was found in Firefox's plugin handler. Malicious web content could result in an invalid memory pointer being freed, causing Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running the Firefox application.

Gentoo 201301-01 firefox 2013-01-07
CentOS CESA-2010:0557 seamonkey 2010-08-16
Mandriva MDVSA-2010:147 firefox 2010-08-10
CentOS CESA-2010:0558 firefox 2010-08-06
SUSE SUSE-SA:2010:032 MozillaFirefox,MozillaThunderbird,seamonkey 2010-07-30
openSUSE openSUSE-SU-2010:0430-3 MozillaFirefox 2010-07-29
CentOS CESA-2010:0557 seamonkey 2010-07-27
CentOS CESA-2010:0556 firefox 2010-07-27
Fedora FEDORA-2010-11452 xulrunner 2010-07-27
Fedora FEDORA-2010-11472 xulrunner 2010-07-27
Ubuntu USN-930-6 firefox, firefox-3.0, xulrunner-1.9.2 2010-07-26
Ubuntu USN-957-2 firefox, firefox-3.0, xulrunner-1.9.2 2010-07-26
Slackware SSA:2010-204-01 mozilla 2010-07-26
Red Hat RHSA-2010:0557-01 seamonkey 2010-07-23
Red Hat RHSA-2010:0558-01 firefox 2010-07-23
Red Hat RHSA-2010:0556-01 firefox 2010-07-23

Comments (none posted)

gnupg: code execution

Package(s):gnupg2 CVE #(s):CVE-2010-2547
Created:July 28, 2010 Updated:October 24, 2011
Description: GnuPG 2 suffers from a use-after-free vulnerability which could possibly be exploited (via a signature or certificate) to execute arbitrary code.
Oracle ELSA-2013-1459 gnupg2 2013-10-24
Gentoo 201110-15 gnupg 2011-10-22
MeeGo MeeGo-SA-10:30 gnupg2 2010-10-09
SUSE SUSE-SR:2010:020 NetworkManager, bind, clamav, dovecot12, festival, gpg2, libfreebl3, php5-pear-mail, postgresql 2010-11-03
Slackware SSA:2010-240-01 gnupg2 2010-08-30
Fedora FEDORA-2010-11382 gnupg2 2010-07-27
SUSE SUSE-SR:2010:015 gpg2, krb5, kvirc, libpcsclite1/pcsc-lite, libpython2_6-1_0, libvorbis, libwebkit, squidGuard, strongswan 2010-08-17
Pardus 2010-105 gnupg 2010-08-11
Ubuntu USN-970-1 gnupg2 2010-08-11
CentOS CESA-2010:0603 gnupg2 2010-08-06
openSUSE openSUSE-SU-2010:0479-1 gpg2 2010-08-06
Red Hat RHSA-2010:0603-01 gnupg2 2010-08-04
Fedora FEDORA-2010-11413 gnupg2 2010-07-27
Mandriva MDVSA-2010:143 gnupg2 2010-07-28
Debian DSA-2076-1 gnupg2 2010-07-27

Comments (1 posted)

horde: privacy compromise

Package(s):horde CVE #(s):CVE-2010-0463
Created:July 27, 2010 Updated:July 27, 2010
Description: From the CVE entry:

Horde IMP 4.3.6 and earlier does not request that the web browser avoid DNS prefetching of domain names contained in e-mail messages, which makes it easier for remote attackers to determine the network location of the webmail user by logging DNS requests.

Fedora FEDORA-2010-11432 imp 2010-07-27
Fedora FEDORA-2010-11399 imp 2010-07-27
Fedora FEDORA-2010-11445 horde 2010-07-27
Fedora FEDORA-2010-11392 horde 2010-07-27

Comments (none posted)

iputils: denial of service

Package(s):iputils CVE #(s):CVE-2010-2529
Created:July 23, 2010 Updated:March 15, 2013
Description: From the Mandriva advisory:

Ovidiu Mara reported a vulnerability in ping.c (iputils) that could cause ping to hang when responding to a malicious echo reply.

Gentoo 201412-08 insight, perl-tk, sourcenav, tk, partimage, bitdefender-console, mlmmj, acl, xinit, gzip, ncompress, liblzw, splashutils, m4, kdm, gtk+, kget, dvipng, beanstalkd, pmount, pam_krb5, gv, lftp, uzbl, slim, iputils, dvbstreamer 2014-12-11
Debian DSA-2645-1 inetutils 2013-03-14
Fedora FEDORA-2010-12252 iputils 2010-08-07
Pardus 2010-110 iputils 2010-08-11
Fedora FEDORA-2010-12273 iputils 2010-08-07
Mandriva MDVSA-2010:138 iputils 2010-07-23

Comments (none posted)

libvirt: multiple vulnerabilities

Package(s):libvirt CVE #(s):CVE-2010-2242 CVE-2010-2237 CVE-2010-2238 CVE-2010-2239
Created:July 27, 2010 Updated:November 9, 2010
Description: From the Red Hat bugzilla: Jeremy Nickurak reported an issue with how libvirt creates iptables rules when guest systems are set up for masquerading. (CVE-2010-2242)

From the Red Hat bugzilla: It was found that libvirt did not honour the user-defined main disk format in guest XML when looking up disk backing stores in the security drivers. This could possibly be exploited by a privileged guest user to access arbitrary files on the host. (CVE-2010-2237)

From the Red Hat bugzilla: It was found that libvirt did not extract the defined disk backing store format when recursing into disk image backing stores in the security drivers. This could possibly be exploited by a privileged guest user to access arbitrary files on the host. (CVE-2010-2238)

From the Red Hat bugzilla: It was found that libvirt did not explicitly set the user-defined backing store format when creating a new image. This results in images being created with a potentially insecure configuration, preventing applications from opening backing stores without resorting to probing. A privileged guest user could use this flaw to access arbitrary files on the host. (CVE-2010-2239)

Ubuntu USN-1008-4 libvirt 2010-11-08
Ubuntu USN-1008-3 libvirt 2010-10-23
openSUSE openSUSE-SU-2010:0620-1 libvirt 2010-09-16
SUSE SUSE-SR:2010:017 java-1_4_2-ibm, sudo, libpng, php5, tgt, iscsitarget, aria2, pcsc-lite, tomcat5, tomcat6, lvm2, libvirt, rpm, libtiff, dovecot12 2010-09-21
Ubuntu USN-1008-2 virtinst 2010-10-21
CentOS CESA-2010:0615 libvirt 2010-08-11
Red Hat RHSA-2010:0615-01 libvirt 2010-08-10
Fedora FEDORA-2010-11021 libvirt 2010-07-13
Fedora FEDORA-2010-10960 libvirt 2010-07-13
Ubuntu USN-1008-1 libvirt 2010-10-21

Comments (none posted)

likewise-open: unauthorized local access

Package(s):likewise-open CVE #(s):CVE-2010-0833
Created:July 27, 2010 Updated:August 4, 2010
Description: From the Ubuntu advisory:

Matt Weatherford discovered that Likewise Open did not correctly check password expiration for the local-provider account. A local attacker could exploit this to log into a system they would otherwise not have access to.

Ubuntu USN-964-2 likewise-open 2010-07-29
Ubuntu USN-964-1 likewise-open 2010-07-26

Comments (none posted)

lvm2-cluster: privilege escalation

Package(s):lvm2-cluster CVE #(s):CVE-2010-2526
Created:July 28, 2010 Updated:October 7, 2010
Description: The cluster logical volume manager daemon (clvmd) in the lvm2-cluster package does not authenticate clients connecting to the Unix-domain socket used for control operations. As a result, local, unprivileged users can perform cluster management operations.
Gentoo 201412-09 racer-bin, fmod, PEAR-Mail, lvm2, gnucash, xine-lib, lastfmplayer, webkit-gtk, shadow, PEAR-PEAR, unixODBC, resource-agents, mrouted, rsync, xmlsec, xrdb, vino, oprofile, syslog-ng, sflowtool, gdm, libsoup, ca-certificates, gitolite, qt-creator 2014-12-11
Fedora FEDORA-2010-12250 lvm2 2010-08-07
openSUSE openSUSE-SU-2010:0615-1 lvm2-clvm 2010-09-16
SUSE SUSE-SR:2010:017 java-1_4_2-ibm, sudo, libpng, php5, tgt, iscsitarget, aria2, pcsc-lite, tomcat5, tomcat6, lvm2, libvirt, rpm, libtiff, dovecot12 2010-09-21
Fedora FEDORA-2010-13708 lvm2 2010-08-30
Fedora FEDORA-2010-13708 udisks 2010-08-30
Mandriva MDVSA-2010:171 lvm2 2010-09-06
Debian DSA-2095-1 lvm2 2010-08-23
CentOS CESA-2010:0567 lvm2-cluster 2010-07-29
Red Hat RHSA-2010:0567-01 lvm2-cluster 2010-07-28
Ubuntu USN-1001-1 lvm2 2010-10-06

Comments (none posted)

lxsession: arbitrary code execution

Package(s):lxsession CVE #(s):CVE-2010-2532
Created:July 23, 2010 Updated:August 2, 2010
Description: From the openSUSE advisory:

lxsession-logout did not properly lock the screen before suspending, hibernating and switching between users which could allow attackers with physical access to take control of the system to obtain sensitive information and / or execute arbitrary code in the context of the user who is currently logged in.

SUSE SUSE-SR:2010:014 OpenOffice_org, apache2-slms, aria2, bogofilter, cifs-mount/samba, clamav, exim, ghostscript-devel, gnutls, krb5, kvirc, lftp, libpython2_6-1_0, libtiff, libvorbis, lxsession, mono-addon-bytefx-data-mysql/bytefx-data-mysql, moodle, openldap2, opera, otrs, popt, postgresql, python-mako, squidGuard, vte, w3m, xmlrpc-c, XFree86/xorg-x11, yast2-webclient 2010-08-02
openSUSE openSUSE-SU-2010:0426-1 lxsession 2010-07-23

Comments (none posted)

mysql: denial of service

Package(s):mysql CVE #(s):CVE-2010-2008
Created:July 27, 2010 Updated:November 11, 2010
Description: From the CVE entry:

MySQL before 5.1.48 allows remote authenticated users with alter database privileges to cause a denial of service (server crash and database loss) via an ALTER DATABASE command with a #mysql50# string followed by a . (dot), .. (dot dot), ../ (dot dot slash) or similar sequence, and an UPGRADE DATA DIRECTORY NAME command, which causes MySQL to move certain directories to the server data directory.

Ubuntu USN-1397-1 mysql-5.1, mysql-dfsg-5.0, mysql-dfsg-5.1 2012-03-12
Gentoo 201201-02 mysql 2012-01-05
Ubuntu USN-1017-1 mysql-5.1, mysql-dfsg-5.0, mysql-dfsg-5.1 2010-11-11
Mandriva MDVSA-2010:155-1 mysql 2010-11-08
openSUSE openSUSE-SU-2010:0730-1 mysql 2010-10-18
Pardus 2010-117 mysql-server 2010-08-24
Mandriva MDVSA-2010:155 mysql 2010-08-20
Fedora FEDORA-2010-11126 mysql 2010-07-15
Fedora FEDORA-2010-11135 mysql 2010-07-15

Comments (none posted)

openttd: denial of service

Package(s):openttd CVE #(s):CVE-2010-2534
Created:July 27, 2010 Updated:July 27, 2010
Description: From the Red Hat bugzilla:

A remote attacker could use this flaw to conduct denial of service attacks, leading to game server infinite loop consuming excessive amount of CPU time.

Fedora FEDORA-2010-11450 openttd 2010-07-27
Fedora FEDORA-2010-11401 openttd 2010-07-27

Comments (none posted)

php: multiple vulnerabilities

Package(s):php CVE #(s):CVE-2010-2531 CVE-2010-2484 CVE-2010-2225
Created:July 27, 2010 Updated:July 5, 2011
Description: From the Mandriva advisory:

  • Rewrote var_export() to use smart_str rather than output buffering, prevents data disclosure if a fatal error occurs (CVE-2010-2531).
  • Fixed a possible interruption array leak in strrchr() (CVE-2010-2484).
  • Fixed SplObjectStorage unserialization problems (CVE-2010-2225).
Ubuntu USN-1231-1 php5 2011-10-18
Gentoo 201110-06 php 2011-10-10
Debian DSA-2266-2 php5 2011-07-01
Debian DSA-2266-1 php5 2011-06-29
CentOS CESA-2010:0919 php 2010-12-01
CentOS CESA-2010:0919 php 2010-11-30
Red Hat RHSA-2010:0919-01 php 2010-11-29
SUSE SUSE-SR:2010:017 java-1_4_2-ibm, sudo, libpng, php5, tgt, iscsitarget, aria2, pcsc-lite, tomcat5, tomcat6, lvm2, libvirt, rpm, libtiff, dovecot12 2010-09-21
Ubuntu USN-989-1 php5 2010-09-20
openSUSE openSUSE-SU-2010:0599-1 php5 2010-09-10
Slackware SSA:2010-240-04 php 2010-08-30
Fedora FEDORA-2010-11428 maniadrive 2010-07-27
Fedora FEDORA-2010-11481 maniadrive 2010-07-27
Fedora FEDORA-2010-11428 php-eaccelerator 2010-07-27
Fedora FEDORA-2010-11481 php-eaccelerator 2010-07-27
Fedora FEDORA-2010-11428 php 2010-07-27
Fedora FEDORA-2010-11481 php 2010-07-27
Pardus 2010-104 mod_php php-cli 2010-08-09
Debian DSA-2089-1 php5 2010-08-06
Pardus 2010-98 mod_php 2010-08-02
Mandriva MDVSA-2010:140 php 2010-07-27
Mandriva MDVSA-2010:139 php 2010-07-27
openSUSE openSUSE-SU-2010:0678-1 php5 2010-09-29
SUSE SUSE-SR:2010:018 samba libgdiplus0 libwebkit bzip2 php5 ocular 2010-10-06

Comments (none posted)

pidgin: denial of service

Package(s):pidgin CVE #(s):CVE-2010-2528
Created:July 27, 2010 Updated:August 30, 2010
Description: From the Red Hat bugzilla:

Mark Doliner, upstream pidgin/libpurple developer, discovered a NULL pointer dereference flaw in the way libpurple handled certain malformed X-Status messages in ICQ/Oscar protocol. This flaw could allow remote attacker to crash the victim's instant messenger application using libpurple such as pidgin.

Slackware SSA:2010-240-05 pidgin 2010-08-30
Pardus 2010-116 pidgin 2010-08-12
Mandriva MDVSA-2010:148 pidgin 2010-08-12
Fedora FEDORA-2010-11315 pidgin 2010-07-23
Fedora FEDORA-2010-11321 pidgin 2010-07-23

Comments (none posted)

samba: multiple vulnerabilities

Package(s):samba CVE #(s):CVE-2010-1635 CVE-2010-1642
Created:July 27, 2010 Updated:July 27, 2010
Description: From the Mandriva advisory:

The chain_reply function in process.c in smbd in Samba before 3.4.8 and 3.5.x before 3.5.2 allows remote attackers to cause a denial of service (NULL pointer dereference and process crash) via a Negotiate Protocol request with a certain 0x0003 field value followed by a Session Setup AndX request with a certain 0x8003 field value (CVE-2010-1635).

The reply_sesssetup_and_X_spnego function in sesssetup.c in smbd in Samba before 3.4.8 and 3.5.x before 3.5.2 allows remote attackers to trigger an out-of-bounds read, and cause a denial of service (process crash), via a \xff\xff security blob length in a Session Setup AndX request (CVE-2010-1642).

Gentoo 201206-22 samba 2012-06-24
SUSE SUSE-SU-2012:0348-1 Samba 2012-03-09
Mandriva MDVSA-2010:141 samba 2010-07-27

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.35-rc6, released on July 22. Linus says:

I actually hope/think that this is going to be the last -rc. Things have been pretty quiet, and while this -rc has more commits than -rc5 had, it's not by a large amount, nor does it look scary to me. So there doesn't seem to be any point in dragging out the release any more, unless we find something new that calls for it.

It contains mostly fixes, but also a rename of the logical memory block (LMB) subsystem to "memblock." See the announcement for the short-form changelog, or the full changelog for all the details.

There have been no stable updates over the last week.

Comments (none posted)

Quotes of the week

One of the primary reasons why I started the kernel summit ten years ago was because I've found that people work better after they have had a chance to meet each other face to face. If you only know someone via e-mail, it's a lot easier to get into flame wars. But after you've met someone, broken bread and drunk beer with them, it's easier to work with them as a colleague and fellow developer. While the Linux Kernel development community has grown significantly since March, 2001, this principle still holds true.
-- Ted Ts'o

This is one reason why I wrote the ARM Linux kernel booting document some 8 years ago, which specifies the _minimum_ of information that a boot loader needs to supply the kernel needs to be able to boot. Fat lot of good that did - as far as I'm concerned, writing documentation is a total and utter waste of my time and resources. It just gets ignored.

So I now just don't bother with any documentation _at_ _all_.

-- Russell King

My gut reaction to this sort of thing is "run away in terror". It encourages kernel developers to operate like lackadaisical userspace developers and to assume that underlying code can perform heroic and immortal feats. But it can't. This is the kernel and the kernel is a tough and hostile place and callers should be careful and defensive and take great efforts to minimise the strain they put upon other systems.
-- Andrew Morton

Comments (4 posted)

Kernel Summit 2010 planning process begins

The 2010 Kernel Summit will be held in Cambridge, Massachusetts, on November 1 and 2, immediately prior to the Linux Plumbers Conference. The planning process for this year's summit has begun, and the program committee is looking for ideas on what should be discussed and who should be there. "The kernel summit is organized by a program committee, but it could just as easily said that it is organized by the whole Linux Kernel development community. Which is to say, its goals are to make Linux kernel development flow more smoothly, and what we talk about is driven by the work that is going on in the development community at large. So to that end, we need your help!"

Full Story (comments: none)

Kernel development news

File creation times

By Jonathan Corbet
July 26, 2010
Linux systems, like the Unix systems that came before, maintain three different timestamps for each file. The semantics of those timestamps are often surprising to users, though, and they don't provide the information that users often want to know. The possible addition of a new system call is giving kernel developers the opportunity to make some changes in this area, but there is not, yet, a consensus on how that should be done.

The Unix file timestamps, as long-since enshrined by POSIX, are called "atime," "ctime," and "mtime." The atime stamp is meant to record the last time that the file was accessed. This information is almost never used, though, and can be quite expensive to maintain; Ingo Molnar once called atime "perhaps the most stupid Unix design idea of all times." So atime is often disabled on contemporary systems or, at least, rolled back to the infrequently-updated "relatime" mode. Mtime, instead, makes a certain amount of sense; it tells the user when the file was last modified. Modification requires writing to the file anyway, so updating this time is often free, and the information is often useful.

That leaves ctime, which is a bit of a strange beast. Users who do not look deeply are likely to interpret ctime as "creation time," but that is not what is stored there; ctime, instead, is updated whenever a file's metadata is changed. The main consumer of this information, apparently, is the venerable dump utility, which likes to know that a file's metadata has changed (so that information must be saved in an incremental backup), but the file data itself has not and need not be saved again. The number of dump users has certainly fallen over the years, to the point that the biggest role played by ctime is, arguably, confusing users who really just want a file's creation time.

So where do users find the creation time? They don't: Linux systems do not store that time and provide no interface for applications to access it.
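The split between mtime and ctime, and the absence of any creation time, is easy to demonstrate from user space. The sketch below (illustrative Python; the file and variable names are arbitrary) makes a metadata-only change to a file and watches which timestamps move:

```python
import os
import tempfile
import time

# Create a file and capture its POSIX timestamps.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)
before = os.stat(path)

time.sleep(1.1)          # outlast coarse filesystem timestamp granularity
os.chmod(path, 0o600)    # metadata-only change: no data written
after = os.stat(path)

# chmod leaves the modification time alone but updates ctime,
# because ctime records the last *metadata* change, not creation.
print("mtime unchanged:", after.st_mtime_ns == before.st_mtime_ns)
print("ctime advanced: ", after.st_ctime_ns > before.st_ctime_ns)

# And there is no creation time to be found: on Linux, os.stat_result
# has no st_birthtime field (the BSDs and macOS do provide one).
print("st_birthtime?   ", hasattr(after, "st_birthtime"))

os.unlink(path)
```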

That situation could change, though. Some newer filesystems (Btrfs and ext4, for example) have been designed with space for file creation times. Other operating systems also provide this information, and some network filesystem protocols expect to have access to it. So it would be nice if Linux properly supported file creation times; the proposed addition of the xstat() system call would be the ideal time to make that change.

Current xstat() implementations do, in fact, add a st_btime field to struct xstat; the "b" stands for "birth," which is a convention established in the BSD camp. There has been a fair amount of discussion about that addition, though, based on naming and semantics.

The naming issue, one would think, would be relatively straightforward. It was pointed out, though, that other names have been used in the kernel. JFS and Btrfs use "otime," for some reason, while ext4 uses "crtime." And BSD, it turns out, uses "birthtime" instead of "btime." That discussion inspired Linus to exclaim:

Oh wow. And all of this just convinces me that we should _not_ do any of this, since clearly it's all totally useless and people can't even agree on a name.

After that, though, Linus looked a bit more deeply at the problem, which he saw as primarily being to provide a Windows-style creation time that Samba could use. It turns out that Windows allows the creation time to be modified, so Linus saw it as being a sort of variation on the Unix ctime notion. That led to a suggestion to change the semantics of ctime to better suit the Windows case. After all, almost nobody uses ctime anyway, and it would be a trivial change to make ctime look like the Windows creation time. This behavior could be specified either as a per-process flag or a mount-time option; then there would be no need to add a new time field.

This idea was not wildly popular, though; Jeremy Allison said it would lead to "more horrible confusion." If ctime could mean different things in different situations, even fewer people would really understand it, and tools like Samba could not count on its semantics. Jeremy would rather just see the new field added; that seems like the way things will probably go.

There is one last interesting question, though: should the kernel allow the creation time to be modified? Windows does allow modification, and some applications evidently depend on that feature. Windows also apparently has a hack which, if a file is deleted and replaced by another with the same name, will reuse the older file's creation time. BSD systems, instead, do not allow the creation time to be changed. When Samba is serving files from a BSD system, it stores the "Windows creation time" in an extended attribute so that the usual Windows semantics can be provided.

If the current xstat() patch is merged, Linux will disallow changes to the creation time by default - there will be no system call which can make that change. Providing that capability would require an extended version of utimes() which can accept the additional information. Allowing the time to be changed would make it less reliable, but it would also be useful for backup/restore programs which want to restore the original creation time. That is a discussion which has not happened yet, though; for now, creation times cannot be changed.

Comments (29 posted)

zcache: a compressed page cache

By Jonathan Corbet
July 27, 2010
Last year, Nitin Gupta was pushing the compcache patch, which implemented a sort of swap device which stored pages in main memory, compressing them on the way. Over time, compcache became "ramzswap" and found its way into the staging tree. It's not clear that ramzswap can ever graduate to the mainline kernel, so Nitin is trying again with a development called zcache. But zcache, too, currently lacks a clear path into the mainline.

Like its predecessors, zcache lives to store compressed copies of pages in memory. It no longer looks like a swap device, though; instead, it is set up as a backing store provider for the Cleancache framework. Cleancache uses a set of hooks into the page cache and filesystem code; when a page is evicted from the cache, it is passed to Cleancache, which might (or might not) save a copy somewhere. When pages are needed again, Cleancache gets a chance to restore them before the kernel reads them from disk. If Cleancache (and its backing store) is able to quickly save and restore pages, the potential exists for a real improvement in system performance.

Zcache uses LZO to compress pages passed to it by Cleancache; only pages which compress to less than half their original size are stored. There is also a special test for pages containing only zeros; those compress exceptionally well, requiring no storage space at all. There is not, at this point, any other attempt at the unification of pages with duplicated contents (as is done by KSM), though.
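The storage policy described above can be sketched in a few lines (a toy model only: a dict stands in for the Cleancache backing store, and zlib stands in for the kernel's LZO, since the point is the policy, not the codec):

```python
import os
import zlib

PAGE_SIZE = 4096
ZERO_PAGE = b"\x00" * PAGE_SIZE

def store_page(cache, key, page):
    """Decide whether an evicted page is worth keeping, following the
    zcache policy: all-zero pages cost no storage at all, and other
    pages are kept only if they compress to under half their size."""
    assert len(page) == PAGE_SIZE
    if page == ZERO_PAGE:
        cache[key] = None                  # zero page: store nothing
        return True
    compressed = zlib.compress(page)
    if len(compressed) < PAGE_SIZE // 2:
        cache[key] = compressed
        return True
    return False                           # poorly compressible: drop it

def load_page(cache, key):
    """Restore a page before the kernel would re-read it from disk."""
    if key not in cache:
        return None                        # cache miss: fall back to disk
    data = cache[key]
    return ZERO_PAGE if data is None else zlib.decompress(data)

cache = {}
store_page(cache, ("inode", 1), ZERO_PAGE)                   # free to keep
store_page(cache, ("inode", 2), b"ab" * (PAGE_SIZE // 2))    # compresses well
store_page(cache, ("inode", 3), os.urandom(PAGE_SIZE))       # rejected
```

The random page is rejected because compressing incompressible data only wastes CPU time and memory, which is exactly the tradeoff discussed below.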

There are a couple of obvious tradeoffs to using a mechanism like zcache: memory usage and CPU time. With regard to memory, Nitin says:

While compression reduces disk I/O, it also reduces the space available for normal (uncompressed) page cache. This can result in more frequent page cache reclaim and thus higher CPU overhead. Thus, it's important to maintain good hit rate for compressed cache or increased CPU overhead can nullify any other benefits. This requires adaptive (compressed) cache resizing and page replacement policies that can maintain optimal cache size and quickly reclaim unused compressed chunks. This work is yet to be done.

The current patch does allow the system administrator to manually adjust the size of the zcache area, which is a start. It will be a rare admin, though, who wants to watch cache hit rates and tweak low-level memory management parameters in an attempt to sustain optimal behavior over time. So zcache will almost certainly have to grow some sort of adaptive self-tweaking before it can make it into the mainline.

The other tradeoff is CPU time: it takes processor time to compress and decompress pages of memory. The cost is made worse by any pages which fail to compress down to less than 50% of their original size - the time spent compressing them is a total waste. But, as Nitin points out: "with multi-cores becoming common, benefits of reduced disk I/O should easily outweigh the problem of increased CPU usage." People have often wondered what we are going to do with the increasing number of cores on contemporary processors; perhaps zcache is part of the answer.

One other issue remains to be resolved, though: zcache depends on Cleancache, which is not currently in the mainline. There is some opposition to merging Cleancache, mostly because that patch, which makes changes to individual filesystems, is seen as being overly intrusive. It's also not clear that everybody is, yet, sold on the value of Cleancache, despite the fact that SUSE has been shipping it for a little while now. Until the fate of Cleancache is resolved, add-on patches like zcache will be stuck outside of the mainline.

Comments (11 posted)

Realtime Linux: academia v. reality

July 26, 2010

This article was contributed by Thomas Gleixner

The 20th Euromicro Conference on Real-Time Systems (ECRTS2010) was held in Brussels, Belgium from July 6-9, along with a series of satellite workshops which took place on July 6. One of those satellite workshops was OSPERT 2010 - the Sixth International Workshop on Operating Systems Platforms for Embedded Real-Time Applications, which was co-chaired by kernel developer Peter Zijlstra and Stefan M. Petters from the Polytechnic Institute of Porto, Portugal. Peter and Stefan invited researchers and practitioners from both industry and the Linux kernel developer community. I participated for the second year and tried, with Peter, to nurse the discussion between the academic and real worlds which started last year at OSPERT in Dublin.

Much to my surprise, I was also invited to give the opening keynote at the main conference, which I titled "The realtime preemption patch: pragmatic ignorance or a chance to collaborate?". Much to the surprise of the audience I did my talk without slides, as I couldn't come up with useful ones as much as I twisted my brain around it. The organizers of ECRTS asked me whether they could publish my writeup, but all I had to offer were my scribbled notes which outlined what I wanted to talk about. So I agreed to do a transcript from my notes and memory, without any guarantee that it's a verbatim transcript. Peter at least confirmed that it matches roughly the real talk.

An introduction

First of all I want to thank Jim Anderson for the invitation to give this keynote at ECRTS and his adventurous offer to let me talk about whatever I want. Such offers can be dangerous, but I'll try my best not to disappoint him too much.

The Linux Kernel community has a proven track record of being in disagreement with - and disconnected from - the academic operating system research community from the very beginning. The famous Torvalds/Tanenbaum debate about the obsolescence of monolithic kernels is just the starting point of a long series of debates about various aspects of Linux kernel design choices.

One of the most controversial topics is the question how to add realtime extensions to the Linux kernel. In the late 1990's, various research realtime extensions emerged from universities. These include KURT (Kansas University), RTAI (University of Milano), RTLinux (NMT, Socorro, New Mexico), Linux/RK (Carnegie Mellon University), QLinux (University of Massachusetts), and DROPS (University of Dresden - based on L4), just to name a few. There have been more, but many of them have only left hard-to-track traces in the net.

The various projects can be divided into two categories:

  1. Running Linux on top of a micro/nano kernel
  2. Improving the realtime behavior of the kernel itself

I participated in and watched several discussions about these approaches over the years; the discussion which is burned into my memory forever happened in summer 2004. In the course of a heated debate one of the participants stated: "It's impossible to turn a General Purpose Operating System into a Real-Time Operating System. Period." I was smiling then as I had already proven, together with Doug Niehaus from Kansas University, that it can be done even if it violates all - or at least most - of the rules of the academic OS research universe.

But those discussions were not restricted to the academic world. The Linux kernel mailing list archives provide a huge choice of technical discussions (as well as flame wars) about preemptability, latency, priority inheritance and approaches to realtime support. It was fun to read back and watch how influential developers changed their minds over time. Especially Linus himself provides quite a few interesting quotes. In May 2002 he stated:

With RTLinux, you have to split the app up into the "hard realtime" part (which ends up being in kernel space) and the "rest".

Which is, in my opinion, the only sane way to handle hard realtime. No confusion about priority inversions, no crap. Clear borders between what is "has to happen _now_" and "this can do with the regular soft realtime".

Four years later he said in a discussion about merging the realtime preemption patch during the Kernel Summit 2006:

Controlling a laser with Linux is crazy, but everyone in this room is crazy in his own way. So if you want to use Linux to control an industrial welding laser, I have no problem with your using PREEMPT_RT.

Equally interesting is his statement about priority inheritance in a huge discussion about realtime approaches in December 2005:

Friends don't let friends use priority inheritance. Just don't do it. If you really need it, your system is broken anyway.

Linus's clear statement that he wouldn't merge any PI code ever was rendered ad absurdum when he merged the PI support for pthread_mutexes without a single comment only half a year later.

Both are pretty good examples of the pragmatic approach of the Linux kernel development community and its key figures. Linus especially has always silently followed the famous words of the former German chancellor Konrad Adenauer: "Why should I care about my chatter from yesterday? Nothing prevents me from becoming wiser."

Adding realtime response to the kernel

But back to the micro/nano-kernel versus in-kernel approaches which emerged in the late 1990s. Both camps produced commercial products and more-or-less active open source communities, but none of those efforts was commercially sustainable or ever came close to being merged into the official mainline kernel code base, for various reasons. Let me look at some of those reasons:

  • Intrusiveness and maintainability: Most of those approaches lacked - and still lack - proper abstractions and smooth integration into the Linux kernel code base. #ifdef's sprinkled all over the place are neither an incentive for kernel developers to delve into the code nor are they suitable for long-term maintenance.

  • Complexity of usage: Dual-kernel approaches tend to be hard to understand for application programmers, who often have a hard time coping with a single API. Add a second API and the often backwards-implemented IPC mechanisms between the domains and failure is predictable.

    I'm not saying that it can't be done, it's just not suitable for the average programmer.

  • Incompleteness: Some of those research approaches solve only parts of the problem, as this was their particular area of interest. But that prevents them from becoming useful in practice.

  • Lack of interest: Some of the projects never made any attempt to approach the Linux kernel community, so the question of inclusion, or even partial merging of infrastructure, never came up.

In October 2004, the realtime topic got new vigor on the Linux kernel mailing list. MontaVista had integrated the results of research at the University of the German Federal Armed Forces at Munich into the kernel, replacing spinlocks with priority-inheritance-enabled mutexes. This posting resulted in one of the lengthiest discussions about realtime on the Linux kernel mailing list as almost everyone involved in efforts to solve the realtime problem surfaced and praised the superiority of their own approach. Interestingly enough, nobody from the academic camp participated in this heated argument.

A few days after the flame fest started, the discussion was driven to a new level by kernel developer Ingo Molnar, who, instead of spending time on rhetoric, had implemented a different patch which, despite being clumsy and incomplete, formed the starting point for the current realtime preemption patch. In no time quite a few developers interested in realtime joined Ingo's effort and brought the patch to a point which allowed real-world deployment within two years. During that time a huge number of interesting problems had to be solved: efficient priority inheritance, removing per-CPU assumptions, preemptible RCU, high-resolution timers, interrupt threading, and so on, plus, as a further burden, the fallout from sloppily-implemented locking schemes in all areas across the kernel.

Help from academia?

Those two years were mostly spent with grunt work and twisting our brains around hard-to-understand and hard-to-solve locking and preemption problems. No time was left for theory and research. When the dust settled a bit and we started to feed parts of the realtime patch to the mainline, we actually spent some time reading papers and trying to leverage the academic research results.

Let me pick out priority inheritance and have a look at how the code evolved and why we ended up with the current implementation. The first version which was in Ingo's patchset was a rather simple approach with long-held locks, deep lock nesting and other ugliness. While it was correct and helped us to go forward it was clear that the code had to be replaced at some point.

A first starting point for getting a better implementation was of course reading through academic papers. First I was overwhelmed by the sheer amount of material and puzzled by the various interesting approaches to avoid priority inversion. But, the more papers I read, the more frustrated I got. Lots of theory, proof-of-concept implementations written in Ada, micro improvements to previous papers, you all know the academic drill. I'm not at all saying that it was waste of time as it gave me a pretty good impression of the pitfalls and limitations which are expected in a non-priority-based scheduling environment, but I have to admit that it didn't help me to solve my real world problem either.

The code was rewritten by Ingo Molnar, Esben Nielsen, Steven Rostedt and myself several times until we settled on the current version. The path led from the classic lock-chain walk with instant priority boosting through a scheduler-driven approach, then back to the lock-chain walk as it turned out to be the most robust, scalable and efficient way to solve the problem. My favorite implementation, though, would have been based on proxy execution, which already existed in Doug Niehaus's Kansas University Real Time project at that time, but unfortunately it lacked SMP support. Interestingly enough, we are looking into it again as non-priority-based scheduling algorithms are knocking at the kernel's door. But in hindsight I really regret that nobody—including myself—ever thought about documenting the various algorithms we tried, the up- and down-sides, the test results and related material.
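For readers unfamiliar with the technique, the lock-chain walk with instant priority boosting can be illustrated with a toy model (illustrative Python only, nothing like the kernel's rtmutex code; for simplicity a higher number here means higher priority, the opposite of the kernel's convention):

```python
class Task:
    def __init__(self, name, prio):
        self.name = name
        self.base_prio = prio     # the task's own priority
        self.eff_prio = prio      # effective priority, possibly boosted
        self.blocked_on = None    # lock this task is waiting for, if any

class Lock:
    def __init__(self):
        self.owner = None
        self.waiters = []

def block_on(task, lock):
    """Task blocks on lock; walk the lock chain, boosting each owner
    to the highest priority among its waiters."""
    task.blocked_on = lock
    lock.waiters.append(task)
    while lock is not None and lock.owner is not None:
        owner = lock.owner
        top = max(w.eff_prio for w in lock.waiters)
        if top <= owner.eff_prio:
            break                  # no priority inversion left to fix
        owner.eff_prio = top       # boost the owner instantly...
        lock = owner.blocked_on    # ...and follow the chain if it, too, waits

# Classic three-task inversion: "low" holds L1; "mid" holds L2 while
# waiting on L1; "high" then blocks on L2. The boost must propagate
# through mid all the way down to low.
low, mid, high = Task("low", 1), Task("mid", 2), Task("high", 3)
l1, l2 = Lock(), Lock()
l1.owner, l2.owner = low, mid
block_on(mid, l1)
block_on(high, l2)
print(mid.eff_prio, low.eff_prio)   # prints "3 3"
```

Both intermediate owners end up running at the high-priority waiter's priority, which is exactly the behavior the chain walk exists to guarantee; the hard parts in the kernel are making this walk safe against concurrent lock and priority changes, which the toy ignores entirely.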

So it seems that there is the reverse problem on the real world developer side: we are solving problems, comparing and contrasting approaches and implementations, but we are either too lazy or too busy to sit down and write a proper paper about it. And of course we believe that it is all documented in the different patch versions and in the maze of the Linux kernel mailing list archives which are freely available for the interested reader.

Indeed it might be a worthwhile exercise to go back and extract the information and document it, but in my case this probably has to wait until I go into retirement, and even then I fear that I have more favorable items on my ever growing list of things which I want to investigate. On the other hand, it might be an interesting student project to do a proper analysis and documentation on which further research could be based.

On the value of academic research

I do not consider myself in any way to be representative of the kernel developer community, so I asked around to learn who was actually influenced by research results when working on the realtime preemption patch. Sorry for you folks, the bad news is that most developers consider reading research results not to be a helpful and worthwhile exercise in order to get real work done. The question arises: why? Is academic OS research useless in general? Not at all. It's just incredibly hard to leverage. There are various reasons for this and I'm going to pick out some of them.

First of all—and I have complained about this before—it's often hard to get access to papers because they are hidden away behind IEEE's paywall. While dealing with IEEE is a fact of life for the academic world, I personally consider it a modern form of robber barony, where taxpayers have to pay for work which was funded by tax money in the first place. There is another problem I have with the IEEE monopoly: universities' rankings are influenced by the number of papers written by their members and accepted at IEEE conferences, which I consider to be one of the most idiotic quality-measurement rules on the planet. And it's not only my personal opinion; it's also provable.

I actually took the time to spend a day at a university where I could gain access to IEEE papers without wasting my private money. I picked out twenty recent realtime-related papers and did a quick survey. Twelve of the papers were a rehash of well-known and well-researched topics, and at least half of them were badly written as well. Of the remaining eight papers, six were micro-improvements based on previous papers, where I had a hard time figuring out why the papers had been written at all. One of those merely described the effects of converting a constant which influences resource partitioning into a runtime-configurable variable. So that left two papers which seemed actually worthwhile to read in detail. Funnily enough, I had already read one of those papers, as it was publicly accessible in a slightly modified form.

That survey really convinced me to stay away from IEEE forever and to regard the university ranking system with even more suspicion.

There are plenty of other sources where research papers can be accessed, but unfortunately the signal-to-noise ratio there is not significantly better. I have no idea how researchers filter that; then again, most people wonder how kernel developers filter the interesting stuff out of the Linux kernel mailing list flood.

One interesting thing I noticed while skimming through paper titles and abstracts is that the Linux kernel seems to have become the most popular research vehicle. On one site I found roughly 600 Linux-based realtime and scheduling papers which were written in the last 18 months. About 10% of them utilized the realtime preemption patch as their baseline operating system. Unfortunately almost none of the results ever trickled through to the kernel development community, not to mention actually working code being submitted to the Linux kernel mailing list.

As a side note: one paper even mentioned a hard-to-trigger longstanding bug in the kernel which the authors fixed during their research. It took me some time to map the bug to the kernel code, but I found out that it got fixed in the mainline about three months after the paper was published—which is a full kernel release cycle. The fix was not related to this research work in any way, it just happened that some unrelated changes made the race window wider and therefore made the bug surface. I was a bit grumpy when I discovered this, but all I can ask for is: please send out at least a description of a bug you trip over in your research work to the kernel community.

Another reason why it's hard for us to leverage research results is that academic operating system research has, as probably any other academic research area, a few interesting properties:

  • Base concepts in research are often several decades old, but they don't show up in the real world, even when they would help solve problems which have been worked around for more or less the same number of decades.

    We discussed the sporadic server model yesterday at OSPERT, but it has been around for 27 years. I assume that hundreds of papers have been written about it, and hundreds of researchers and students have improved the details and created variations, but there is almost no operating system providing support for it. As far as I know, Apple's OS X is the only operating system with a scheduling policy that is not based on priorities, but, as I learned, it's well hidden away from the application programmer.

  • Research often happens on narrow aspects of an already narrow problem space. That's understandable as you often need to verify and contrast algorithms on their own merit without looking at other factors. But that leaves the interested reader like me with a large amount of puzzle pieces to chase and fit together, which often enough made me give up.

  • Research often happens on artificial application scenarios. While again understandable from the research point of view, it makes it extremely hard, most of the time, to expand the research results into generalized application scenarios without shooting yourself in the foot and without either spending endless time or giving up. I know that it's our fault that we do not provide real application scenarios to the researchers, but in our defense I have to say that in most of the cases we don't know what downstream users are actually doing. We only get a faint idea of it when they complain about the kernel not doing what they expect.

  • Research often tries to solve yesterday's problems over and over while the reality of hardware and requirements has already moved to the next level of complexity. I can understand that there are still interesting problems to solve, but seeing the gazillionth paper about priority ceilings on uniprocessor systems is not really helpful when we are struggling with schedulability, lock scaling and other challenges on 64-core (and larger) machines.

  • Comparing and contrasting research results is almost impossible. Even if a lot of research happens on Linux, there is no way to compare and contrast the results, as researchers, most of the time, base their work on completely different base kernel versions. We talked about this last year, and I have to admit that neither Peter nor I found enough spare time to come up with a framework on which the various research groups could base their scheduler of the day. We haven't forgotten about this, but while researchers have to write papers, our time gets occupied by other duties.

  • Research and education seem to happen in different universes. It seems that operating system and realtime research have little influence on the education of Joe Average Programmer. I'm always dumbstruck when talking to application programmers who have not the faintest idea of resources and their limitations. It seems that the resource problems on their side are all solvable by visiting the hardware shop across the street and buying the next-generation machine. That approach also manifests itself pretty well in the "enterprise realtime" space where people send us test cases which refuse to even start on anything smaller than a machine equipped with 32GB of RAM and at least 16 cores.

    If you have any chance to influence that, then please help to plant at least some clue on the folks who are going to use the systems you and we create.

    A related observation is the inability of hardware and software engineers to talk to each other when a system is designed. While I observe that disconnect mainly on the industry side, I have the feeling that it is largely true in the universities as well. No idea how to address this issue, but it's going to be more important the more the complexity of systems increases.

I'll stop bashing on you folks now, but I think that there are valid questions and we need to figure out answers to them if we want to get out of the historically grown state of affairs someday.

In conclusion

We are happy that you use Linux and its extensions for your research, but we would be even happier if we could deal with the outcome of your work in an easier way. In the last couple of years we started to close the gap between researchers and the Linux kernel community at OSPERT and at the Realtime Linux Workshop, and I want to say thanks to Stefan Petters, Jim Anderson, Gerhard Fohler, Peter Zijlstra and everyone else involved. It's really worthwhile to discuss the problems we face with the research community, and we hope that you get some insight into those problems and into the requirements which are behind our pragmatic approach to solving them.

And of course we appreciate that some code which comes straight out of the research laboratory (the EDF scheduler from ReTiS, Pisa) actually got cleaned up and published on the Linux kernel mailing list for public discussion, and I really hope that we are going to see more like this in the foreseeable future. Problem complexity is increasing, unfortunately, and we need all the collective brain power to address next year's challenges. We have already started the discussion and the first interesting patches have shown up, so I really hope we can continue down that road and get the best out of it for all of us.

Thanks for your attention.


I got quite a bit of feedback after the talk. Let me answer some of the questions.

Q: Is there any place outside LKML where discussion between academic folks and the kernel community can take place?

A: Björn Brandenberg suggested setting up a mailing list for research-related questions, so that academics are not forced to wade through the LKML noise. If a topic needs a broader audience, we can always move it to LKML. I'm already working on that. It's going to be low traffic, so you should not be swamped with mail.

Q: Where can I get more information about the realtime preemption patch?

A: General information can be found on the realtime Linux wiki, this LWN article, and this Linux Symposium paper [PDF].

Q: Which technologies in the mainline Linux kernel emerged from the realtime preemption patch?

A: The list includes:

  • the Generic interrupt handling framework. See: Linux/Documentation/DocBook/genericirq and this LWN article.

  • Threaded interrupt handlers, described in LWN and again in LWN.

  • The mutex infrastructure. See: Linux/Documentation/mutex-design.txt

  • High-resolution timers, including NOHZ idle support. See: Linux/Documentation/timers/highres.txt and these presentation slides.

  • Priority inheritance support for user space pthread_mutexes. See: Linux/Documentation/pi-futex.txt, Linux/Documentation/rt-mutex.txt, Linux/Documentation/rt-mutex-design.txt, this LWN article, and this Realtime Linux Workshop paper [PDF].

  • Robustness support for user-space pthread_mutexes. See: Linux/Documentation/robust-futexes.txt and this LWN article.

  • The lock dependency validator, described in LWN.

  • The kernel tracing infrastructure, as described in a series of LWN articles: 1, 2, 3, and 4.

  • Preemptible and hierarchical RCU, also documented in LWN: 1, 2, 3, and 4.

Q: Where do I get information about the Realtime Linux Workshop?

A: The 2010 Realtime Linux Workshop (RTLWS) will be in Nairobi, Kenya, October 25-27. The 2011 RTLWS is planned to be at Kansas University (not confirmed yet). Further information can be found on the RTLWS web page. General information about the organisation behind RTLWS can be found on the OSADL page, and information about its academic members is on this page.

Conference impressions

I stayed for the main conference, so let me share my impressions. First off, the conference was well organized and, in general, the atmosphere was not really different from that of an open source conference. The realtime researchers seem to be a well-connected and open-minded community. While they take their research seriously, most of them freely admit that the ivory tower they live in can be a completely different universe. That was quite observable in various talks, where the number of assumptions and the perfectly working abstract hardware models made it hard for me to figure out how the results of the work could be applied to reality.

The really outstanding talks were the keynotes on day two and three.

On Thursday, Norbert Wehn from the Technical University of Kaiserslautern gave an interesting talk titled Hardware modeling: A critical assessment with case studies [PDF]. Norbert works on hardware modeling and low-level software for embedded devices, so he is not the typical speaker you would expect at a realtime-focused conference; it seems that the program committee tried to bring some reality into the picture. He gave an impressive overview of the evolution of hardware and the reasons why we have to deal with multi-core hardware and must face the fact that today's hardware is not designed for predictability and reliability. Realtime folks thus need to rethink their abstract models and take more complex aspects of the overall system into account.

One of the interesting aspects was his view on energy efficient computing: A cloud of 1.7 million AMD Opteron cores consumes 179MW while a cloud of 10 million Xtensa cores provides the same computing power at 3MW. Another aspect of power-aware computing is the increasing role of heterogeneous systems. Dedicated hardware for video decoding is about 100 times more power efficient than a software-based solution on a general-purpose CPU. Even specialized DSPs consume about 10 times more power for the same task than the optimized hardware solution.
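The arithmetic implied by those figures is worth spelling out; the per-core numbers below are derived from the quoted totals and are not from the talk itself:

```python
# Back-of-the-envelope check of the power figures quoted above:
# 1.7 million Opteron cores at 179 MW vs. 10 million Xtensa cores
# at 3 MW for the same aggregate computing power.

opteron_cores, opteron_mw = 1.7e6, 179
xtensa_cores, xtensa_mw = 10e6, 3

watts_per_opteron = opteron_mw * 1e6 / opteron_cores  # ~105 W per core
watts_per_xtensa = xtensa_mw * 1e6 / xtensa_cores     # 0.3 W per core

# Per core, the Xtensa draws roughly 350 times less power...
per_core_ratio = watts_per_opteron / watts_per_xtensa
# ...but since about 5.9 Xtensa cores stand in for one Opteron core,
# the system-level saving is "only" about 60x.
system_ratio = opteron_mw / xtensa_mw

print(round(watts_per_opteron, 1), watts_per_xtensa,
      round(per_core_ratio), round(system_ratio, 1))
```

In other words, the 60x cloud-level saving comes from trading a few fat cores for many frugal ones, which is exactly the heterogeneity argument made in the talk.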

Power-optimized hardware has a tradeoff, though: the loss of the flexibility which software provides. The mobile space has already arrived in the heterogeneous world, and researchers need to become aware of the increased complexity of analyzing such hybrid constructs and to develop new models which allow the verification of these systems in the hardware design phase. Workarounds for hardware design failures in application-specific systems are orders of magnitude more complex than on general-purpose hardware. All in all, he gave his colleagues from the operating system and realtime research communities quite a list of homework assignments and connected them back to earth.

The Friday-morning keynote was a surprising reality check as well. Sanjoy Baruah from the University of North Carolina at Chapel Hill titled his talk "Why realtime scheduling theory still matters". Given the title, one would assume that the talk would focus on justifying the existence of the ivory tower, but Sanjoy was very clear about the fact that realtime and scheduling research has focused for too long on uniprocessor systems and is missing answers to the challenges of the already-arrived multi-core era. He gave pretty clear guidelines about which areas research should focus on to prove that it still matters.

In addition to the classic problem space of verifiable safety-critical systems, he was calling for research which is relevant to the problem space and built on proper abstractions with a clear focus on multi-core systems. Multi-core systems bring new—and mostly unresearched—challenges like mixed criticalities, which means that safety critical, mission critical and non critical applications run on the same system. All of them have different requirements with regard to meeting their deadlines, resource constraints, etc., and therefore bring a new dimension into the verification problem space. Other areas which need care, according to Sanjoy, are component-based designs and power awareness.

It was good to hear that, despite our usual perception of the ivory tower, those folks have a strong sense of reality, but it seems they need a more or less gentle reminder from time to time. ECRTS was a really worthwhile conference and I can only encourage developers to attend such research-focused events and keep the communication and discussion between our perceived reality and the not-so-disconnected other universe alive.

Comments (84 posted)

Patches and updates

Memory management


  • Mimi Zohar: EVM. (July 22, 2010)


Page editor: Jonathan Corbet


T2 SDE 8.0: a universal distribution build kit

July 28, 2010

This article was contributed by Koen Vervloesem

There are not a lot of source-based Linux distributions, so when one of them announces a new release, it's always a good opportunity to take a look. We're not talking about Gentoo Linux or Linux From Scratch now, but about a relatively unknown but nonetheless interesting distribution: T2 SDE. After years of development, the project published a new stable release, version 8.0 ("Phoenix").

Distribution build kit

As the project's home page hastens to stress, T2 SDE (which stands for "System Development Environment") is not just a Linux distribution, it's an open source distribution build kit. At the core of T2 lies an automated build system that manages the whole compilation process from fetching the sources of packages to creating a CD image for a desktop system or a ROM image for embedded use. After initial creation of the tool chain, all packages are built inside a sandbox environment.

When configuring the build system, users can choose from various pre-defined target definitions or create their own. A target handles the selection of packages, C library (Glibc, dietlibc, uClibc), compiler (GCC or LLVM), and so on, and it even supports cross-compilation. Depending on the chosen target, the user can build a Linux distribution for an embedded system, a desktop system or a server. There is even an experimental target to build a T2 system to run on a wireless router (wrt2), but it is not yet finished. If someone picks up development of this target, the result should be an OpenWRT-like embedded Linux system.

The principal developer of T2 SDE is René Rebe, CTO of the German software development company ExactCODE, which uses T2 in commercial embedded systems, industrial computers and appliances. Hence, the real target audience of the distribution is developers of appliances and embedded systems. According to René, ExactCODE's clients are using T2 to power firewall products, greenhouse controllers, IPTVs, and a lot of other embedded devices. But T2 is also used as the base of general-purpose Linux distributions, such as Puppy Linux.

The Phoenix has landed

T2 SDE 8.0 is based on Linux kernel 2.6.34, GCC 4.5, Glibc 2.11 and X.Org 7.5. In total there are around 3200 packages in the repository. Users can download minimal ISO images for i486, x86_64, powerpc and powerpc64 (PowerMac G5) or download the source and build their own ISO image. The advantage of the latter is that it allows you to build an ISO file for another architecture (ARM, Blackfin, MIPS, Sparc and many others are supported) or one optimized for a specific processor instruction set, and to select other package sets. By the way, these ISO images can be fully cross-compiled, as was done for the minimal ISO images.

The website has extensive, if slightly out-of-date, documentation, with the T2 Handbook as an excellent in-depth reference, and two short step-by-step guides for the impatient: one for building and one for installing T2.

In short, after checking out the T2 sources with Subversion, the user starts the configuration of the build system as root with the command ./scripts/Config, which shows an ncurses interface. Then the user chooses a target (generic, t2 sde embedded, and so on) and a package selection, as well as the distribution media, the CPU architecture and, optionally, some optimizations. There are a lot of advanced configuration options, for example for choosing another C library or compiler. When the configuration is done, the build is started with a ./scripts/Build-Target command. Multiple targets can be built from the same T2 build system by specifying a configuration file with the -cfg argument. Building T2 is obviously best done on a T2 system, but with the right prerequisites it's also possible on other Linux distributions.

Working with T2

T2 is obviously an excellent framework for building embedded Linux systems. But is it also suitable as a desktop system? It depends on what the user is looking for. The target users are not those who want a completely preconfigured operating system such as Ubuntu. In contrast, T2 is the ultimate do-it-yourself distribution: users install the base system from the minimal ISO image and then install the packages they need. The operating system installation and configuration tool stone is really bare-bones, but it does the job. Just be sure to select "Full install" when asked about the package set.

In contrast to many other distributions, T2 only applies patches to the original source files when absolutely necessary, and it tracks the latest versions of all packages. This means that users have a cutting-edge Linux distribution, but they have to configure a lot themselves. Moreover, all services are disabled by default. All this makes T2's philosophy closer to the BSD world than to most Linux distributions.

Building and installing a package on a T2 system is simply done with the Emerge-Pkg script (after checking out the T2 source with Subversion). This script not only builds and installs the named package, but also its dependencies. The same command can be used to update an already installed package. Removing a package is done with:

    mine -r packagename

Here mine is T2's binary package manager. By the way, T2 uses human-readable text files (found in /var/adm) for package management. For example, a list of all installed files belonging to a package can be found in /var/adm/flists/packagename. This makes it possible to query package information with normal UNIX tools. For example, grepping for the name of a header file in /var/adm/flists will give you the package which provides that file.
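Because the package database is plain text, this kind of query is also trivial to script. The sketch below mimics the /var/adm/flists layout in a temporary directory so it stays runnable anywhere; the exact line format and the package contents shown are illustrative assumptions, not T2's actual data:

```python
# Simulate T2-style per-package file lists and answer the question
# "which package provides this header?" -- the scripted equivalent of
# grepping /var/adm/flists on a real T2 system.
import os
import tempfile

with tempfile.TemporaryDirectory() as adm:
    flists = os.path.join(adm, "flists")
    os.mkdir(flists)

    # One text file per package, one "package: path" line per installed
    # file (hypothetical contents for illustration).
    files = {
        "zlib": ["usr/include/zlib.h", "usr/lib/libz.so"],
        "ncurses": ["usr/include/curses.h", "usr/lib/libncurses.so"],
    }
    for pkg, paths in files.items():
        with open(os.path.join(flists, pkg), "w") as f:
            f.writelines(f"{pkg}: {p}\n" for p in paths)

    def owner_of(filename):
        """Return the package whose file list mentions filename."""
        for pkg in sorted(os.listdir(flists)):
            with open(os.path.join(flists, pkg)) as f:
                if any(filename in line for line in f):
                    return pkg
        return None

    result = owner_of("zlib.h")
    print(result)  # → zlib
```

The design point this illustrates is the one the article makes: with flat text files as the database, any UNIX text tool (or a few lines of script) becomes a package-query tool, with no dedicated binary database format needed.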

However, dependencies are currently a big hurdle for desktop use of T2. Emerge-Pkg only installs direct dependencies, so a lot of builds fail. The user then has to run Emerge-Pkg on the failed dependencies to try to build them directly; if that doesn't work out, the error log files in /var/adm/logs can give some information about which dependencies are missing, and those should be installed before trying to build the original package again. Emerge-Pkg has an option to build all indirect dependencies, but this often builds far too much, so using it isn't advisable. With the current approach, the build system is not user-friendly enough to use T2 as a desktop system without annoyances, but René is aware that the problem has been neglected for too long and is working on improving the experience.

T2 uses its own non-standard framework for network configuration, which can be confusing at first, although it looks somewhat like Debian's /etc/network/interfaces. The network configuration is stored in /etc/conf/network and allows setups from very simple to complex, even with multiple profiles and basic firewall rules. The T2 handbook is an invaluable resource for getting a network running on a T2 system, although the system configuration tool stone ("Setup Tool ONE") can handle simple network configurations.


There will always come a moment when a user wants to install a package that T2 doesn't have. Luckily the package format (with the .desc file extension) is quite easy to grasp: it's a simple text file with some metainformation such as a description and a URL where the source of the program can be downloaded. T2 understands a lot of build systems (among others GNU autoconf, Makefiles, cmake, Python and Perl Makefile.PL) and automatically fetches and extracts the source, modifies the package build and compiles and installs the software. So in most cases you only have to fill in some basic information to create a new T2 package. In case this doesn't work, you have to add a separate .conf file that modifies the build process or .patch files with patches to the source. More information about the process can be found in the T2 handbook, and there is also a simple package creation tutorial.

When you have created a new package, contributing it to the T2 mailing list guarantees that it will be added to the repository. There is also the IRC channel #t2 on Freenode, where a small but helpful community is available. All in all, the process of writing your own packages is really straightforward: last year, your author contributed a handful of packages to T2 while evaluating the distribution, and it struck him how extremely readable and self-documenting the package format is.

The future

T2 SDE is not only a cross-architecture distribution, it also wants to become cross-platform. While currently the only kernel it supports is Linux, a long-term goal is support for other open source kernels like MINIX, GNU Hurd, *BSD, Haiku, OpenDarwin or OpenSolaris. At the moment no work is being done in this area, but the build system should make the task doable. According to René, who is especially interested in having a microkernel to run T2 on, it should not be difficult beyond patching some of the packages:

If GCC can produce working executables for the platform, it's an easy game: you just have to package the kernel header and C library and you're already halfway. Most packages should already build, and then you just have to collect the kernel sources into a kernel package and maybe package some system control tools.

A first step in the direction of other kernels has already been made, and it is, surprisingly, support for Windows. More specifically, T2 added support for MinGW in August of last year. MinGW (Minimalist GNU for Windows) is a port of GCC and GNU Binutils for use in the development of Windows applications. This means that T2 can be used to cross-compile 32-bit Windows binaries on a Linux system. The work was done by ExactCODE for a client who wanted to compile a Windows executable from a more automated (read: UNIX-like) environment.

Another important mid-term project is improved LLVM/Clang support, René says:

It is already pretty usable. I have one virtual machine that has most executables compiled by LLVM. However, there is still some work needed on cross-compiling and bootstrapping the base system, as well as compiling the Linux kernel, because LLVM has issues with some advanced inline assembly and 16-bit boot code.


If you want to use T2 SDE as a desktop system, expect to invest a lot of time chasing dependencies and configuring a lot of details yourself. Because T2 SDE doesn't have extensive documentation about the daily use of the system, like Gentoo and Arch Linux have, it's no trivial task. However, its source-based nature and its clean BSD-like philosophy will surely appeal to do-it-yourself users.

These issues notwithstanding, T2 SDE is a powerful and flexible Linux distribution for all sorts of purposes. What's interesting is that it offers just one distribution build kit that can be used to create various targets, from embedded to desktop, while many other distributions have different versions for different targets. Moreover, T2's handbook covers extensively how to create packages and how to build your own distribution. If you want to build a special-purpose Linux distribution, T2 SDE should be one of the first places to look.

Comments (4 posted)

New Releases

Debian Live squeeze alpha2

The Debian Live team has announced the second alpha of Debian Squeeze Live images. These images were built using the archive state of squeeze from 2010-07-17, plus some additional components from sid.

Full Story (comments: none)

Debian Policy released

Debian Policy has been released. Lintian has been updated to v2.4.3 for this release of Policy. Click below to see the primary changes in this version.

Full Story (comments: none)

FreeBSD 8.1-RELEASE Available

The FreeBSD Release Engineering Team has announced the availability of FreeBSD 8.1-RELEASE. "This is the second release from the 8-STABLE branch which improves on the functionality of FreeBSD 8.0 and introduces some new features."

Full Story (comments: none)

SchilliX-0.7.0 ready for testing

SchilliX-0.7.0 is available for testing. This version uses OpenSolaris Nevada Build 130.

Full Story (comments: none)

Debian Edu/Skolelinux 6.0.0 alpha0 test release

Debian Edu/Skolelinux 6.0.0 alpha0 is available for testing. "This is the first test release based on Squeeze. The focus of this release is to test the user application selection."

Full Story (comments: none)

Distribution News

Quote of the week

Be bold. The developers aren't a cabal cult worshiping the Dark God of Ubuntu, they're friendly people willing to help. If you have questions and a web search doesn't answer them, come to IRC and ask! Along the way, you can learn something new from the conversations that go there all the time.
-- Maia Kozheva

Comments (none posted)


Report: An Exploration of Fedora's Online Open Source Development Community

Diana Harrelson, an anthropology graduate student, spent several months surveying the Fedora community; a draft version of her report is now available. It looks at contributors' motivations and problems they have encountered, and makes a number of recommendations on how to make the project easier to contribute to. "The key here, and the large difference between FLOSS development processes and traditional ones, is that it's not the act of doing something that needs approval; instead it's the result of the action and quality of the work that must be approved. Again, this is where not only having a mentor program for new contributors is useful, but also making such a program highly visible, transparent, and accessible is important."

Comments (2 posted)

Rawhide changes: systemd and Fedora 14 branch

There are some interesting changes coming for Rawhide users, starting with the fact that systemd is now the default init system. The early reports are mostly about dependency issues; it's not clear that all that many users have gotten as far as running the new system yet. "I have tested all this quite extensibly on my machines, but of course, I am not sure how this will break on other people's machines. I sincerely hope I didn't break anything major with this transition. So please report bugs and don't rip off my head because I might have broken your boot... I didn't do it on purpose, promised!"

Meanwhile, the Fedora 14 branch is coming on July 27, with the added twist that the project is switching its CVS-based system over to git at the same time. For now, they will be mostly focused on just making it work, but there's some interesting ideas for the future: "Later on we will start to explore more interesting advancements such as automatic patch management with exploded sources, linking to upstream source repositories, automatic %changelog generation from git changelogs, or things I haven't even thought about."

Comments (4 posted)

SUSE Linux and openSUSE

openSUSE 11.0 has reached End Of Life

openSUSE 11.0 has been officially discontinued and is no longer supported. "openSUSE 11.0 was released on June 17 2008, making it 2 years and 1 month of security and bugfix support."

Full Story (comments: none)

Jos Poortvliet named openSUSE Community Manager

The openSUSE project has announced that Jos Poortvliet will be its new community manager. "Jos commented, 'The opportunity to become part of the international openSUSE community is very exciting. There are a great number of interesting developments going on in the free software world, and openSUSE plays a major role in many of them. I look forward to working with the community on these, helping it grow, finding new directions and ways of developing, and delivering its innovative technologies to users and developers around the world.'"

Full Story (comments: 1)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Hands on: Jolicloud 1.0, a Linux distro in progress (ComputerWorld)

ComputerWorld has a review of Jolicloud 1.0. "Does Jolicloud live up to its promises? Somewhat. Its biggest problem is that it feels more like a second beta, not a 1.0 release; it needs more work before it's truly useful instead of one step above a curiosity."

Comments (none posted)

Page editor: Rebecca Sobol


OSCON: Building communities

July 28, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

How do you build a successful community that attracts contributors? Two talks at OSCON offered very good advice on the topic: "Secrets of building and participating in open source communities" by Drupal founder Dries Buytaert, and "Junior Jobs and Bite-sized Bugs: Entry Points for New Contributors to Open Source," which was co-presented by Mel Chua of Red Hat and Asheesh Laroia of OpenHatch. Both presentations offered some worthwhile insight and tips on building community.

To be sure, these weren't the only talks focused on the fine art of community management. However, Buytaert's talk seemed worthwhile to attend because it's obvious that Drupal does enjoy a healthy community of contributors, and the "junior jobs" presentation seemed worthwhile because it was focused on practical techniques for dealing with an obvious problem for any community, rather than being of the hand-wavy motivational community talk variety.

Secrets of Building Communities

Given that community management is a well-covered topic, one might be skeptical about a presentation that promises to teach "secrets" of building community. And, indeed, Buytaert's presentation was less revelatory than the title suggested. But Buytaert did, in fact, offer very useful advice and insight into the success of Drupal that might well apply to other communities.

He started with a rundown of Drupal's success so far. According to Buytaert's statistics, Drupal sites account for about 1% of Web sites. Drupal is downloaded about 300,000 times per month, and sees about 1.5 million unique visitors per month. He also said that Drupal has more than 6,000 modules. All of which point to a project that's doing something right, but what? Buytaert offered several pieces of advice or wisdom for the audience.

First, Buytaert advised that there is no get-rich-quick formula for building a community. Second, when growth does happen, embrace the growing pains. Buytaert pointed to an incident in 2005 when Drupal's server was pushed to its limit and couldn't handle the traffic it was receiving. He said that the episode made the Drupal community stronger, and that "communities are always a bit broken. Nothing better than suffering together."

Next, he offered two pieces of concrete advice: provide the right tools, and provide an architecture of evolution. He suggested that any project should have a modular design and favor accessible technologies like PHP and MySQL (if appropriate) rather than languages or technologies that are perhaps better but less accessible. Drupal has succeeded in part because PHP and MySQL are so commonplace. Would Drupal have flourished to the same extent if it had been written in Perl and used a less popular database? Unlikely.
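
The "architecture of evolution" point is easiest to see in code. The following Python sketch is purely illustrative (Drupal's hook system is in PHP and differs in detail); it shows the minimal shape of a plugin registry that lets outside contributors extend a system without touching its core:

```python
# Hypothetical minimal plugin registry, illustrating modular design:
# contributors add features by registering new functions, never by
# editing the core dispatch code.
PLUGINS = {}

def plugin(name):
    """Decorator that registers a function under a plugin name."""
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("greet")
def greet(who):
    return f"Hello, {who}!"

def run(name, *args):
    """Core dispatcher: looks up and invokes a registered plugin."""
    return PLUGINS[name](*args)

print(run("greet", "Drupal"))
```

The core stays small and stable; the module namespace is where the community's 6,000-plus contributions live.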

Buytaert offered one unusual "secret": build a commercial ecosystem. But he also suggested that projects need to "find a higher purpose" while striving to make money. For instance, he offered Drupal's goal of democratizing online publishing and Mozilla's goals for an open web: both initiatives have powered commercial ecosystems while still melding well with open source communities.

Enabling Bite-sized Contributions

Laroia and Chua have a fair amount of experience working with new contributors. Chua spends a great deal of time working with the Fedora community, while Laroia works on OpenHatch, a project that helps connect newcomers to projects by breaking bugs and tasks down into small, introductory chunks.

The start of the Junior Jobs presentation was slightly unorthodox. Chua and Laroia asked the audience to be "productively lost" and explore some project sites, looking for the bug tracker, bugs to work on, or other resources that a new contributor might seek out. The audience, about 20 people, gathered in pairs or small groups and checked some popular project sites like Sugar Labs, the Fedora web site, and other mainstream open source projects.

The audience, predictably, discovered that finding their way around open source project sites for contributor resources was not always a simple exercise. Even when contributors can find the bug tracker, they may need a translator to help them understand the information. One example given was the "UNCO" status in GNOME's bug tracker, which was not immediately obvious as "unconfirmed" to some of the audience. New contributors do not experience these sites like longstanding contributors, who know where to look and probably have deeply buried resources bookmarked for fast access.
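
A tiny "translator" layer of the kind the presenters implied could look like the sketch below. The status names follow Bugzilla's conventions (UNCO is GNOME's unconfirmed state, as mentioned above), but the plain-language explanations are this article's own illustrative wording, not official documentation:

```python
# Hypothetical glossary mapping terse bug-tracker statuses to
# newcomer-friendly explanations (status names per Bugzilla;
# explanation text is illustrative only).
STATUS_GLOSSARY = {
    "UNCO": "Unconfirmed: nobody has reproduced this bug yet",
    "NEW": "Confirmed, but not yet assigned to a developer",
    "ASSIGNED": "A developer has taken responsibility for this bug",
    "RESOLVED": "A fix or a decision has been recorded",
}

def explain(status):
    """Return a newcomer-friendly gloss for a tracker status code."""
    return STATUS_GLOSSARY.get(status, f"Unknown status: {status}")

print(explain("UNCO"))
```

Even a static table like this, shown alongside the tracker, would have saved the audience its moment of confusion.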

Part of the problem is that tools and documentation that are helpful to experienced contributors may not be so useful for newbies. As an illustration, they compared cookbooks written for new cooks with those written for experienced ones: it might be necessary to explain a "rolling boil" or what it means to "fold" an ingredient into a mix for a beginning cook, but such explanations are only frustrating to an experienced cook.

Looking for pointers for your project? In talking with Chua and Laroia after the presentation, they offered Dreamwidth as a prime example of a project that does well in attracting new and diverse contributors. Chua also pointed to Drupal's Dojo for its classes on how to get started with Drupal contribution, and the Fedora How to be a successful contributor document. Chua also noted that each community has to forge its own path:

You can't point to specific wiki pages and say 'these words are all you need to make it easy for new contributors to join' - it's about the human dynamics of the community, and that's hard to capture in static form.

The overall message of this presentation should be heeded by any project that looks to attract contributors who are new to contributing to open source projects. Breaking down tasks into "junior jobs" or easy-to-tackle bugs is fine, but that's only so useful if the sites are difficult to navigate or the bugtracker is confusing.

Chua recommends thinking of FOSS contributions more broadly:

It may be helpful to think of FOSS contributions as being towards not just a codebase but a *community* that includes and is centered around a codebase — so you can patch the code, but you can also patch the tests and docs and processes and how the technology you're building and the people you're building it with interact with the rest of the world.

Overall, the dominant message of the presentation was to communicate with new contributors and try to anticipate how newcomers will view your project. As Laroia said at the close of the talk, "Communicate. It can't make things any worse." While not a definitive guide to community management, that is a good first step.

Comments (2 posted)

Brief items

Quotes of the week

I'm reminded of my sister's 3-year-old child "helping" with the cooking: good for the kid's development, but not necessarily for the dish and certainly not less effort for the parent. We don't have the time to play parent to all our users and we shouldn't try.
-- Ian Jackson

So the Gnash team is broke, and has been for most of a year. This has forced many, but not all of the Gnash developers to find paying work, and mostly stop working on Gnash. The few of us left focused on Gnash like to eat and pay bills.
-- Rob Savoye

Comments (3 posted)

BlueDevil: a new KDE bluetooth stack

The KDE software collection has a new Bluetooth stack called "BlueDevil." "This release should be stable enough to be used by everybody, but we’re looking specially for advanced users with 'compiling skills' so we can get quick feedback and fix as many bugs as possible."

Comments (3 posted)

Neary: GNOME Census

Dave Neary has posted the highlights of his work to determine where contributions to GNOME come from. "While over 70% of GNOME developers identify themselves as volunteers, over 70% of the commits to the GNOME releases are made by paid contributors. Red Hat are the biggest contributor to the GNOME project and its core dependencies. Red Hat employees have made almost 17% of all commits we measured, and 11 of the top 20 GNOME committers of all time are current or past Red Hat employees. Novell and Collabora are also on the podium."

Comments (34 posted)

GNOME 3.0 release delayed

The much-anticipated release of GNOME 3.0—scheduled for September—has been pushed back to March 2011 due to quality issues in the code. The announcement was made at GUADEC (GNOME users' and developers' European conference), which is being held July 26-30 in The Hague, Netherlands. There will be a GNOME 2.32 release in September along with a GNOME 3 beta. GNOME 2.32 will have the usual performance enhancements and bug fixes along with a new control center design, UPnP, and color management support. The extra time will be used to improve GNOME accessibility support, GNOME Shell, and documentation for GNOME 3.0. There should be a press release on the GNOME web site before too long; stay tuned.

Comments (28 posted)

Hibari "big data" database released

Gemini Mobile Technologies has sent out a press release announcing the availability (under the Apache license) of "Hibari," a non-relational database, implemented in Erlang. "Hibari is a database optimized for the highly reliable, highly available storage of massive data, so-called 'Big Data.' Hibari can be used in Cloud Computing Applications such as web mail, Social Networking Services (SNS), and other services requiring storage of tera-bytes and peta-bytes of new daily data."

Comments (5 posted)

van Rossum: Thoughts fresh after EuroPython

Guido van Rossum has put together a set of impressions from his recent experience at EuroPython; they are a good read for anybody curious about the future direction of the Python community. "This made me think of how the PEP process should evolve so as to not require my personal approval for every PEP. I think the model for future PEPs should be the one we used for PEP 3148 (futures, which was just approved by Jesse): the discussion is led and moderated by one designated "PEP handler" (a different one for each PEP) and the PEP handler, after reviewing the discussion, decides when the PEP is approved. A PEP handler should be selected for each PEP as soon as possible; without a PEP handler, discussing a PEP is not all that useful. The PEP handler should be someone respected by the community with an interest in the subject of the PEP but at an arms' length (at least) from the PEP author."

Full Story (comments: none)

Report: Python Language Summit at EuroPython 2010

Tim Golden has posted a report from the Python Language Summit, recently held in Birmingham. "The PyPy guys also announced a C API bridging layer which should enable a range of Python extension modules to work directly with PyPy. This is only a stepping stone by way of broadening support. Brett [Cannon] suggested that the Unladen Swallow merge to trunk was waiting for some work to complete on the JIT compiler and Georg [Brandl], as release manager for 3.2, confirmed that Unladen Swallow would not be merged before 3.3."

Full Story (comments: 2)

QuteCsound 0.6.0 released

Version 0.6.0 of QuteCsound is out with a long list of new features. "QuteCsound is a frontend for Csound featuring a highlighting editor with autocomplete, interactive widgets and integrated help. It can open files created in MacCsound, and aims to be a simple yet powerful and complete development environment for Csound."

Full Story (comments: none)

Sphinx 1.0 released

The final 1.0 release of the Sphinx documentation tool is out; new features include domains and support for output in the EPub format. LWN looked at Sphinx back in June.

Full Story (comments: none)

Systemd v5 released

The fifth systemd release is out. "This includes a fairly major interface change. After some longer discussions on fedora-devel systemd-install is now folded into systemctl and greatly simplified in its invocation."

Full Story (comments: 1)

Newsletters and articles

Development newsletters from the last week

There was exactly one development newsletter in the LWN mailbox this week; everybody else must be on vacation.

Comments (none posted)

Release Early, Release Often (The Chromium Blog)

The Chromium Blog has announced an accelerated pace for Google Chrome stable releases. "The first goal is fairly straightforward, given our pace of development. We have new features coming out all the time and do not want users to have to wait months before they can use them. While pace is important to us, we are all committed to maintaining high quality releases - if a feature is not ready, it will not ship in a stable release."

Comments (29 posted)

MediaWiki and Script Translation for the Morevna Project (Free Software Magazine)

Free Software Magazine looks at the use of MediaWiki by the Morevna Project. "Putting together a collaborative film production involves a lot of bits and pieces. Workflow is unclear at the beginning, and has to be developed organically. That argues against putting too much structure into the software that you use - otherwise it would straightjacket you. Furthermore, when you're working on an artistic project, you don't want to waste time developing (or fixing) software that doesn't work like you need it to. So it makes sense to use something that is well-tested. So, really, MediaWiki is a no-brainer." (LWN covered the Morevna Project last March.) (Thanks to Paul Wise)

Comments (3 posted)

Shikari: Announcing the world's fastest VP8 decoder: ffvp8

The Diary Of An x264 Developer has an introduction to ffvp8. "Back when I originally reviewed VP8, I noted that the official decoder, libvpx, was rather slow. While there was no particular reason that it should be much faster than a good H.264 decoder, it shouldn't have been that much slower either! So, I set out with Ronald Bultje and David Conrad to make a better one in FFmpeg. This one would be community-developed and free from the beginning, rather than the proprietary code-dump that was libvpx. A few weeks ago the decoder was complete enough to be bit-exact with libvpx, making it the first independent free implementation of a VP8 decoder. Now, with the first round of optimizations complete, it should be ready for primetime. I'll go into some detail about the development process, but first, let's get to the real meat of this post: the benchmarks."

Comments (none posted)

Page editor: Jonathan Corbet


Non-Commercial announcements

LiMo and GNOME join forces

The LiMo and GNOME Foundations have announced a new partnership. "Starting immediately, LiMo Foundation will become a member of GNOME Foundation's Advisory Board and GNOME Foundation will become an Industry Liaison Partner for LiMo Foundation. This development represents a natural formalization founded upon the significant use of GNOME Mobile software components within Release 2 and Release 3 of the LiMo Platform."

Comments (8 posted)

Commercial announcements

GENIVI goes with MeeGo

The Linux Foundation has sent out an announcement stating that the GENIVI alliance has chosen MeeGo as the base of its "in-vehicle infotainment" (IVI) platform. "IVI is a rapidly growing and evolving field that encompasses the digital applications that can be used by all occupants of a vehicle, including navigation, entertainment, location-based services, and connectivity to devices, car networks and broadband networks. MeeGo will provide the base for the upcoming GENIVI Apollo release that will be used by members to reduce time to market and the cost of IVI development. MeeGo's platform contains a Linux base, middleware, and an interface layer that powers these rich applications."

Full Story (comments: 1)

Legal Announcements

The DMCA just got a little weaker

Here is the text of a ruling by the US Court of Appeals in a suit by MGE UPS Systems against General Electric. The court has ruled that simply circumventing technical measures is not, by itself, a violation of the Digital Millennium Copyright Act. "However, MGE advocates too broad a definition of "access;" their interpretation would permit liability under § 1201(a) for accessing a work simply to view it or to use it within the purview of 'fair use' permitted under the Copyright Act. Merely bypassing a technological protection that restricts a user from viewing or using a work is insufficient to trigger the DMCA's anti-circumvention provision. The DMCA prohibits only forms of access that would violate or impinge on the protections that the Copyright Act otherwise affords copyright owners." What this ruling means in the long term - especially for defendants who are not GE - remains to be seen, but it is a step in the right direction.

Comments (4 posted)

The EFF wins three DMCA exemptions

The Electronic Frontier Foundation has announced that it has won three exemptions to the DMCA's anti-circumvention rules as part of the regular, three-year rulemaking process. These include cellphone unlocking, fair use of DVD content, and, happily, jailbreaking of locked-down smartphones. "In its reasoning in favor of EFF's jailbreaking exemption, the Copyright Office rejected Apple's claim that copyright law prevents people from installing unapproved programs on iPhones: 'When one jailbreaks a smartphone in order to make the operating system on that phone interoperable with an independently created application that has not been approved by the maker of the smartphone or the maker of its operating system, the modifications that are made purely for the purpose of such interoperability are fair uses.'"

Comments (11 posted)

Articles of interest

Gratton: The Sad State of Open Source in Android tablets

Angus Gratton has posted a survey of GPL compliance across several Android-based tablets, along with some comments on his findings. "With the exception of Barnes & Noble's Nook e-reader, a device that isn't even really a tablet, I couldn't find a single tablet manufacturer who was complying with the minimum of their legal open source requirements under GNU GPL. Let alone supporting community development."

Comments (22 posted)

Sony now facing single class-action for PS3 other-OS removal (ars technica)

Ars technica reports on the status of the suits against Sony for removing the "Other OS" option, thus removing the ability to install Linux. Those suits are now combined into a single class-action lawsuit. "None of the plaintiffs are likely to get rich. If the plaintiffs win, the lawyers will get paid, Sony will probably have to pay PlayStation 3 owners a small refund to make up for the loss of the option, or there will be a coupon or game giveaway. This consolidation just makes that settlement more likely, and much simpler from a legal perspective. It shows a large number of gamers affected, and makes reasonable restitution possible on a large scale."

Comments (11 posted)

Droid X rooted, bootloader still locked (ars technica)

Ars technica reports that Stephen Bird has found a way to gain root access on Motorola's new Droid X smartphone. "Droid X owners can use the Android debugging tool to run the exploit on their device. Step-by-step instructions are available from the AllDroid forum community. The exploit will give users the ability to modify the contents of the filesystem and use certain third-party software like screenshot and tethering tools that only work on rooted devices."

Comments (5 posted)

Novell opens Linux appliance gallery (Channel Register)

Channel Register reports that Novell is launching the SUSE Gallery. "It has been a year since Novell launched its SUSE Appliance Program, which offers a set of online tools, dubbed SUSE Studio, for spinning up software appliances based on its SUSE Linux distro. The appliance tools were aimed at software developers who wanted to code appliances for their own purposes - perhaps as a means of more easily supporting and redistributing their own application software to their customers - not for distributing software appliances to the general public. But that is precisely what some software developers want to be able to do, according to Joanna Rosenberg, ISV marketing manager at Novell, and so on the first birthday of the SUSE Appliance Program, Novell is opening up what it calls the SUSE Gallery."

Comments (none posted)

Government Computers to Get Linux-Based Operating System (Moscow Times)

According to an article in the Moscow Times, the Russian government is working on a Linux-based "national operating system" for its computers. "The operating system, for use on the computer systems of government agencies and state-run companies, will be 90 percent based on the open-source Linux operating system, Deputy Communications and Press Minister Ilya Massukh said. He said use of the operating system would be optional for all agencies." (Thanks to Eugene Markow)

Comments (21 posted)

What Could You Do With a $35 Tablet? (NetworkWorld)

The media has been buzzing about a prototype tablet from India. This article in NetworkWorld is one of many covering a device that may be available in 2011. "The $35 tablet prototype from India will run a variation of the open source Linux operating system. It has 2Gb of RAM, but no internal storage--relying on a removable memory card. The device has a USB port, and built-in Wi-Fi connectivity. Seems like reasonable enough specs--especially for $35. On the software side, the $35 tablet has a PDF reader, multimedia player, video conferencing, Web browser, and word processor."

Comments (18 posted)

New Books

Being Geek: The Software Developer's Career Handbook--New from O'Reilly

O'Reilly has released "Being Geek: The Software Developer's Career Handbook" by Michael Lopp.

Full Story (comments: none)

Calls for Presentations

PGDay.EU 2010 Call for Papers

PGDay.EU 2010, the European PostgreSQL conference, will be held December 6-8, 2010 in Stuttgart, Germany. "We are now accepting proposals for talks. Please note that we are looking for talks in both English and German." The submission deadline is October 11, 2010.

Full Story (comments: none)

Upcoming Events

Events: August 5, 2010 to October 4, 2010

The following event listing is taken from the Calendar.

August 1–7: DebConf10, New York, NY, USA
August 4–6: YAPC::Europe 2010 - The Renaissance of Perl, Pisa, Italy
August 7–8: Debian MiniConf in India, Pune, India
August 9–10: KVM Forum 2010, Boston, MA, USA
August 9: Linux Security Summit 2010, Boston, MA, USA
August 10–12: LinuxCon, Boston, USA
August 13: Debian Day Costa Rica, Desamparados, Costa Rica
August 14: Summercamp 2010, Ottawa, Canada
August 14–15: Conference for Open Source Coders, Users and Promoters, Taipei, Taiwan
August 21–22: Free and Open Source Software Conference, St. Augustin, Germany
August 23–27: European DrupalCon, Copenhagen, Denmark
August 28: PyTexas 2010, Waco, TX, USA
August 31–September 3: OOoCon 2010, Budapest, Hungary
August 31–September 1: LinuxCon Brazil 2010, São Paulo, Brazil
September 6–9: Free and Open Source Software for Geospatial Conference, Barcelona, Spain
September 7–9: DjangoCon US 2010, Portland, OR, USA
September 8–10: CouchCamp: CouchDB summer camp, Petaluma, CA, USA
September 10–12: Ohio Linux Fest, Columbus, OH, USA
September 11: Open Tech 2010, London, UK
September 13–15: Open Source Singapore Pacific-Asia Conference, Sydney, Australia
September 16–18: X Developers' Summit, Toulouse, France
September 16–17: Magnolia-CMS, Basel, Switzerland
September 16–17: 3rd International Conference FOSS Sea 2010, Odessa, Ukraine
September 17–18: FrOSCamp, Zürich, Switzerland
September 17–19: Italian Debian/Ubuntu Community Conference 2010, Perugia, Italy
September 18–19: WordCamp Portland, Portland, OR, USA
September 18: Software Freedom Day 2010, Everywhere, Everywhere
September 21–24: Linux-Kongress, Nürnberg, Germany
September 23: Open Hardware Summit, New York, NY, USA
September 24–25: BruCON Security Conference 2010, Brussels, Belgium
September 25–26: PyCon India 2010, Bangalore, India
September 27–29: Japan Linux Symposium, Tokyo, Japan
September 27–28: Workshop on Self-sustaining Systems, Tokyo, Japan
September 29: 3rd Firebird Conference - Moscow, Moscow, Russia
September 30–October 1: Open World Forum, Paris, France
October 1–2: Open Video Conference, New York, NY, USA
October 1: Firebird Day Paris - La Cinémathèque Française, Paris, France
October 3–4: Foundations of Open Media Software 2010, New York, NY, USA

If your event does not appear here, please tell us about it.

Mailing Lists

A new mailing list for libpeas

A new mailing list has been created for the libpeas GObject-based plugin engine; see the announcement below for subscription details.

Full Story (comments: none)

Page editor: Rebecca Sobol

Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds