Longtime GNOME developer and community member Luis Villa kicked off the GNOME users' and developers' European conference (GUADEC) with a challenge to the project to "embrace the web" as a way for the project to remain relevant. The web has won the battle to produce a "robust, libre platform" over various desktop efforts like GNOME, but there is still time for the project to find a seat at that table. It is a "big scary step" to take, Villa said, but one that he thinks is ultimately the right direction for the project.
While he currently works for Mozilla, which might have colored his thinking some, Villa made clear (in true lawyerly fashion) that he was representing no one's views but his own. He was taking vacation time to attend the conference and wore a shirt from a company (Ximian) that "no one can be pissed at any more". He was there because "I love GNOME", he said.
Villa was speaking from "the other side", referring back to a talk he gave at GUADEC in 2006 when he was "vanishing into the bowels of law school" and told the project members that he would see them on the other side. That historical perspective was a major element of Villa's talk; one theme revolved around a picture of a party on a Paris boat at the first GUADEC in 2000. He considered what one would tell the folks in that picture about the progress that has been made in the ten years since.
Today there is a free and open platform that runs on all PCs and laptops, but which also runs on phones and televisions, a fact which would likely surprise the crowd from 2000. Most people using that platform also use Linux every day; the licensing of the platform is generally LGPL or more permissive. Even Microsoft has an implementation. There are some 400 million users. High school kids are learning to program for this platform partially by using a "View Source" button. Unfortunately Villa would have to tell those folks that this platform isn't GNOME, it is, instead, the web.
So the question is: what should GNOME do about that? Villa described "one possible answer", which is for GNOME to join forces with the web development community and bring its strengths, in terms of technical ability, culture, user focus, and hacker mentality, to that party. GNOME should figure out how to deliver the best combination of desktop and web applications to users.
Basically, the web won because it "co-opted our message", he said. He pointed to the famous Gandhi quote ("First they ignore you ...") but noted that things don't always work out that way. "Sometimes your ideas win without you", he said.
But, the web didn't win because it is perfect for either developers or users. There are problems with proprietary applications as well as control and privacy issues. It delivers sophisticated, powerful applications, though, which are run by someone else, freeing users from that burden. It's not a fad, and not going away, as it will only get better, he said. He also said that he had pointed the audience to an EtherPad site as a way to send questions, rather than to a Gobby instance, because he could be sure that all the attendees had web browsers while many would not have Gobby installed.
The web should be treated as a first-class object, and various desktop applications should integrate with web services, he said. He pointed to the GNOME background image chooser, which now allows picking images from Flickr and other web photo sites, as an example. Though he noted that Zeitgeist hadn't made the cut for GNOME 3.0, he saw it as a step in the right direction because it treats the web as just another object.
Beyond that, the project should be thinking about even bolder strategies that would not just copy what the web is doing. It would be a bigger and harder step, but he suggested that GNOME start writing code for the browsers to provide any needed functionality. "Bring our ideas, bring our code" to fix areas that don't work for GNOME. As a concrete proposal, he thought the Desktop Summit being planned for next year (combining GUADEC and KDE's Akademy conference) should be renamed the "Free User Software Summit" and include browser developers from Mozilla and Google.
He acknowledged that this would be a major upheaval: "I told you this would be hard." While it is going to require lots of new code, and potentially abandoning lots of old code, it is still an embodiment of "our old culture". Bringing that culture of freedom and user-focus to the web is Villa's prescription.
He is optimistic about the future because of the people that make up GNOME. "We are the right people" to do this job, but need the right code. The clear indication from the talk is that he's convinced that the GNOME project's current direction isn't right and that a radical shift in focus is needed. "Whether you agree or disagree or think I'm crazy", the challenge is to identify the right direction and "go out and do it". Villa has presented his idea of what that direction should be, and he clearly thinks others should do the same.
The WordPress community witnessed the end of a high-profile war of words last week when the distributor of a popular commercial theme for the blogging platform agreed to license some of his work under the GPL. Prior to last week, Chris Pearson had argued fiercely that WordPress themes are not derivative works of WordPress itself — as the project has long claimed — and thus he was free to sell his Thesis theme under his own restrictive licensing terms.
The disagreement hinged on a question that will sound familiar to free software enthusiasts: what constitutes a derivative work under the GPL? The WordPress project has long taken the position that both plugins and themes are derivatives of the WordPress application itself, and thus must inherit its license, the GPL v2.
Pearson disagreed, claiming that Thesis was his creation and that he could select a license for it at will. As he said during a Mixergy interview in which he debated the question with WordPress co-founder Matt Mullenweg:
Many commenters on the blog coverage of the fight seemed to be of the same mind, asserting that the WordPress license was irrelevant to "original work" written by a theme creator. Underlying that position, however, seems to be the belief that a WordPress theme is a layer "above" the WordPress application, which happens to call APIs exposed by WordPress.
Considering that belief, perhaps WordPress's use of the term theme is itself misleading, because it suggests something cosmetic, like a static template or a set of look-and-feel rules implemented in HTML and CSS. But that is not what WordPress themes are. Rather, themes in WordPress are a collection of PHP scripts that implement the entire outward-facing user interface of the site (the dashboard functionality is implemented elsewhere).
WordPress themes are executables that create all of the elements displayed in the browser: pulling the content of posts, comments, user information, category, tag, archive, and navigation links, even search functionality, all by calling WordPress functions. To put it another way, a WordPress theme is the interface component of the application; without a theme installed, WordPress does not serve up any pages, and when not installed in a WordPress site, a theme cannot even execute.
The debate over the GPL inheritance of themes and plugins has been around for several years, prompting Mullenweg to seek legal analysis. According to the Mixergy interview, he first consulted with Mozilla's attorney Heather Meeker, but it is the Software Freedom Law Center's (SFLC) official opinion that he refers to as conclusive proof.
This reading of the situation is essentially the same as the Free Software Foundation's (FSF) take on the licensing requirements for plugins. The GPL FAQ states:
During the Mixergy debate, Pearson referenced a 2009 blog post by Florida attorney Michael Wasylik, who asserted that WordPress themes did not inherit the GPL from the WordPress application, based largely on the "running on top of" WordPress argument. Mullenweg and others observed that Wasylik is a real estate, not a copyright, attorney, and that the court cases he references in his blog post are about hardware devices, not software. But Wasylik also said that "actual incorporation of code" makes the work "probably derivative, and the GPL probably applies." Drew Blas subsequently analyzed the Thesis source code and concluded that the theme incorporates code lifted from WordPress itself.
Furthermore, WordPress core developer Mark Jaquith, in a longer analysis of the problem, observed that a former Thesis developer openly admitted that code from WordPress was copied into Thesis, and Andrew Nacin noted that the Thesis documentation commented on such inclusions: "This function is mostly copy pasta from WP (wp-includes/media.php), but with minor alteration to play more nicely with our styling."
Perhaps it was in the face of this evidence that Pearson changed his mind and switched over to a "split" license for Thesis — his only public comments on the decision have been made through his Twitter account.
Whatever the reasoning, Mullenweg seemed relieved to hear the news. During the podcast debate, Mullenweg repeatedly told Pearson that switching to the GPL would help, not hurt, his sales, observing that there are many other commercial theme developers who sell their works while complying with the requirements of WordPress's license. He said that, should Pearson come into compliance, he would add Thesis to the list of commercially-available themes promoted on the official WordPress site (although the addition does not appear to have happened yet).
It is always better for the community surrounding a free software project when disputes such as these reach an amicable solution. In another sense, though, Pearson's decision to relicense Thesis without comment leaves open — in some people's minds — the original question over when themes and plugins are rightfully considered derivative works.
WordPress is not alone in its position; the Drupal project also states that plugins and themes must inherit the GPL from Drupal. Joomla makes the same claim about Joomla extensions, although it admits that it may also be possible to create Joomla extensions that are not derivative works.
There may never be a simple black-and-white test to determine unambiguously when a theme is a derivative of the application that it themes. Fortunately, for the determined professional themer, it makes little difference. As the list maintained at the WordPress site demonstrates, there are quite a few talented individuals who can make a living producing and selling GPL-licensed themes.
Think that your Android smartphone is fully open? Aaron Williamson delivered some bad news to the audience at OSCON with his presentation Your Smartphone May Not Be as Open as You Think. Williamson, counsel for the Software Freedom Law Center, explained to the audience what components were still proprietary, and the problems with replacing those with open source components. Unfortunately, it doesn't look like a fully open phone is likely in the very near future.
Many LWN readers are already aware that Android phones contain proprietary components. However, the larger open source community, and certainly the consumer public that does not follow open source development closely, are usually not aware of how much proprietary software Android phones depend on.
So what's open and what's not? Everything that's shipped by the Android project is fine, but Williamson pointed out that manufacturers ship more than just Android with their phones. The phone manufacturers, companies like HTC, Motorola, and Samsung, produce the software that melds Android to the hardware it ships on. So it's not possible to ship a completely open source Android distribution that will work on any given phone.
Some packagers do ship Android distributions, but they're not likely to have permission to ship all of the software that they include. For instance, there's CyanogenMod, which adds features not found in Android, but it's hard to ship such a distribution and stay on the right side of all the proprietary licenses. As a result, a typical CyanogenMod installation requires saving the proprietary code shipped with the phone to the side at the beginning, then reinstalling that software as one of the final steps.
What do you get if you remove most of the proprietary software? Williamson has done the research and managed to compile Android for an HTC Dream with as little proprietary software as possible. He kept three components necessary for making phone calls, and left the rest out. Without the proprietary components, the HTC Dream isn't quite a brick, but it might as well be. It's unable to take pictures or record video, connect to WiFi, connect to Bluetooth devices, or use GPS. This also leaves out the accelerometer, so the landscape mode doesn't work.
Of course that leaves plenty of functionality as well, but the phone is far less functional without that software. Unless a user is deeply committed to software freedom, they're unlikely to go to that extreme. So the goal should be to convince companies to open the software as much as possible.
Williamson pointed out that this problem is unlikely to be specific to Android, and when MeeGo or open source Symbian devices ship, they're likely to have the same problems. He also gave Google credit for working with the manufacturers and trying to get as much software available as open source as possible.
For the most part, Williamson said, mobile component manufacturers give the same reasons for proprietary licensing that PC component manufacturers used to avoid providing free drivers for video cards, sound cards, and the like. The manufacturers are concerned that they'll lose their edge against competitors or give away intellectual property; they see little competitive value in being open. They also don't want to use licenses (like the GPLv3) that would harm their ability to pursue patent infringement suits.
There's also the issue of regulatory agencies and their influence on radio components for Bluetooth, GSM, and WiFi. Whether that's a legitimate issue is debatable, but it does seem to concern quite a few parties. The result of these regulatory concerns isn't debatable, however: You're unlikely to find open source drivers for most of the radio components of phones, which makes it difficult to operate a phone with 100% open source software.
Williamson also said he thought it unlikely that the community could keep up with maintaining open source drivers without the cooperation of the hardware manufacturers. Devices change so quickly, and the skills required to develop and maintain the drivers without assistance are so specialized, that the community would be hard-pressed to maintain a 100% free Android system with drivers. Of course, Linux developers, who have managed to keep up with a lot of fast-changing hardware over the years, might just disagree.
For users who are concerned with software freedom, what can be done to acquire fully (or more) open phones or inspire vendors to sell them? Williamson said that it requires educating the vendors and, more or less, walking through the same process that the community went through with Intel, ATI, and other hardware vendors that have come a long way towards supporting software freedom.
He pointed out that the community can reward vendors that are relatively open. For instance, he suggested that enthusiasts avoid Motorola phones as long as the company continues trying to block mods, as it does with the Droid X. Aside from that, Williamson said there's not much for end users to do. The good news is that Williamson thinks we can move faster than we did with PC hardware, because we've been down this road before and the community knows how to talk to vendors.
When I spoke to Williamson after OSCON, he indicated that tablets are likely to have the same problems as handsets, along with some additional issues. Because most of the tablet manufacturers to date are not working directly with Google or as part of the Android community, they are not only shipping a lot of proprietary software, but are also likely to produce lower-quality products and violate licenses. The last is almost certainly true, as shipping tablets are rarely found to be in compliance with the GPL. Even though most of Android's licensing doesn't require much in the way of compliance, few vendors seem to be meeting the terms of the GPL-licensed components.
For now, a truly open smartphone seems elusive, but the prospects over time look positive. Until then, users have to decide between seriously crippled devices or devices that are only largely free.
The biggest offender appears to be associated with a shady-looking apparel store. Shady-looking though it is, we know it's a legitimate business, because the site's FAQ tells us so:
However, we would like it to be known that even businesses as proper, upstanding, and trustworthy as this one are not welcome to post their spam on LWN. We have spent years building this site and even convincing people that it is something worth paying for. How these people might think that we would allow them to destroy it is beyond imagining. Comment spam, for us, is truly a security issue.
Our recent discovery that nearly 3,000 LWN accounts had been created from a single site known as the origin of much comment spam has also helped to focus our minds on this issue. We don't know what the intended use of all those accounts was, but we doubt it was anything good.
Thus far, we have responded to spam by deleting it immediately on discovery and blocking the accounts and site it came from. The problem appears to be growing, though, to the point that the manual deletion approach will eventually run into scalability problems. Besides, we would rather be writing useful stuff than scrubbing graffiti from the site. But options for dealing with comment spam appear to be somewhat limited.
We could, of course, moderate all comments, but that approach, too, scales poorly; it also delays and distorts conversations. Full-scale moderation is just not a business we want to get into. There are blacklists out there which identify known sources of spam, but they are far from complete. One could try content-based filtering approaches, but they have their own hazards.
What we are likely to do, in the plausible scenario that this problem persists, is to impose some sort of moderation on comments from new accounts. After a legitimate comment or two, the moderation block would be removed and comments would be posted immediately; existing accounts would not be affected. We might also automatically remove the block if a subscription is purchased - spammers have shown a surprising reluctance to support LWN, for some reason.
Nothing is decided yet, so plans could change. We'd be more than interested in any ideas that readers might have; please post them as (non-spam) comments on this article. One thing that won't change, though, is our absolute determination that we will not allow LWN to be used as a platform for the spamming of our readers.
This appears not to be a joke.
Created: July 23, 2010; Updated: November 3, 2010
Description: From the Internet Systems Consortium advisory:
If a query is made explicitly for a record of type 'RRSIG' to a validating recursive server running BIND 9.7.1 or 9.7.1-P1, and the server has one or more trust anchors configured statically and/or via DLV, then if the answer is not already in cache, the server enters a loop which repeatedly generates queries for RRSIGs to the authoritative servers for the zone containing the queried name. This rarely occurs in normal operation, since RRSIGs are already included in responses to queries for the RR types they cover, when DNSSEC is enabled and the records exist.
Created: July 27, 2010; Updated: January 23, 2013
Description: From the CVE entry:
Multiple buffer underflows in the base64 decoder in base64.c in (1) bogofilter and (2) bogolexer in bogofilter before 1.2.2 allow remote attackers to cause a denial of service (heap memory corruption and application crash) via an e-mail message with invalid base64 data that begins with an = (equals) character.
Created: July 26, 2010; Updated: August 17, 2010
Description: From the Red Hat advisory:
An invalid free flaw was found in Firefox's plugin handler. Malicious web content could result in an invalid memory pointer being freed, causing Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running the Firefox application.
Created: July 28, 2010; Updated: October 24, 2011
Description: GnuPG 2 suffers from a use-after-free vulnerability which could possibly be exploited (via a signature or certificate) to execute arbitrary code.
Created: July 27, 2010; Updated: July 27, 2010
Description: From the CVE entry:
Horde IMP 4.3.6 and earlier does not request that the web browser avoid DNS prefetching of domain names contained in e-mail messages, which makes it easier for remote attackers to determine the network location of the webmail user by logging DNS requests.
Created: July 23, 2010; Updated: March 15, 2013
Description: From the Mandriva advisory:
Ovidiu Mara reported a vulnerability in ping.c (iputils) that could cause ping to hang when responding to a malicious echo reply.
Package(s): libvirt; CVE #(s): CVE-2010-2242 CVE-2010-2237 CVE-2010-2238 CVE-2010-2239
Created: July 27, 2010; Updated: November 9, 2010
Description: From the Red Hat bugzilla:
Jeremy Nickurak reported an issue with how libvirt creates iptables rules when guest systems are set up for masquerading. (CVE-2010-2242)
From the Red Hat bugzilla: It was found that libvirt did not honour the user-defined main disk format in guest XML when looking up disk backing stores in the security drivers. This could possibly be exploited by a privileged guest user to access arbitrary files on the host. (CVE-2010-2237)
From the Red Hat bugzilla: It was found that libvirt did not extract the defined disk backing store format when recursing into disk image backing stores in the security drivers. This could possibly be exploited by a privileged guest user to access arbitrary files on the host. (CVE-2010-2238)
From the Red Hat bugzilla: It was found that libvirt did not explicitly set the user-defined backing store format when creating a new image. This results in images being created with a potentially insecure configuration, preventing applications from opening backing stores without resorting to probing. A privileged guest user could use this flaw to access arbitrary files on the host. (CVE-2010-2239)
Created: July 27, 2010; Updated: August 4, 2010
Description: From the Ubuntu advisory:
Matt Weatherford discovered that Likewise Open did not correctly check password expiration for the local-provider account. A local attacker could exploit this to log into a system they would otherwise not have access to.
Created: July 28, 2010; Updated: October 7, 2010
Description: The cluster logical volume manager daemon (clvmd) in the lvm2-cluster package does not authenticate clients connecting to the Unix-domain socket used for control operations. As a result, local, unprivileged users can perform cluster management operations.
Created: July 23, 2010; Updated: August 2, 2010
Description: From the openSUSE advisory:
lxsession-logout did not properly lock the screen before suspending, hibernating, and switching between users, which could allow attackers with physical access to take control of the system to obtain sensitive information and/or execute arbitrary code in the context of the user who is currently logged in.
Created: July 27, 2010; Updated: November 11, 2010
Description: From the CVE entry:
MySQL before 5.1.48 allows remote authenticated users with alter database privileges to cause a denial of service (server crash and database loss) via an ALTER DATABASE command with a #mysql50# string followed by a . (dot), .. (dot dot), ../ (dot dot slash) or similar sequence, and an UPGRADE DATA DIRECTORY NAME command, which causes MySQL to move certain directories to the server data directory.
Created: July 27, 2010; Updated: July 27, 2010
Description: From the Red Hat bugzilla:
A remote attacker could use this flaw to conduct denial of service attacks, sending the game server into an infinite loop that consumes an excessive amount of CPU time.
Package(s): php; CVE #(s): CVE-2010-2531 CVE-2010-2484 CVE-2010-2225
Created: July 27, 2010; Updated: July 5, 2011
Description: From the Mandriva advisory:
Created: July 27, 2010; Updated: August 30, 2010
Description: From the Red Hat bugzilla:
Mark Doliner, an upstream pidgin/libpurple developer, discovered a NULL pointer dereference flaw in the way libpurple handled certain malformed X-Status messages in the ICQ/Oscar protocol. This flaw could allow a remote attacker to crash a victim's libpurple-based instant messenger application, such as pidgin.
Package(s): samba; CVE #(s): CVE-2010-1635 CVE-2010-1642
Created: July 27, 2010; Updated: July 27, 2010
Description: From the Mandriva advisory:
The chain_reply function in process.c in smbd in Samba before 3.4.8 and 3.5.x before 3.5.2 allows remote attackers to cause a denial of service (NULL pointer dereference and process crash) via a Negotiate Protocol request with a certain 0x0003 field value followed by a Session Setup AndX request with a certain 0x8003 field value (CVE-2010-1635).
The reply_sesssetup_and_X_spnego function in sesssetup.c in smbd in Samba before 3.4.8 and 3.5.x before 3.5.2 allows remote attackers to trigger an out-of-bounds read, and cause a denial of service (process crash), via a \xff\xff security blob length in a Session Setup AndX request (CVE-2010-1642).
Page editor: Jake Edge
Brief items

The current development kernel is 2.6.35-rc6, released on July 22. Linus says:
It contains mostly fixes, but also a rename of the logical memory block (LMB) subsystem to "memblock." See the announcement for the short-form changelog, or the full changelog for all the details.
There have been no stable updates over the last week.
So I now just don't bother with any documentation _at_ _all_.
Kernel development news
The Unix file timestamps, as long-since enshrined by POSIX, are called "atime," "ctime," and "mtime." The atime stamp is meant to record the last time that the file was accessed. This information is almost never used, though, and can be quite expensive to maintain; Ingo Molnar once called atime "perhaps the most stupid Unix design idea of all times." So atime is often disabled on contemporary systems or, at least, rolled back to the infrequently-updated "relatime" mode. Mtime, instead, makes a certain amount of sense; it tells the user when the file was last modified. Modification requires writing to the file anyway, so updating this time is often free, and the information is often useful.
That leaves ctime, which is a bit of a strange beast. Users who do not look deeply are likely to interpret ctime as "creation time," but that is not what is stored there; ctime, instead, is updated whenever a file's metadata is changed. The main consumer of this information, apparently, is the venerable dump utility, which likes to know that a file's metadata has changed (so that information must be saved in an incremental backup), but the file data itself has not and need not be saved again. The number of dump users has certainly fallen over the years, to the point that the biggest role played by ctime is, arguably, confusing users who really just want a file's creation time.
So where do users find the creation time? They don't: Linux systems do not store that time and provide no interface for applications to access it.
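The point is easy to demonstrate. This minimal program (plain POSIX, nothing version-specific) prints every timestamp that stat() has to offer; a creation time is conspicuously absent:

    /* Print the three POSIX timestamps that stat() reports for a file.
     * Note that there is no fourth, "creation time" field to print. */
    #include <stdio.h>
    #include <time.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        struct stat sb;

        if (argc < 2 || stat(argv[1], &sb) != 0) {
            perror("stat");
            return 1;
        }
        printf("atime: %s", ctime(&sb.st_atime)); /* last access */
        printf("mtime: %s", ctime(&sb.st_mtime)); /* last data modification */
        printf("ctime: %s", ctime(&sb.st_ctime)); /* last metadata change */
        return 0;
    }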
That situation could change, though. Some newer filesystems (Btrfs and ext4, for example) have been designed with space for file creation times. Other operating systems also provide this information, and some network filesystem protocols expect to have access to it. So it would be nice if Linux properly supported file creation times; the proposed addition of the xstat() system call would be the ideal time to make that change.
Current xstat() implementations do, in fact, add a st_btime field to struct xstat; the "b" stands for "birth," which is a convention established in the BSD camp. There has been a fair amount of discussion about that addition, though, based on naming and semantics.
The naming issue, one would think, would be relatively straightforward. It was pointed out, though, that other names have been used in the kernel. JFS and Btrfs use "otime," for some reason, while ext4 uses "crtime." And BSD, it turns out, uses "birthtime" instead of "btime." That discussion inspired Linus to exclaim:
After that, though, Linus looked a bit more deeply at the problem, which he saw as primarily being to provide a Windows-style creation time that Samba could use. It turns out that Windows allows the creation time to be modified, so Linus saw it as being a sort of variation on the Unix ctime notion. That led to a suggestion to change the semantics of ctime to better suit the Windows case. After all, almost nobody uses ctime anyway, and it would be a trivial change to make ctime look like the Windows creation time. This behavior could be specified either as a per-process flag or a mount-time option; then there would be no need to add a new time field.
This idea was not wildly popular, though; Jeremy Allison said it would lead to "more horrible confusion." If ctime could mean different things in different situations, even fewer people would really understand it, and tools like Samba could not count on its semantics. Jeremy would rather just see the new field added; that seems like the way things will probably go.
There is one last interesting question, though: should the kernel allow the creation time to be modified? Windows does allow modification, and some applications evidently depend on that feature. Windows also apparently has a hack which, if a file is deleted and replaced by another with the same name, will reuse the older file's creation time. BSD systems, instead, do not allow the creation time to be changed. When Samba is serving files from a BSD system, it stores the "Windows creation time" in an extended attribute so that the usual Windows semantics can be provided.
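For comparison, the widest timestamp-setting interface Linux offers today is utimensat(), which has slots for exactly two times. A minimal sketch (the helper name here is invented for illustration):

    /* utimensat() can set a file's atime and mtime - and nothing else.
     * There is simply no slot in the interface for a creation time. */
    #define _GNU_SOURCE
    #include <fcntl.h>      /* for AT_FDCWD */
    #include <time.h>
    #include <sys/stat.h>

    /* Hypothetical helper: restore previously-saved access and
     * modification times for a file. */
    static int restore_times(const char *path, struct timespec atime,
                             struct timespec mtime)
    {
        struct timespec times[2] = { atime, mtime }; /* [0]=atime, [1]=mtime */

        return utimensat(AT_FDCWD, path, times, 0);
    }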
If the current xstat() patch is merged, Linux will disallow changes to the creation time by default - there will be no system call which can make that change. Providing that capability would require an extended version of utimes() which can accept the additional information. Allowing the time to be changed would make it less reliable, but it would also be useful for backup/restore programs which want to restore the original creation time. That is a discussion which has not happened yet, though; for now, creation times cannot be changed.

Nitin Gupta's work on in-memory compression goes back to the compcache patch, which implemented a sort of swap device which stored pages in main memory, compressing them on the way. Over time, compcache became "ramzswap" and found its way into the staging tree. It's not clear that ramzswap can ever graduate to the mainline kernel, so Nitin is trying again with a development called zcache. But zcache, too, currently lacks a clear path into the mainline.
Like its predecessors, zcache lives to store compressed copies of pages in memory. It no longer looks like a swap device, though; instead, it is set up as a backing store provider for the Cleancache framework. Cleancache uses a set of hooks into the page cache and filesystem code; when a page is evicted from the cache, it is passed to Cleancache, which might (or might not) save a copy somewhere. When pages are needed again, Cleancache gets a chance to restore them before the kernel reads them from disk. If Cleancache (and its backing store) is able to quickly save and restore pages, the potential exists for a real improvement in system performance.
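The hook set is small; in rough outline, a backing store registers a handful of operations with Cleancache. The sketch below paraphrases the proposal, so treat the names and signatures as approximations rather than the patch's verbatim API:

    /* Approximate shape of the operations a backing store such as
     * zcache registers with Cleancache (paraphrased from the proposal;
     * the real patch may differ in names and signatures). */
    struct cleancache_ops {
        int  (*init_fs)(size_t pagesize);   /* new pool for a mounted fs */
        int  (*get_page)(int pool, struct cleancache_filekey key,
                         pgoff_t index, struct page *page); /* restore a page */
        void (*put_page)(int pool, struct cleancache_filekey key,
                         pgoff_t index, struct page *page); /* evicted page */
        void (*flush_page)(int pool, struct cleancache_filekey key,
                           pgoff_t index);  /* page is no longer valid */
        void (*flush_fs)(int pool);         /* fs unmounted, drop the pool */
    };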
Zcache uses LZO to compress pages passed to it by Cleancache; only pages which compress to less than half their original size are stored. There is also a special test for pages containing only zeros; those compress exceptionally well, requiring no storage space at all. There is not, at this point, any other attempt at the unification of pages with duplicated contents (as is done by KSM), though.
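To make that policy concrete, here is a rough userspace sketch using liblzo2; it is purely illustrative (zcache itself runs in the kernel against the kernel's LZO interface, and the function name here is invented):

    /* Illustrative sketch of zcache's storage policy.  Returns 0 for an
     * all-zero page (nothing needs to be stored), 1 if a compressed
     * copy was kept, and -1 if compression was too poor to bother.
     * lzo_init() must have been called once beforehand, and "out" must
     * have room for LZO's worst case of PAGE_SIZE + PAGE_SIZE/16 + 67
     * bytes. */
    #include <string.h>
    #include <lzo/lzo1x.h>

    #define PAGE_SIZE 4096

    int zcache_store(const unsigned char *page, unsigned char *out,
                     lzo_uint *out_len)
    {
        static const unsigned char zeroes[PAGE_SIZE];
        lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                           / sizeof(lzo_align_t)];

        if (!memcmp(page, zeroes, PAGE_SIZE))
            return 0;   /* zero-filled: needs no storage at all */

        if (lzo1x_1_compress(page, PAGE_SIZE, out, out_len,
                             wrkmem) != LZO_E_OK)
            return -1;
        if (*out_len >= PAGE_SIZE / 2)
            return -1;  /* half size or bigger: not worth keeping */

        return 1;       /* keep the compressed copy in memory */
    }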
There are a couple of obvious tradeoffs to using a mechanism like zcache: memory usage and CPU time. With regard to memory, Nitin says:
The current patch does allow the system administrator to manually adjust the size of the zcache area, which is a start. It will be a rare admin, though, who wants to watch cache hit rates and tweak low-level memory management parameters in an attempt to sustain optimal behavior over time. So zcache will almost certainly have to grow some sort of adaptive self-tweaking before it can make it into the mainline.
The other tradeoff is CPU time: it takes processor time to compress and decompress pages of memory. The cost is made worse by any pages which fail to compress down to less than 50% of their original size - the time spent compressing them is a total waste. But, as Nitin points out: "with multi-cores becoming common, benefits of reduced disk I/O should easily outweigh the problem of increased CPU usage." People have often wondered what we are going to do with the increasing number of cores on contemporary processors; perhaps zcache is part of the answer.
One other issue remains to be resolved, though: zcache depends on Cleancache, which is not currently in the mainline. There is some opposition to merging Cleancache, mostly because that patch, which makes changes to individual filesystems, is seen as being overly intrusive. It's also not clear that everybody is, yet, sold on the value of Cleancache, despite the fact that SUSE has been shipping it for a little while now. Until the fate of Cleancache is resolved, add-on patches like zcache will be stuck outside of the mainline.

The 20th Euromicro Conference on Real-Time Systems (ECRTS 2010) was held in Brussels, Belgium from July 6-9, along with a series of satellite workshops which took place on July 6. One of those satellite workshops was OSPERT 2010 - the Sixth International Workshop on Operating Systems Platforms for Embedded Real-Time Applications, which was co-chaired by kernel developer Peter Zijlstra and Stefan M. Petters from the Polytechnic Institute of Porto, Portugal. Peter and Stefan invited researchers and practitioners from both industry and the Linux kernel developer community. I participated for the second year and tried, with Peter, to nurse the discussion between the academic and real worlds which started last year at OSPERT in Dublin.
Much to my surprise, I was also invited to give the opening keynote at the main conference, which I titled "The realtime preemption patch: pragmatic ignorance or a chance to collaborate?". Much to the surprise of the audience, I did my talk without slides, as I couldn't come up with useful ones no matter how much I twisted my brain around it. The organizers of ECRTS asked me whether they could publish my writeup, but all I had to offer were my scribbled notes outlining what I wanted to talk about. So I agreed to do a transcript from my notes and memory, without any guarantee that it's a verbatim transcript. Peter at least confirmed that it roughly matches the real talk.
First of all I want to thank Jim Anderson for the invitation to give this keynote at ECRTS and his adventurous offer to let me talk about whatever I want. Such offers can be dangerous, but I'll try my best not to disappoint him too much.
The Linux kernel community has a proven track record of being in disagreement with - and disconnected from - the academic operating system research community from the very beginning. The famous Torvalds/Tanenbaum debate about the obsolescence of monolithic kernels is just the starting point of a long series of debates about various aspects of Linux kernel design choices.
One of the most controversial topics is the question of how to add realtime extensions to the Linux kernel. In the late 1990s, various research realtime extensions emerged from universities. These include KURT (Kansas University), RTAI (University of Milano), RTLinux (NMT, Socorro, New Mexico), Linux/RK (Carnegie Mellon University), QLinux (University of Massachusetts), and DROPS (University of Dresden - based on L4), just to name a few. There have been more, but many of them have only left hard-to-track traces in the net.
The various projects can be divided into two categories:
I participated in and watched several discussions about these approaches over the years; the discussion which is burned into my memory forever happened in the summer of 2004. In the course of a heated debate, one of the participants stated: "It's impossible to turn a General Purpose Operating System into a Real-Time Operating System. Period." I was smiling then, as I had already proven, together with Doug Niehaus from Kansas University, that it can be done, even if it violates all - or at least most - of the rules of the academic OS research universe.
But those discussions were not restricted to the academic world. The Linux kernel mailing list archives provide a huge choice of technical discussions (as well as flame wars) about preemptability, latency, priority inheritance and approaches to realtime support. It was fun to read back and watch how influential developers changed their minds over time. Especially Linus himself provides quite a few interesting quotes. In May 2002 he stated:
Which is, in my opinion, the only sane way to handle hard realtime. No confusion about priority inversions, no crap. Clear borders between what is "has to happen _now_" and "this can do with the regular soft realtime".
Four years later he said in a discussion about merging the realtime preemption patch during the Kernel Summit 2006:
Equally interesting is his statement about priority inheritance in a huge discussion about realtime approaches in December 2005:
Linus's clear statement that he wouldn't merge any PI code ever was rendered ad absurdum when he merged the PI support for pthread_mutexes without a single comment only half a year later.
Both are pretty good examples of the pragmatic approach of the Linux kernel development community and its key figures. Linus especially has always silently followed the famous words of the former German chancellor Konrad Adenauer: "Why should I care about my chatter from yesterday? Nothing prevents me from becoming wiser."
But back to the micro/nano-kernel versus in-kernel approaches which emerged in the late '90s. Both camps produced commercial products and, more or less, active open source communities, but none of those efforts was commercially sustainable or ever got close to being merged into the official mainline kernel code base, for various reasons. Let me look at some of those reasons:
I'm not saying that it can't be done, it's just not suitable for the average programmer.
In October 2004, the realtime topic gained new vigor on the Linux kernel mailing list. MontaVista had integrated into the kernel the results of research done at the University of the German Federal Armed Forces in Munich, replacing spinlocks with priority-inheritance-enabled mutexes. This posting resulted in one of the lengthiest discussions about realtime on the Linux kernel mailing list, as almost everyone involved in efforts to solve the realtime problem surfaced and praised the superiority of their own approach. Interestingly enough, nobody from the academic camp participated in this heated argument.
A few days after the flame fest started, the discussion was driven to a new level by kernel developer Ingo Molnar who, instead of spending time on rhetoric, had implemented a different patch which, despite being clumsy and incomplete, became the starting point for the current realtime preemption patch. In no time, quite a few developers interested in realtime joined Ingo's effort and brought the patch to a point which allowed real-world deployment within two years. During that time a huge number of interesting problems had to be solved: efficient priority inheritance, fixing per-CPU assumptions, preemptible RCU, high-resolution timers, interrupt threading, and so on - plus, as a further burden, the fallout from sloppily-implemented locking schemes in all areas across the kernel.
Those two years were mostly spent with grunt work and twisting our brains around hard-to-understand and hard-to-solve locking and preemption problems. No time was left for theory and research. When the dust settled a bit and we started to feed parts of the realtime patch to the mainline, we actually spent some time reading papers and trying to leverage the academic research results.
Let me pick out priority inheritance and have a look at how the code evolved and why we ended up with the current implementation. The first version which was in Ingo's patchset was a rather simple approach with long-held locks, deep lock nesting and other ugliness. While it was correct and helped us to go forward it was clear that the code had to be replaced at some point.
A first starting point for getting a better implementation was, of course, reading through academic papers. At first I was overwhelmed by the sheer amount of material and puzzled by the various interesting approaches to avoiding priority inversion. But the more papers I read, the more frustrated I got. Lots of theory, proof-of-concept implementations written in Ada, micro improvements to previous papers; you all know the academic drill. I'm not at all saying that it was a waste of time, as it gave me a pretty good impression of the pitfalls and limitations to be expected in a non-priority-based scheduling environment, but I have to admit that it didn't help me to solve my real-world problem either.
The code was rewritten by Ingo Molnar, Esben Nielsen, Steven Rostedt, and myself several times until we settled on the current version. The path led from the classic lock-chain walk with instant priority boosting, through a scheduler-driven approach, and back to the lock-chain walk, as it turned out to be the most robust, scalable, and efficient way to solve the problem. My favorite implementation, though, would have been based on proxy execution, which already existed in Doug Niehaus's Kansas University Real Time project at that time, but unfortunately it lacked SMP support. Interestingly enough, we are looking into it again as non-priority-based scheduling algorithms are knocking at the kernel's door. But in hindsight I really regret that nobody - including myself - ever thought about documenting the various algorithms we tried, the upsides and downsides, the test results, and related material.
So it seems that there is the reverse problem on the real world developer side: we are solving problems, comparing and contrasting approaches and implementations, but we are either too lazy or too busy to sit down and write a proper paper about it. And of course we believe that it is all documented in the different patch versions and in the maze of the Linux kernel mailing list archives which are freely available for the interested reader.
Indeed it might be a worthwhile exercise to go back and extract the information and document it, but in my case this probably has to wait until I go into retirement, and even then I fear that I have more favorable items on my ever growing list of things which I want to investigate. On the other hand, it might be an interesting student project to do a proper analysis and documentation on which further research could be based.
I do not consider myself in any way to be representative of the kernel developer community, so I asked around to learn who was actually influenced by research results when working on the realtime preemption patch. Sorry, folks, the bad news is that most developers do not consider reading research results a helpful and worthwhile exercise for getting real work done. The question arises: why? Is academic OS research useless in general? Not at all. It's just incredibly hard to leverage. There are various reasons for this and I'm going to pick out some of them.
First of all - and I have complained about this before - it's often hard to get access to papers because they are hidden away behind IEEE's paywall. While dealing with IEEE is a fact of life for the academic world, I personally consider it a modern form of robber barony, where taxpayers have to pay for work which was funded by tax money in the first place. There is another problem I have with the IEEE monopoly: universities' rankings are influenced by the number of papers written by their members and accepted at IEEE conferences, which I consider to be one of the most idiotic quality measurement rules on the planet. And it's not only my personal opinion; it's also provable.
I actually took the time to spend a day at a university where I could gain access to IEEE papers without wasting my private money. I picked out twenty recent realtime-related papers and did a quick survey. Twelve of the papers were a rehash of well-known and well-researched topics, and at least half of them were badly written as well. Of the remaining eight papers, six were micro improvements based on previous papers, where I had a hard time figuring out why the papers had been written at all. One of those merely described the effects of converting a constant which influences resource partitioning into a runtime-configurable variable. So that left two papers which seemed actually worthwhile to read in detail. Funnily enough, I had already read one of them, as it was publicly accessible in a slightly modified form.
That survey really convinced me to stay away from IEEE forever and to consider the university ranking system even more suspicious.
There are plenty of other sources where research papers can be accessed, but unfortunately the signal-to-noise ratio there is not significantly better. I have no idea how researchers filter that, but on the other hand most people wonder how kernel developers filter out the interesting stuff from the Linux kernel mailing list flood.
One interesting thing I noticed while skimming through paper titles and abstracts is that the Linux kernel seems to have become the most popular research vehicle. On one site I found roughly 600 Linux-based realtime and scheduling papers which were written in the last 18 months. About 10% of them utilized the realtime preemption patch as their baseline operating system. Unfortunately almost none of the results ever trickled through to the kernel development community, not to mention actually working code being submitted to the Linux kernel mailing list.
As a side note: one paper even mentioned a hard-to-trigger longstanding bug in the kernel which the authors fixed during their research. It took me some time to map the bug to the kernel code, but I found out that it got fixed in the mainline about three months after the paper was published—which is a full kernel release cycle. The fix was not related to this research work in any way, it just happened that some unrelated changes made the race window wider and therefore made the bug surface. I was a bit grumpy when I discovered this, but all I can ask for is: please send out at least a description of a bug you trip over in your research work to the kernel community.
Another reason why it's hard for us to leverage research results is that academic operating system research has, as probably any other academic research area, a few interesting properties:
We discussed the sporadic server model yesterday at OSPERT, but it has been around for 27 years. I assume that hundreds of papers have been written about it, hundreds of researchers and students have improved the details, created variations, but there is almost no operating system providing support for it. As far as I know Apple's OSX is the only operating system which has a scheduling policy which is not based on priorities but, as I learned, it's well hidden away from the application programmer.
If you have any chance to influence that, then please help to plant at least some clue on the folks who are going to use the systems you and we create.
A related observation is the inability of hardware and software engineers to talk to each other when a system is designed. While I observe that disconnect mainly on the industry side, I have the feeling that it is largely true in the universities as well. No idea how to address this issue, but it's going to be more important the more the complexity of systems increases.
I'll stop bashing on you folks now, but I think that there are valid questions and we need to figure out answers to them if we want to get out of the historically grown state of affairs someday.
We are happy that you use Linux and its extensions for your research, but we would be even happier if we could deal with the outcome of your work in an easier way. In the last couple of years we have started to close the gap between researchers and the Linux kernel community at OSPERT and at the Realtime Linux Workshop, and I want to say thanks to Stefan Petters, Jim Anderson, Gerhard Fohler, Peter Zijlstra, and everyone else involved. It's really worthwhile to discuss the problems we face with the research community, and we hope that you get some insight into the problems we face and the requirements which lie behind our pragmatic approach to solving them.
And of course we appreciate that some code which comes straight out of the research laboratory (the EDF scheduler from ReTiS, Pisa) actually got cleaned up and published on the Linux kernel mailing list for public discussion, and I really hope that we are going to see more like this in the foreseeable future. Problem complexity is increasing, unfortunately, and we need all the collective brain power to address next year's challenges. We have already started the discussion and the first interesting patches have shown up, so I really hope we can follow down that road and get the best out of it for all of us.
Thanks for your attention.
I got quite a bit of feedback after the talk. Let me answer some of the questions.
Q: Is there any place outside LKML where discussion between academic folks and the kernel community can take place?
A: Björn Brandenberg suggested setting up a mailing list for research related questions, so that the academics are not forced to wade through the LKML noise. If a topic needs a broader audience we always can move it to LKML. I'm already working on that. It's going to be low traffic, so you should not be swamped in mail.
Q: Where can I get more information about the realtime preemption patch?
Q: Which technologies in the mainline Linux kernel emerged from the realtime preemption patch?
A: The list includes:
Q: Where do I get information about the Realtime Linux Workshop?
A: The 2010 Realtime Linux Workshop (RTLWS) will be in Nairobi, Kenya, October 25-27. The 2011 RTLWS is planned to be at Kansas University (not confirmed yet). Further information can be found on the RTLWS web page. General information about the organisation behind RTLWS can be found on the OSADL page, and information about its academic members is on this page.
I stayed for the main conference, so let me share my impressions. First off, the conference was well organized and, in general, the atmosphere was not really different from an open source conference. The realtime researchers seem to be a well-connected and open-minded community. While they take their research seriously, most of them freely admit that the ivory tower they are living in can be a completely different universe. This was quite observable in various talks, where the number of assumptions and the perfectly working abstract hardware models made it hard for me to figure out how the results of the work could be applied to reality.
The really outstanding talks were the keynotes on days two and three.
On Thursday, Norbert Wehn from the Technical University of Kaiserslautern gave an interesting talk titled Hardware modeling: A critical assessment with case studies [PDF]. Norbert works on hardware modeling and low-level software for embedded devices, so he is not the typical speaker you would expect at a realtime-focused conference. But it seems that the program committee tried to bring some reality into the picture. Norbert gave an impressive overview of the evolution of hardware and the reasons why we have to deal with multi-core hardware and face the fact that today's hardware is not designed for predictability and reliability. So realtime folks need to rethink their abstract models and take more complex aspects of the overall system into account.
One of the interesting aspects was his view on energy-efficient computing: a cloud of 1.7 million AMD Opteron cores consumes 179MW, while a cloud of 10 million Xtensa cores provides the same computing power at 3MW - roughly 105W per Opteron core versus 0.3W per Xtensa core. Another aspect of power-aware computing is the increasing role of heterogeneous systems. Dedicated hardware for video decoding is about 100 times more power-efficient than a software-based solution on a general-purpose CPU. Even specialized DSPs consume about 10 times more power for the same task than the optimized hardware solution.
But power-optimized hardware has a tradeoff: the loss of the flexibility that software provides. The mobile space has already arrived in the heterogeneous world, though, and researchers need to become aware of the increased complexity of analyzing such hybrid constructs and develop new models that allow the verification of these systems in the hardware design phase. Workarounds for hardware design failures in application-specific systems are orders of magnitude more complex than on general-purpose hardware. All in all, he gave his colleagues from the operating system and realtime research communities quite a list of homework assignments and connected them back to earth.
The Friday morning keynote was a surprising reality check as well. Sanjoy Baruah from the University of North Carolina at Chapel Hill titled his talk "Why realtime scheduling theory still matters". Given the title, one would assume that the talk would focus on justifying the existence of the ivory tower, but Sanjoy was very clear about the fact that realtime and scheduling research has focused for too long on uniprocessor systems and is missing answers to the challenges of the already-arrived multi-core era. He gave pretty clear guidelines about which areas research should focus on to prove that it still matters.
In addition to the classic problem space of verifiable safety-critical systems, he called for research which is relevant to the problem space and built on proper abstractions, with a clear focus on multi-core systems. Multi-core systems bring new - and mostly unresearched - challenges like mixed criticalities, which means that safety-critical, mission-critical, and non-critical applications run on the same system. All of them have different requirements with regard to meeting their deadlines, resource constraints, and so on, and therefore bring a new dimension into the verification problem space. Other areas which need care, according to Sanjoy, are component-based designs and power awareness.
It was good to hear that, despite our usual perception of the ivory tower, those folks have a strong sense of reality, though it seems they need a more or less gentle reminder from time to time. ECRTS was a really worthwhile conference and I can only encourage developers to attend such research-focused events and keep the communication and discussion between our perceived reality and the not-so-disconnected other universe alive.
Page editor: Jonathan Corbet
There are not a lot of source-based Linux distributions, so when one of them announces a new release, it's always a good opportunity to take a look. We're not talking about Gentoo Linux or Linux From Scratch now, but about a relatively unknown but nonetheless interesting distribution: T2 SDE. After years of development, the project published a new stable release, version 8.0 ("Phoenix").
As the project's home page hastens to stress, T2 SDE (which stands for "System Development Environment") is not just a Linux distribution, it's an open source distribution build kit. At the core of T2 lies an automated build system that manages the whole compilation process from fetching the sources of packages to creating a CD image for a desktop system or a ROM image for embedded use. After initial creation of the tool chain, all packages are built inside a sandbox environment.
When configuring the build system, users can choose from various pre-defined target definitions or create their own. A target handles the selection of packages, C library (Glibc, dietlibc, uClibc), compiler (GCC or LLVM), and so on, and it even supports cross-compilation. Depending on the chosen target, the user can build a Linux distribution for an embedded system, a desktop system, or a server. There is even an experimental target to build a T2 system to run on a wireless router (wrt2), but it is not yet complete. If someone picks up development of this target, the result should be an OpenWrt-like embedded Linux system.
The principal developer of T2 SDE is René Rebe, CTO of the German software development company ExactCODE, which uses T2 in commercial embedded systems, industrial computers, and appliances. Hence, the real target audience of the distribution is developers of appliances and embedded systems. According to René, ExactCODE's clients are using T2 to power firewall products, greenhouse controllers, IPTVs, and a lot of other embedded devices. But T2 is also used as the base of general-purpose Linux distributions, such as Puppy Linux.
T2 SDE 8.0 is based on Linux kernel 2.6.34, GCC 4.5, Glibc 2.11, and X.Org 7.5. In total there are around 3200 packages in the repository. Users can download minimal ISO images for i486, x86_64, powerpc, and powerpc64 (PowerMac G5), or download the source and build their own ISO image. The advantage of the latter is that it allows building an ISO image for another architecture (ARM, Blackfin, MIPS, SPARC, and many others are supported), optimized for a specific processor instruction set, or with a different package set. By the way, these ISO images can be fully cross-compiled, as was done for the minimal ISO images.
The website has extensive but somewhat out-of-date documentation, with the T2 Handbook as an excellent in-depth reference, and two short step-by-step guides for the impatient: one for building and one for installing T2.
In short, after checking out the T2 sources with Subversion, the user starts the configuration of the build system as root with the command ./scripts/Config, which shows an ncurses interface. The user then chooses a target (generic, embedded, and so on) and a package selection, as well as the distribution media, the CPU architecture, and, optionally, some optimizations. There are a lot of advanced configuration options, for example for choosing another C library or compiler. When the configuration is done, the build is started with a ./scripts/Build-Target command. Multiple targets can be built from the same T2 build system by specifying a configuration file with the -cfg argument. Building T2 is obviously best done on a T2 system, but with the right prerequisites it's also possible on other Linux distributions. A sketch of the whole cycle appears below.
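Here is a minimal sketch of that cycle; the Subversion URL and the configuration name are illustrative placeholders (the project's site documents the real repository location), and the exact options Config offers depend on the chosen target:

# Fetch the T2 sources (repository URL is an illustrative placeholder).
svn checkout http://svn.example.org/t2/trunk t2-trunk
cd t2-trunk

# Configure the build interactively (ncurses interface), as root:
# target, package selection, architecture, optimizations, and so on.
./scripts/Config

# Build the configured target; this bootstraps the tool chain and then
# builds all selected packages inside the sandbox.
./scripts/Build-Target

# Several target configurations can live side by side; name one with -cfg.
./scripts/Config -cfg router
./scripts/Build-Target -cfg router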
T2 is obviously an excellent framework for building embedded Linux systems. But is it also suitable as a desktop system? That depends on what the user is looking for. The target users are not those who want a completely preconfigured operating system such as Ubuntu. Instead, T2 is the ultimate do-it-yourself distribution: users install the base system from the minimal ISO image and then install the packages they need. The operating system installation and configuration tool, stone, is really bare-bones, but it does the job. Just be sure to select "Full install" when asked about the package set.
In contrast to many other distributions, T2 only applies patches to the original source files when absolutely necessary, and it follows the latest upstream versions of all packages. This means that users get a cutting-edge Linux distribution, but they have to configure a lot themselves. Moreover, all services are disabled by default. All this makes T2's philosophy closer to the BSD world than to most Linux distributions.
Building and installing a package on a T2 system is simply done with the Emerge-Pkg script (after checking out the T2 source with Subversion). This script not only builds and installs the named package, but also its dependencies. The same command can be used to update an already installed package. Removing a package is done with:
mine -r packagename
where mine is T2's binary package manager. By the way, T2 uses human-readable text files (found in /var/adm) for package management. For example, a list of all installed files belonging to a package can be found in /var/adm/flists/packagename. This makes it possible to query package information with normal UNIX tools; grepping for the name of a header file in /var/adm/flists, for instance, will give you the package which provides that file.
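A few concrete invocations, following the description above (the package names are arbitrary examples, and it is assumed that Emerge-Pkg lives in scripts/ next to Config and Build-Target):

# Build and install a package and its direct dependencies; the same
# command updates the package if it is already installed.
./scripts/Emerge-Pkg vim

# Remove it again with T2's binary package manager.
mine -r vim

# Find out which package provides a given header file, using nothing
# but the plain-text file lists under /var/adm:
grep -l zlib.h /var/adm/flists/*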
However, dependencies are currently a big hurdle for desktop use of T2. Emerge-Pkg only installs direct dependencies, so a lot of builds fail. The user then has to run Emerge-Pkg on the failed dependencies to build them directly; if that doesn't work, the error logs in /var/adm/logs can give some information about which dependencies are missing, and those should be installed before trying to build the original package again (see the sketch below). Emerge-Pkg has an option to build all indirect dependencies, but this often builds far too much, so using it isn't advisable. With the current approach, the build system is not user-friendly enough to use T2 as a desktop system without annoyances, but René is aware that the problem has been neglected for too long and he is working on improving the experience.
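The resulting workflow looks roughly like this; the log file name is hypothetical, since the exact naming scheme under /var/adm/logs isn't described here:

# A failed build leaves a log behind under /var/adm/logs; search it for
# hints about what is missing (the file name here is hypothetical).
grep -i 'error\|no such file' /var/adm/logs/somepackage.log

# Build the missing dependency first, then retry the original package.
./scripts/Emerge-Pkg libfoo
./scripts/Emerge-Pkg somepackage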
T2 uses its own, non-standard framework for network configuration, which can be confusing at first, although it looks somewhat like Debian's /etc/network/interfaces. The network configuration is stored in /etc/conf/network and allows setups ranging from very simple to complex, even with multiple profiles and basic firewall rules. The T2 handbook is an invaluable resource for getting a network running on a T2 system, although the system configuration tool stone ("Setup Tool ONE") can handle simple network configurations.
There will always come a moment when a user wants to install a package that T2 doesn't have. Luckily, the package format (a .desc file) is quite easy to grasp: it's a simple text file with some metainformation, such as a description and a URL where the source of the program can be downloaded. T2 understands a lot of build systems (among others GNU autoconf, plain Makefiles, CMake, Python setup.py, and Perl Makefile.PL) and automatically fetches and extracts the source, then configures, compiles, and installs the software. So, in most cases, you only have to fill in some basic information to create a new T2 package; a sketch follows below. In case this doesn't work, you have to add a separate .conf file that modifies the build process, or .patch files with patches to the source. More information about the process can be found in the T2 handbook, and there is also a simple package creation tutorial.
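As an illustration, creating a minimal package description might look something like the following. This is a hedged sketch: the tag set is reconstructed from T2's ROCK Linux heritage, the directory layout and download-line format are assumptions, and the package, author, and URLs are invented; the T2 handbook is the authoritative reference.

# Create the package directory (path layout is an assumption) and a
# minimal description file for a hypothetical utility:
mkdir -p package/extra/exampleutil
cat > package/extra/exampleutil/exampleutil.desc << 'EOF'
[I] Tiny example utility

[T] A longer, multi-line description of the
[T] example utility goes here.

[U] http://www.example.org/exampleutil/

[A] Jane Hacker <jane@example.org>

[C] extra/tool

[L] GPL
[S] Stable
[V] 1.0

[D] 0 exampleutil-1.0.tar.gz http://www.example.org/download/
EOF

# Then build and install it like any other package:
./scripts/Emerge-Pkg exampleutil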
When you have created a new package, contributing it to the T2 mailing list guarantees that it will be added to the repository. There is also the IRC channel #t2 on Freenode, where a small but helpful community is available. All in all, the process of writing your own packages is really straightforward: last year, your author contributed a handful of packages to T2 while evaluating the distribution, and it struck him how extremely readable and self-documenting the package format is.
T2 SDE is not only a cross-architecture distribution, it also wants to become cross-platform. While currently the only kernel it uses is Linux, a long-term goal is support for other open source kernels like MINIX, GNU Hurd, *BSD, Haiku, OpenDarwin, or OpenSolaris. At the moment no work is being done in this domain, but the build system should make the task doable. According to René, who is especially interested in having a microkernel to run T2 on, it is not so difficult beyond patching some of the packages.
A first step in the direction of other kernels has already been made and it is, surprisingly, support for Windows: T2 added support for MinGW in August of last year. MinGW (Minimalist GNU for Windows) is a port of GCC and GNU Binutils for use in the development of Windows applications. This means that T2 can be used to cross-compile 32-bit Windows binaries on a Linux system. The work was done by ExactCODE for a client who wanted to compile a Windows executable from a more automated (read: UNIX-like) environment.
Another important mid-term project, according to René, is improved LLVM/Clang support.
If you want to use T2 SDE as a desktop system, expect to invest a lot of time chasing dependencies and configuring many details yourself. Because T2 SDE doesn't have extensive documentation about the daily use of the system, like Gentoo and Arch Linux have, this is no trivial task. However, its source-based nature and its clean, BSD-like philosophy will surely appeal to do-it-yourself users.
These issues notwithstanding, T2 SDE is a powerful and flexible Linux distribution for all sorts of purposes. What's interesting is that it offers just one distribution build kit that can be used to create various targets, from embedded to desktop, while many other distributions have different versions for different targets. Moreover, T2's handbook covers extensively how to create packages and how to build your own distribution. If you want to build a special-purpose Linux distribution, T2 SDE should be one of the first places to look.
New Releases
FreeBSD 8.1 has been released: "This is the second release from the 8-STABLE branch which improves on the functionality of FreeBSD 8.0 and introduces some new features." Also announced was the first test release based on Debian Squeeze: "The focus of this release is to test the user application selection."
Fedora
A draft version of a report on the Fedora contributor community is now available. It looks at contributors' motivations and the problems they have encountered, and makes a number of recommendations on how to make the project easier to contribute to. "The key here, and the large difference between FLOSS development processes and traditional ones, is that it's not the act of doing something that needs approval; instead it's the result of the action and quality of the work that must be approved. Again, this is where not only having a mentor program for new contributors is useful, but also making such a program highly visible, transparent, and accessible is important."
Meanwhile, in Rawhide, systemd is now the default init system. The early reports are mostly about dependency issues; it's not clear that all that many users have gotten as far as running the new system yet. "I have tested all this quite extensibly on my machines, but of course, I am not sure how this will break on other people's machines. I sincerely hope I didn't break anything major with this transition. So please report bugs and don't rip off my head because I might have broken your boot... I didn't do it on purpose, promised!"
Meanwhile, the Fedora 14 branch is coming on July 27, with the added twist that the project is switching its CVS-based package system over to git at the same time. For now, the developers will be mostly focused on just making it work, but there are some interesting ideas for the future: "Later on we will start to explore more interesting advancements such as automatic patch management with exploded sources, linking to upstream source repositories, automatic %changelog generation from git changelogs, or things I haven't even thought about."
SUSE Linux and openSUSE
openSUSE 11.0 has reached the end of its support period: "openSUSE 11.0 was released on June 17 2008, making it 2 years and 1 month of security and bugfix support." In other news, Jos Poortvliet has been named openSUSE community manager. Jos commented: "The opportunity to become part of the international openSUSE community is very exciting. There are a great number of interesting developments going on in the free software world, and openSUSE plays a major role in many of them. I look forward to working with the community on these, helping it grow, finding new directions and ways of developing, and delivering its innovative technologies to users and developers around the world."
Newsletters and articles of interest
Page editor: Rebecca Sobol
How do you build a successful community that attracts contributors? Two talks at OSCON offered very good advice on the topic: "Secrets of building and participating in open source communities" by Drupal founder Dries Buytaert, and "Junior Jobs and Bite-sized Bugs: Entry Points for New Contributors to Open Source," which was co-presented by Mel Chua of Red Hat and Asheesh Laroia of OpenHatch. Both presentations offered some worthwhile insight and tips on building community.
To be sure, these weren't the only talks focused on the fine art of community management. However, Buytaert's talk seemed worth attending because it's obvious that Drupal enjoys a healthy community of contributors, and the "junior jobs" presentation seemed worthwhile because it focused on practical techniques for dealing with an obvious problem for any community, rather than being of the hand-wavy, motivational variety.
Given that community management is a well-covered topic, one might be skeptical about a presentation that promises to teach "secrets" of building community. And, indeed, Buytaert's presentation was less revelatory than the title suggested. But Buytaert did, in fact, offer very useful advice and insight into the success of Drupal that might well apply to other communities.
He started with a rundown of Drupal's success so far. According to Buytaert's statistics, Drupal sites account for about 1% of Web sites. Drupal is downloaded about 300,000 times per month, and Drupal.org sees about 1.5 million unique visitors per month. He also said that Drupal has more than 6,000 modules. All of which point to a project that's doing something right, but what? Buytaert offered several pieces of advice or wisdom for the audience.
First, Buytaert advised that there is no get-rich-quick formula for building a community. His second tip was to embrace growing pains when growth does happen. Buytaert pointed to an incident in 2005 when Drupal's server was pushed to its limit and couldn't handle the traffic it was receiving. He said that the incident made the Drupal community stronger, and that "communities are always a bit broken. Nothing better than suffering together."
Next, he offered two pieces of concrete advice: provide the right tools, and provide an architecture for evolution. He suggested that any project should have a modular design and favor accessible technologies like PHP and MySQL (if appropriate) rather than languages or technologies that are perhaps better but less accessible. Drupal has succeeded in part because PHP and MySQL are so commonplace. Would Drupal have flourished to the same extent had it been written in Perl with a less popular database? Unlikely.
Buytaert offered one unusual "secret": build a commercial ecosystem. But he also suggested that projects need to "find a higher purpose" while striving to make money. As examples, he offered Drupal's goal of democratizing online publishing and Mozilla's goals for an open web: both are initiatives that have powered a commercial ecosystem while still melding well with open source communities.
Laroia and Chua have a fair amount of experience working with new contributors. Chua spends a great deal of time working with the Fedora community. Laroia works on OpenHatch, a project that helps connect new contributors to projects by breaking bugs and projects down into small and introductory chunks.
The start of the Junior Jobs presentation was slightly unorthodox. Chua and Laroia asked the audience to be "productively lost" and explore some project sites, looking for the bug tracker to find bugs to work on or other resources that a new contributor might seek out. The audience, about 20 people, gathered in pairs or small groups and checked some popular project sites like Sugar Labs, the Fedora web site, and other mainstream open source projects.
The audience, predictably, discovered that finding one's way around open source project sites looking for contributor resources is not always a simple exercise. Even when a contributor can find the bug tracker, it may take a translator to help them understand the information. One example given was the "UNCO" status in GNOME's bug tracker, which wasn't immediately obvious as "unconfirmed" to some of the audience at the talk. New contributors are not going to experience the sites the way longstanding contributors do; those contributors know where to look and probably have deeply buried resources bookmarked for fast access.
Part of the problem is that tools and documentation that are helpful to experienced contributors may not be so useful for newbies. As an illustration, they talked about cookbooks and the difference between a cookbook useful for new cooks and one useful for experienced cooks. It might be necessary to explain a "rolling boil" and what it means to "fold" an ingredient into a mix for a beginning cook, but such explanations are only frustrating to an experienced cook.
Looking for pointers for your project? Talking with Chua and Laroia after the presentation, your author heard them offer Dreamwidth as a prime example of a project that does well in attracting new and diverse contributors. Chua also pointed to Drupal's Dojo, for its classes on how to get started with Drupal contribution, and the Fedora "How to be a successful contributor" document. Chua also noted that each community has to forge its own path.
The overall message of this presentation should be heeded by any project that looks to attract contributors who are new to contributing to open source projects. Breaking down tasks into "junior jobs" or easy-to-tackle bugs is fine, but that's only so useful if the sites are difficult to navigate or the bugtracker is confusing.
Chua recommends thinking of FOSS contributions more broadly:
It may be helpful to think of FOSS contributions as being towards not just a codebase but a *community* that includes and is centered around a codebase — so you can patch the code, but you can also patch the tests and docs and processes and how the technology you're building and the people you're building it with interact with the rest of the world.
Overall, the dominant message of the presentation was to communicate with new contributors and try to anticipate how newcomers will view your project. As Laroia said at the close of the talk: "Communicate. It can't make things any worse." While not a definitive guide to community management, that's a good first step.
Newsletters and articles
Page editor: Jonathan Corbet
Non-Commercial announcements
The GNOME Foundation and the LiMo Foundation have announced a new partnership. "Starting immediately, LiMo Foundation will become a member of GNOME Foundation's Advisory Board and GNOME Foundation will become an Industry Liaison Partner for LiMo Foundation. This development represents a natural formalization founded upon the significant use of GNOME Mobile software components within Release 2 and Release 3 of the LiMo Platform."
Commercial announcements
The GENIVI Alliance has announced that MeeGo will serve as the base of its in-vehicle infotainment (IVI) platform. "IVI is a rapidly growing and evolving field that encompasses the digital applications that can be used by all occupants of a vehicle, including navigation, entertainment, location-based services, and connectivity to devices, car networks and broadband networks. MeeGo will provide the base for the upcoming GENIVI Apollo release that will be used by members to reduce time to market and the cost of IVI development. MeeGo's platform contains a Linux base, middleware, and an interface layer that powers these rich applications."
Legal announcements
The text of a ruling by the US Court of Appeals in a suit by MGE UPS Systems against General Electric is available. The court has ruled that simply circumventing technical measures is not, by itself, a violation of the Digital Millennium Copyright Act. "However, MGE advocates too broad a definition of "access;" their interpretation would permit liability under § 1201(a) for accessing a work simply to view it or to use it within the purview of 'fair use' permitted under the Copyright Act. Merely bypassing a technological protection that restricts a user from viewing or using a work is insufficient to trigger the DMCA's anti-circumvention provision. The DMCA prohibits only forms of access that would violate or impinge on the protections that the Copyright Act otherwise affords copyright owners." What this ruling means in the long term - especially for defendants who are not GE - remains to be seen, but it is a step in the right direction.
The Electronic Frontier Foundation has announced that it has won three exemptions to the DMCA's anti-circumvention rules as part of the regular, three-year exemption process: cellphone unlocking, fair use of DVD content, and, happily, jailbreaking locked-down phones. "In its reasoning in favor of EFF's jailbreaking exemption, the Copyright Office rejected Apple's claim that copyright law prevents people from installing unapproved programs on iPhones: 'When one jailbreaks a smartphone in order to make the operating system on that phone interoperable with an independently created application that has not been approved by the maker of the smartphone or the maker of its operating system, the modifications that are made purely for the purpose of such interoperability are fair uses.'"
Articles of interest
A survey of GPL compliance across several Android-based tablets has been published, along with some comments on the author's findings: "With the exception of Barnes & Noble's Nook e-reader, a device that isn't even really a tablet, I couldn't find a single tablet manufacturer who was complying with the minimum of their legal open source requirements under GNU GPL. Let alone supporting community development."
Another article reports on the status of the suits against Sony for removing the "Other OS" option, and with it the ability to install Linux; those suits have now been combined into a single class-action lawsuit. "None of the plaintiffs are likely to get rich. If the plaintiffs win, the lawyers will get paid, Sony will probably have to pay PlayStation 3 owners a small refund to make up for the loss of the option, or there will be a coupon or game giveaway. This consolidation just makes that settlement more likely, and much simpler from a legal perspective. It shows a large number of gamers affected, and makes reasonable restitution possible on a large scale."
There are also reports that Stephen Bird has found a way to gain root access on Motorola's new Droid X smartphone. "Droid X owners can use the Android debugging tool to run the exploit on their device. Step-by-step instructions are available from the AllDroid forum community. The exploit will give users the ability to modify the contents of the filesystem and use certain third-party software like screenshot and tethering tools that only work on rooted devices."
Another article reports that Novell is launching the SUSE Gallery. "It has been a year since Novell launched its SUSE Appliance Program, which offers a set of online tools, dubbed SUSE Studio, for spinning up software appliances based on its SUSE Linux distro. The appliance tools were aimed at software developers who wanted to code appliances for their own purposes - perhaps as a means of more easily supporting and redistributing their own application software to their customers - not for distributing software appliances to the general public. But that is precisely what some software developers want to be able to do, according to Joanna Rosenberg, ISV marketing manager at Novell, and so on the first birthday of the SUSE Appliance Program, Novell is opening up what it calls the SUSE Gallery."
According to an article in the Moscow Times, the Russian government is working on a Linux-based "national operating system" for its computers. "The operating system, for use on the computer systems of government agencies and state-run companies, will be 90 percent based on the open-source Linux operating system, Deputy Communications and Press Minister Ilya Massukh said. He said use of the operating system would be optional for all agencies." (Thanks to Eugene Markow)
Finally, an article in NetworkWorld is one of many covering a $35 tablet from India that may be available in 2011. "The $35 tablet prototype from India will run a variation of the open source Linux operating system. It has 2Gb of RAM, but no internal storage--relying on a removable memory card. The device has a USB port, and built-in Wi-Fi connectivity. Seems like reasonable enough specs--especially for $35. On the software side, the $35 tablet has a PDF reader, multimedia player, video conferencing, Web browser, and word processor."
Calls for Presentations
A call for presentations has gone out for an upcoming conference: "We are now accepting proposals for talks. Please note that we are looking for talks in both English and German." The submission deadline is October 11, 2010.
Upcoming Events
DebConf10 (New York, NY, USA)
YAPC::Europe 2010 - The Renaissance of Perl (Pisa, Italy)
Debian MiniConf in India (Pune, India)
KVM Forum 2010 (Boston, MA, USA)
August 9: Linux Security Summit 2010 (Boston, MA, USA)
August 13: Debian Day Costa Rica (Desamparados, Costa Rica)
August 14: Summercamp 2010 (Ottawa, Canada)
Conference for Open Source Coders, Users and Promoters (Taipei, Taiwan)
Free and Open Source Software Conference (St. Augustin, Germany)
European DrupalCon (Copenhagen, Denmark)
August 28: PyTexas 2010 (Waco, TX, USA)
OOoCon 2010 (Budapest, Hungary)
LinuxCon Brazil 2010 (São Paulo, Brazil)
Free and Open Source Software for Geospatial Conference (Barcelona, Spain)
DjangoCon US 2010 (Portland, OR, USA)
CouchCamp: CouchDB summer camp (Petaluma, CA, USA)
Ohio Linux Fest (Columbus, OH, USA)
September 11: Open Tech 2010 (London, UK)
Open Source Singapore Pacific-Asia Conference (Sydney, Australia)
X Developers' Summit (Toulouse, France)
3rd International Conference FOSS Sea 2010 (Odessa, Ukraine)
Italian Debian/Ubuntu Community Conference 2010 (Perugia, Italy)
WordCamp Portland (Portland, OR, USA)
September 18: Software Freedom Day 2010 (Everywhere, Everywhere)
September 23: Open Hardware Summit (New York, NY, USA)
BruCON Security Conference 2010 (Brussels, Belgium)
PyCon India 2010 (Bangalore, India)
Japan Linux Symposium (Tokyo, Japan)
Workshop on Self-sustaining Systems (Tokyo, Japan)
September 29: 3rd Firebird Conference - Moscow (Moscow, Russia)
Open World Forum (Paris, France)
Open Video Conference (New York, NY, USA)
October 1: Firebird Day Paris - La Cinémathèque Française (Paris, France)
Foundations of Open Media Software 2010 (New York, NY, USA)
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds