LWN.net Weekly Edition for June 10, 2010
FSF takes on Apple's App Store over GPL
Apple's App Store for iPhone, iPod, and iPad devices has no shortage of critics, from the delays in application approval to the seemingly arbitrary removal of previously-approved applications. The Free Software Foundation (FSF) recently took on yet another problem with the distribution channel when it contacted Apple to pursue a GPL compliance problem with a GNU-copyrighted game. Apple evidently found it easier to remove the game in question than to alter its App Store policies, but the debate has sparked a discussion about free software on Apple's otherwise closed devices.
The trouble began with a GNU Go for the iPhone game, submitted to the App Store by Robota Softwarehouse. GNU Go for the iPhone is a port of the original GNU Go — a program for playing the game of Go including an AI opponent — which is an official part of the GNU project. At some unspecified date, the FSF contacted both Robota and Apple to ask them to come into compliance with GNU Go's license, the GPLv2. Robota was violating the license by not providing access to the source code of the game, and Apple was in violation for several reasons, according to FSF License Compliance Engineer Brett Smith. Fearing that, in response, Apple would simply remove the GNU Go game from the store without explanation, the FSF posted a blog entry on May 25 describing its case. That fear was evidently well-founded, as Apple pulled GNU Go for the iPhone on May 26.
Smith followed up the original post with a more detailed explanation on May 27. In it, he says that the particular license violation that FSF brought up with Apple was section 6 of the GPLv2, which states that a redistributor of the licensed program may not impose further restrictions on the recipients to copy, distribute, or modify the program. Apple's App Store terms of service do impose several restrictions, such as limiting usage of the program to five devices approved by Apple.
Reaction
David "Lefty" Schlesinger criticized the FSF's action in a string of posts on his personal blog, starting by challenging whether FSF would engage in similar compliance actions against other mobile application marketplaces, such as Android's or Windows Mobile's. He followed that up with a second post rejecting the idea that Apple would be considered bound by the terms of the GPL, comparing the App Store to general "file download" sites such as FileHippo that host binary packages of free software but do not also host source code. Finally, he asserted in a third post that Apple could not be held to the terms of the GPL at all, because it would qualify for protection under the "safe harbor" provisions of the Digital Millennium Copyright Act (DMCA).
Debate over the entire chain of events can be found on every outlet from Slashdot to Identi.ca, including some lengthy exchanges in the comments on Schlesinger's blog posts. Turning to Schlesinger's arguments: it is impossible to predict whether the FSF would pursue (or has pursued) GPL compliance actions against other application markets, both because the FSF only acts on violations involving code to which it holds the copyright, and because most compliance actions take place in private. Smith stated that the FSF only publicized the GNU Go case to forestall the speculation that might result if Apple silently pulled the application, as the FSF predicted it would.
As for the second post, it seems quite clear that the GPL does apply to Apple as well as to FileHippo or any other site that hosts GPL-licensed code. Apple accepts the code from the developer, then redistributes it in the App Store; that triggers the license. On the other hand, section 3(c) states that noncommercial distribution by those who received the program as an executable can comply with the source code requirement simply by providing the upstream distributor's source code offer. Whether a zero-cost game in the App Store qualifies as "noncommercial" is open to interpretation. A download-only site like FileHippo should be in compliance provided that its binaries are unaltered; if the upstream source of the binary did not include a compliant source code offer, it would be the upstream source that is non-compliant.
But there is no similar exception in section 6 that passes the buck upstream. Apple's App Store terms-of-service definitely impose restrictions on the use of GNU Go for the iPhone, and the wording of section 6 is simple: "Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein."
Schlesinger and other commenters seem taken aback by the scope of the definition of "distributor," saying it would be ludicrous to hold Best Buy or other brick-and-mortar stores to the GPL if they sell boxed copies of free software. It is certainly an inclusive definition, but it is not as bad as the hypothetical Best Buy scenario implies; a big-box retail store could reasonably claim ignorance of the contents of a prepackaged product (something that Apple, with its per-app review process, cannot do), and, in any case, a retail store does not impose additional restrictions of the kind described in section 6.
As ChillingEffects.org explains in detail, the "safe harbor" provision of the DMCA applies only to service providers, defined as "an entity offering transmission, routing, or providing connections for digital online communications, between or among points specified by a user, of material of the user's choosing, without modification to the content of the material as sent or received" or "a provider of online services or network access, or the operator of facilities thereof," further limited to systems that provide "conduit communications", "system caching", "storage systems", or "information location tools". Considering those definitions, Apple's App Store simply does not appear to qualify for protection. But even if it did, Smith said, that only limits Apple's liability; it does not excuse Apple from compliance with the GPL. Apple "would still have to make the same choice about whether to come into compliance with the GPL, or cease distribution altogether."
Spreading the word
The big unanswered question is whether or not the FSF's action does more harm than good in the effort to spread free software. Schlesinger says repeatedly in the comment threads (as do others on other sites) that it does more harm, preventing users of the iPhone from using free software and being exposed to free software ideals. The FSF, for its part, suggests that there is little exposure to free software ideals when the terms-of-service restrict the software to begin with.
In the end, it is probably a judgment call; certainly technical restrictions of the iPhone platform would prevent users from exercising freedom to modify and redistribute GNU Go for the iPhone even if they had the source code in hand. Would they nevertheless learn to appreciate software freedom more?
The other charge made against the FSF in this case is that it singled out Apple as a public example solely for the sake of publicity. That is speculative, of course. FSF has targeted Apple for criticism before, as it has Microsoft, but as Smith explained in the first blog entry, it talked to Apple in private first — as it has with Robota, who still does not seem to have posted the source code to the problematic game. Even if one does not take FSF at its word (that the public discussion of the case was meant to preempt uninformed speculation), it has had one positive effect: it has gotten the community talking about free software licensing and mobile application distribution, a subject that will only grow in importance in the coming years.
Linaro seeks to simplify ARM Linux landscape
The ARM processor family is a complicated one, with many different variations, leading to large numbers of separate sub-architectures in the Linux kernel. A quick glance at the ARM directory in a recent kernel tree shows nearly 70 different sub-architectures, each corresponding to a different CPU or system-on-chip (SoC). That complexity has made it harder to develop new products for new or existing ARM devices. A new organization that was formed by six silicon vendors, Linaro, seeks to simplify that landscape, and allow easier—faster—development of ARM-Linux-based products.
Linaro was announced on June 3 as a non-profit company founded by ARM, Freescale, IBM, Samsung, ST-Ericsson, and Texas Instruments that intends to "provide a stable and optimized base for distributions and developers by creating new releases of optimized tools, kernel and middleware software validated for a wide range of SoCs, every six months." That six-month schedule aligns with Ubuntu's—the first release is due in November, one month after Ubuntu 10.10—and Canonical will be heavily involved in the effort. Linaro already has a project in Launchpad, Canonical's software collaboration platform, and it will seemingly take the place of Ubuntu-ARM.
The focus will be on the low-level plumbing for ARM-based systems: the kernel, development tools, boot loaders, and graphics. The Linaro FAQ and other pages on the web site make it clear that Linaro is not planning to create a new distribution; it is, instead, pitching in on upstream projects to simplify and optimize them for ARM systems, with the clear hope that various distributions will adopt the Linaro contributions.
The company lists current mobile distributions as potential beneficiaries of its efforts. Android, MeeGo, LiMo, and Ubuntu are specifically mentioned as mobile ARM distributions that might benefit from the work Linaro is planning to do. Because the work will be done in conjunction with the upstream projects, any other existing or new distribution can also make use of the improvements.
In addition, Linaro touts its benefits to consumers. By reducing the complexity for device makers, it will be "enabling exciting innovative products to come to the market quicker". Focusing on power consumption will also result in devices that "have longer battery lives or slimmer cooler designs". Linaro clearly wants to maintain—increase—the level of ARM adoption in mobile devices.
Linaro will be well-funded, with a budget said to be tens of millions of dollars, much of which will pay for 80 employees. There are several levels of membership in Linaro, with two paying classes, Core and Club, which each provide money and engineers to the organization. For small organizations or individual contributors, there is the free Community Member class. There is a rather elaborate organizational structure that governs the company, as well as a management team in place. Based on all of that, it seems clear that Linaro wasn't just thrown together quickly, but has been in the works for some time.
There are already specific plans for what will be contained in the upcoming release. Canonical's Linaro release manager, Jamie Bennett, looks at the plan for Linaro 10.11 on his blog. There he also provides some more detail on the fragmentation in the Linux ARM world that led to the formation of Linaro:
That something is laid out in a detailed release document on the wiki. The tasks are broken up into four areas: Kernel, Graphics and User Experience, QA and Validation, and Infrastructure, with Linaro-specific as well as related Ubuntu tasks listed. Using device trees to describe different ARM hardware, which could reduce the complexity of configuring Linux for the platform, is high on the list. While some ARM hackers are not sold on device trees, a recent linux-kernel discussion about the proliferation of ARM configurations would indicate that Linus Torvalds, at least, is interested in seeing some kind of complexity reduction for that architecture. If Linaro can work with the upstream kernel developers to find a solution—device trees or something else—to that problem, it will have accomplished much.
There are other things on the agenda for the 10.11 release including standardizing and unifying the telephony stack for the platform, making Qt fully functional on ARM, optimizing web browsers for the architecture, and selecting the "best toolchain for ARM hardware". Overall, the list of planned achievements for the five months before the release is quite ambitious. Whether all that can be completed by a brand new organization—even with a great deal of Canonical know-how—remains to be seen. In the end, even completing a big chunk of it would be quite an accomplishment; presumably there will be 11.05 and further releases to fill in any gaps.
In many ways, Linaro is further proof that Linux is winning the battle for which OS will run on consumer electronics and other embedded devices. ARM chips are the dominant embedded Linux platform these days, but Intel has been targeting the low-power arena with its Atom processors. Linaro certainly looks like an effort to ensure that ARM-based devices maintain their lead in the embedded Linux world. It should be an interesting battle to watch.
Playing with MeeGo 1.0
The MeeGo project - the result of the merger between Moblin and Maemo - released version 1.0 of its core platform on May 25, accompanied by the first iteration of its "Netbook user experience." Your editor, who happens to have a netbook system sitting around, decided to give this release a try to see what has happened since the Moblin review written last November. While the overall feel of the system is quite similar, there has also been some real progress; MeeGo feels more like a finished product than Moblin did.
The overall user interface concepts laid out by Moblin have not changed much in the merger with Maemo. There is still a home screen meant to provide access to recently-used activities, be they web pages or communications with others. The line of icons at the top still shows what the MeeGo developers think people will want to do with netbooks: talk to people, browse web pages, play music, etc. The quality of the graphics and animation has improved somewhat, but the basic interaction model is what Moblin had before.
There is an interesting distinction between running an activity from the top icon bar and running an application. Applications run in "zones," which are essentially virtual desktops which hold one window each. Moving between zones is done quickly enough by putting the pointer at the top of the screen, selecting the zones icon (yielding a display of the active zones), then picking the new destination; it's an experience similar to holding down the "home" key on an Android system. But an application run from the top bar (the music player, say, or the web browser) is treated differently; it has no zone and cannot be jumped into and out of that way. Your editor finds this to be a bit of a confusing inconsistency.
Speaking of web browsers, MeeGo now uses Chrome (or Chromium, one can choose at download time) for web access. Chrome is, of course, a reasonably mature and quite functional browser. The "Mozilla headless" mechanism used with the Moblin browser worked, but not all users were happy with the experience; Chrome, perhaps, will be better received.
While most things work nicely, one occasionally encounters a rough edge. Your editor was able to crash the desktop by playing with an external monitor. MeeGo lets the user choose between the built-in or an external monitor, but does not want to run both at the same time - not even in mirrored mode. One other thing that has jumped out is that options which are toggles are controlled by a widget which looks like a sliding switch. There are no labels, though, so it's not always obvious whether the option is enabled or not.
The big sliders are typical of the way the MeeGo interface looks, though; buttons and such are big. Netbooks tend not to have touchscreens, but this user interface is clearly headed in the direction where everything has to be finger-sized. The interface is also still very much GNOME-based, despite MeeGo's plan to move over to Qt. Mail is handled by Evolution, the media player is Banshee, etc. Perhaps that will change over time; evidently the tablet user experience is more Qt-heavy.
While exploring options for customizing the top icon bar, your editor stumbled across "gadgets," a relatively hidden feature that, perhaps, shows where the MeeGo developers plan to go. Gadgets are little applications which can be placed on a special screen; there are weather monitors, silly games, slideshow applications, etc. The interface for choosing them is awkward (browse through all 1000 of them in some strange order, four at a time) and there doesn't seem to be a way to place them somewhere useful, like the home "MyZone" screen. But it has the look of the beginnings of some sort of "application store" mechanism which is separate from the normal package management system used by MeeGo.
There is one other difference between Moblin and MeeGo that your editor has noticed: Moblin was a multi-user system with the ability to set up multiple independent accounts. MeeGo, instead, has a single account (called "meego," but one has to look hard to find that out) and no provision for creating or logging into any others. MeeGo devices, it seems, will be more like phones than Linux computers; they are highly personal devices that one does not ordinarily share.
Perhaps that is the future of Linux on the desktop - at least, Linux on the relatively small desktop. Like Android, it's not the sort of Linux experience that we are used to, though MeeGo is far closer to "traditional" Linux than Android is. But perhaps it's an experience that will bring in a new set of users; once they get used to this environment, the full Linux experience will be there for them to discover. That should be a good thing.
First, though, MeeGo needs to get out there on devices and into the hands of users. Evidently a number of MeeGo-based devices were on display at Computex, which is a start, but there are not a whole lot of deployed systems out there. So MeeGo is far behind other systems, including Android, which are aiming for very similar markets (though it is ahead of ChromeOS, which won't have a stable release for a few more months). Coming from behind in a highly competitive market is a hard thing to do, even when the market is expanding. But MeeGo has a lot of resources behind it and a lot of thought going into its design. Even if it's a bit of a dark horse, it's worth keeping an eye on.
Security
Mozilla's Plugin Check
Browser plugins are a constant source of security vulnerabilities and, because the browser is one of the most commonly used network applications, those vulnerabilities tend to affect a lot of users. But users are often oblivious to the fact that their plugins are not up-to-date. In order to help combat that problem, Mozilla has created a Plugin Check that will test the installed browser plugins and report on those that are out of date.
The site was originally launched last October, but was only set up for Firefox at that time. In May, Mozilla's director of Firefox development, Johnathan Nightingale, announced that Plugin Check had added support for the Safari, Chrome, and Opera browsers. There is also support for Internet Explorer, but only for the most popular plugins, as each plugin requires custom code due to a lack of a JavaScript plugin object in IE.
The basic idea is that the page gathers up information about the installed plugins, including metadata like version numbers, and then checks with a plugin directory to get the status of each. Mozilla is working with plugin vendors to keep an updated list of plugins and versions so that it can report outdated and, importantly, security vulnerable plugins. Mozilla plans to incorporate this technique into Firefox 3.6, so that users will get information on updated plugins without having to visit a special page.
While one could easily claim that it isn't Mozilla's—or any other browser developer's—responsibility to help ensure that these third-party plugins are current, it is a very nice public service. As Nightingale points out, "plugin safety is an issue for the web as a whole". One need only consider the security track record of the most common plugin—Adobe's Flash—to recognize that there have been some fairly nasty, and exploitable, plugin holes over the years. Undoubtedly there will be more in Flash, as well as other plugins, down the road.
For Firefox users, the Plugin Check will eventually be moot. One would hope that other browser developers would also consider adding this feature—they should be able to use the same plugin database that Mozilla has, as the project is open. Until that time, though, users need to find out about, and visit, the Plugin Check page.
There are a variety of Plugin Check web badges available to help inform users about the service. In addition, the page has useful information about plugins and why it is important to keep them updated. That text is, as it should be, geared toward those who may not even realize their browser has any plugins installed, or even that there is some difference between a browser and a plugin. After all, those are the folks who are most likely to be browsing with outdated plugins—perhaps as many as 80% of web users.
User education is an important part of keeping systems secure. While Linux users have, in general, not been targeted by most of the malware—plugin-based or not—out there, that's no good reason to be cavalier about keeping one's software updated. In addition, most Linux users know, perhaps live with, one or more users of other operating systems and browsers. Regularly visiting the Plugin Check page (at least until browsers automatically do that checking), as well as recommending it to others, could go a long way toward reducing the threat from plugin vulnerabilities.
Brief items
Quote of the week
-- Dark Reading
Adobe Flash Player vulnerability
Adobe has reported a vulnerability in Flash Player 10.0.45.2 (and earlier), including the Linux version. "This vulnerability could cause a crash and potentially allow an attacker to take control of the affected system." There is a Flash Player 10.1 Release Candidate that does not appear to be vulnerable.
New vulnerabilities
bind9: DNS cache poisoning
Package(s): bind9
CVE #(s): CVE-2010-0382
Created: June 7, 2010
Updated: June 16, 2010
Description: From the Debian advisory:
When processing certain responses containing out-of-bailiwick data, BIND is subject to a DNS cache poisoning vulnerability, provided that DNSSEC validation is enabled and trust anchors have been installed.
exim: privilege escalation
Package(s): exim
CVE #(s): CVE-2010-2023 CVE-2010-2024
Created: June 9, 2010
Updated: April 13, 2011
Description: From the CVE entries:
transports/appendfile.c in Exim before 4.72, when a world-writable sticky-bit mail directory is used, does not verify the st_nlink field of mailbox files, which allows local users to cause a denial of service or possibly gain privileges by creating a hard link to another user's file. (CVE-2010-2023)
transports/appendfile.c in Exim before 4.72, when MBX locking is enabled, allows local users to change permissions of arbitrary files or create arbitrary files, and cause a denial of service or possibly gain privileges, via a symlink attack on a lockfile in /tmp/. (CVE-2010-2024)
gnutls: denial of service
Package(s): gnutls12
CVE #(s): CVE-2006-7239
Created: June 4, 2010
Updated: June 10, 2010
Description: From the Ubuntu advisory:
It was discovered that GnuTLS did not always properly verify the hash algorithm of X.509 certificates. If an application linked against GnuTLS processed a crafted certificate, an attacker could make GnuTLS dereference a NULL pointer and cause a DoS via application crash.
java: unspecified vulnerability
Package(s): sun-jre-bin
CVE #(s): CVE-2010-0850
Created: June 4, 2010
Updated: June 9, 2010
Description: From the CVE entry:
Unspecified vulnerability in the Java 2D component in Oracle Java SE and Java for Business 1.3.1_27 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors.
kernel: multiple vulnerabilities
Package(s): linux, linux-source-2.6.15
CVE #(s): CVE-2010-1148 CVE-2010-1488
Created: June 3, 2010
Updated: September 23, 2010
Description: From the Ubuntu advisory:
Eugene Teo discovered that CIFS did not correctly validate arguments when creating new files. A local attacker could exploit this to crash the system, leading to a denial of service, or possibly gain root privileges if mmap_min_addr was not set. (CVE-2010-1148)
Oleg Nesterov discovered that the Out-Of-Memory handler did not correctly handle certain arrangements of processes. A local attacker could exploit this to crash the system, leading to a denial of service. (CVE-2010-1488)
openoffice.org: arbitrary code execution
Package(s): openoffice.org
CVE #(s): CVE-2010-0395
Created: June 7, 2010
Updated: June 16, 2010
Description: From the Debian advisory:
It was discovered that OpenOffice.org, a full-featured office productivity suite that provides a near drop-in replacement for Microsoft(R) Office, is not properly handling python macros embedded in an office document. This allows an attacker to perform user-assisted execution of arbitrary code in certain use cases of the python macro viewer component.
perl: restriction bypass
Package(s): perl
CVE #(s): CVE-2010-1168
Created: June 8, 2010
Updated: November 21, 2011
Description: From the Red Hat advisory:
The Safe module did not properly restrict the code of implicitly called methods (such as DESTROY and AUTOLOAD) on implicitly blessed objects returned as a result of unsafe code evaluation. These methods could have been executed unrestricted by Safe when such objects were accessed or destroyed. A specially-crafted Perl script executed inside of a Safe compartment could use this flaw to bypass intended Safe module restrictions.
postgresql: arbitrary code execution
Package(s): postgresql-server
CVE #(s): CVE-2010-1447
Created: June 4, 2010
Updated: July 5, 2011
Description: From the CVE entry:
PostgreSQL 7.4 before 7.4.29, 8.0 before 8.0.25, 8.1 before 8.1.21, 8.2 before 8.2.17, 8.3 before 8.3.11, 8.4 before 8.4.4, and 9.0 Beta before 9.0 Beta 2 does not properly restrict PL/perl procedures, which might allow remote attackers to execute arbitrary Perl code via a crafted script, related to the Safe module (aka Safe.pm) for Perl.
vlc: arbitrary code execution
Package(s): vlc
CVE #(s): (none)
Created: June 4, 2010
Updated: June 9, 2010
Description: From the Pardus advisory:
VLC media player suffers from various vulnerabilities when attempting to parse malformatted or overly long byte streams. If successful, a malicious third party could crash the player instance or perhaps execute arbitrary code within the context of VLC media player.
xinha: restriction bypass
Package(s): xinha
CVE #(s): CVE-2010-1916
Created: June 9, 2010
Updated: June 17, 2010
Description: From the CVE entry:
The dynamic configuration feature in Xinha WYSIWYG editor 0.96 Beta 2 and earlier, as used in Serendipity 1.5.2 and earlier, allows remote attackers to bypass intended access restrictions and modify the configuration of arbitrary plugins via (1) crafted backend_config_secret_key_location and backend_config_hash parameters that are used in a SHA1 hash of a shared secret that can be known or externally influenced, which are not properly handled by the "Deprecated config passing" feature; or (2) crafted backend_data and backend_data[key_location] variables, which are not properly handled by the xinha_read_passed_data function. NOTE: this can be leveraged to upload and possibly execute arbitrary files via config.inc.php in the ImageManager plugin.
zikula: multiple vulnerabilities
Package(s): zikula
CVE #(s): CVE-2010-1724 CVE-2010-1732
Created: June 8, 2010
Updated: June 9, 2010
Description: From the CVE entries:
Multiple cross-site scripting (XSS) vulnerabilities in Zikula Application Framework 1.2.2, and possibly earlier, allow remote attackers to inject arbitrary web script or HTML via the (1) func parameter to index.php, or the (2) lang parameter to index.php, which is not properly handled by ZLanguage.php. (CVE-2010-1724)
Cross-site request forgery (CSRF) vulnerability in the users module in Zikula Application Framework before 1.2.3 allows remote attackers to hijack the authentication of administrators for requests that change the administrator email address (updateemail action). (CVE-2010-1732)
zonecheck: cross-site scripting
Package(s): zonecheck
CVE #(s): CVE-2010-2052 CVE-2010-2155 CVE-2009-4882
Created: June 7, 2010
Updated: June 9, 2010
Description: From the Debian advisory:
It was discovered that in zonecheck, a tool to check DNS configurations, the CGI does not perform sufficient sanitation of user input; an attacker can take advantage of this and pass script code in order to perform cross-site scripting attacks.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 2.6.35-rc2, released on June 5. "So -rc2 is out there, and hopefully fixes way more problems than it introduces. I'm slightly unhappy with its size - admittedly it's not nearly as big as rc2 was the last release cycle, but that was an unusually big -rc2. And I really hoped for a calmer release cycle this time." There's some new drivers and a lot of fixes; the short-form changelog is in the announcement, or see the full changelog for all the details.
Stable updates: there have been no stable updates released in the last week.
Quotes of the week
I suspect it won't even be very painful if people just get used to it. And if it ends up really helping sub-maintainers ("I can't do that, because Linus wouldn't pull the result anyway"), then that would be a really good reason for me to be rather stricter about the rules.
> Kubys
But his evil twin got ahold of the keyboard.
That's a _big_ argument in favour of them. Certainly much bigger than arguing against them based on some complexity-arguments for an alternative that hasn't seen any testing at all.
IOW, I would seriously hope that this discussion was more about real code that _exists_ and does what people need. It seems to have degenerated into something else.
Native ZFS Port for Linux (OSNews)
OSNews is reporting that employees at the Lawrence Livermore (US) National Laboratory have ported Sun/Oracle's ZFS filesystem to Linux. The kernel module is distributed in source form to work around the licensing incompatibility between the CDDL and GPL. "Main developer Brian Behlendorf has also stated that the Lawrence Livermore National Laboratory has repeatedly urged Oracle to do something about the licensing situation so that ZFS can become a part of the kernel. 'We have been working on this for some time now and have been strongly urging Sun/Oracle to make a change to the licensing,' he explains, 'I'm sorry to say we have not yet had any luck.' [...] There's still some major work to be done, so this is not production-ready code. The ZFS Posix Layer has not been implemented yet, therefore mounting file systems is not yet possible; direct database access, however, is."
This week's episode of "Desperate Androids"
In last week's episode of this particular soap opera, we described the criticism that blocked the merging of suspend blockers and the shape of a solution which was seemingly emerging from the ruins. But declaring an end to this particular story is always a hazardous thing to do. Now it looks like the suspend blocker discussion may last for another release cycle or two.

On June 4, Ingo Molnar posted a proposed solution which was a variant of the quality-of-service-based approach described last week. It looked feasible until Linus wandered into the discussion (for the first time), saying:
So the discussion started anew. Linus pushed for a solution which avoids the rest of the core kernel (and the scheduler in particular) as much as possible - a goal which the initial suspend blocker implementation shared. He also raised the issue of multi-core processors, which are not really addressed by the current code. There might be value in being able to shut down individual cores as the system load drops, suspending the system when there's nothing for the last CPU to do. One assumes that SMP handsets are not that far away, so planning for them would be a sensible thing to do.
All told, the situation has grown more complicated - but it also seems that the will to solve it has grown. It is becoming clear that the real solution may not show up in a hurry, though. So, in the meantime, we may see a stopgap solution which was first proposed early in the discussion: add stub versions of the suspend blocker API so that various Android drivers can be merged unchanged. That should help the mainline and Android kernels come much closer to convergence while allowing time for a globally acceptable solution to the suspend blocker problem to be found. We will likely see those stubs merged, possibly with a 2.6.37 expiration date. The more contentious stuff will come some time after.
Kernel development news
Another OOM killer rewrite
Nobody likes the out-of-memory (OOM) killer. Its job is to lurk out of sight until that unfortunate day when the system runs out of memory and cannot get work done; the OOM killer must then choose a process to sacrifice in the name of continued operation. It's a distasteful job, one which many think should not be necessary. But, despite the OOM killer's lack of popularity, we still keep it around; think of it as the kernel equivalent of lawyers, tax collectors, or Best Buy clerks. Every now and then, they are useful.

The OOM killer's reputation is not helped by the fact that it is seen as often choosing the wrong victim. The fact that a running system was saved is a small consolation if that system's useful processes were killed and work was lost. Over the years, numerous developers have tried to improve the set of heuristics used by the OOM killer, with a certain amount of apparent success; complaints about poor choices are less common than they once were. Still, the OOM killer is not perfect, encouraging new rounds of developers to tilt at that particular windmill.
For some months now, the task of improving the OOM killer has fallen to David Rientjes, who has posted several versions of his OOM killer rewrite patch set. This version, he hopes, will be deemed suitable for merging into 2.6.36. It has already run the review gauntlet several times, but it's still not clear what its ultimate fate will be.
Much of this patch set is dedicated to relatively straightforward fixes and improvements which are not especially controversial. One change opens up the kernel's final memory reserves to processes which are either exiting or are about to receive a fatal signal; that should allow them to clean up and get out of the way, freeing memory quickly. Another prevents the killing of processes which are in a separate memory allocation domain from the process which hit the OOM condition; killing those processes is unfair and unlikely to improve the situation. If the OOM condition is the result of a mempolicy-imposed constraint, only processes which might release pages on that policy's chosen nodes are considered as targets.
Another interesting change has to do with the killing of child processes. The current OOM killer, upon picking a target for its unwelcome attention, will target one of that target's child processes if any exist. Killing the parent is likely to take out all the children anyway, so cleaning up the children - or, at least, those with their own address spaces - first may resolve the problem with less pain. The updated OOM killer does the same, but in a more targeted fashion: it attempts to pick the child which currently has the highest "badness" score, thus, hopefully, improving the chances of freeing some real memory quickly.
Yet another change affects behavior when memory is exhausted in the low memory zone. This zone, present on 32-bit systems with 1GB or more of memory, is needed in places where the kernel must be able to keep a direct pointer to it. It is also used for DMA I/O at times. When this memory is gone, David says, killing processes is unlikely to replenish it and may cause real harm. So, instead of invoking the OOM killer, low-memory allocation requests will simply fail unless the __GFP_NOFAIL flag is present.
A new heuristic which has been added is the "forkbomb penalty." If a process has a large number of children (where the default value of "large" is 1000) with less than one second of run time, it is considered to be a fork bomb. Once that happens, the scoring is changed to make that process much more likely to be chosen by the OOM killer. The "kill the worst child" policy still applies in this situation, so the immediate result is likely to be a fork bomb with 999 children instead. Even in this case, picking off the children one at a time is seen as being better than killing a potentially important server process.
The most controversial part of the patch is a complete rewrite of the badness() function which assigns a score to each process in the system. This function contains the bulk of the heuristics used to decide which process is most deserving of the OOM killer's services; over time, it has accumulated a number of tests which try to identify the process whose demise would release the greatest amount of memory while causing the least amount of user distress.
In David's patch set, the old badness() heuristics are almost entirely gone. Instead, the calculation turns into a simple question of what percentage of the available memory is being used by the process. If the system as a whole is short of memory, then "available memory" is the sum of all RAM and swap space available to the system. If, instead, the OOM situation is caused by exhausting the memory allowed to a given cpuset/control group, then "available memory" is the total amount allocated to that control group. A similar calculation is made if limits imposed by a memory policy have been exceeded. In each case, the memory use of the process is deemed to be the sum of its resident set (the number of RAM pages it is using) and its swap usage.
This calculation produces a percent-times-ten number as a result; a process which is using every byte of the memory available to it will have a score of 1000, while a process using no memory at all will get a score of zero. There are very few heuristic tweaks to this score, but the code does still subtract a small amount (30) from the score of root-owned processes on the notion that they are slightly more valuable than user-owned processes.
One other tweak which is applied is to add the value stored in each process's oom_score_adj variable, which can be adjusted via /proc. This knob allows the adjustment of each process's attractiveness to the OOM killer in user space; setting it to -1000 will disable OOM kills entirely, while setting to +1000 is the equivalent of painting a large target on the associated process. One of the reasons why this patch is controversial is that this variable differs in name and semantics from the oom_adj value implemented by the current OOM killer; it is, in other words, an ABI change. David has implemented a mapping function between the two values to try to mitigate the pain; oom_adj is deprecated and marked for removal in 2012.
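In rough C terms, the new calculation might look something like the sketch below. This is only an illustration of the heuristic described above, not the actual patch code; the parameters (page counts, a root-ownership flag, the oom_score_adj value) stand in for the accounting the kernel does internally.

static int oom_score_sketch(unsigned long rss_pages, unsigned long swap_pages,
                            unsigned long total_pages, int root_owned,
                            int oom_score_adj)
{
	/* memory use (resident pages plus swap) as a permille of what is available */
	long points = (long)(rss_pages + swap_pages) * 1000 / total_pages;

	/* root-owned processes are considered slightly more valuable */
	if (root_owned)
		points -= 30;

	/* the /proc knob: -1000 effectively disables OOM kills,
	   +1000 paints a large target on the process */
	points += oom_score_adj;

	if (points < 0)
		points = 0;
	if (points > 1000)
		points = 1000;
	return points;
}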
Opposition to this change goes beyond the ABI issue, though. Understanding why is not always easy; one reviewer's response consists solely of the word "nack". The objections seem to relate to the way the patch replaces badness() wholesale rather than evolving it in a new direction, along with concerns that the new algorithm will lead to worse results. It is true that no hard evidence has been posted to justify the inclusion of this change, but getting hard evidence in this case is, well, hard. There is no simple benchmark which can quantify the OOM killer's choices. So we're left with answers like:
Memory management patches tend to be hard to merge, and the OOM killer rewrite has certainly been no exception. In this case, it is starting to look like some sort of intervention from a higher authority will be required to get a decision made. As it happens, Andrew Morton seems poised to carry out just this sort of intervention, saying:
So, depending on what Andrew concludes, there might just be a new OOM killer in store for 2.6.36. For most users, this new feature is probably about as exciting as getting a new toilet cleaner as a birthday present. But, if it eventually helps a system of theirs survive an OOM situation in good form, they may yet come to appreciate it.
Writing a WMI driver - an introduction
Windows Management Instrumentation (WMI) is a set of extensions to the Windows Driver Model that provides an operating system interface for dealing with platform devices. WMI objects can be embedded within ACPI, a configuration which Microsoft recommends. Like ACPI, WMI is not really standardized and vendors still implement their own custom interfaces. In this article, I will, through the creation of a simple WMI driver, discuss the process of discovering WMI interfaces and working with them.
As WMI is embedded into ACPI tables, you should really start with Matthew's article on ACPI drivers before reading this one. You'll need to know how to extract, decompile, and read your DSDT before going further. The DSDT (Differentiated System Description Table) is one of the ACPI tables provided to the operating system by the BIOS; it contains configuration information and executable code. On Linux, it can be found in /sys/firmware/acpi/DSDT; you will need to decompile it with iasl using the -d option. iasl is the Intel ACPI compiler; it is probably already packaged in your favorite distribution, but if it's not you can always grab the source from acpica.org.
In this article we'll focus on the history of the eeepc-wmi driver as an example; the DSDT used for this article (Eeepc 1201nl) can be downloaded here. The interesting part of the DSDT for making an ACPI or WMI driver is the ACPI device descriptions. They are defined with the Device (XXXX) keyword, where XXXX is the four-character name of the device. ACPI devices are also identified by an HID string using the same namespace as ISA PNP devices. This is why, most of the time, standardized HID names start with PNP. For WMI Devices, this HID will always be PNP0C14 (or pnp0c14).
The first Eeepc systems were shipped with an ACPI device called ASUS010; Linux had an ACPI driver for that device called eeepc-laptop. Then, ASUS started shipping a BIOS with "Windows 7 support", and eeepc-laptop didn't want to load any more, because those BIOSes were disabling the ASUS010 device when Windows 7 was detected, and Linux has been identifying itself as Windows 7 since 2.6.32. No eeepc-laptop driver means: no hotkeys, no rfkill, no LEDs, and sometimes even no backlight, because on some models you need to boot with acpi_backlight=vendor if you want a working backlight.
A quick workaround was to boot with acpi_osi="!Windows 2009" or acpi_osi=Linux. But there's a better way: those BIOS updates also added a new ACPI device. It's easy to notice that this is a WMI device, thanks to the reserved _HID PNP0C14 and the explicit ASUSWMI UID. From the DSDT:
Device (AMW0)
{
Name (_HID, EisaId ("PNP0C14"))
Name (_UID, "ASUSWMI")
...
}
So we have a WMI device, and we need to find what we can do with it. The first thing to do is to dump the GUID mapping of the WMI device. A good way to do that is to use wmidump, which will parse the buffer returned by the _WDG method and display it in a human-readable form. The _WDG method is defined in the WMI device and provides mapping for data blocks, events, and WMI methods. The result of _WDG evaluation is a buffer containing an array of structures, each entry describing a GUID.
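Each of those entries is a small, fixed-size structure. The layout below is adapted from the guid_block structure used by the kernel's wmi driver; it is shown here only as a reading aid for the wmidump output that follows, so treat the exact field types as approximate.

struct guid_block {
	char guid[16];			/* the GUID, in binary form */
	union {
		char object_id[2];	/* "BC", "BD", ... for data blocks and methods */
		struct {
			unsigned char notify_id;	/* event code for ACPI_WMI_EVENT entries */
			unsigned char reserved;
		} notify;
	};
	u8 instance_count;
	u8 flags;			/* ACPI_WMI_METHOD, ACPI_WMI_EVENT, ... */
};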
Here is the output of wmidump for Eeepc 1201nl:
97845ED0-4E6D-11DE-8A39-0800200C9A66:
object_id: BC
notify_id: 42
reserved: 43
instance_count: 1
flags: 0x2 ACPI_WMI_METHOD
466747A0-70EC-11DE-8A39-0800200C9A66:
object_id: BD
notify_id: 42
reserved: 44
instance_count: 1
flags: 0x2 ACPI_WMI_METHOD
ABBC0F72-8EA1-11D1-00A0-C90629100000:
object_id: ?
notify_id: D2
reserved: 00
instance_count: 1
flags: 0x8 ACPI_WMI_EVENT
05901221-D566-11D1-B2F0-00A0C9062910:
object_id: MO
notify_id: 4D
reserved: 4F
instance_count: 1
flags: 0
We can see four different GUIDs. The first two are flagged with ACPI_WMI_METHOD, while the third is flagged with ACPI_WMI_EVENT. ACPI_WMI_METHOD means that, in the same ACPI device, there is a WMXX method, where XX is the object_id of this GUID. Thus, we will find a method called WMBC for GUID 97845ED0-4E6D-11DE-8A39-0800200C9A66, and WMBD for 466747A0-70EC-11DE-8A39-0800200C9A66. ACPI_WMI_EVENT is used to describe a GUID that will send events; hotkeys for example are reported using WMI events on Eeepc systems.
WMI support in Linux is provided by the wmi driver (CONFIG_ACPI_WMI) and linux/acpi.h. Using this framework, we can write a basic WMI driver that will load only if a given GUID is available. For that, we will use wmi_has_guid(const char *guid). That function is easy to use: pass the GUID and it will return a true value if this GUID can be found. For this example we will use the ABBC0F72-8EA1-11D1-00A0-C90629100000 GUID. Here is a typical initialization function for a WMI driver:
#define EEEPC_WMI_EVENT_GUID "ABBC0F72-8EA1-11D1-00A0-C90629100000"
static int __init eeepc_wmi_init(void)
{
if (!wmi_has_guid(EEEPC_WMI_EVENT_GUID)) {
pr_warning("No known WMI GUID found\n");
return -ENODEV;
}
return 0;
}
Cool! A driver which does nothing :)
Events
Now, we want to catch hotkey events and send real input events when a hotkey is pressed. This requirement is common in platform drivers like eeepc-wmi and eeepc-laptop, so Dmitry Torokhov wrote the sparse keymap library to ease the implementation of such drivers. The sparse-keymap module (CONFIG_INPUT_SPARSEKMAP) allows the programmer to associate input events with custom codes (integers) and provides helpers to search for a code in a given keymap and report the resulting event through an input device.
Input events that you'll send to your input device are defined in <linux/input.h>. Key events are prefixed with KEY_, for example "a" is KEY_A, F11 is KEY_F11, and the key used to toggle a wireless LAN device is KEY_WLAN. There are more than 380 distinct keys, so you should be able to find one that suits your needs.
Defining a sparse keymap is simple:
#include <linux/input/sparse-keymap.h>
static const struct key_entry eeepc_wmi_keymap[] = {
{ KE_KEY, 0x42, { KEY_F13 } },
{ KE_END, 0},
};
Then all you need to do is to initialize an input device, bind it with your sparse keymap, and call sparse_keymap_report_event() when you receive an event. I'll not describe the whole sparse-keymap API here (maybe in another article, who knows?), but if you want to see a (clean) real world example, please read eeepc-wmi.c.
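Without going through the whole API, here is a rough sketch of what such a setup function could look like with the input and sparse-keymap interfaces of that era; it fills in the eeepc_wmi_input_setup() call used below, but it is an illustration rather than a copy of the real driver.

static struct input_dev *eeepc_wmi_input_dev;

static int eeepc_wmi_input_setup(void)
{
	int err;

	eeepc_wmi_input_dev = input_allocate_device();
	if (!eeepc_wmi_input_dev)
		return -ENOMEM;

	eeepc_wmi_input_dev->name = "Eee PC WMI hotkeys";
	eeepc_wmi_input_dev->id.bustype = BUS_HOST;

	/* bind the sparse keymap defined above to the input device */
	err = sparse_keymap_setup(eeepc_wmi_input_dev, eeepc_wmi_keymap, NULL);
	if (err)
		goto err_free_dev;

	err = input_register_device(eeepc_wmi_input_dev);
	if (err)
		goto err_free_keymap;

	return 0;

err_free_keymap:
	sparse_keymap_free(eeepc_wmi_input_dev);
err_free_dev:
	input_free_device(eeepc_wmi_input_dev);
	return err;
}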
Let's go back to our main topic: how can we receive WMI events? wmidump told us that one of the GUIDs was flagged ACPI_WMI_EVENT; this means that it is able to send events. To catch these events, we have to install a notify handler on this GUID with:
typedef void (*wmi_notify_handler) (u32 value, void *context);
acpi_status wmi_install_notify_handler(const char *guid,
wmi_notify_handler handler,
void *data);
The void *data argument passed to wmi_install_notify_handler() can be retrieved in void *context when the handler is called, and can be used to store context information. The important thing here is value: you can pass this value to wmi_get_event_data(), which fills an acpi_buffer that can be cast into an acpi_object. And most of the time for hotkeys, this object is an integer. Don't forget to call wmi_install_notify_handler() after input and keymap initialization, because the handler is likely to use the input device, so it has to be initialized.
Here is how to register (and unregister) the WMI handler. In this example, sparse_keymap and input device handling have been removed for clarity purposes.
static int __init eeepc_wmi_init(void)
{
...
err = eeepc_wmi_input_setup(); // Setup sparse_keymap and input device
if (err)
return err;
status = wmi_install_notify_handler(EEEPC_WMI_EVENT_GUID,
eeepc_wmi_notify, NULL);
if (ACPI_FAILURE(status)) {
... // Free sparse_keymap and input device
return -ENODEV;
}
return 0;
}
static void __exit eeepc_wmi_exit(void)
{
wmi_remove_notify_handler(EEEPC_WMI_EVENT_GUID);
... // Free sparse_keymap and input device
}
Below you'll see the code for the handler. Here we don't need the context variable and we assume that eeepc_wmi_input_dev is accessible.
static void eeepc_wmi_notify(u32 value, void *context)
{
struct acpi_buffer response = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj;
acpi_status status;
int code;
status = wmi_get_event_data(value, &response);
if (status != AE_OK) {
pr_err("bad event status 0x%x\n", status);
return;
}
obj = (union acpi_object *)response.pointer;
if (obj && obj->type == ACPI_TYPE_INTEGER) {
code = obj->integer.value;
if (!sparse_keymap_report_event(eeepc_wmi_input_dev, code, 1, true))
pr_info("Unknown key %x pressed\n", code);
}
kfree(obj);
}
Our keymap is empty for the moment, and because we are lazy, we don't want to read the whole DSDT to see what kinds of events are reported. An alternative is to implement a basic driver with an empty keymap, and make it dump every event. Then press some buttons, check dmesg, and fill the keymap! For example, pressing Fn+F2 will show "Unknown key 0x5d pressed." Fn+F2 is the wireless toggle key, so let's fill the keymap accordingly:
static const struct key_entry eeepc_wmi_keymap[] = {
{ KE_KEY, 0x5d, { KEY_WLAN } },
{ KE_END, 0},
};
Methods
Now, you should be able to create a basic driver for WMI event handling. But what about setting the brightness, enabling a GPS device or blinking an LED? If you go back to the wmidump output from the beginning, GUID 97845ED0-4E6D-11DE-8A39-0800200C9A66 has the ACPI_WMI_METHOD flag set, and its object_id is BC. That means that there is an ACPI WMBC method that can be called. This method takes three parameters: the first is a ULONG containing the instance index of the method being executed; the second contains the method ID for the method being executed; and the third is a buffer that contains the input for the method call.
To call such a method, the WMI module provides a function called wmi_evaluate_method(). It takes a GUID, an instance (we only have one here, see the output of wmidump), a method identifier and an input buffer. This buffer is used to pass custom parameters to the underlying method. It also takes an output buffer that will contain the return value of the method (if any).
acpi_status wmi_evaluate_method(const char *guid, u8 instance, u32 method_id,
const struct acpi_buffer *in,
struct acpi_buffer *out);
We will try to implement backlight control for this laptop, using WMI of course! Most of the time on x86 laptops, the backlight is handled by the generic ACPI video module. But sometimes, the generic ACPI backlight interface is broken, so you may want to use a vendor specific module to control the backlight. To do that, boot with acpi_backlight=vendor. We won't talk a lot about the backlight class, and we'll focus on the WMI specific part. But if you want to know more, read the complete eeepc-wmi driver.
The first thing to do is to find how the backlight can be controlled. I won't describe the entire (painful) process of digging into the DSDT to find out how to control the backlight, and we will assume that the vendor gave you the WMI device documentation (and a pony!). But in the real world, you'll have to start from the WMXX method of your device (where XX is the object_id of your GUID) to find something related to what you want.
To control devices on an Eeepc, the WMI interface exposes two methods. The first one is DEVS, which is used to set something in a device; its identifier is 0x53564544 and it takes two parameters: the device ID and the value you want to set. For the backlight, this device ID is 0x00050012 and the value is the brightness value (between zero and 15). These parameters can be translated into the following C structure:
struct bios_args {
u32 dev_id;
u32 ctrl_param;
};
The second method is named DSTS; it can be used to get the state of a device. Its identifier is 0x53544344 and it takes only one parameter: the device ID, which is the same one used for DEVS.
In summary: we have the GUID of our device, the IDs of the methods we want to call, and their custom magic parameters. Let's translate that to C and put it at the beginning of our driver.
#define EEEPC_WMI_MGMT_GUID "97845ED0-4E6D-11DE-8A39-0800200C9A66"
#define EEEPC_WMI_METHODID_DEVS 0x53564544
#define EEEPC_WMI_METHODID_DSTS 0x53544344
#define EEEPC_WMI_DEVID_BACKLIGHT 0x00050012
The next thing to do is to write two helpers for DEVS and DSTS because they can be used not only for the backlight, but also probably to implement rfkill for Bluetooth and WIFI.
DEVS is used to set a state for a given device. It takes a device ID and a custom parameter, which are passed using the bios_args structure in the input buffer. This helper is pretty simple.
static acpi_status eeepc_wmi_set_devstate(u32 dev_id, u32 ctrl_param)
{
struct bios_args args = {
.dev_id = dev_id,
.ctrl_param = ctrl_param,
};
struct acpi_buffer input = { (acpi_size)sizeof(args), &args };
return wmi_evaluate_method(EEEPC_WMI_MGMT_GUID, 1,
EEEPC_WMI_METHODID_DEVS, &input, NULL);
}
Calling DSTS is a little more complicated because it returns a value. In wmi_evaluate_method() we put the dev_id in input, and create an output buffer that will hold the return value. Then we check that the return value is really an integer (because we want an integer for brightness level, and we know that the DSDT should return one).
static acpi_status eeepc_wmi_get_devstate(u32 dev_id, u32 *ctrl_param)
{
struct acpi_buffer input = { (acpi_size)sizeof(u32), &dev_id };
struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj;
acpi_status status;
u32 tmp = 0;
status = wmi_evaluate_method(EEEPC_WMI_MGMT_GUID, 1,
EEEPC_WMI_METHODID_DSTS, &input, &output);
if (ACPI_FAILURE(status))
return status;
obj = (union acpi_object *)output.pointer;
if (obj && obj->type == ACPI_TYPE_INTEGER)
tmp = (u32)obj->integer.value;
if (ctrl_param)
*ctrl_param = tmp;
kfree(obj);
return status;
}
Now, we have two helpers that can easily be used to set and get the state for a given device. We know the dev_id for the backlight, and we just need to link that with backlight_device callbacks using 0x00050012 as the dev_id.
static int read_brightness(struct backlight_device *bd)
{
    u32 ctrl_param;
    acpi_status status;

    status = eeepc_wmi_get_devstate(EEEPC_WMI_DEVID_BACKLIGHT, &ctrl_param);
    if (ACPI_FAILURE(status))
        return -1;
    return ctrl_param & 0xFF;
}
static int update_bl_status(struct backlight_device *bd)
{
    u32 ctrl_param;
    acpi_status status;

    ctrl_param = bd->props.brightness;
    status = eeepc_wmi_set_devstate(EEEPC_WMI_DEVID_BACKLIGHT, ctrl_param);
    if (ACPI_FAILURE(status))
        return -1;
    return 0;
}
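To make these two callbacks do anything, they still have to be registered with the backlight class. The real eeepc-wmi driver does that from its probe routine; what follows is only a minimal sketch, not the driver's actual code. It assumes a struct device pointer coming from that probe path, the zero-to-15 brightness range described above, and the backlight_device_register() variant that takes a backlight_properties argument (available since 2.6.34). The function name and the global variable are hypothetical.

#include <linux/backlight.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/fb.h>
#include <linux/string.h>

static const struct backlight_ops eeepc_wmi_backlight_ops = {
    .get_brightness = read_brightness,
    .update_status  = update_bl_status,
};

static struct backlight_device *eeepc_wmi_backlight;

/* Hypothetical init helper; "dev" would come from the driver's probe routine. */
static int eeepc_wmi_backlight_init(struct device *dev)
{
    struct backlight_properties props;
    int brightness;

    memset(&props, 0, sizeof(props));
    props.max_brightness = 15;    /* 0..15, as DEVS expects (see above) */

    eeepc_wmi_backlight = backlight_device_register("eeepc-wmi", dev, NULL,
                                                    &eeepc_wmi_backlight_ops,
                                                    &props);
    if (IS_ERR(eeepc_wmi_backlight))
        return PTR_ERR(eeepc_wmi_backlight);

    /* Start from whatever brightness the firmware reports right now. */
    brightness = read_brightness(eeepc_wmi_backlight);
    eeepc_wmi_backlight->props.brightness = brightness < 0 ? 0 : brightness;
    eeepc_wmi_backlight->props.power = FB_BLANK_UNBLANK;
    backlight_update_status(eeepc_wmi_backlight);

    return 0;
}

The matching teardown is a backlight_device_unregister() call in the driver's remove path.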
And we're done! The Eee PC WMI device is a simple one, but the principle should be the same for other WMI devices. I chose this one because we waited a long time for this driver; Yong Wang finally wrote it for 2.6.35. The driver is young and really easy to read, so it makes a good example.
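As for the rfkill idea mentioned above, the same eeepc_wmi_set_devstate() helper is all that is really needed. The sketch below is an illustration only, not part of the eeepc-wmi driver: the WLAN device ID and the assumption that writing 1 turns the radio on are guesses that would have to be checked against the DSDT or vendor documentation, and all the names are hypothetical.

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/rfkill.h>

/* Assumed device ID for the WLAN radio; verify against the DSDT. */
#define EEEPC_WMI_DEVID_WLAN 0x00010011

static int eeepc_wmi_rfkill_set_block(void *data, bool blocked)
{
    u32 dev_id = (unsigned long)data;
    /* Assume DEVS takes 1 for "radio on", so invert rfkill's "blocked" flag. */
    acpi_status status = eeepc_wmi_set_devstate(dev_id, !blocked);

    return ACPI_FAILURE(status) ? -EIO : 0;
}

static const struct rfkill_ops eeepc_wmi_rfkill_ops = {
    .set_block = eeepc_wmi_rfkill_set_block,
};

/* Hypothetical registration helper; "dev" would come from the probe routine. */
static struct rfkill *eeepc_wmi_register_wlan_rfkill(struct device *dev)
{
    struct rfkill *rfkill;
    int err;

    rfkill = rfkill_alloc("eeepc-wlan", dev, RFKILL_TYPE_WLAN,
                          &eeepc_wmi_rfkill_ops,
                          (void *)(unsigned long)EEEPC_WMI_DEVID_WLAN);
    if (!rfkill)
        return ERR_PTR(-ENOMEM);

    err = rfkill_register(rfkill);
    if (err) {
        rfkill_destroy(rfkill);
        return ERR_PTR(err);
    }
    return rfkill;
}

A DSTS-based query callback could be added in the same way, but set_block is the only callback that rfkill requires.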
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Device drivers
Filesystems and block I/O
Memory management
Security-related
Virtualization and containers
Benchmarks and bugs
Miscellaneous
Page editor: Jonathan Corbet
Distributions
News and Editorials
OpenSUSE searches for its strategy
While Ubuntu is clearly focused on user-friendliness and Fedora has a "bleeding edge" approach (although it has sometimes struggled with its identity), openSUSE lacks a similar message. Who is the target user? What are the long-term goals of the distribution? What is its unique selling point? For the past few months, the openSUSE Board worked together with some community members on a more focused strategy. The question they want to answer to themselves and to the rest of the world is "Why choose openSUSE?".
To answer this question, the Board looked at data from various sources, including market share figures and the openSUSE 2010 Survey that the project ran in February, which produced some useful results [PDF]. They also held a series of strategy sessions on IRC with many discussions about the role and the future of openSUSE.
During the weekend of May 28th, the members of the Strategy Team met in Nuremberg to bring together all the collected data and discussion points into a cohesive and unifying statement. At this openSUSE Strategy Meeting, they also formalized a draft of the strategy.
The documents that are available are an interesting read. First there is the Process document that describes why a clear strategy is needed and how the strategy will be developed. "Understand the industry" is a document describing the broader picture, including openSUSE's direct competitors and a characterization of the market and its customers. And last but not least, the SWOT document gives a list of strengths, weaknesses, opportunities, and threats. Readers are warned, though, that these documents are the result of brainstorming sessions and are therefore subjective and unpolished.
Strengths and weaknesses
At the strategy meeting, the team concentrated on the strengths and weaknesses openSUSE has, taking into account the competition and the expectations for future changes in the way we use computers. Concentrating on the strengths makes sense, according to openSUSE Board member Bryen Yunashko:
The summary of strengths is a mix of community-related and technical topics. On the community level, there is for example the Boosters Team with their mantra "Grow Community by Enabling Community". This is a group of Novell employees who are dedicated to openSUSE development and working with the community. The distribution also has and attracts many users with a strong technical background. On the technical level, openSUSE offers an ecosystem of tools around the distribution, such as the openSUSE Build Service and the feature-tracking system openFATE; it also has some excellent in-system tools such as YaST and zypper. From the end user's point of view, openSUSE comes with good hardware support and high quality; it's also the only distribution where the user can choose among multiple desktop environments in the standard installer.
The weaknesses that came out of the SWOT analysis were not listed in the strategy meeting wrap-up, but it's interesting to check them out in the SWOT document (keeping in mind that it's brainstorm material). With respect to the market, Novell is not seen as being as open-source-friendly as Red Hat. As for the quality of the distribution, a lot of features are not documented, with the result that they are not used or integrated. Concerning the software, Java support is called "awful". The openSUSE Build Service is deemed too complex to use and not well-known enough. And on the community and marketing level, there are a lot of weaknesses listed, including no dedicated professional graphics artists, not enough local events for contributors, and so on. The openSUSE community is also not considered welcoming to new participants, and there is decreasing support from Novell engineers.
Choosing a strategy
Knowing your strengths is one thing, but deciding on a strategy that builds on those strengths is a lot more difficult. For its strategy development process, the openSUSE Strategy Team was inspired by Michael Porter's approach at Harvard Business School. It's important to note that strategy is always about trade-offs; its goals are always long-term (at least three to five years in IT) and it is always rooted in the context of competitors. In the domain of operating systems, this means, for example, that it really doesn't make sense for several Linux distributions to have very similar strategies. They need not be radically different, but the more distributions differ (within limits), the better it is for everyone.
That's why the Board brainstormed about competitive advantages, things that the distribution does better than the competition or that make it unique:
As outlined in the process document, three questions should be asked about any competitive advantage: at whom it is aimed, how sustainable it would be, and what activities are needed to build it. A few competitive advantages should then be condensed into a strategy statement that is easy to grasp. The trade-off here is that if openSUSE chooses a strategy that is too broad, it becomes less realistic to achieve the goals, but if the strategy is too focused, the distribution will lose a number of users for whom it no longer offers an interesting solution.
With this information about clusters of competitive advantages, the team tried to find some valid strategies for openSUSE. The openSUSE Strategy Meeting ended up with three possible strategies:
- openSUSE the home for developers (distro, tools, apps)
- openSUSE the base for derivatives of any kind (eg. openSUSE Education, openSUSE XYZ)
- openSUSE for the mobile world (be the glue between mobile services (clouds) and mobile consumers)
On June 8, these three proposals were to have been presented and opened up for 30 days of public discussion, but the release date has been postponed to June 17. After publication of the proposals, feedback will be used to enhance or change them, and after that, openSUSE members will be able to vote on which strategy is the right one to go with.
Reception
On the opensuse-project mailing list, Marcus Moeller posted a response to the strategy meeting wrap-up, which kicked off an interesting discussion about some of the listed strengths and weaknesses. In the comments on the wrap-up blog post, the strategy proposals were fairly quietly received. For example, one person pointed out the risk of targeting developers:
Alberto Passalacqua also made the valid observation that the proposals don't seem to be focused well enough:
Independence
A separate and somewhat heated discussion spun off about the distribution's dependence on Novell. Trifle Menot put it this way:
Yunashko disagreed, replying:
But there is a concern that openSUSE would not survive without backing from Novell. It's difficult to attract volunteers for a project that is in that state, as Trifle points out:
That leads to a chicken-and-egg problem, though. Without volunteers, it's difficult for openSUSE to be independent. Vincent Untz describes the problem:
And is the dependence on Novell really such a big problem? Passalacqua argues that Novell's commercial backing is a strong point for openSUSE:
All in all, some openSUSE members are not happy with the power that Novell has in the openSUSE project and think that independence from Novell would result in a better openSUSE. The fear that Novell will be bought by a party that pulls the plug clearly exists. According to Andreas Jaeger, though, openSUSE is already evolving into an independent community, and he maintains that the openSUSE Foundation that is in the works will be able to attract more corporate sponsors and build a stronger openSUSE community.
Conclusion
While the discussion about openSUSE's strategy spun off into the specifics of the project's relationship with Novell, many participants seem to be aware that there are more pressing matters: what openSUSE lacks, and what other distributions have, more or less, is a clear strategy, long-term goals, or a unique selling point. It remains to be seen if one of the proposed strategies is powerful enough to give the distribution its own raison d'être. Without the full proposals it's difficult to say, but, as was mentioned, the current one-line descriptions do seem either too specific or too generic.
New Releases
Maverick Meerkat (Ubuntu 10.10) Alpha 1 released
The first alpha version of Ubuntu 10.10 (Maverick Meerkat) is now available. "Pre-releases of Maverick are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu developers and those who want to help in testing, reporting, and fixing bugs." Click below for the full announcement.
Distribution News
Fedora
Final Board appointment
Stephen Smoogen has been appointed to the Fedora Project Board. "Stephen brings many years of experience with the Fedora Project in a variety of areas, and as a volunteer contributor and a Red Hat employee at various times. This appointment completes the normal succession process for the Board for this term."
Ubuntu family
Ubuntu sparc and ia64 ports
Ubuntu's SPARC and IA64 ports are in danger of being decommissioned. "If Ubuntu users or developers wish this port [SPARC] to continue, they should group together to take over maintenance and ensure that the state is improved above the minimum level before that date. This will almost certainly require substantial work on the toolchain and kernel... Likewise if a maintenance team does not step forward to take over, the [IA64] port will be decommissioned."
Newsletters and articles of interest
Distribution newsletters
- CentOS Pulse 1004 (June 8, 2010)
- DistroWatch Weekly (June 7, 2010)
- Fedora Weekly News 228 (June 2, 2010)
- openSUSE Weekly News/126 (June 5, 2010)
- Ubuntu Weekly Newsletter 196 (June 5, 2010)
Shuttleworth: Linaro: Accelerating Linux on ARM
Mark Shuttleworth posts his thoughts about Linaro. Canonical will be working closely with the project, which has plans for six-month release cycles like Ubuntu. Linaro will also be using some Ubuntu infrastructure such as Launchpad. "And finally, there are teams aimed at providing out of the box 'heads' for different user experiences. By 'head' we mean a particular user experience, which might range from the minimalist (console, for developers) to the sophisticated (like KDE for a netbook). Over time, as more partners join, the set of supported 'heads' will grow — ideally in future you'll be able to bring up a Gnome head, or a KDE head, or a Chrome OS head, or an Android head, or a MeeGo head, trivially. We already have [good] precedent for this in Ubuntu with support for KDE, Gnome, LXE and server heads, so everyone's confident this will work well."
Robby Workman Answers 13 Questions on the Occasion of Slackware-13.1 Release (The Slack World)
The Slack World interviews Robby Workman about the release of Slackware 13.1. "13.1 was intended as more of a "let's polish 13.0" release as opposed to "here's some new features" release, so ideally, there's not a whole bunch of user-visible changes in it. That being said, there's the addition of polkit that might be noticeable in some cases while using kde; there's the new bluetooth stack (bluez4) that is greatly improved in some respects but does have some regressions for some users; there are some enhancements to Xorg and Xfce (and obviously KDE) that should offer improved stability and some minor new functionality—as an example, you might notice some new notification popups in Xfce..."
Fedora 14 - three new features
Rahul Sundaram looks at three new features in Fedora 14. "LZMA/Xz is a relatively new and better method for compression and Fedora has increasingly been taking advantage of it. Fedora switched over from gzip to using LZMA for the RPM payload back in Fedora 12. Fedora 14 will combine the previous feature with the next step of better compression in the live image itself."
Page editor: Rebecca Sobol
Development
A look at GNOME Shell
It's been more than a year since LWN looked at GNOME Shell in what was still a primitive state. With only a few months left until the scheduled release of GNOME 3.0 at the end of September, and after more than a year's development, it seemed like a good time to take another look.
GNOME Shell is a compositing manager that works on top of Mutter — a branch of the Metacity window manager. With GNOME 3.0 Metacity will be in maintenance mode only, and Mutter will be GNOME's window manager going forward. Mutter uses the Clutter toolkit for rendering. In practical terms, GNOME Shell provides the actual desktop environment for GNOME 3.0. It displays and manages windows, provides a panel for displaying system notifiers, launches applications, and shows recently used files.
Getting GNOME Shell is relatively easy, if you're running a very current Linux distribution and have the right graphics card and setup. Ubuntu 10.04, Fedora 13, Debian testing and unstable, and openSUSE 11.2 all include gnome-shell packages for testing. I tried running GNOME Shell with the packages supplied for Ubuntu 10.04, but Mutter failed to start.
Fedora 13 provided slightly better results. After installing the gnome-shell package, Fedora adds a new option to the Desktop Effects dialog for GNOME Shell. Checking that should automatically start GNOME Shell if the graphics system supports it. Mutter requires 3D acceleration, but Nouveau and GNOME Shell do not play well together. This was a bit of a surprise, as Nouveau does just fine with Compiz on Fedora 13.
Finally, I tried GNOME Shell on a machine with Intel graphics on Fedora 13. GNOME Shell worked well on this machine and had good performance. It showed itself to be stable and feature-complete enough for everyday use, though not all features have been implemented yet. For example, the GNOME Shell design document [PDF] calls for a message tray that will display events and messages, but this is not present in the implementation shipped with Fedora 13. Matt Novenstern recently provided an update on his progress, but the message tray is still under heavy development.
What happens if you don't have supported 3D hardware? No GNOME Shell for you, though the GNOME Project will still make it possible to run the GNOME 2 shell with GNOME 3 applications and libraries.
Shell replaces the GNOME Panel, taking over the job of managing the desktop from Nautilus and providing some new ways to manage windows. The concept of switching between windows using a menu or toolbar buttons (as is the norm in GNOME 2.x) is gone. Instead, users can use Alt-Tab to switch between windows or move the mouse cursor to the top of the window and select between windows and/or workspaces. Users can also see all open windows and the GNOME menus by clicking the Activities button or pressing the System (Windows) key. New workspaces can be added (or removed) by clicking a button in the lower left-hand corner of the screen.
Alt-Tab works slightly differently with GNOME Shell than with Metacity. Instead of switching between all active windows, it displays all active applications, with a drop-down menu for windows owned by each application. For instance, if Firefox has three open windows, it will display one thumbnail for Firefox and a triangle at the bottom that indicates there's more than one window to choose from.
The GNOME Shell panel is not planned to support GNOME applets, so users who depend on particular applets (like GNOME Time Tracker) are going to be out of luck. Owen Taylor laid out the rationale for omitting applets in April 2009.
The panel also doesn't make the best use of space in the current implementation. The panel is a flat black bar that displays system tray notifiers, date and time, a logout menu, a button that displays the active application (and does little else), and the Activities menu.
Aside from missing applets, though, GNOME Shell worked fine with all of my day-to-day applications. In its current state, GNOME Shell is slightly less functional than Metacity and the GNOME Panel. The Applications menu could use some work, as it just displays all the applications GNOME knows about in a flat grid. This is a work in progress, though. The usability trade-off may be worth it in the long run when some of these problems are addressed. Even on a relatively small screen (1280x800 resolution) GNOME Shell makes it easier to manage a lot of open windows.
GNOME Shell and Accessibility
One of the concerns with any major revamp like GNOME Shell is the impact it will have on GNOME accessibility (a11y). The GNOME accessibility team has been working hard on GNOME 3.0. Alejandro Piñeiro Iglesias, maintainer of Cally, the accessibility implementation library for Clutter, says that there's room for improvement, but that Cally is in good shape at the moment.
Piñeiro says that, ultimately, he wants to see Cally become part of Clutter rather than a standalone library. This isn't the case at the moment, and is unlikely to happen by GNOME 3.0. Piñeiro says that when using the Cally patches to Clutter he's been able to use the Orca screen reader with GNOME Shell, though "the functionality is limited". Presumably this will be improved by the final release.
Taylor said in March that he'd like to see accessibility "held to the same high standards as everything else in GNOME" with accessibility features on by default and a user experience that "just works". But it won't happen by GNOME 3.0:
Getting accessibility fully to that standard isn't going to happen for GNOME 3.0... we've never been there for GNOME 2, we aren't going to be there in 4-5 months even if it was the only thing we worked on.
But where I definitely want to be for GNOME 3.0 - in the next 4-5 months is to make sure we've laid the groundwork properly so that we can get there in follow-on releases, both on a technical level and on a user-experience level.
Piñeiro says he'll continue working towards full integration of Cally with GNOME Shell, and points out that there are a number of other features to implement such as keyboard navigation and theming. GNOME Shell provides the ability to create new themes, though it only ships with one at the moment, and work will need to be done to ensure that GNOME Shell has a selection of accessibility-friendly themes.
It's clear that GNOME Shell will need to continue to mature after the GNOME 3.0 release. What's less clear is exactly how it will proceed. As discussed on the gnome-shell-list, various design documents and roadmaps are spread out a bit. The most authoritative are the design document and the roadmap on the GNOME wiki.
One area that is wide-open is the "social dimension / collaboration" mentioned in the design document. This is left open for version 2.0 of the Shell, but the basic components are in view now. The (not yet implemented) message menu is meant to work hand in hand with the Telepathy communications framework and the applications it supports, like the Empathy instant messaging client and Gwibber social client. Eventually the message menu should be used to show instant messages, system notification, etc. — though the user should be able to block this by setting their status as busy.
The GNOME Shell plan also calls for the ability to create extensions using JavaScript and CSS. The extensions are intended to add functionality to GNOME Shell, but not to replace applets. Extensions are meant to be a way to make changes to the way GNOME Shell handles things like window management or application launching without having to actually hack the Shell itself. GNOME Shell already has a functioning debugger called Looking Glass for prospective extension authors, but there are no extensions in the wild yet.
GNOME 3.0 is more than GNOME Shell
Because GNOME Shell is the most visible major change to GNOME, it has drawn the most attention. However, GNOME 3.0 is more than just GNOME Shell. In addition to the work that's gone into GNOME accessibility, GNOME 3.0 should inherit multitouch support from GTK+ 3.0, along with major improvements in the help system via Yelp.
Users thinking about trying out the GNOME Shell should check out the GNOME Shell Cheat Sheet, which includes a list of built-in features and instructions on using them. Even in its unfinished state, the GNOME Shell should be stable enough for most LWN readers to use. Interested contributors should join the gnome-shell-list mailing list and see the GNOME 2.31.x development series page, contributor guide, and the GNOME Shell Todo for further information.
Though GNOME 3.0 is due to be released by the end of September, GNOME Shell may not land on many GNOME users' desktops until 2011. Mark Shuttleworth has already indicated that GNOME Shell won't ship as the default with Maverick Meerkat (Ubuntu 10.10). GNOME 3.0 will miss the 11.3 release of openSUSE, and Debian Squeeze is currently planned to ship with GNOME 2.30. The first major distribution to ship GNOME 3.0 with GNOME Shell will likely be Fedora, as Fedora 14 is due to hit about a month after the GNOME 3.0 release.
Brief items
Quote of the week
Ever since we started to seriously get into OpenBSC to run GSM networks, I've been looking forward to running GPRS networks, too. What most people don't know: GPRS is radically different from GSM. It basically only shares the frequencies and timeslot architecture of the physical layer, while having it's own layer1, layer2 and various other protocol layers. Also, its signalling and data completely bypass the usual BSC and MSC components of a GSM core network.
Konversation 1.3 for KDE 4 released
The Konversation IRC client has released its 1.3 version with some significant new features including the DCC whiteboard extension and integration with KDE's SSL certificate store. "Konversation 1.3 debuts a major new feature in the area of Direct-Client-to-Client (DCC) support: An implementation of the DCC Whiteboard extension that brings collaborative drawing - think two-player Kolourpaint - to IRC."
The LLDB debugger launches
The LLVM project has announced the existence of the LLDB debugger. "While still in early development, LLDB supports basic command line debugging scenarios on the Mac, is scriptable, and has great support for multithreaded debugging. LLDB is already much faster than GDB when debugging large programs, and has the promise to provide a much better user experience (particularly for C++ programmers). We are excited to see the new platforms, new features, and enhancements that the broader LLVM community is interested in."
OpenOffice.org 3.2.1 released
The OpenOffice.org Community has announced the availability of OpenOffice.org 3.2.1. "OpenOffice.org 3.2.1 is a so-called micro release that comes with bugfixes and improvements, with no new features being introduced. This release also fixes security issues, so we recommend everyone to upgrade to the new version as soon as possible." This is the first version released since Oracle became the project's new main sponsor.
PostgreSQL 9.0 Beta 2 Now Available
The second beta of PostgreSQL 9.0 has been released. There are some significant changes from beta1, including security fixes. "Note that, due to a system catalog change, an initdb and database reload will be required for upgrading from 9.0Beta1. We encourage users to use this opportunity to test pg_upgrade for the upgrade from Beta1 or an earlier version of 9.0. Please report your results."
Python 2.7 release candidate 1 released
Release candidate 1 for Python 2.7, which is planned to be the last major version in the 2.x series, has been released. "2.7 includes many features that were first released in Python 3.1. The faster io module, the new nested with statement syntax, improved float repr, set literals, dictionary views, and the memoryview object have been backported from 3.1. Other features include an ordered dictionary implementation, unittests improvements, a new sysconfig module, and support for ttk Tile in Tkinter. For a more extensive list of changes in 2.7, see http://doc.python.org/dev/whatsnew/2.7.html or Misc/NEWS in the Python distribution."
Rockbox 3.6 released
Version 3.6 of the Rockbox music player firmware system is out. "Four months have passed since the last release, and in that time we've been busy adding new supported devices, adding features and fixing bugs to give you the best Rockbox experience yet on the widest range of targets ever. With the addition of multiple new codecs, the ability to skin the FM screen, multiple fonts in themes, the hotkey feature and much more we are confident that this is the best version of Rockbox ever." See the release notes for details on what's new this time around.
Scribus 1.3.7 Released
The Scribus desktop publishing application team has announced the release of version 1.3.7, with many bug fixes and some minor enhancements. Those include new scripter functions, translation and documentation updates, fixes to the PDF bookmark export, new import filters for Mac PICT and Calamus Vector Graphics (CVG) files, and more. "The Scribus Team considers this version to be quite stable and ready for many real-world use cases. However, since there are still some annoyances that prevent us from releasing version 1.4, more cautious users may want to prefer to stick with the officially stable version 1.3.3.14."
SimPy 2.1.0 simulation package
The SimPy (simulation in Python) project has announced the release of 2.1.0, which features a refactored code base and two API additions, while keeping backward compatibility with 2.0.1 and earlier versions. "Many users say that SimPy is one of the easiest to use discrete event simulation packages. [...] It provides the modeler with components of a simulation model. These include processes, for active components like customers, messages, and vehicles, and resources, for passive components that form limited capacity congestion points like servers, checkout counters, and tunnels. It also provides monitor variables to aid in gathering statistics. [...] SimPy has plotting and GUI capabilities "out of the box". It comes with extensive documentation, tutorials and a large number of example models."
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (June 8)
- PostgreSQL Weekly News (June 6)
- Python-URL! (June 6)
- Tcl-URL! (June 7)
Macieira: Qt and Open Governance
Thiago Macieira writes about opening up Qt governance on the Qt Labs blog. "And we'll also need to open up the decision-making structure. That is to say, contributors who have shown themselves to be trustworthy and good at what they do deserve the right of having a say in the decisions. Take, for example some of the contributors of the past year: there are a couple of cases where they know the code better than people working in the Qt offices. We have come quickly to the point where we have to say 'I trust you that this contribution is good'. This is part of the meritocratic process that we want to have in place."
Page editor: Jonathan Corbet
Announcements
Non-Commercial announcements
KDE e.V. offers supporting memberships
KDE.News has the announcement that KDE e.V., the non-profit organization which supports KDE development, is seeking supporting members. "The goal of this programme is to get people more involved in the work of the e.V. - we will send supporting members quarterly reports, ask their opinions and in general keep them informed of our activities. They will also be able to attend the General Assembly of the e.V. membership at Akademy and follow the discussions (voting will remain a privilege of the core membership). Meanwhile, the money we receive will help us have a more sustainable financial basis and make us less dependent on a few generous supporters."
Commercial announcements
The Linaro consortium debuts
ARM, Freescale, IBM, Samsung, ST-Ericsson and Texas Instruments have announced the creation of a new nonprofit organization called Linaro which is aimed at helping the creation of mobile Linux-based systems. "Linaro will work with the growing number of Linux distributions to create regular releases of optimized tools and foundation software that can be used widely by the industry, increasing compatibility across semiconductors from multiple suppliers. As a result, Linaro's resources and open source solutions will allow device manufacturers to speed up development time, improve performance and reduce engineering time spent on non-differentiating, low-level software. Linux distributions, open source and proprietary software projects will benefit from Linaro's investment, with more stable code becoming widely available as a common base for innovation."
CTERA Networks announces Next3 File System for Linux
CTERA Networks has announced the availability of the Next3 file system for Linux. "Snapshots record the state of the file system at any given moment, creating a point-in-time copy of the data that can be used to restore previous versions of files. Versioning capabilities have been a key goal for storage systems, but until now no solution for Linux offered file-system level snapshots that made efficient use of disk space, stored snapshots reliably and maintained low performance overhead. For the first time, Next3 brings a free, GPL licensed file-system level snapshots solution for Linux users." LWN looked at Next3 back in May.
Articles of interest
Geist: The Canadian Copyright Bill: Flawed But Fixable
Canadian LWN readers may be interested in Michael Geist's blog post on the Copyright Modernization Act (or Bill C-32). "The one area where there is no compromise are the digital lock provisions. The prioritization of digital locks is the choice of the U.S. DMCA and is now the choice of the Canadian DCMA. In fact, the Canadian digital lock provisions are arguably worse than those found in the U.S., with fewer exceptions and greater difficulty to amend the rules. The Canadian DCMA provisions are virtually identical to the U.S. - a handful of hard-to-use exceptions, a ban on the distribution and marketing of devices (ie. software) that can be used to circumvent, and a presumption that any circumvention is an infringement." (Thanks to Barbara Irwin)
Legal Announcements
WebM gets a new license
Google has announced some changes in the licensing for the WebM codec. "Using patent language borrowed from both the Apache and GPLv3 patent clauses, in this new iteration of the patent clause we've decoupled patents from copyright, thus preserving the pure BSD nature of the copyright license. This means we are no longer creating a new open source copyright license, and the patent grant can exist on its own. Additionally, we have updated the patent grant language to make it clearer that the grant includes the right to modify the code and give it to others."
Contests and Awards
Chris Lattner gets first SIGPLAN award for LLVM work
The Association for Computing Machinery's Special Interest Group on Programming Languages has announced that LLVM creator Chris Lattner has won its first "Programming Languages Software Award." "Lattner and Vikram Adve initially developed LLVM as a novel research infrastructure when Lattner was a member of Adve's research group at the University of Illinois at Urbana-Champaign (UIUC). Lattner went on to extend it into a powerful, widely adopted commercial-quality product. LLVM was released as an open source infrastructure in October 2003, and has since enjoyed popular adoption in the academic, commercial and open source worlds."
Education and Certification
LPI and Ma3bar host Open Source "Train-the-Trainer" workshops for Middle East
The Linux Professional Institute (LPI) has announced a series of GNU/Linux "Train-the-Trainer" workshops for Linux professionals from throughout the Middle East on June 9-12 and June 14-17, 2010.
Meeting Minutes
GNOME Meeting Minutes Published - May 27, 2010
Click below for the minutes of the May 27, 2010 meeting of the GNOME Foundation board. Topics include Women Outreach Program, LiMo and GTK+, Sysadmin job, Finances, Event Updates, ...
Calls for Presentations
COSCUP / GNOME.Asia 2010 Call For Participants
The organizing team of GNOME.Asia Summit has announced the call for participants. The event will be held August 14-15, 2010 in Taipei, Taiwan. The submission deadline is June 25, 2010.
Upcoming Events
Events: June 17, 2010 to August 16, 2010
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| June 19 | FOSSCon | Rochester, New York, USA |
| June 21 - June 25 | Semantic Technology Conference 2010 | San Francisco, CA, USA |
| June 22 - June 25 | Red Hat Summit | Boston, USA |
| June 23 - June 24 | Open Source Data Center Conference 2010 | Nuremberg, Germany |
| June 26 - June 27 | PyCon Australia | Sydney, Australia |
| June 28 - July 3 | SciPy 2010 | Austin, TX, USA |
| July 1 - July 4 | Linux Vacation / Eastern Europe | Grodno, Belarus |
| July 3 - July 10 | Akademy | Tampere, Finland |
| July 6 - July 9 | Euromicro Conference on Real-Time Systems | Brussels, Belgium |
| July 6 - July 11 | 11th Libre Software Meeting / Rencontres Mondiales du Logiciel Libre | Bordeaux, France |
| July 9 - July 11 | State Of The Map 2010 | Girona, Spain |
| July 12 - July 16 | Ottawa Linux Symposium | Ottawa, Canada |
| July 15 - July 17 | FUDCon | Santiago, Chile |
| July 17 - July 24 | EuroPython 2010: The European Python Conference | Birmingham, United Kingdom |
| July 17 - July 18 | Community Leadership Summit 2010 | Portland, OR, USA |
| July 19 - July 23 | O'Reilly Open Source Convention | Portland, Oregon, USA |
| July 21 - July 24 | 11th International Free Software Forum | Porto Alegre, Brazil |
| July 22 - July 23 | ArchCon 2010 | Toronto, Ontario, Canada |
| July 22 - July 25 | Haxo-Green SummerCamp 2010 | Dudelange, Luxembourg |
| July 24 - July 30 | Gnome Users And Developers European Conference | The Hague, The Netherlands |
| July 25 - July 31 | Debian Camp @ DebConf10 | New York City, USA |
| July 31 - August 1 | PyOhio | Columbus, Ohio, USA |
| August 1 - August 7 | DebConf10 | New York, NY, USA |
| August 4 - August 6 | YAPC::Europe 2010 - The Renaissance of Perl | Pisa, Italy |
| August 7 - August 8 | Debian MiniConf in India | Pune, India |
| August 9 - August 10 | KVM Forum 2010 | Boston, MA, USA |
| August 9 | Linux Security Summit 2010 | Boston, MA, USA |
| August 10 - August 12 | LinuxCon | Boston, USA |
| August 13 | Debian Day Costa Rica | Desamparados, Costa Rica |
| August 14 | Summercamp 2010 | Ottawa, Canada |
| August 14 - August 15 | Conference for Open Source Coders, Users and Promoters | Taipei, Taiwan |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
