Weekly Edition for October 7, 2010

Trials, tribulations, and trademarks

By Jonathan Corbet
October 6, 2010
LWN has visited the issue of trademarks - and the Mozilla corporation's trademarks in particular - a number of times over the years, but not recently. This topic recently resurfaced on the Fedora development list, so it seems like time for another look. It is clear that heavy-handed trademark policies do not sit well with some members of the community, but are trademarks really a threat to free software?

Fedora's policies are not normally forgiving of packagers who want to bundle their own versions of libraries. Having multiple copies of libraries bloats the size of the distribution and makes it hard to fix any security problems in those libraries. This policy has, at times, made life difficult for packagers trying to get a new program (with a bundled library) into the distribution; such packagers are usually required to make the program work with the system's core libraries. There are exceptions, though, with Mozilla-based packages (Firefox, Thunderbird, and xulrunner) being at the top of the list.

Mozilla, in turn, is adamant about its right to bundle its own libraries. The project's recent rejection of a patch allowing the use of a system's version of libvpx was the immediate cause of the discussion in the Fedora community. Mozilla developer Chris Pearce justified the decision this way:

Sorry, we won't take this. We prefer to ship our own copies of the media libraries, as if necessary we can cherry-pick a critical security fix and push out a release quickly, rather than relying on the distros to update their libraries. We can guarantee the safety and stability of our libraries this way.

Firefox is free software; Fedora is free to modify its build to make Firefox use Fedora's own libvpx. The catch, of course, is the trademark policy: if Fedora makes this kind of change, it can no longer call the browser "Firefox." That is a restriction which rubs some developers the wrong way. Some users have gone as far as to claim that trademark restrictions make the software non-free:

If the owner of the trademark doesn't grant a license that is compatible with a free software license, then the software is non free. Linus doesn't go around telling people they can't redistribute a modified linux kernel. His only restriction on the linux trademark is that it is used to label things that use the linux kernel.

Such users have been calling on Fedora to drop Firefox and take the iceweasel route. It is worth noting that the people asking for this change are not the people who would have to do the work. And it seems that the amount of work would be considerable. In fact, we're told that Fedora's maintainers cannot really keep up with Firefox etc. now; they have little appetite for taking on more work to get away from the trademark policy. As Rahul Sundaram put it:

Ignoring upstream and patching without consent is only feasible if you have the amount of resources to do a good job with that. Fedora doesn't have that.

In fact, according to Adam Williamson, Fedora's policy with regard to Firefox is not driven by the trademark policy anyway:

Practically speaking, [iceweasel] would add an extra burden to the maintainers, who already do not have enough resources to deal with all the issues. Again, the reason we don't carry non-upstream patches in Firefox has nothing to do with the branding issue. It's because we don't have the resources to maintain non-upstream patches in Firefox.

This claim was not accepted by all members of the Fedora community. Toshio Kuratomi responded:

I wish people would stop repeating this particular bit of justification for the issue of bundling libraries. I can see it for other suggested patches for firefox but in the case of bundled libraries, this is work that we require of all packages because there's security ramifications for our product, the Fedora distribution by not unbundling.

One suspects that, in the absence of the trademark issue, there would be more pressure within Fedora to simply fix the bundled library problem. But nobody wants to take on the extra burden that would be imposed by forking Firefox - even if it's a fork which simply tracks upstream with a few added changes.

Beyond that, it has been noted that Fedora, itself, has a similar trademark policy in place. Maintaining that policy while protesting Mozilla's seems a little inconsistent.

Trademarks often seem at odds with the ideals of free software; they may not place restrictions on what can be done with the code, but they do restrict the combination of the code and a name. Many people in the community (and here at LWN) have worried that this control could be used to restrict the community's freedom in unwelcome ways. Clearly, some people not only fear that it could happen, but believe that it is happening now.

That said, we now have roughly ten years of experience with the combination of trademarks and free software. That experience has certainly proved irritating at times. But it has not proved disastrous. In the end, the power of a name is not as strong as the power behind the freedom to fork. Losing the XFree86 name did not hinder the X.org fork, and the OpenOffice.org trademark has not stopped LibreOffice. After this much time, it is tempting to conclude that free software and trademarks can live with each other - or, more exactly, that separating the two is done easily enough when the need arises. Obnoxious trademark policies are still worth protesting, but we need not fear that they threaten free software as a whole.

Comments (58 posted)


Rockbox on Android

By Jonathan Corbet
October 6, 2010
Your editor's iRiver H340 music player attracts stares in the crowded confines of the economy class cabin; it is rather larger than many newer, more capable devices, contains a rotating disk drive, and looks like it should have a smokestack as well. But your editor has continued to nurse this gadget for a simple reason: it is no longer possible to buy anything else like it. The device is open, has a reasonable storage capacity, and is able to run Rockbox. It is, thus, not just running free software; it is far more functional and usable than any other music player your editor has ever encountered. These are not advantages to be given up lightly.

Why can't the H340 be replaced? Flash storage is one of the reasons. A solid state disk makes obvious sense in a portable music player, but an immediate result of their adoption was a reduction in the storage capacity of the players. Your editor, who has had a lot of time to accumulate a music collection, does not want to select the music he will hear prior to leaving the house. Some time recently spent in Akihabara shows that capacities are slowly growing, but there was only one non-iPod device on offer which matches the H340: a pretty Sony player which does not support useful formats (e.g. Ogg) and which is certainly difficult to put new firmware onto. Needless to say, there is no Rockbox port for that Sony player. In conclusion: there is still nothing out there as good as the H340, at least for your editor's strange value of "good."

There are a couple of conclusions to be drawn here: (1) the market for personal music players may well be in decline, so newer, better players are not coming as quickly as one might like, and (2) the players which continue to exist are increasingly closed and unlikely to run Rockbox. This discouraging trend has been evident for a while, but there is hope. One of the reasons for the apparent decline of standalone media players must certainly be the growth of smartphones. A decent phone is able to run a music player; why carry two devices when one will suffice? Unfortunately, the music players available on most of these devices leave something to be desired. Even if they handle a wider variety of formats (as Android-based players tend to), they lack other important functionality: gapless playback and bookmarks being at the top of your editor's list. Using a phone-based music player after becoming accustomed to Rockbox feels like going several steps backward.

Enter the Rockbox Android port, which is actually a subset of the "Rockbox as an application" port. The core idea behind this port is that the days of standalone media players might just be coming to an end, while the days of much more powerful mobile computers are just beginning. Contemporary mobile systems can run a real operating system; they are thus open to the installation of specialized applications. The ability of Rockbox to run on a variety of hardware platforms is valuable, but what really distinguishes Rockbox is the intensive attention that has been put into making it the best media player available. So it makes sense to think about dropping the hardware support and hosting Rockbox as an application on top of another operating system.

Let it be said from the outset: Rockbox on Android is far from being ready for general use, and its developers know it. For those who want to try it out, there are prebuilt Android packages for a few screen sizes, but users are cautioned against expecting too much, and the developers don't even want to hear about bugs encountered with the prebuilt versions. Anybody who seriously wants to try Rockbox on Android needs to build it from source; if nothing else, the target's display size must be selected at build time. The build process is not trivial - one must install the Android SDK and native application development kit - but it is not particularly painful either. The end result is a rockbox.apk file which can be installed on a convenient handset.

[Rockbox main menu] Running the application is likely to be most confusing for the unprepared user, though. The traditional top-level Rockbox menu appears on-screen, but the result of tapping a menu entry is not what one would expect; indeed, the application's response to touch events seems to be nearly random. After digging in the forums, your editor stumbled across this bit of helpful advice:

Imagine that your screen is a 3x3 grid, where the middle is used as the selector, left-right-up-down are used as cursor keys. The other directions have special functions in some screens, e.g. in Now Playing screen with the upper left you can access some playback mode settings.

In short: the Rockbox user interface was not designed with touch screens in mind, so the developers have partitioned the screen and mapped the pieces onto the arrows and buttons found on a typical old-school media player - without putting any indication on the screen that it has been so divided. To say that this decision violates the principle of least surprise is a bit of an understatement but, once the nature of the interface has been understood, Rockbox can be made to work as expected. Your editor is listening to music from the Android Rockbox client as this is being typed.
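The 3x3 grid described in that forum advice amounts to a simple coordinate-to-button translation. Here is an illustrative sketch of the idea in Python - Rockbox itself is written in C, and the grid layout and button names below are assumptions for illustration, not Rockbox's actual code:

```python
# Illustrative sketch (not Rockbox's actual code): divide the touchscreen
# into a 3x3 grid and map each cell onto a virtual media-player button.
# The cell names here are invented for the example.

BUTTONS = [
    ["topleft",    "up",     "topright"],     # top row
    ["left",       "select", "right"],        # middle row (center = selector)
    ["bottomleft", "down",   "bottomright"],  # bottom row
]

def touch_to_button(x, y, width, height):
    """Translate absolute touch coordinates into a virtual button name."""
    col = min(3 * x // width, 2)    # 0, 1, or 2
    row = min(3 * y // height, 2)
    return BUTTONS[row][col]
```

With this mapping, a tap in the exact middle of the screen acts as "select" while a tap near the right edge acts as a "right" cursor key - which explains why tapping directly on a menu entry rarely does what a touchscreen user expects.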

As it turns out, deep in the settings menu there is an option to switch the touchscreen interface to "absolute mode." That causes taps on menu entries to do the expected thing. There is still a lot of work needed to make the interface truly touch-friendly, though - or even to make basic things like the "back" button function properly. It is sometimes possible to get stuck in screens where exit seems to be impossible. The "while playing" screen [Rockbox WPS] operates in strange and mysterious ways. Fixing all of this will require a bit of time by a determined user-interface developer, but there should not be any fundamental challenges involved.

Unsurprisingly for a port in such an early state, there are a number of other glitches and shortcomings waiting to be discovered. Some functionality has not yet been implemented - support for the FM radio (if present) and audio recording top that list. Attempts to use the database feature lead to "panic" messages and/or locked screens. The plugin feature does not appear to work at all - but it is also far from clear that plugins make any sense in the Android environment. Rockbox has its own idea of the playback volume which is separate from the Android system's. And so on.

That said, the Rockbox-on-Android developers have made it clear that this idea can work. The hard part appears to be done; now it's just a matter of tying up a fair number of loose ends. OK, it's a matter of tying up a lot of loose ends.

So, one might ask, is the H340 going into a well-earned retirement? Not quite yet. Your editor must still wait until he has a handset with sufficient storage to hold at least a significant part of the music/podcast collection; the Nexus One does not qualify - though an SD card upgrade would make some real progress in that direction. There is another important requirement, though: a media player must have sufficient battery life to get through a long transoceanic flight without leaving the traveler phoneless at the other end. An overnight test showed that a fully-charged Nexus One in airplane mode can run Rockbox continuously for about 18 hours - not bad, but not quite enough for a long trip where the phone will be used for purposes other than just playing audio.

So the H340 will likely have to rock on for a little longer. But the writing is on the wall: there will probably not be a standalone replacement for that faithful piece of hardware. Regardless of whether your editor's next phone runs Android, MeeGo, or something else entirely, it appears that there will be a highly capable, GPL-licensed music player application available for it. It's hard to complain about that.

Comments (39 posted)

Page editor: Jonathan Corbet


Questions about Android's security model

October 6, 2010

This article was contributed by Nathan Willis

Mobile device security has become a hot topic in recent years as always-on network connectivity has become widespread among smartphone users. Security holes in the operating system itself are certainly an issue, but the bigger threat seems to come from third-party applications distributed widely through web stores and marketplaces. Although Google's Android platform takes steps to isolate applications from each other and has a rigid permissions system, a series of recent events has called into question whether that security model offers significant protection from malicious third-party code.

An example of a "traditional" take on Android's application security model might be the one described in a recent blog post contrasting the Android Market with Apple's App Store. Apple strictly curates what programs are accepted and made available to consumers through its store, while Google offers no such authoritative policing of the Android Market. On the other hand, Google, like Apple, does have a remote "kill switch" it can use to deactivate rogue applications.

In addition to the distribution models, the two platforms also differ in their application permission systems. Apple alerts the user if an application attempts to use "push" services or request the device's location through GPS; the user must approve or deny each individual request. Android instead has a predefined set of permissions, and an application must register its intent to use each one. The user is notified of every application's permission requests at install time, and can later check the list from a control panel. The list of permissions is quite long and specific, Android defenders might say, and exposing it to the user makes Android Market applications safer than App Store downloads, which cannot be audited at all.
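For the curious, an Android application declares its permission requests as <uses-permission> elements in its AndroidManifest.xml. The following rough sketch extracts that list with Python's standard XML parser; note that the manifest inside a real APK is stored in a binary XML encoding and must first be decoded with an external tool, and the sample manifest here is hypothetical:

```python
# Sketch: list the permissions an Android app declares, given a decoded
# (plain-text) AndroidManifest.xml. The manifest inside a real APK is
# binary-encoded and must be decoded first; this sample input is made up.
import xml.etree.ElementTree as ET

# The android: attribute prefix expands to this XML namespace.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def requested_permissions(manifest_xml):
    """Return the permission names declared in a manifest document."""
    root = ET.fromstring(manifest_xml)
    return [elem.get(ANDROID_NS + "name")
            for elem in root.iter("uses-permission")]

manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
</manifest>"""
```

This is essentially the information the Android installer shows to the user at install time - the declared permissions, but nothing about how the application will use them.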

Granularity and transparency

Android's application permission model has its detractors, however, and their numbers have grown in recent months since the discovery of two malicious applications. Jackeey was a purported wallpaper application that was believed to relay personal information from phones to a web site in China, and Tap Snake was an arcade-style game that secretly reported the phone's location so that it could be monitored remotely.

The trouble is that both apps requested Internet access through the Android permissions system; they simply used that permission to harvest data secretly and upload it to a third party. Simson Garfinkel described this on the MIT Technology Review site as a granularity problem, because "although Android programs are required to tell the user which permissions they use, that doesn't explain what the apps actually do with these permissions."

Garfinkel went on to detail his experience asking for explanations from developers whose applications seemingly requested permissions that had nothing to do with their intended purpose. A battery-saving wallpaper application, for example, requested "the ability to modify or delete SD card contents, full Internet access, and the ability to read my phone's state and identity." In only one case did Garfinkel receive a reply from the application developer, who claimed that Internet access was required to register the program.

He pointed Android users to a program called TaintDroid, which is a possible solution that will be presented at the Usenix Symposium on Operating Systems Design and Implementation (OSDI). Developed by a team from Penn State, Duke University, and Intel, TaintDroid allows fine-grained monitoring of personal information and other data accessed by Android applications. TaintDroid logs attempts by applications to access specific private or sensitive information on the phone (phone number, IMEI number, SIM card ID, GPS location, camera, microphone, etc.), records attempts to transmit that information, and sends user notifications detailing the traffic to the phone's home screen toolbar.
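The core idea behind taint tracking - tag data at a sensitive source, propagate the tag through computation, and flag it when it reaches a network sink - can be illustrated with a toy sketch. TaintDroid implements this inside the Dalvik VM itself; the Python below only mimics the concept, and every name in it is invented:

```python
# Toy illustration of taint tracking, the technique TaintDroid applies
# inside the Dalvik VM. All names here are invented for the example.

class Tainted(str):
    """A string carrying a set of taint labels (e.g. 'GPS', 'IMEI')."""
    def __new__(cls, value, labels):
        obj = super().__new__(cls, value)
        obj.labels = frozenset(labels)
        return obj

    def __add__(self, other):
        # Propagation: data derived from tainted data stays tainted.
        labels = self.labels | getattr(other, "labels", frozenset())
        return Tainted(str(self) + str(other), labels)

def network_send(data):
    """Sink: instead of sending, report any taint reaching the network."""
    labels = getattr(data, "labels", frozenset())
    if labels:
        return "ALERT: sending data tainted with %s" % sorted(labels)
    return "ok"

# The GPS fix is tagged at its source; the tag survives string building.
location = Tainted("37.4,-122.1", {"GPS"})
message = Tainted("pos=", frozenset()) + location
```

The point of the exercise: the permission system can say that an application *may* use the Internet, but only tracking of this kind can say what data actually flows out through that permission.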

The code has not yet been released, but the project says it will be made available under an open source license, and interested users can email the project to be notified of the release. The team explains on the landing page that TaintDroid was implemented not as a stand-alone application, but as a ROM customization. When the code is released, however, it may find its way into a standalone application or be incorporated into community-maintained Android distributions.

No opt-out

Sam Watkins also argues that too many applications request blanket permissions beyond what they really need, noting that almost all of the top 20 Android Market games request full Internet access and GPS location. But he also points out that although Android does a good job of revealing to the user what permissions an application has requested, Android offers no way for a user to deny individual requests. In short, if you do not like the set of permissions that an application requests, your only recourse is to not install it.

He also points out that although Android "sandboxes" individual applications by running each one under a unique user ID (thus preventing applications from sharing files), all applications have full read access to the phone's flash storage card, which is used as a general data storage location. Even worse, for backwards-compatibility reasons, any application can request to use the older Android 1.4 API, giving it write/erase permission over the flash storage — and neither this request nor its consequences are revealed to the user.

None of the preceding privacy violations or attacks require an escalation in privilege; the application requests the permissions it wants, and if the user installs it, he or she is immediately exposed. But Watkins also warns of possible attacks based on gaining root access, citing a demonstration example created by Jon Oberheide.

Watkins recommends two responses to the current situation. First, he suggests voting for issue 10481 on the official Android bug tracker, an enhancement request to implement a method of limiting Internet access. At present, the bug has more than 1300 votes.

Secondly, he recommends installing the Droid Wall firewall application on any Android device. Droid Wall is an iptables configuration tool for Android, building on the Linux kernel's existing packet filtering functionality, and allowing the user to write blacklist and whitelist firewall rules in a simple GUI. Earlier versions of Droid Wall required a separate iptables package to be installed, but since 1.4.0 this has been rolled into Droid Wall itself.

The Droid Wall developers primarily advertise the application as a way to reduce battery and mobile data usage, blocking particular applications from repeatedly using the connection or initiating unwanted transfers. When installed, it automatically collects a list of the other applications installed on the phone, and presents them in a user-friendly checklist; the user can then uncheck any application to block its Internet access. It also allows the user to maintain separate permission lists for WiFi and 3G data connections, and automatically switches between the two rule sets when switching to or from a WiFi hotspot.

The PC security crowd moves in

The Jackeey and Tap Snake incidents raised the profile of Android security problems a few months ago, and major players in the proprietary desktop security market have swept in to collect: Android-specific security suites from both Norton and Symantec were unveiled in recent weeks. Both of these products tackle common device-security issues, such as on-disk encryption and securing or retrieving data in the event of device loss or theft. The Norton product targets home users, while Symantec's targets enterprise deployments.

Neither one addresses the problems created by Android's all-or-nothing application permission requests or the lack of transparency in how applications exercise those permissions. For that, Droid Wall and (when it becomes available) TaintDroid used in tandem may provide the best protection. The TaintDroid team presents its OSDI paper on Wednesday the 6th of October, but a PDF version is already available on the project team's web site.

The paper makes for interesting reading, including the results of a survey of the permissions exercised by the top 30 Android applications. Many, it seems, request permissions that they never exercise - or at least have not exercised yet. A similar survey conducted by Smobile of more than 48,000 Android applications noted that 21 percent requested permission to read private or sensitive information from the phone, and many others "have the ability to read or use the authentication credentials from another service or application," place calls without user interaction, or take other potentially harmful actions.

Google has not officially responded to the published criticism of the application permission system in Android. Bug 10481, while it has received a significant number of comments, has not been assigned. Hopefully the widespread release of TaintDroid will at least raise awareness of the issue in the minds of general Android users. In the meantime, at least the availability of the Android source code makes solutions like TaintDroid and Droid Wall possible.

Comments (5 posted)

Brief items

Security quotes of the week

Within 36 hours of the system going live, our team had found and exploited a vulnerability that gave us almost total control of the server software, including the ability to change votes and reveal voters' secret ballots.
-- J. Alex Halderman on finding a hole in an internet voting system

In the United States the 4th amendment did not come about simply because it was impractical to directly spy on everyone on such a large scale. Nor does it end simply because it may now be technically feasible to do so. Communication privacy furthermore is essential to the normal functioning of free societies, whether speaking of whistle-blowers, journalists who have to protect their sources, human rights and peace activists engaging in legitimate political dissent, workers engaged in union organizing, or lawyers who must protect the confidentiality of their privileged communications with clients. Privacy is ultimately about liberty while surveillance is always about control.
-- David Sugar in an open letter to the Obama administration

It's bad civic hygiene to build technologies that could someday be used to facilitate a police state. No matter what the eavesdroppers say, these systems cost too much and put us all at greater risk.
-- Bruce Schneier

Comments (none posted)

Some Android apps caught covertly sending GPS data to advertisers (ars technica)

Ars Technica is reporting that some Android applications are surreptitiously sending GPS coordinates and other information to advertisers. The information comes from a recent study done by researchers from Penn State, Duke University, and Intel Labs. "They used TaintDroid to test 30 popular free Android applications selected at random from the Android market and found that half were sending private information to advertising servers, including the user's location and phone number. In some cases, they found that applications were relaying GPS coordinates to remote advertising network servers as frequently as every 30 seconds, even when not displaying advertisements. These findings raise concern about the extent to which mobile platforms can insulate users from unwanted invasions of privacy."

Comments (43 posted)

New vulnerabilities

apr-util: denial of service

Package(s): apr-util    CVE #(s): CVE-2010-1623
Created: October 4, 2010    Updated: August 2, 2011
Description: From the Mandriva advisory:

A denial of service attack against apr_brigade_split_line() was discovered in apr-util.

Gentoo 201405-24 apr 2014-05-18
SUSE SUSE-SU-2011:1229-1 apache2 2011-11-09
openSUSE openSUSE-SU-2011:0859-1 libapr1 2011-08-02
Slackware SSA:2011-041-03 httpd 2011-02-11
Slackware SSA:2011-041-01 apr-util 2011-02-11
CentOS CESA-2010:0950 apr-util 2011-01-27
Red Hat RHSA-2010:0950-01 apr-util 2010-12-07
Ubuntu USN-1022-1 apr-util 2010-11-25
Ubuntu USN-1021-1 apache2 2010-11-25
Fedora FEDORA-2010-16178 apr-util 2010-10-13
Fedora FEDORA-2010-15916 apr-util 2010-10-08
Fedora FEDORA-2010-15953 apr-util 2010-10-08
Debian DSA-2117-1 apr-util 2010-10-04
Mandriva MDVSA-2010:192 apr-util 2010-10-02

Comments (none posted)

freetype: code execution

Package(s): freetype    CVE #(s): CVE-2010-3054 CVE-2010-3311
Created: October 5, 2010    Updated: January 20, 2011
Description: From the Red Hat advisory:

A stack overflow flaw was found in the way the FreeType font rendering engine processed PostScript Type 1 font files that contain nested Standard Encoding Accented Character (seac) calls. If a user loaded a specially-crafted font file with an application linked against FreeType, it could cause the application to crash. (CVE-2010-3054)

It was discovered that the FreeType font rendering engine improperly validated certain position values when processing input streams. If a user loaded a specially-crafted font file with an application linked against FreeType, and the relevant font glyphs were subsequently rendered with the X FreeType library (libXft), it could trigger a heap-based buffer overflow in the libXft library, causing the application to crash or, possibly, execute arbitrary code with the privileges of the user running the application. (CVE-2010-3311)

SUSE SUSE-SU-2012:0553-1 freetype2 2012-04-23
Gentoo 201201-09 freetype 2012-01-23
MeeGo MeeGo-SA-10:31 freetype 2010-10-09
Red Hat RHSA-2010:0864-02 freetype 2010-11-10
Ubuntu USN-1013-1 freetype 2010-11-04
Fedora FEDORA-2010-15785 freetype 2010-10-05
Mandriva MDVSA-2010:201 freetype2 2010-10-13
CentOS CESA-2010:0736 freetype 2010-10-05
CentOS CESA-2010:0737 freetype 2010-10-04
Fedora FEDORA-2010-15705 freetype 2010-10-05
CentOS CESA-2010:0737 freetype 2010-10-05
Debian DSA-2116-1 freetype 2010-10-04
Red Hat RHSA-2010:0737-01 freetype 2010-10-04
SUSE SUSE-SR:2010:019 OpenOffice_org, acroread/acroread_ja, cifs-mount/samba, dbus-1-glib, festival, freetype2, java-1_6_0-sun, krb5, libHX13/libHX18/libHX22, mipv6d, mysql, postgresql, squid3 2010-10-25
openSUSE openSUSE-SU-2010:0726-1 freetype2 2010-10-15
Red Hat RHSA-2010:0736-01 freetype 2010-10-04

Comments (none posted)

krb5: code execution

Package(s): krb5    CVE #(s): CVE-2010-1322
Created: October 6, 2010    Updated: November 11, 2010
Description: The MIT krb5 daemon can be made to dereference an uninitialized pointer, leading to a crash, and, possibly, arbitrary code execution. See this SecurityFocus entry for more information.
Gentoo 201201-13 mit-krb5 2012-01-23
Red Hat RHSA-2010:0863-02 krb5 2010-11-10
Mandriva MDVSA-2010:202-1 krb5 2010-11-02
Mandriva MDVSA-2010:202 krb5 2010-10-13
Ubuntu USN-999-1 krb5 2010-10-05
openSUSE openSUSE-SU-2010:0709-1 krb5 2010-10-11
SUSE SUSE-SR:2010:019 OpenOffice_org, acroread/acroread_ja, cifs-mount/samba, dbus-1-glib, festival, freetype2, java-1_6_0-sun, krb5, libHX13/libHX18/libHX22, mipv6d, mysql, postgresql, squid3 2010-10-25

Comments (none posted)

libesmtp: certificate spoofing

Package(s): libesmtp    CVE #(s): CVE-2010-1192 CVE-2010-1194
Created: October 5, 2010    Updated: October 6, 2010
Description: From the Mandriva advisory:

libESMTP, probably 1.0.4 and earlier, does not properly handle a \'\0\' (NUL) character in a domain name in the subject's Common Name (CN) field of an X.509 certificate, which allows man-in-the-middle attackers to spoof arbitrary SSL servers via a crafted certificate issued by a legitimate Certification Authority, a related issue to CVE-2009-2408 (CVE-2010-1192).

The match_component function in smtp-tls.c in libESMTP 1.0.3.r1, and possibly other versions including 1.0.4, treats two strings as equal if one is a substring of the other, which allows remote attackers to spoof trusted certificates via a crafted subjectAltName (CVE-2010-1194).
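The CVE-2010-1194 flaw can be illustrated with a toy re-implementation of the comparison: a substring test accepts hostname components it should reject, while an exact comparison does not. This Python sketch is for illustration only and is not libESMTP's actual C code:

```python
# Toy illustration of the CVE-2010-1194 class of bug: comparing certificate
# name components with a substring test instead of an exact match. This is
# a hypothetical re-implementation, not libESMTP's actual C code.

def buggy_match_component(a, b):
    # Treats the strings as equal if one is a substring of the other --
    # so "example.com" wrongly matches "example.com.attacker.net".
    shorter, longer = sorted((a.lower(), b.lower()), key=len)
    return shorter in longer

def fixed_match_component(a, b):
    # Components must match exactly (hostnames are case-insensitive).
    return a.lower() == b.lower()
```

With the buggy comparison, an attacker holding a certificate for a crafted subjectAltName containing the victim's hostname as a substring can impersonate the victim's server.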

Mandriva MDVSA-2010:195 libesmtp 2010-10-04

Comments (none posted)

mailman: cross-site scripting

Package(s): mailman    CVE #(s): CVE-2010-3089
Created: October 4, 2010    Updated: May 17, 2011
Description: From the Mandriva advisory:

Multiple cross-site scripting (XSS) vulnerabilities in GNU Mailman before 2.1.14rc1 allow remote authenticated users to inject arbitrary web script or HTML via vectors involving (1) the list information field or (2) the list description field.
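The general fix for this class of stored XSS is to HTML-escape user-controlled fields (such as a list description) before embedding them in a page. A minimal sketch using Python's standard library follows; it illustrates the technique, not Mailman's actual patch (the function and markup here are invented):

```python
# General fix for stored XSS: HTML-escape user-controlled fields before
# embedding them in generated markup. Illustrative sketch only; the
# function name and markup are invented, not Mailman's actual code.
import html

def render_list_description(description):
    """Embed a user-supplied list description safely in a table cell."""
    return "<td>%s</td>" % html.escape(description)
```

After escaping, an injected `<script>` tag is rendered as literal text rather than being executed by the victim's browser.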

SUSE SUSE-SR:2011:007 NetworkManager, OpenOffice_org, apache2-slms, dbus-1-glib, dhcp/dhcpcd/dhcp6, freetype2, kbd, krb5, libcgroup, libmodplug, libvirt, mailman, moonlight-plugin, nbd, openldap2, pure-ftpd, python-feedparser, rsyslog, telepathy-gabble, wireshark 2011-04-19
CentOS CESA-2011:0307 mailman 2011-04-14
openSUSE openSUSE-SU-2011:0312-1 mailman 2011-04-07
SUSE SUSE-SR:2011:009 mailman, openssl, tgt, rsync, vsftpd, libzip1/libzip-devel, otrs, libtiff, kdelibs4, libwebkit, libpython2_6-1_0, perl, pure-ftpd, collectd, vino, aaa_base, exim 2011-05-17
openSUSE openSUSE-SU-2011:0424-1 mailman 2011-05-03
CentOS CESA-2011:0307 mailman 2011-03-02
Red Hat RHSA-2011:0308-01 mailman 2011-03-01
Red Hat RHSA-2011:0307-01 mailman 2011-03-01
Ubuntu USN-1069-1 mailman 2011-02-22
Debian DSA-2170-1 mailman 2011-02-18
Fedora FEDORA-2010-14877 mailman 2010-09-17
Fedora FEDORA-2010-14834 mailman 2010-09-17
Mandriva MDVSA-2010:191 mailman 2010-10-01

Comments (none posted)

mantis: multiple cross-site scripting flaws

Package(s): mantis    CVE #(s): CVE-2010-2574 CVE-2010-3303
Created: September 30, 2010    Updated: November 9, 2012

Description: From the Red Hat bugzilla entries [1, 2]:

CVE-2010-2574: Cross-site scripting (XSS) vulnerability in manage_proj_cat_add.php in MantisBT 1.2.2 allows remote authenticated administrators to inject arbitrary web script or HTML via the name parameter in an Add Category action.

CVE-2010-3303: XSS vulnerability when uninstalling maliciously named plugins; Multiple XSS issues with custom field enumeration values; XSS issues when using custom field String values; XSS in print_all_bug_page_word.php when printing project and category names

Gentoo 201211-01 mantisbt 2012-11-08
Fedora FEDORA-2010-15082 mantis 2010-09-22
Fedora FEDORA-2010-15080 mantis 2010-09-22

Comments (none posted)

mysql: multiple vulnerabilities

Package(s):mysql CVE #(s):CVE-2010-3676 CVE-2010-3677 CVE-2010-3678 CVE-2010-3679 CVE-2010-3680 CVE-2010-3681 CVE-2010-3682 CVE-2010-3683
Created:October 5, 2010 Updated:January 19, 2011
Description: From the Fedora advisory:

Bug #628660 - CVE-2010-3676 MySQL: mysqld DoS (assertion failure) after changing InnoDB storage engine configuration parameters (MySQL bug #55039)

Bug #628040 - CVE-2010-3677 MySQL: Mysqld DoS (crash) by processing joins involving a table with a unique SET column (MySQL BZ#54575)

Bug #628172 - CVE-2010-3678 MySQL: mysqld DoS (crash) by processing IN / CASE statements with NULL arguments (MySQL bug #54477)

Bug #628062 - CVE-2010-3679 MySQL: Use of unassigned memory (valgrind errors / crash) by providing certain values to BINLOG statement (MySQL BZ#54393)

Bug #628192 - CVE-2010-3680 MySQL: mysqld DoS (assertion failure) by using temporary InnoDB engine tables with nullable columns (MySQL bug #54044)

Bug #628680 - CVE-2010-3681 MySQL: mysqld DoS (assertion failure) by alternate reads from two indexes on a table using the HANDLER interface (MySQL bug #54007)

Bug #628328 - CVE-2010-3682 MySQL: mysqld DoS (crash) by processing EXPLAIN statements for complex SQL queries (MySQL bug #52711)

Bug #628698 - CVE-2010-3683 MySQL: mysqld DoS (assertion failure) while reading the file back into a table (MySQL bug #52512)

Ubuntu USN-1397-1 mysql-5.1, mysql-dfsg-5.0, mysql-dfsg-5.1 2012-03-12
Gentoo 201201-02 mysql 2012-01-05
Red Hat RHSA-2011:0164-01 mysql 2011-01-18
Mandriva MDVSA-2011:012 mysql 2011-01-17
Debian DSA-2143-1 mysql-dfsg-5.0 2011-01-14
SUSE SUSE-SR:2010:021 mysql, dhcp, monotone, moodle, openssl 2010-11-16
Ubuntu USN-1017-1 mysql-5.1, mysql-dfsg-5.0, mysql-dfsg-5.1 2010-11-11
Mandriva MDVSA-2010:222 mysql 2010-11-09
Mandriva MDVSA-2010:155-1 mysql 2010-11-08
CentOS CESA-2010:0825 mysql 2010-11-05
CentOS CESA-2010:0824 mysql 2010-11-05
Red Hat RHSA-2010:0825-01 mysql 2010-11-03
Red Hat RHSA-2010:0824-01 mysql 2010-11-03
openSUSE openSUSE-SU-2010:0730-1 mysql 2010-10-18
SUSE SUSE-SR:2010:019 OpenOffice_org, acroread/acroread_ja, cifs-mount/samba, dbus-1-glib, festival, freetype2, java-1_6_0-sun, krb5, libHX13/libHX18/libHX22, mipv6d, mysql, postgresql, squid3 2010-10-25
openSUSE openSUSE-SU-2010:0731-1 mysql 2010-10-18
Fedora FEDORA-2010-15166 mysql 2010-09-24

Comments (none posted)

php-pecl-apc: cross-site scripting

Package(s):php-pecl-apc CVE #(s):CVE-2010-3294
Created:September 30, 2010 Updated:July 10, 2012

From the Red Hat bugzilla entry:

A potential Cross Site Scripting (XSS) vulnerability was found in the PECL APC package in versions prior to 3.1.4

CentOS CESA-2012:0811 php-pecl-apc 2012-07-10
Scientific Linux SL-php--20120709 php-pecl-apc 2012-07-09
Oracle ELSA-2012-0811 php-pecl-apc 2012-07-02
Red Hat RHSA-2012:0811-04 php-pecl-apc 2012-06-20
Fedora FEDORA-2010-15004 php-pecl-apc 2010-09-21

Comments (none posted)

PostgreSQL: privilege escalation

Package(s):postgresql CVE #(s):CVE-2010-3433
Created:October 6, 2010 Updated:November 23, 2010
Description: The PostgreSQL 9.0.1, 8.4.5, 8.3.12, 8.2.18, 8.1.22, 8.0.26 and 7.4.30 releases fix a potential privilege escalation bug: "The security vulnerability allows any ordinary SQL users with 'trusted' procedural language usage rights to modify the contents of procedural language functions at runtime. As detailed in CVE-2010-3433, an authenticated user can accomplish privilege escalation by hijacking a SECURITY DEFINER function (or some other existing authentication-change operation). The mere presence of the procedural languages does not make your database application vulnerable."
Gentoo 201110-22 postgresql-base 2011-10-25
Red Hat RHSA-2010:0908-01 postgresql 2010-11-23
SUSE SUSE-SR:2010:020 NetworkManager, bind, clamav, dovecot12, festival, gpg2, libfreebl3, php5-pear-mail, postgresql 2010-11-03
Fedora FEDORA-2010-16004 sepostgresql 2010-10-08
openSUSE openSUSE-SU-2010:0903-1 postgesql 2010-10-27
Ubuntu USN-1002-1 postgresql-8.1, postgresql-8.3, postgresql-8.4 2010-10-07
Fedora FEDORA-2010-15954 postgresql 2010-10-08
Fedora FEDORA-2010-15960 postgresql 2010-10-08
CentOS CESA-2010:0742 postgresql 2010-10-10
Ubuntu USN-1002-2 postgresql-8.4 2010-10-07
CentOS CESA-2010:0742 postgresql 2010-10-06
SUSE SUSE-SR:2010:019 OpenOffice_org, acroread/acroread_ja, cifs-mount/samba, dbus-1-glib, festival, freetype2, java-1_6_0-sun, krb5, libHX13/libHX18/libHX22, mipv6d, mysql, postgresql, squid3 2010-10-25
Debian DSA-2120-1 postgresql-8.3 2010-10-12
Mandriva MDVSA-2010:197 postgresql 2010-10-06
Red Hat RHSA-2010:0742-01 postgresql 2010-10-06

Comments (none posted)

qt-creator: insecure manipulation of environment variable

Package(s):qt-creator CVE #(s):CVE-2010-3374
Created:October 4, 2010 Updated:October 6, 2010
Description: From the Mandriva advisory:

A vulnerability has been found in Qt Creator 2.0.0 and previous versions. The vulnerability occurs because of an insecure manipulation of a Unix environment variable by the qtcreator shell script. It manifests by causing Qt or Qt Creator to attempt to load certain library names from the current working directory.

Gentoo 201412-09 racer-bin, fmod, PEAR-Mail, lvm2, gnucash, xine-lib, lastfmplayer, webkit-gtk, shadow, PEAR-PEAR, unixODBC, resource-agents, mrouted, rsync, xmlsec, xrdb, vino, oprofile, syslog-ng, sflowtool, gdm, libsoup, ca-certificates, gitolite, qt-creator 2014-12-11
Mandriva MDVSA-2010:193 qt-creator 2010-10-03

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.36-rc7, released on October 6. "This should be the last -rc, I'm not seeing any reason to keep delaying a real release. There was still more changes to drivers/gpu/drm than I really would have hoped for, but they all look harmless and good. Famous last words." The short-form changelog is in the announcement; the full changelog contains all the details.

Stable updates: a single update, containing one fix for a typo in the Xen code, was released on October 1. As of this writing, there are no stable updates in the review process.

Comments (none posted)

Quotes of the week

As a general rule, if a reviewer's comment doesn't result in a code change then it should result in a changelog fix or a code comment. Because if the code wasn't clear enough to the reviewer then it won't be clear enough to later readers.
-- Andrew Morton

AMD's reference BIOS code had a bug that could result in the firmware failing to reenable the iommu on resume. It transpires that this causes certain less than desirable behaviour when it comes to PCI accesses, to whit them ending up somewhere near Bristol when the more desirable outcome was Edinburgh. Sadness ensues, perhaps along with filesystem corruption. Let's make sure that it gets turned back on, and that we restore its configuration so decisions it makes bear some resemblance to those made by reasonable people rather than crack-addled lemurs who spent all your DMA on Thunderbird.
-- Matthew Garrett

Comments (none posted)

Little-endian PowerPC

By Jonathan Corbet
October 6, 2010
The PowerPC architecture is normally thought of as a big-endian domain - the most significant byte of multi-byte values comes first. Big-endian is consistent with a number of other architectures, but the fact that one obscure architecture - x86 - is little-endian means that the world as a whole tends toward the little-endian persuasion. As it happens, at least some PowerPC processors can optionally be run in a little-endian mode. Ian Munsie has posted a patch set which enables Linux to take advantage of that feature and run little-endian on suitably-equipped PowerPC processors.
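The difference at stake can be shown in a few lines; this Python snippet (purely illustrative, and nothing to do with the patch set itself) lays out the same 32-bit value under each byte-order convention:

```python
import struct

value = 0x01020304

# Big-endian: most significant byte first (traditional PowerPC).
big = struct.pack(">I", value)

# Little-endian: least significant byte first (x86, and the optional
# PowerPC mode discussed here).
little = struct.pack("<I", value)

print(big.hex())     # 01020304
print(little.hex())  # 04030201
```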

The first question that came to the mind of a few reviewers was: "why?" PowerPC runs fine as a big-endian architecture, and there has been little clamor for little-endian support. Besides, endianness seems to be one of those things that users can feel strongly about; to at least some PowerPC users, little-endian apparently feels cheap, wrong, and PCish.

The answer, as expressed by Ben Herrenschmidt, appears to be graphics hardware. A number of GPUs, especially those aimed at embedded applications, only work in the little-endian mode. Carefully-written device drivers can work around that sort of limitation without too much trouble, but user-space code - which often ends up talking to graphics hardware - is another story. Fixing all of that code is not a task that anybody wants to take on. As a result, PowerPC processors will not be considered for situations where little-endian support is needed. Running the processor in little-endian mode will nicely overcome that obstacle.

That said, it will take a little while before this support is generally available. The kernel patches apparently look good, but there are toolchain changes required which are not yet generally available. Until that little issue is resolved, PowerPC will remain a club for big-endian users only.

Comments (17 posted)

Kernel development news

Trusted and encrypted keys

By Jake Edge
October 6, 2010

The Trusted Platform Module (TPM) present on many of today's systems can be used in various ways, from making completely locked-down systems that cannot be changed by users to protecting sensitive systems from various kinds of attacks. While the TPM-using integrity measurement architecture (IMA), which can measure and attest to the integrity of a running Linux system, has been part of the kernel for some time now, the related extended verification module (EVM) has not made it into the mainline. One of the concerns raised about EVM was that it obtained a cryptographic key from user space that is then used as a key for integrity verification—largely nullifying the integrity guarantees that EVM is supposed to provide. A set of patches that were recently posted for comments to the linux-security-module mailing list would add two new key types to the kernel that would allow user space to provide the key without being able to see the actual key data.

We last looked in on EVM back in June when it seemed like it might make it into 2.6.36. That didn't happen, nor has EVM been incorporated into linux-next, so its path into the mainline is a bit unclear at this point. EVM calculates HMAC (hash-based message authentication code) values for on-disk files, uses the EVM key and TPM to sign the values, and stores them in extended attributes (xattrs) in the security namespace. If the EVM key is subverted, all bets are off in terms of the integrity of the system. While they are targeted for use by EVM, Mimi Zohar's patches to add trusted and encrypted key types could also be used for other purposes such as handling the keys for filesystem encryption.
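For readers unfamiliar with HMACs, the sketch below shows the basic idea in Python. It is a simplified illustration only: EVM's real HMAC covers file metadata and security xattrs rather than just content bytes, uses the kernel's crypto code, and its key never leaves the kernel.

```python
import hashlib
import hmac

def integrity_hmac(key: bytes, data: bytes) -> str:
    """Compute an HMAC-SHA1 over some data with a secret key.

    Illustrative only: not EVM's actual input format or key handling.
    """
    return hmac.new(key, data, hashlib.sha1).hexdigest()

key = b"secret-evm-key"   # in real EVM this key stays inside the kernel
tag = integrity_hmac(key, b"file contents")

# Verification recomputes the HMAC and compares in constant time;
# without the key, an attacker cannot forge a matching tag.
ok = hmac.compare_digest(tag, integrity_hmac(key, b"file contents"))
print(ok)  # True
```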

The basic idea is that these keys would be generated by the kernel, and would never be touched by user space in an unencrypted form. Encrypted "blobs" would be provided to user space by the kernel and would contain the key material. User space could store the keys, for example, but the blobs would be completely opaque to anything outside of the kernel. The patches come with two new flavors of these in-kernel keys: trusted and encrypted.

Trusted keys are generated by the TPM and then encrypted using the TPM's storage root key (SRK), which is a 2048-bit RSA key (this is known as "sealing" the key in TPM terminology). Furthermore, trusted keys can also be sealed to a particular set of TPM platform configuration register (PCR) values so that the keys cannot be unsealed unless the PCR values match. The PCR contains an integrity measurement of the system BIOS, bootloader, and operating system, so tying keys to PCR values means that the trusted keys cannot be accessed except on the systems for which they were specifically authorized. Any change to the underlying code will result in undecryptable keys.

Since the PCR values change based on the kernel and initramfs used, trusted keys can be updated to use different PCRs, once they have been added to a keyring (so that the existing PCR values have been verified). There can also be multiple versions of a single trusted key, each of which is sealed to different PCR values. This can be used to support booting multiple kernels that use the same key. While the underlying, unencrypted key data will not need to change for different kernels, the user-space blob will change because of the different PCR values, which will require some kind of key management in user space.

Encrypted keys, on the other hand, do not rely on the TPM; they use the kernel's AES encryption instead, which is faster than the TPM's public-key encryption. Keys are generated as random numbers of the requested length from the kernel's random pool and, when they are exported as user-space blobs, they are encrypted using a master key. That master key can either be the new trusted key type or the user key type that already exists in the kernel. Obviously, if the master key is not a trusted key, it needs to be handled securely, as it provides security for any other encrypted keys.

The user-space blobs contain an HMAC that the kernel can use to verify the integrity of a key. The keyctl utility (or the keyctl() system call) can be used to generate keys, add them to a kernel keyring, and extract a key blob from the kernel. The patch set introduction gives some examples of using keyctl to manipulate both trusted and encrypted keys.
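As a rough user-space model of the "opaque blob" idea - emphatically not the kernel's actual blob format nor its real AES-based scheme - consider this Python toy, where a master key known only to one side wraps key material into a blob that anybody can store but nobody else can read or tamper with:

```python
import hashlib
import hmac
import os

MASTER = os.urandom(32)  # stands in for the in-kernel master key

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream (SHA-256 in counter mode); the kernel uses real AES.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(key_material: bytes) -> bytes:
    """Return an opaque blob: nonce || ciphertext || HMAC tag."""
    nonce = os.urandom(16)
    stream = _keystream(MASTER, nonce, len(key_material))
    ct = bytes(a ^ b for a, b in zip(key_material, stream))
    tag = hmac.new(MASTER, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unwrap(blob: bytes) -> bytes:
    """Recover the key material, refusing blobs that fail the HMAC check."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(MASTER, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("blob failed integrity check")
    stream = _keystream(MASTER, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))

secret = os.urandom(32)        # key generated "inside the kernel"
blob = wrap(secret)            # what user space gets to see and store
assert unwrap(blob) == secret  # only the holder of MASTER recovers it
```

User space here plays the role of a dumb key-storage service: it can shuttle blobs around, but the contents remain opaque, which is the property EVM needs.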

A recent proposal for a kernel crypto API was not particularly well-received, in part because it was not integrated with the existing kernel keyring API, but Zohar's proposal doesn't suffer from that problem. Both have the idea of wrapping keys into opaque blobs before handing them off to user space, but the crypto API went much further, adding lots of ways to actually use the keys from user space for encryption and decryption.

While the trusted and encrypted key types would be useful to kernel services (like EVM or filesystem encryption), they aren't very useful to applications that want to do cryptography without exposing key data to user space. The keys could potentially be used by hardware cryptographic accelerators, or possibly be wired into the existing kernel crypto services, but they won't provide all of the different algorithms envisioned by the kernel crypto API.

The existing IMA code only solves part of the integrity problem, leaving the detection of offline attacks against disk files (e.g. by mounting the disk under another OS) to EVM. If EVM is to eventually be added to the kernel to complete the integrity verification puzzle, then trusted keys or something similar will be needed. So far, the patches have attracted few comments or complaints, but they have been posted only to various Linux security mailing lists and have not yet run the linux-kernel gauntlet.

Comments (none posted)

Two ABI troubles

By Jonathan Corbet
October 5, 2010
It has long been accepted by kernel developers that the user-space ABI cannot be broken in most situations. But what happens if the current ABI is a mistake, or if blocking changes risks stopping kernel development altogether? Both of those possibilities have been raised in recent discussions.

The capi driver provides a control interface for ISDN adapters - some of which, apparently, are still in use somewhere out there. If the devices.txt file is to be believed, the control device for CAPI applications should be /dev/capi20, while the first actual application shows up as /dev/capi20.00. That is not what the applications apparently want to see, though, so Marc-Andre Dahlhaus posted a patch moving the application devices under their own directory. In other words, the first CAPI application would show up as /dev/capi/0. The patch also modified the devices.txt file to match the new naming.

Alan Cox rejected the patch, saying:

devices.txt is the specification, and its ABI.

It is fixed and the kernel behaviour is to follow it. Those who didn't follow it, or who didn't propose a change back when it was specified in the first place have only themselves to blame. It isn't changing, and the ISDN code should follow the spec.

Maintaining the ABI is normally the right thing, but there are a couple of problems with the reasoning here. First is that, apparently, few (if any) distributions follow the rules described in devices.txt; the real ABI, in practice, may be different. Second: the kernel doesn't follow devices.txt either: current practice is to create /dev/capi as the control device, and /dev/capi0 as the first application device. The capifs virtual filesystem covered over some of this, but capifs is on its way out of the kernel.

In the short term, the fix appears to be to redefine the current behavior as a typo, tweaking things just enough that udev is able to create the right file names. The devices.txt file will not be touched for now. If regressions turn up, though, it may become necessary to support alternative names for these devices well into the future.

Tracepoints, again

Jean Pihet recently posted a set of tracepoint changes for power-related events. The patch added some new tracepoints, added information to others, and added some documentation as well. Even more recently, Thomas Renninger came forward with a different set of power tracepoint changes, meant to clean things up and make the tracepoints more applicable to ARM systems. In both cases, Arjan van de Ven opposed the patches, claiming that they are an ABI break.

The ABI in question does have users - tools like powertop and pytimechart in particular. It seems that Intel also has "internal tools" which would be affected by this change. As Arjan put it: "the thing with ABIs is that you don't know how many users you have." When things are expressed this way, it looks like a standard case of a user-space ABI which must be preserved, but not all developers see it that way.

Peter Zijlstra argues that tools using tracepoints need to be more flexible:

These tools should be smart enough to look up the tracepoint name, fail it its not available, read the tracepoint format, again fail if not compatible. I really object to treating tracepoints as ABI and being tied to any implementation details due to that.

Steven Rostedt worries about the effects of a tracepoint ABI on kernel development:

Once we start saying that a tracepoint is a fixed abi, we just stopped innovation of the kernel. Tracepoints are too intrusive to guarantee their stability. Tools that need to get information from a tracepoint should either be bound to a given kernel, or have a easy way to update the tool (config file or script) that can cope with a change.

The issue of ABI status for tracepoints has come up in the past, but it has never really been resolved. In other situations, Linus has said that any kernel interface which is taken up by applications becomes part of the ABI whether that status was intended or not. From this point of view, it is not a matter of "saying" that there is an ABI here or not; applications are using the tracepoints, so the damage has already been done. Given that user-space developers are being pushed to use tracepoints in various situations, it makes sense to offer those developers a stable interface.

On the other hand, it is very much true that these tracepoints hook deeply into the kernel. If they truly cannot be changed, then either (1) changes in the kernel itself will be severely restricted, or (2) we will start to accumulate backward-compatibility tracepoints which are increasingly unrelated to anything that the kernel is actually doing. Neither of these outcomes is conducive to the rapid evolution of the kernel in the coming years.

If nothing else, if tracepoints are deemed to be part of the user-space ABI, there will be strong resistance to the addition of any more of them to large parts of the kernel.

Some alternatives have been discussed; the old idea of marking specific tracepoints as being stable came back again. Frank Eigler suggested the creation of a compatibility module which could attach to tracepoints which have been changed, remapping the trace data into the older format for user space. There has also been talk of creating a mapping layer in user space. But none of these ideas have actually been put into the mainline kernel.

This issue is clearly not going to go away; it can only get worse as more application developers start to make use of the tracepoints which are being added to the kernel. It seems like an obvious topic to discuss at the 2010 Kernel Summit, scheduled for the beginning of November. What the outcome of that discussion might be is hard to predict, but, with luck, it will at least provide some sort of clarity on this issue.

Comments (3 posted)

Solid-state storage devices and the block layer

By Jonathan Corbet
October 4, 2010
Over the last few years, it has become clear that one of the most pressing scalability problems faced by Linux is being driven by solid-state storage devices (SSDs). The rapid increase in performance offered by these devices cannot help but reveal any bottlenecks in the Linux filesystem and block layers. What has been less clear, at times, is what we are going to do about this problem. In his LinuxCon Japan talk, block maintainer Jens Axboe described some of the work that has been done to improve block layer scalability and offered a view of where things might go in the future.

While workloads will vary, Jens says, most I/O patterns are dominated by random I/O and relatively small requests. Thus, getting the best results requires being able to perform a large number of I/O operations per second (IOPS). With a high-end rotating drive (running at 15,000 RPM), the maximum rate possible is about 500 IOPS. Most real-world drives, of course, will have significantly slower performance and lower I/O rates.
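That 500 IOPS figure follows from simple arithmetic: a 15,000 RPM drive completes 250 revolutions per second, so a random request waits about half a revolution - roughly two milliseconds - on average. A quick back-of-the-envelope check (ignoring seek and transfer time):

```python
rpm = 15_000
revs_per_second = rpm / 60                        # 250 revolutions/second
avg_rotational_latency = 1 / revs_per_second / 2  # half a revolution ~= 2 ms

# If each random I/O pays roughly the average rotational latency,
# the drive tops out around:
iops = 1 / avg_rotational_latency
print(round(iops))  # 500
```

Real drives also pay seek time, which is why most fall well below even this ceiling.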

SSDs, by eliminating seeks and rotational delays, change everything; we have gone from hundreds of IOPS to hundreds of thousands of IOPS in a very short period of time. A number of people have said that the massive increase in IOPS means that the block layer will have to become more like the networking layer, where every bit of per-packet overhead has been squeezed out over time. But, as Jens points out, time is not in great abundance. Networking technology went from 10Mb/s in the 1980s to 10Gb/s now, the better part of 30 years later. SSDs have forced a similar jump (three orders of magnitude) in a much shorter period of time - and every indication suggests that devices with IOPS rates in the millions are not that far away. The result, says Jens, is "a big problem."

This problem pops up in a number of places, but it usually comes down to contention for shared resources. Locking overhead which is tolerable at 500 IOPS is crippling at 500,000. There are problems with contention at the hardware level too; vendors of storage controllers have been caught by surprise by SSDs and are having to scramble to get their performance up to the required levels. The growth of multicore systems naturally makes things worse; such systems can create contention problems throughout the kernel, and the block layer is no exception. So much of the necessary work comes down to avoiding contention.

Before that, though, some work had to be done just to get the block layer to recognize that it is dealing with an SSD and react accordingly. Traditionally, the block layer has been driven by the need to avoid head seeks; the use of quite a bit of CPU time could be justified if it managed to avoid a single seek. SSDs - at least the good ones - care a lot less about seeks, so expending a bunch of CPU time to avoid them no longer makes sense. There are various ways of detecting SSDs in the hardware, but they don't always work, especially with the lower-quality devices. So the block layer exports a flag (the queue/rotational sysfs attribute) which can be used to override the system's notion of what kind of storage device it is dealing with.

Improving performance with SSDs can be a challenging task. There is no single big bottleneck which is causing performance problems; instead, there are numerous small things to fix. Each fix yields a bit of progress, but it mostly serves to highlight the next problem. Additionally, performance testing is hard; results are often not reproducible and can be perturbed by small changes. This is especially true on larger systems with more CPUs. Power management can also get in the way of the generation of consistent results.

One of the first things to address on an SSD was queue plugging. On a rotating disk, the first I/O operation to show up in the request queue will cause the queue to be "plugged," meaning that no operations will actually be dispatched to the hardware. The idea behind plugging is that, by allowing a little time for additional I/O requests to arrive, the block layer will be able to merge adjacent requests (reducing the operation count) and sort them into an optimal order, increasing performance. Performance on SSDs tends not to benefit from this treatment, though there is still a little value to merging requests. Dropping (or, at least, reducing) plugging not only eliminates a needless delay; it also reduces the need to take the queue lock in the process.
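Request merging itself is easy to model: requests whose sector ranges abut are combined into one larger operation. This is a minimal sketch of the concept, not the kernel's actual code:

```python
def merge_requests(requests):
    """Merge requests whose sector ranges are contiguous.

    Each request is a (start_sector, length) pair. A simplified model
    of block-layer request merging, not the kernel implementation.
    """
    merged = []
    for start, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == start:
            # This request begins where the previous one ends: extend it.
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((start, length))
    return merged

# Three submissions, two of them contiguous, collapse into two operations.
print(merge_requests([(0, 8), (8, 8), (64, 8)]))  # [(0, 16), (64, 8)]
```

Fewer, larger operations mean fewer trips through the driver and the hardware, which is why a little merging retains some value even on SSDs.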

Then, there is the issue of request timeouts. Like most I/O code, the block layer needs to notice when an I/O request is never completed by the device. That detection is done with timeouts. The old implementation involved a separate timeout for each outstanding request, but that clearly does not scale when the number of such requests can be huge. The answer was to go to a per-queue timer, reducing the number of running timers considerably.
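The per-queue approach can be modeled as keeping all outstanding deadlines in one structure and arming a single timer for the earliest of them. This Python sketch is a simplified illustration of the idea, not the kernel implementation:

```python
import heapq
import itertools

class RequestQueue:
    """One timeout timer per queue instead of one per request.

    Outstanding requests keep their deadlines in a heap; a single
    timer is (re)armed for the earliest deadline only. Simplified
    model - the real code must also handle completion and re-arming.
    """
    def __init__(self):
        self._deadlines = []          # heap of (deadline, request_id)
        self._ids = itertools.count()

    def submit(self, now, timeout):
        rid = next(self._ids)
        heapq.heappush(self._deadlines, (now + timeout, rid))
        return rid

    def next_timer_expiry(self):
        # The single per-queue timer fires at the earliest deadline.
        return self._deadlines[0][0] if self._deadlines else None

q = RequestQueue()
q.submit(now=0, timeout=30)
q.submit(now=5, timeout=30)
print(q.next_timer_expiry())  # 30 - one timer covers both requests
```

However many requests are in flight, only one timer is ever running, which is what makes the scheme scale.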

Block I/O operations, due to their inherently unpredictable execution times, have traditionally contributed entropy to the kernel's random number pool. There is a problem, though: the necessary call to add_timer_randomness() has to acquire a global lock, causing unpleasant systemwide contention. Some work was done to batch these calls and accumulate randomness on a per-CPU basis, but, even when batching 4K operations at a time, the performance cost was significant. On top of it all, it's not really clear that using an SSD as an entropy source makes a lot of sense. SSDs lack mechanical parts moving around, so their completion times are much more predictable. Still, for the moment, SSDs contribute to the entropy pool by default; administrators who would like to change that behavior can do so by changing the queue/add_random sysfs variable.

There are other locking issues to be dealt with. Over time, the block layer has gone from being protected by the big kernel lock to a block-level lock, then to a per-disk lock, but lock contention is still a problem. The I/O scheduler adds contention of its own, especially if it is performing disk-level accounting. Interestingly, contention for the locks themselves is not usually the problem; it's not that the locks are being held for too long. The big problem is the cache-line bouncing caused by moving the lock between processors. So the traditional technique of dropping and reacquiring locks to reduce lock contention does not help here - indeed, it makes things worse. What's needed is to avoid taking the lock altogether.

Block requests enter the system via __make_request(), which is responsible for getting a request (represented by a BIO structure) onto the queue. Two lock acquisitions are required to do this job - three if the CFQ I/O scheduler is in use. Those two acquisitions are the result of a lock split done to reduce contention in the past; that split, when the system is handling requests at SSD speeds, makes things worse. Eliminating it led to a roughly 3% increase in IOPS with a reduction in CPU time on a 32-core system. It is, Jens says, a "quick hack," but it demonstrates the kind of changes that need to be made.

The next step for this patch is to drop the I/O request allocation batching - a mechanism added to increase throughput on rotating drives by allowing the simultaneous submission of multiple requests. Jens also plans to drop the allocation accounting code, which tracks the number of requests in flight at any given time. Counting outstanding I/O operations requires global counters and the associated contention, but it can be done without most of the time. Some accounting will still be done at the request queue level to ensure that some control is maintained over the number of outstanding requests. Beyond that, there is some per-request accounting which can be cleaned up and, Jens thinks, request completion can be made completely lockless. He hopes that this work will be ready for merging into 2.6.38.

Another important technique for reducing contention is keeping processing on the same CPU as often as possible. In particular, there are a number of costs which are incurred if the CPU which handles the submission of a specific I/O request is not the CPU which handles that request's completion. Locks are bounced between CPUs in an unpleasant way, and the slab allocator tends not to respond well when memory allocated on one processor is freed elsewhere in the system. In the networking layer, this problem has been addressed with techniques like receive packet steering, but, unlike some networking hardware, block I/O controllers are not able to direct specific I/O completion interrupts to specific CPUs. So a different solution was required.

That solution took the form of smp_call_function(), which performs fast cross-CPU calls. Using smp_call_function(), the block I/O completion code can direct the completion of specific requests to the CPU where those requests were initially submitted. The result is a relatively easy performance improvement. A dedicated administrator who is willing to tweak the system manually can do better, but that takes a lot of work and the solution tends to be fragile. This code - which was merged back in 2.6.27 and made the default in 2.6.32 - is an easier way that takes away a fair amount of the pain of cross-CPU contention. Jens noted with pride that the block layer was not chasing the networking code with regard to completion steering - the block code had it first.

On the other hand, the blk-iopoll interrupt mitigation code was not just inspired by the networking layer - some of the code was "shamelessly stolen" from there. The blk-iopoll code turns off completion interrupts when I/O traffic is high and uses polling to pick up completed events instead. On a test system, this code reduced 20,000 interrupts/second to about 1,000. Jens says that the results are less conclusive on real-world systems, though.

An approach which "has more merit" is "context plugging," a rework of the queue plugging code. Currently, queue plugging is done implicitly on I/O submission, with an explicit unplug required at a later time. That has been the source of a lot of bugs; forgetting to unplug queues is a common mistake. The plan is to make plugging and unplugging fully implicit, but give I/O submitters a way to inform the block layer that more requests are coming soon. It makes the code clearer and more robust; it also gets rid of a lot of expensive per-queue state which must be maintained. There are still some problems to be solved, but the code works, is "tasty on many levels," and yields a net reduction of some 600 lines of code. Expect a merge in 2.6.38 or 2.6.39.

Finally, there is the "weird territory" of a multiqueue block layer - an idea which, once again, came from the networking layer. The creation of multiple I/O queues for a given device will allow multiple processors to handle I/O requests simultaneously with less contention. It's currently hard to do, though, because block I/O controllers do not (yet) have multiqueue support. That problem will be fixed eventually, but there will be some other challenges to overcome: I/O barriers will become significantly more complicated, as will per-device accounting. All told, it will require some major changes to the block layer and a special I/O scheduler. Jens offered no guidance as to when we might see this code merged.

The conclusion which comes from this talk is that the Linux block layer is facing some significant challenges driven by hardware changes. These challenges are being addressed, though, and the code is moving in the necessary direction. By the time most of us can afford a system with one of those massive, 1 MIOPS arrays on it, Linux should be able to use it to its potential.

Comments (66 posted)


Page editor: Jonathan Corbet


Fedora defines its vision

By Jake Edge
October 6, 2010

After a long period of discussion and deliberation, the Fedora project has started to put together concrete answers to the questions that have been swirling within that community: "What is Fedora?" and "Who is Fedora for?". The Fedora engineering steering committee (FESCo) recently approved a policy on updates that will govern how package updates are applied to the various Fedora branches, while the Fedora board has come up with a "vision statement". Both of those will help answer the questions, but they aren't complete answers, at least not yet, and meanwhile there are other community members, like Mike McGrath, who are proposing major shifts in the direction of the project.

The vision statement is meant to capture, in a single sentence, what Fedora is and why it exists. Obviously it isn't a manifesto; it is, instead, a succinct guide that can be used at a high level to decide what fits the project—as well as what doesn't. The final draft was presented by Fedora project leader Jared Smith for comments in advance of a board meeting to discuss it, which was held on October 1. Some wordsmithing was done to the draft at that meeting, which resulted in:

The Fedora Project creates a world where free culture is welcoming and widespread, collaboration is commonplace, and people control their content and devices.

That wording was adopted at the October 4 board meeting, and the project is still putting together some background and rationale statements to go along with it. The next step, according to Máirín Duffy's meeting summaries for the September 27 and October 1 board meetings, is to come up with tangible goals for specific special interest groups (SIGs) and teams within the project that are based on the vision. In addition, the board will set high-level priorities that FESCo and others can use to set their own goals. Based on that, the vision statement will be used to make each Fedora release more focused than we have seen in the past, with the board and other leaders trying to shape the efforts of Fedora volunteers into a more cohesive whole.

Update policy

Once the release is made, the update policy will kick in to try to calm the flood of updates that tend to follow any release. In particular:

[...] we should avoid major updates of packages within a stable release. Updates should aim to fix bugs, and not introduce features, particularly when those features would materially affect the user or developer experience. The update rate for any given release should drop off over time, approaching zero near release end-of-life; since updates are primarily bugfixes, fewer and fewer should be needed over time.

This necessarily means that stable releases will not closely track the very latest upstream code for all packages. We have rawhide for that.

That stands in sharp contrast to some of the updates that have been pushed in the past (e.g. KDE) just to provide additional features. Security updates are handled somewhat differently, particularly for packages where upstream doesn't provide a backport and it would be "impractical" for the package maintainer to make that change. In that case, subject to the judgement of FESCo and the maintainer, it may make sense to move forward to a new release that is supported by upstream.

In addition to the overall philosophy that is meant to slow down the updates train, there are more stringent requirements for critical path packages. Those are the packages that provide essential functionality without which the system is unusable. That includes various system-level packages (kernel, init system, X server, etc.), but the updates policy has expanded the list to include things like desktop environments, important desktop applications (Firefox, Konqueror, Evolution, Thunderbird, etc.), and the package updating tools (PackageKit and friends). Pushing out an update to any of those packages, even a security update, requires a "karma" sum of two or higher in Bodhi, and one of the positive votes must come from a proven tester.

For updates that do not affect the critical path, the requirements are relaxed somewhat. Those updates can either pass the criteria for the critical path, reach a (presumably lower) karma threshold specified by the maintainer, or spend at least a week in the updates-testing branch. But, once again, it is stressed that the changes should not affect the ABI/API or user experience "if at all possible".

Different direction?

McGrath's proposal is to shift Fedora from a packaging organization into more of a development organization, with a focus on providing open source "cloud" applications and services. While it fits in just fine with the vision statement, it is a radical departure from what most folks think of as Fedora. The reaction on the fedora-advisory-board mailing list has been, not surprisingly, mixed. Some community members are excited about a shift in that direction, while others are less so.

There is a real question, though, of how Fedora would go about making this change, even if the board and community were completely behind it. As Jesse Keating points out:

Again, what exactly are you proposing the board do then? It's not as if the board has resources they can say "stop working on foo, start working on bar", or have resources to go out and hire Bob, Jim, and Sue to start working on bar.

Keating is concerned that McGrath's proposal will be "another drive-by 'hey, we should be doing THIS thing over here, somebody should look into that.'" But McGrath sees it as a bigger project that might involve other organizations, so it is something that the board would have to facilitate:

I'm proposing a complete reorganization of The Fedora Project. Leave FESCo and their current role as it is. Figure out how to create a new FESCo type org for this new goal. I'm proposing the board find/request the resources to make this happen. Contact the likes of mozilla perhaps even google. Look around and see who else is interested in contributing resources and see if this is feasible. If the board's job isn't to set vision, policy and find resources, what is it?

Free (as in freedom) cloud services have been on the minds of lots of FOSS advocates lately. Many folks are increasingly locking their data up in proprietary web applications, at least partially because there are no alternatives. It may be too late to disconnect the general public from services like Facebook, but even the staunchest free software advocate would be hard-pressed to point to a free, working alternative. If no one in the FOSS world starts working on cloud applications, we will remain stuck in that uncomfortable position.

There are hopes that things like Diaspora will fill the role of Facebook for privacy and freedom-conscious users and there are some other nascent efforts to fill in other holes, but there isn't, yet, any umbrella project that is looking at the whole picture. That is what McGrath would like to see Fedora evolve into. It seems like that may be a hard sell for the Fedora community (and its sponsor Red Hat), but it would be a very valuable project for some new or existing FOSS organization to take on.


While it may seem rather late for Fedora to be hashing these things out (after 13, nearly 14, separate releases over seven years), it is a sign that the distribution has reached a critical mass. Over the last year or two, various factions have been pulling Fedora in different directions, without much guidance from the board or FESCo. Those competing interests have finally caused the project to really consider its focus and direction. There are undoubtedly those who will be unhappy with the update policy, possibly to the point of leaving the project, but for those who remain, it should make the project a friendlier, and easier, place to work.

Comments (2 posted)

Brief items

Distribution quote of the week

Not cool. It's like you're getting kids under the drinking age all fired up about a new club, and when they actually show up, they are bounced at the door. How rude! If you're going to recruit folks like this to help Linux out, Linux needs to be something they can be inspired by — something they can actually use. Otherwise, why will they care? And for the few who either are inspired already and see the potential, or who find out about free software & culture on their own and have some interest in it, it's not just that they have to gear up just to be able to join your project — there's alternatives calling out to them that are more welcoming and far easier to get started with.
-- Máirín Duffy

Comments (3 posted)

Smeegol 1.0 released

The openSUSE project has announced the release of Smeegol 1.0. "Smeegol is an openSUSE volunteer effort by the Goblin Team to create an openSUSE interpretation of the MeeGo user experience, offering the compelling advantages of the openSUSE infrastructure. Users are able to pull from the full openSUSE ecosystem for applications, using repositories on the Build Service and other 3rd party repositories. Moreover, thanks to SUSE Studio anyone can now easily create a customized Smeegol based OS from a convenient web interface! On SUSE Gallery you can find an appliance (Featured Appliance this week) ready to be cloned for customization. Finally, openSUSE users can easily install Smeegol using the openSUSE one click install technology."

Full Story (comments: 2)

Ubuntu 10.10 release candidate is out

Ubuntu has announced the availability of the release candidate for Ubuntu 10.10 ("Maverick Meerkat"). It is "complete, stable, and suitable for testing by any user", according to the announcement, which also comes with a Hitchhiker's Guide riff: "Releases are big. You just won't believe how vastly, hugely, mind-bogglingly big they are. I mean, you may think it's a long haul to release a single Linux package or application, but that's just peanuts to a Linux distribution release. Because of this, we must work our way up to it, incrementally...bit by bit...milestone by takes a lot of Deep Thought."

Full Story (comments: 18)

Distribution News

Debian GNU/Linux

Debian Release Team meeting minutes (and release update)

Click below for the minutes from the recent meeting of the Debian Release Team. Topics include Documentation, Stable Updates and Volatile, Release notes and upgrade reports, Release Update (Squeeze Status), Transitions and removals, Bug Squashing Parties, Current Release Blockers, and Proposed timeline. It's possible that squeeze will be released before Christmas.

Full Story (comments: none)

New Debian Backports Suite created

The Debian Backports Team has announced the availability of a new suite on backports: lenny-backports-sloppy. "lenny-backports-sloppy will please the group that is happy to upgrade from lenny + lenny-backports to squeeze + squeeze-backports. lenny-backports is meant only for packages from squeeze, even after the release. Technically that means it will get locked down for uploads after the release of squeeze and require manual approval (for e.g. point release update versions, or security updates that happen during the squeeze release cycle), while lenny-backports-sloppy will accept packages from wheezy. Uploading to lenny-backport will have to get approved by the Debian Backports Team after the squeeze release, just like uploads to lenny are currently approved by the Release Team."

Full Story (comments: none)

Call for Votes - GR: Debian project members

Voting is open for the General Resolution to welcome non-packaging contributors as Debian project members, until October 18, 2010.

Full Story (comments: none)


McGrath: Proposal for a new Fedora project

Mike McGrath has posted a proposal for a serious change of direction for the Fedora project. "It's no secret I'm not big on the future of the desktop. With great reflection and further research I've come to realize something else. Google is about to destroy just about everyone. There's a tiny handful of people that don't like the idea of cloud computing and information 'in the cloud'. The majority of the world though in love with it or will be and not know it. The problem: Free Software is in no position to compete with the web based applications of the Google of tomorrow." He would like to reorganize Fedora to help developers create applications that will be competitive in that world.

Full Story (comments: 60)

Fedora Board Meetings, 27 Sept 2010 and 1 Oct 2010

Máirín Duffy provides a summary of the Fedora Board meetings held on September 27 and October 1.

Comments (none posted)

Other distributions

CentOS 3 1-Month End Of Life

CentOS 3 will not be supported after October 31. "It is recommended that any system still running CentOS 3 should be upgraded to a more recent version of CentOS before this date to ensure continued security and bug fix support."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

New Fedora Linux Project Leader Building More Than a Distro (CIO Update)

Sean Michael Kerner talks with Fedora Project Leader Jared Smith. "Smith's vision for Fedora is about ensuring that the Fedora community is an inclusive place where multiple views and contributions are welcome. Smith doesn't necessarily have any new or unique tools for building community, but he does bring a different background to the position than past Fedora Project Leaders. "I came from another open source company that had the same business model as Red Hat," Smith said. "So I've had some experience in how to keep people motivated, how to move things forward and I think we've already implemented some of the things that I like to see.""

Comments (1 posted)

Tiny Core: Ultralight DIY distribution (Linux Journal)

Linux Journal has a review of Tiny Core. "When reviewing a lightweight distribution, the term Swiss Army knife is sometimes employed to indicate that it's packed with features despite a diminutive size. However, at 11MB for the ISO, Tiny Core is more of a blank-slate distribution, as when booted from a CDROM or a USB stick, it presents the user with a simple desktop consisting merely of a task launcher and a package manager. It contains some good ideas and it's already perfectly usable, but I think it needs a few more refinements in order to become great."

Comments (none posted)

Page editor: Rebecca Sobol


The state of Linux gaming

October 6, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

All work and no play makes for unhappy users. For Linux users, finding satisfying games to play can be a challenge, though not an insurmountable one.

History and Failed Attempts

Many have hoped to replace Windows and other proprietary desktop systems with Linux, so commercial and community efforts have naturally targeted Linux as a gaming platform over the years. Many, if not most, of these efforts have failed or have enjoyed only a modest amount of success.

Consider, for instance, Loki, which struggled and ultimately failed in its bid to port Windows games to Linux. The company landed several major publishing deals to port major (at the time) games to Linux. It brought very popular games to Linux, including Unreal Tournament, Sid Meier's Civilization, and (this author's favorite) Quake III Arena. Despite providing a decent selection of popular and current games for Linux, the existing Linux desktop market in 2000 and 2001 was simply too small to support the company — and the existence of a selection of popular games was not enough to drive adoption of Linux.

One of Mandrake's (later Mandriva's) unsuccessful products was a Gaming Edition based on Mandrake 8.1. The Gaming Edition added TransGaming's WineX to help install Windows-based games, along with a copy of The Sims. Despite being only slightly more expensive than buying The Sims standalone, the Gaming Edition didn't sell well enough to merit a repeat; Mandrake never released a second version.

WineX was a customized version of Wine optimized for playing Windows games. Eventually that became Cedega, which is still in active development and competes with the similarly Wine-based CodeWeavers CrossOver Games.

All of these efforts were or are proprietary in whole or in part, and derivative of existing work: they were either porting proprietary games to Linux, or enabling proprietary Windows-based games to run on Linux. But several projects are also trying to bring quality, native, open source games to Linux.

Going Concerns and Native Efforts

Finding games for Linux is not difficult, particularly if one seeks only simple puzzle, card, or board game analogs on the computer. For example, GNOME and KDE each ship a handful of simple games that provide ample amusement during conference calls or to while away a few minutes between more productive tasks. Users who enjoy card games, Mahjong, Sudoku, Chess, and other similar games will find the selection much to their satisfaction.

But users looking for games competitive with the more complex, immersive, arcade-style titles easily found on Windows will come up with just a handful. For example, Armagetron is a multiplatform game that takes its cue from the lightcycles in Tron. Several games have been developed based on the GPLed Quake III Arena engine released by id Software, including OpenArena, Nexuiz/Xonotic, World of Padman, Tremulous, and ioquake3.

Players who enjoy role playing games and multiplayer action have found Battle for Wesnoth to be particularly satisfying. Other players prefer old DOS games reimagined, such as Scorched 3D, or clones of Super NES games like the addictive Crack Attack! Aspiring air guitarists might enjoy the Rock Band clone Frets on Fire, which lets players test their virtual guitar skills via the keyboard.

Ryzom was a popular massively multiplayer online role-playing game (MMORPG) that went through a long journey before being released as open source. After various campaigns starting back in 2006, it was finally released as free software in May. Ryzom looks to be under active development and if you poke around long enough on the developer site you can find the install instructions for getting it running on Linux.

Another MMORPG is WorldForge, which has been under development since 1997. It seems to be a fairly active community with plenty of development going on. It's no substitute for World of Warcraft yet, as it is still under heavy development, but it does look like something that will provide a rich environment for many styles of MMORPGs down the road.

Bundling Linux games

Still, Linux doesn't quite match Windows for games in terms of variety or quality. You can find a handful of quality games for Linux if you are willing to look, certainly enough to while away a few weekends or evenings in front of the computer, but hard-core gamers are going to be dissatisfied. The latest and greatest blockbuster games usually don't run on Linux.

Casual gamers will fare better if they can find Linux games. Users who are new to Linux and searching for games can have a hard time discovering suitable games for their tastes without guidance. It helps to have a unifying project that pulls together a selection of games, such as the Fedora Games Live DVD, a "spin" of Fedora that focuses on Linux gamers.

The Fedora Games Spin serves several purposes. First, it's a good test disc for seeing whether hardware is suitable for 3D gaming on Linux. It also, of course, bundles many native Linux games that are fully free software. Not only are the standard-issue arcade and FPS-type games included, but also games suited for kids and flight simulators.

The full list of games is available on the Fedora Wiki. The current release is based on Fedora 13, and it is the third release since the project started with a spin based on Fedora 11. The DVD doesn't actually contain all games that are packaged for Fedora, but a selection that the spin team feels is most representative of the best gaming on Linux.

Another showcase effort is a live image that, like the Fedora spin, can be booted from CD, DVD, or USB key. Based on Arch Linux, this live CD contains fewer games than the Fedora spin, and focuses primarily on action games rather than also including educational content.

There's a new site for Ubuntu users called Ubuntu Gamer that provides tips and news about Linux-based games. The site has only been up for a bit over a week, but it's off to a strong start.

What seems lacking is any concerted effort to encourage more game development on Linux and open source platforms. While you can find plenty of games on Linux, they lag significantly behind offerings for Windows and the popular gaming consoles in terms of production values and maturity of the gaming engines. Developers interested in writing games in Python can find resources via pygame, but there is little that specifically encourages game development on Linux.

Mozilla Gaming

As users turn to Web-based applications in larger numbers, it seems natural that they would look to Web-based games as well. In fact, many already do in the form of (annoying) Facebook games like Farmville, Flash-based games, and multiplatform plugins like Quake Live. Linux users are on equal footing here, since these browser-based options are all supported on Linux as well as Windows and Mac OS X. Linux users on non-x86 platforms, however, are left behind because the games are tied to proprietary pieces that run only on x86/x86-64 Linux systems.

The Mozilla Project is attempting to encourage development of Web-based games using "open Web technology." The Mozilla Labs Gaming project was announced in early September, and kicked off with a contest launched on September 30th.

Dubbed "Game On 2010," the contest calls for developers to create a game using open Web technology, defined as HTML, CSS, JavaScript, and server-side code written in PHP, Python, Java, or other languages. No plugins are allowed. The games will be judged on six criteria, including polish, aesthetics, originality, and whether they showcase the "power of open Web technologies." Submissions are due by January 11th, 2011, and winners will get a trip to the Game Developers Conference in San Francisco on February 28th.

Aside from the contest, though, the Mozilla Labs Gaming project is little more than an idea. Whether it will pick up steam remains to be seen. It should be interesting to see what the contest produces, but it would be nice if the labs project at least had some developer resources or guidance for getting started on developing browser-based games.

For now, Linux remains a poor cousin to Windows when it comes to gaming. While you can find many good games for Linux, the selection and quality are not comparable to the thousands of titles available for Windows and proprietary gaming consoles. If browser-based gaming takes off, it seems likely that Linux users will be on even footing with Windows and Mac users.

Comments (37 posted)

Brief items

Firebird 2.5

Version 2.5 of the Firebird relational database manager has been announced; see the release notes for details. "The primary goal for Firebird 2.5 was to establish the basics for a new threading architecture that is almost entirely common to the Superserver, Classic and Embedded models, taking in lower level synchronization and thread safety generally. Although SQL enhancements are not a primary objective of this release, for the first time, user management becomes accessible through SQL CREATE/ALTER/DROP USER statements and syntaxes for ALTER VIEW and CREATE OR ALTER VIEW are implemented. PSQL improvements include the introduction of autonomous transactions and ability to query another database via EXECUTE STATEMENT."

Comments (2 posted)

Ganeti 2.2.0 released

Version 2.2.0 of the Ganeti virtualization cluster manager has been released. Major changes include better DRBD support, experimental LXC support, intra-cluster instance moves, and more.

Comments (none posted)

LLVM 2.8 is available

The LLVM compiler project has announced the release of version 2.8. "LLVM 2.8 includes broad improvements in the core LLVM project and notably includes major improvements to Clang C++ support (which is now feature complete and quite usable). In addition (and though they are not included as part of the 2.8 release) two major new subprojects have joined the LLVM project: libc++ and LLDB." Click below for the announcement, or see the release notes for the details.

Full Story (comments: 35)

PostgreSQL security update

PostgreSQL versions 9.0.1, 8.4.5, 8.3.12, 8.2.18, 8.1.22, 8.0.26 and 7.4.30 have been released to fix a security issue and a few other serious problems. "The security vulnerability allows any ordinary SQL users with 'trusted' procedural language usage rights to modify the contents of procedural language functions at runtime. As detailed in CVE-2010-3433, an authenticated user can accomplish privilege escalation by hijacking a SECURITY DEFINER function (or some other existing authentication-change operation). The mere presence of the procedural languages does not make your database application vulnerable." One might think that a fairly serious database is needed just to keep up with all of the supported versions, but that situation will now be simplified: this is the final update for versions 7.4.x and 8.0.x, and 8.1.x will go unsupported before the end of the year.

Full Story (comments: none)

Sawfish 1.7.0 "Frozen Flame" released

Version 1.7.0 of the venerable Sawfish window manager is out. New features include XFCE integration, better GNOME/KDE integration, a new emacs major mode, and more.

Full Story (comments: 1)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Hutterer: Thoughts on Linux multitouch

Peter Hutterer has posted some lengthy thoughts about the current state and future directions for multitouch support on Linux. "Why is it taking us so long when there's plenty of multitouch offerings out there already? The simple answer is: we are not working on the same problem. [...] If we look at commercial products that provide multitouch, Apple's iPhones and iPads are often the first ones that come to mind. These provide multitouch but in a very restrictive setting: one multi-touch aware application running in full-screen. Doing this is [surprisingly] easy from a technical point of view, all you need is a new API that you write all new applications against."

Comments (none posted)

Seigo: on the impending future of ui greatnesses

On his blog, KDE hacker Aaron Seigo disagrees with the idea that the desktop as we know it is likely to disappear. "Now, our way of writing applications for "the desktop" may change over the next decade, but the desktop will still be with us. People will still want a way to launch their apps, manage the shapes they appear in on the screen (aka "windows", since I assume that HTML5CloudAwesomeness doesn't mean "everything is fullscreen with one app at a time" for most people), will want to place these HTML5CloudAwesomenesses around their screen (aka "desktop widgets"), etc. That could, indeed, be written in HTML and [Javascript], but it will still exist. [...] So what appears inside of our windows may change in the form of where some or all of the data being manipulated is stored and/or what language is used to write them .. but it will still be a lot like a laptop computer."

Comments (14 posted)

systemd for administrators - script conversion

The third installment of Lennart Poettering's "systemd for administrators" series has been posted; this one focuses on converting SYSV init scripts to systemd. "And that's all there is to it. We have a simple systemd service file now that encodes in 10 lines more information than the original SysV init script encoded in 115. And even now there's a lot of room left for further improvement utilizing more features systemd offers. For example, we could set Restart=restart-always to tell systemd to automatically restart this service when it dies. Or, we could use OOMScoreAdjust=-500 to ask the kernel to please leave this process around when the OOM killer wreaks havoc."
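As an illustration of the kind of conversion the article describes, a simple forking daemon's lengthy init script might collapse into a unit file along these lines. This is a hypothetical sketch — the daemon name and path are made up — with the article's suggested refinements shown as comments:

```ini
[Unit]
Description=Frobnication daemon (hypothetical example)
After=syslog.target

[Service]
Type=forking
ExecStart=/usr/sbin/frobd
# Possible refinements mentioned in the article:
#   Restart=restart-always    restart the service automatically if it dies
#   OOMScoreAdjust=-500       ask the kernel to spare it from the OOM killer

[Install]
WantedBy=multi-user.target
```

The point of the series is that the declarative directives above replace the boilerplate (PID-file handling, status checks, LSB headers) that every SysV script had to reimplement by hand.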

Comments (none posted)

The OpenOffice fork is officially here (Computerworld)

Over at Computerworld, Steven J. Vaughan-Nichols is reporting that, perhaps unsurprisingly, Oracle does not plan to work with the new Document Foundation and LibreOffice project. "As for The Document Foundation's offer for Oracle to work with them on streamlining and improving the OpenOffice development process, [Oracle public relations said]: 'The beauty of open source is that it can be forked by anyone who chooses, as was done [by The Document Foundation]. Our sincerest goal for OpenOffice is that it becomes more widely used so, if this new foundation will help advance OpenOffice and the Open Document Format (ODF), we wish them the best.'"

Comments (38 posted)

Page editor: Jonathan Corbet


Non-Commercial announcements

Software Freedom Conservancy appoints Kuhn as full-time executive director

The Software Freedom Conservancy (SFC) has announced the appointment of Bradley M. Kuhn as its full-time executive director. The SFC provides a non-profit home for member free software projects—such as Mercurial, BusyBox, Samba, Inkscape, and 18 others—without the projects having to obtain and maintain individual non-profit status. "Kuhn brings to Conservancy two decades of experience in software freedom volunteerism and ten years of non-profit management and organizational experience. From 2001 to 2005, Kuhn was Executive Director of the Free Software Foundation in Boston, MA. More recently, from 2005 to 2010, Kuhn worked as Policy Analyst and Technology Director of the Software Freedom Law Center (SFLC)." On his blog, and the SFC blog, Kuhn adds: "For four years, I have worked part-time on nights, weekends, and lunch times to keep Conservancy running and to implement and administer the services that Conservancy provides to its member projects. It's actual quite a relief to now have full-time attention available to carry out this important work."

Comments (2 posted)

GNOME Quarterly Report

They are running a bit behind, but the GNOME Foundation has released the GNOME Quarterly Report for the second quarter of 2010. Several GNOME teams have updates in this report, including the Board of Directors, Localization, GNOME Marketing, Bug Squad, GNOME Outreach Program for Women, Membership, Usability, GNOME Mobile, Art Team, Documentation Team, Travel Committee, GNOME Events, Release Team, and Finance.

Comments (none posted)

Commercial announcements

Black Duck acquires Ohloh

Black Duck Software has announced the acquisition of the site from Geeknet. "Black Duck plans to use the acquisition to help enhance and expand FOSS adoption by making it easier for developers to tap the huge body of high-quality code in open source projects, and collaborate with their peers through the Ohloh community. By working with the FOSS community, including forges, foundations and other code repositories as well as individual projects and developers, Black Duck will expand and enrich Ohloh with improved data and new productivity tools. Black Duck will integrate Ohloh assets with Black Duck's free code search site, and will infuse it with a complete set of FOSS project data from Black Duck's comprehensive KnowledgeBase, to create a single premier web destination that developers can turn to as a trusted source of FOSS knowledge."

Comments (6 posted)

Articles of interest

WebP, a new image format for the Web (The Chromium Blog)

On its Chromium Blog, Google has announced a new image format called WebP. It is based on techniques from Google's recently open-sourced VP8 video codec and shows some significant size reductions for image data. There is also a gallery available to compare original and WebP-compressed images. "While the benefits of a VP8 based image format were clear in theory, we needed to test them in the real world. In order to gauge the effectiveness of our efforts, we randomly picked about 1,000,000 images from the web (mostly JPEGs and some PNGs and GIFs) and re-encoded them to WebP without perceptibly compromising visual quality. This resulted in an average 39% reduction in file size. We expect that developers will achieve in practice even better file size reduction with WebP when starting from an uncompressed image." (Thanks to Martin Jeppesen.)

Comments (33 posted)

Red Hat settles patent case with Acacia - shares few details

Sean Michael Kerner shares his concerns that Red Hat has not been entirely forthcoming with the details of this case. "As to how Red Hat has settled the alleged IP infringement, that's where the transparency (or lack thereof) is my concern. When I asked Red Hat about the patent settlement with Acacia I got the following statement: "Red Hat routinely addresses attempts to impede the innovative forces of open source via allegations of patent infringement. We can confirm that Red Hat, Inc and Software Tree LLC have settled patent litigation that was pending in federal court in the Eastern District of Texas (Civil Action No. 6:09-cv-00097-LED)."" (Thanks to Don Marti)

Comments (15 posted)

Microsoft sues Motorola, citing Android patent infringement (ars technica)

The latest in a series of patent cases involving Android has been launched by Microsoft against Motorola. Ars technica reports: "The patents are all related to key smartphone experiences that include syncing e-mails, calendars, and contacts, scheduling meetings, and notifying applications about changes in signal strength and battery power. Microsoft specifically names two Motorola devices, the Droid 2 and the Charm, but says these are just examples and not a comprehensive list."

Florian Mueller has posted his first reaction to the news here.

Comments (180 posted)

Google Answers Oracle (Groklaw)

Groklaw has Google's full response in the Oracle suit, along with the usual commentary. "It's a very aggressive and confident response to Oracle's complaint. Google asks that Oracle's complaint be dismissed, for a judgment in favor of all its counterclaims, for a declaratory judgment that Google has not infringed or contributed to any infringement of any of the patents, a declaration of the invalidity of all the Oracle patents, and a declaration that all Oracle's claims are barred by laches, equitable estoppel and/or waiver, and unclean hands."

Comments (none posted)

Newest Google Android Cell Phone Contains Unexpected 'Feature' (New America)

The New America Foundation has posted a somewhat sensationalist article on the G2 Android phone. "Specifically, one of the microchips embedded into the G2 prevents device owners from making permanent changes that allow custom modifications to the Android operating system. This is the same Android that purposefully opened up its source code under the Apache License, allowing anyone to use, modify, and redistribute the operating system code even if they choose not to contribute back to the development community." The primary source appears to be this XDA forum; it looks like the G2 has either a mechanism to rewrite the root partition or some sort of union mount that causes post-boot changes to be lost. Either way, it's not a hacker-friendly device.

Comments (25 posted)

Android: Swimming With the Patent Sharks (GigaOM)

Matt Asay discusses the Android patent wars on GigaOM. "So why didn't Google just go along with Sun and take a fee-free license to use Java ME? Because doing so would have required Google to keep its Java implementation consistent with the standard instead of forking it with its Dalvik virtual machine. As much as Google might talk about standards, Google has much to gain by keeping Android applications on the Android platform, rather than allowing them to run on competing platforms like RIM."

Comments (5 posted)

New Books

New book: "The Linux Programming Interface"

Linux man-page maintainer Michael Kerrisk's magnum opus The Linux Programming Interface is now available from No Starch Press. The 1500-page book covers Linux system calls and library APIs for system programming, with multiple example programs and diagrams. "It can be difficult and time-consuming to learn how to develop system programs for Linux. It's not unusual for programmers to scour several manuals--or hundreds of web pages--before finding the information they need. According to Michael Kerrisk, 'The Linux Programming Interface is the book I wanted when I first switched from UNIX to predominantly working in Linux more than a decade ago.' He added that it is '...a broad and deep system programming book that covers Linux-specific details while also clearly delineating standard features available on all UNIX systems. Long before I completed writing this book, it had already become my own primary system programming reference.'"

Full Story (comments: 21)

Building Android Apps with HTML, CSS, and JavaScript--New from O'Reilly

O'Reilly has released "Building Android Apps with HTML, CSS, and JavaScript" by Jonathan Stark.

Full Story (comments: none)

JavaScript Patterns and Closure: The Definitive Guide--New from O'Reilly

O'Reilly has released "JavaScript Patterns" by Stoyan Stefanov and "Closure: The Definitive Guide" by Michael Bolin.

Full Story (comments: none)

CE Linux Forum Newsletter: September 2010

The CE Linux Forum newsletter for September 2010 covers the Embedded Linux Conference Europe and U-Boot ARM Enhancements.

Full Story (comments: none)

FSFE: Newsletter October 2010

The Free Software Foundation Europe Newsletter for October 2010 is out. "In this edition we discuss the misleading term "fair, reasonable and non-discriminatory terms" (FRAND), we explain what we are doing about centralised computer systems and the Internet Governance Forum (IGF), and update you on our current campaign to end non-free software commercials by public institutions."

Full Story (comments: none)

Calls for Presentations

Linux Audio Conference 2011

The Linux Audio Conference 2011 will be held May 6-8, 2011 in Ireland. The call for papers will be open until January 15, 2011.

Full Story (comments: none)

FOSDEM 2011 call for talks

FOSDEM 2011 will have a distribution miniconf. "Though it is not yet certain what the details will look like, it is certain that there will be room for distribution-related talks; so this is a call for talk proposals for the distributions rooms at FOSDEM 2011."

Full Story (comments: none)

FOSS.IN/2010 CFP closing soon

FOSS.IN is (in your editor's opinion) the premier free software event in India; this year it is happening from December 15 to 17 in Bangalore. The call for participation is about to close; anybody who would like to be a part of the event should get their proposals in before October 10.

Comments (none posted)

PyCon 2011 Call For Tutorials

PyCon 2011 will be held March 9th through the 17th, 2011 in Atlanta, Georgia. The call for tutorial proposals is open until November 1, 2010. "Tutorials are 3-hour long classes (with a refreshment break) taught by some of the leading minds in the Python community. Classes range from beginner (Introduction to Python) to advanced (OOP, Data Storage and Optimization) and everything in between."

Full Story (comments: none)

Upcoming Events

lca2011 Announces more Keynote Speakers

The linux.conf.au 2011 organizing team has announced two more keynote speakers for lca2011 in Brisbane, Australia. They are Eric Allman, the original author of Sendmail, and Geoff Huston, the Chief Scientist at the Asia Pacific Network Information Centre (APNIC), the Regional Internet Registry serving the Asia Pacific region.

Full Story (comments: none)

Desktop Summit 2011

The Desktop Summit is a co-located event which features the yearly contributor conferences of the GNOME and KDE communities, GUADEC and Akademy. Next year the conference will take place from August 6-12, 2011 in Berlin. "The GNOME and KDE communities develop the majority of Free Software desktop technology. Increasingly, they cooperate on underlying infrastructure. By holding their annual developer flagship events in the same location, the two projects will further foster collaboration and discussion between their developer communities. Moreover, KDE and GNOME aim to work more closely with the rest of the desktop and mobile open source community. The summit presents a unique opportunity for main actors to work together and improve the free and open source desktop for all."

Full Story (comments: none)

Open Source Health Informatics Conference

The Open Source Health Informatics Conference will be held on October 27, 2010 in London. "The focus of this conference will be around the place that Open Source software should have in UK healthcare and how a coherent community might be established around it. For example would: An NHS version of OpenOffice be a practical proposition?; Could the skillsets that exist within UK healthcare be utilised to create sustainable implementations of Open Source software?; How would the requirements for this be gathered?; Is standardisation via Open Source software a viable aim across the UK healthcare sector?"

Full Story (comments: none)

ON2: Test Signals

ON2: Test Signals is a festival exploring new forms for radio and software. "The festival will bring together software developers and radio practitioners to demonstrate, discuss and develop new ways of applying software to radio on Friday 22 October and Saturday 23 October at Direktorenhaus, Berlin."

Full Story (comments: none)

Events: October 14, 2010 to December 13, 2010

The following event listing is taken from the LWN.net Calendar.

October 11 - October 15: 17th Annual Tcl/Tk Conference, Chicago/Oakbrook Terrace, IL, USA
October 16: FLOSS UK Unconference Autumn 2010, Birmingham, UK
October 16: Central PA Open Source Conference, Harrisburg, PA, USA
October 18 - October 21: 7th Netfilter Workshop, Seville, Spain
October 18 - October 20: Pacific Northwest Software Quality Conference, Portland, OR, USA
October 19 - October 20: Open Source in Mobile World, London, United Kingdom
October 20 - October 23: openSUSE Conference 2010, Nuremberg, Germany
October 22 - October 24: OLPC Community Summit, San Francisco, CA, USA
October 25 - October 27: GitTogether '10, Mountain View, CA, USA
October 25 - October 27: Real Time Linux Workshop, Nairobi, Kenya
October 25 - October 27: GCC & GNU Toolchain Developers' Summit, Ottawa, Ontario, Canada
October 25 - October 29: Ubuntu Developer Summit, Orlando, Florida, USA
October 26: GStreamer Conference 2010, Cambridge, UK
October 27: Open Source Health Informatics Conference, London, UK
October 27 - October 29: hack.lu 2010, Parc Hotel Alvisse, Luxembourg
October 27 - October 28: Embedded Linux Conference Europe 2010, Cambridge, UK
October 27 - October 28: Government Open Source Conference 2010, Portland, OR, USA
October 28 - October 29: European Conference on Computer Network Defense, Berlin, Germany
October 28 - October 29: Free Software Open Source Symposium, Toronto, Canada
October 30 - October 31: Debian MiniConf Paris 2010, Paris, France
November 1 - November 2: Linux Kernel Summit, Cambridge, MA, USA
November 1 - November 5: ApacheCon North America 2010, Atlanta, GA, USA
November 3 - November 5: Linux Plumbers Conference, Cambridge, MA, USA
November 4: 2010 LLVM Developers' Meeting, San Jose, CA, USA
November 5 - November 7: Free Society Conference and Nordic Summit, Gothenburg, Sweden
November 6 - November 7: Technical Dutch Open Source Event, Eindhoven, Netherlands
November 6 - November 7: HackFest 2010, Hamburg, Germany
November 8 - November 10: Free Open Source Academia Conference, Grenoble, France
November 9 - November 12: OpenStack Design Summit, San Antonio, TX, USA
November 11: NLUUG Fall conference: Security, Ede, Netherlands
November 11 - November 13: 8th International Firebird Conference 2010, Bremen, Germany
November 12 - November 14: FOSSASIA, Ho Chi Minh City (Saigon), Vietnam
November 12 - November 13: Japan Linux Conference, Tokyo, Japan
November 12 - November 13: Mini-DebConf in Vietnam 2010, Ho Chi Minh City, Vietnam
November 13 - November 14: OpenRheinRuhr, Oberhausen, Germany
November 15 - November 17: MeeGo Conference 2010, Dublin, Ireland
November 18 - November 21: Piksel10, Bergen, Norway
November 20 - November 21: OpenFest - Bulgaria's biggest Free and Open Source conference, Sofia, Bulgaria
November 20 - November 21: Kiwi PyCon 2010, Waitangi, New Zealand
November 20 - November 21: WineConf 2010, Paris, France
November 23 - November 26: DeepSec, Vienna, Austria
November 24 - November 26: Open Source Developers' Conference, Melbourne, Australia
November 27: Open Source Conference Shimane 2010, Shimane, Japan
November 27: 12. LinuxDay 2010, Dornbirn, Austria
November 29 - November 30: European OpenSource & Free Software Law Event, Torino, Italy
December 4: London Perl Workshop 2010, London, United Kingdom
December 6 - December 8: PGDay Europe 2010, Stuttgart, Germany
December 11: Open Source Conference Fukuoka 2010, Fukuoka, Japan

If your event does not appear here, please tell us about it.

Audio and Video programs

Video sessions available from KVM Forum 2010

Videos from the recent KVM Forum are available for viewing.

Comments (none posted)

Page editor: Rebecca Sobol

Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds