
LWN.net Weekly Edition for August 18, 2011

Desktop Summit: Claire Rowland on service design

By Jake Edge
August 17, 2011

When thinking about user interface design, many will focus on the application itself, but Claire Rowland, an interaction designer and researcher, looks at things a bit differently. She came to the Desktop Summit in Berlin to describe "service design", which encompasses more than just the interface for a particular application. Looking at the service that is being provided, and focusing on the "touchpoints" for that service, makes for a more holistic view of interface design. That will become increasingly important as we move into a world where more and more "ordinary" devices become connected to the internet.

[Claire Rowland]

Rowland set the tone for her talk by playing a short video from the Smarcos project, which outlined the kinds of devices and connectivity between them that we are likely to see over the next few years. Things in the real world that have not been connected to the internet, like toilets, pets, or bathroom scales, are headed in that direction. Since February 2011, AT&T has had more new machine subscribers (i.e. devices of various sorts) than human subscribers, and it is estimated that there will be 50 billion connected devices by 2020.

The video described the challenge of making the systems—services—surrounding these devices usable. It also pointed out the problems with ensuring that users are in control of the data that gets shared, as well as the challenges in making the service understandable. Some of the presumably fictional examples shown were a washing machine flashing an "Error: update firmware" message and a coffee machine that wouldn't perform its usual task because of a "caffeine allowance exceeded" condition.

The difficulty in designing these systems is to make them usable and understandable, Rowland said, because many people "don't want to fiddle around with tech". The number of things that need to "connect up" is only increasing. Smartphones are outselling PCs these days, TVs are connecting to the web, and more environmental sensors are coming online, which presents an "interconnectivity challenge", she said. "How do we get these things to play nicely together?"

Part of the answer may lie in "service design", which is what she works on. A service simply delivers "something for users". That could be a service in the traditional computer sense of the term, or something in the real world. She used the "Post" (i.e. Postal Service in the US) as an example of the latter. There are multiple "touchpoints" for the service, whether it is buying stamps or sending and receiving packages. The value of the service is in "how the whole thing works together", she said. For digital services, it doesn't matter how well an application ("app") works in isolation; it needs to fit and work with the service as a whole.

New design metaphor needed

There is a need for a new design metaphor, Rowland said, because the old usability model of "one person sitting in front of one app" is no longer valid. That model relies on there being one core device, the screen, that creates a "work-centric" design. Those kinds of applications are context-independent and passive, waiting for a single user to perform some task. In contrast, future applications will have "interusability", she said. There will be multiple devices involved, some without a screen, and the applications will become context aware. The applications will be "content and activity-centric", cloud-based, and will target multiple users (e.g. web TV).

[Claire Rowland]

The key to designing these services will be in finding the right touchpoints and the appropriate interaction type. Touchpoints need to be right for the device being used, that is "doing the right thing on the right device". The "right thing" is not necessarily based on what the device can do, she said. While a TV can have a keyboard, that may not be the right way to interact with it, because watching TV is generally a more passive activity. Depending on the type of application, and the device in use, it may make sense to design applications to be "glanceable", and not require users to put their full attention on the application.

Today's smartphone landscape takes an approach that Rowland called the "bucket of apps". Instead of just offering a huge range of different apps, the phone's capabilities should be used to anticipate the user's needs. If the user is fumbling with their phone at a bus stop, there is no technical reason that the bus stop couldn't identify itself to the phone. That would allow the phone to present the bus schedule app as a likely choice, rather than require the user to dig it out of the bucket.

There are three elements that make a "service feel like a service", Rowland said. The first is to present a "clear mental model" of what the service is and what it can do for the user. For example, she said that Dropbox is not technically better than other alternatives, but it positions itself as simply being about sharing folders. Other similar services talk about "syncing and backup", which is "scary for some".

Continuity is another important element, so that users get the same experience on different devices. For example, an app could tag the Twitter tweets that you have seen on a particular device, so that they don't have to be downloaded on a different device. There is an effort to create "migratory interfaces", she said, where the user can move from device to device while keeping the same state and context in the service. If a user is on a mobile device looking at banking information, and the device runs low on power, the device could prompt whether to push the information to a nearby desktop. There should also be continuity "across interaction modes", so that a transaction started elsewhere could be completed via a phone call, for example.

The final piece of the service puzzle is consistency, Rowland said. No matter what kind of device or application is used to interact with the service, the experience should be consistent. If an appliance is to be controlled from a mobile phone, that doesn't mean that the phone app must have exactly the same dials and other control elements, but the labels, names, and interaction logic should be the same, she said. The kind of controls used should be appropriate to the device, but still be consistent with other ways of interacting with the service.

Clouds

The cloud user experience is a challenge for consumers, she said. Connectivity is going to fail sometimes, and to a non-technical user, the difference between losing the connection and a bug in the app is small. Losing connectivity can also lead to bad user experience when it is regained. She pointed to the Spotify music service, where users have to log in again once the connection has been restored. There may be valid security reasons for doing so, she said, but it leads to a bad user experience.

Instead of treating connection loss as an exceptional event, applications should plan for periods of disconnection. Downloading content well ahead of the time it is needed would be one example of that. The cloud also brings with it a set of privacy issues and settings that are difficult for users to get their heads around. There is a need for reasonable defaults, she said, pointing to the recent issues with Fitbit activity information showing up in Google searches. Users were probably not expecting that their sexual activity (including date and time, as well as duration) would show up there.

The desktop certainly has a role to play and will be a part of this ecosystem, Rowland said. Service design is partly about the user interfaces on devices, but it is also about how to make all the different parts work well together. Apple has staked out a claim to provide this kind of experience, but she does not want to commit to only Apple products. There "need to be alternatives" to Apple, she said, and that's where the free software world can come in.

In response to a question from the audience, Rowland had some suggestions on getting designers more involved with free software. "Designers love a challenge", she said, and free software needs to "get better at packaging itself to attract designers". She suggested going to design conferences to present free software design problems as challenges and asking for designers to step up to help solve them.

While Rowland's talk was not immediately applicable to free desktops, there was much in it to ponder. Like it or not, the vision of the interconnected future is coming, and our mundane devices and appliances are going that route as well. Making those things work well for users, while still allowing user freedom, is important, and it's something the free software community should be contemplating.

[ I would like to thank the GNOME Foundation and KDE e.V. for travel assistance to attend the Desktop Summit. ]


Android and the GPLv2 death penalty

By Jonathan Corbet
August 15, 2011
Edward Naughton is at it again: he is now claiming that most or all Android vendors have lost their right to distribute the kernel as the result of GPL violations. Naturally Florian Mueller picked it up and amplified it; he is amusingly surprised to learn that there are GPL compliance problems in the Android world. As it happens, there is no immediate prospect of Android vendors being unable to ship their products - at least, not as a result of GPL issues - but there is a point here which is worth keeping in mind.

First: please bear in mind while reading the following that your editor is not a lawyer and couldn't even plausibly play one on television.

Earlier this year, Jeremy Allison gave a talk on why the Samba project moved to version 3 of the GNU General Public License. There were a number of reasons for the change, but near the top of his list was the "GPLv2 death penalty." Version 2 is an unforgiving license: any violation leads to an automatic termination of all rights to the software. A literal reading of this language leads to the conclusion that anybody who has violated the license must explicitly obtain a new license from the copyright holder(s) before they can exercise any of the rights given by the GPL. For a project that does not require copyright assignment, there could be a large number of copyright owners to placate before recovery from a violation would be possible.

The Samba developers have dealt with their share of GPL violations over the years. As has almost universally been the case in our community, the Samba folks have never been interested in vengeance or "punitive damages" from violators; they simply want the offending parties to respect the license and come back into compliance. When the GPL was revised to become GPLv3, that approach was encoded into the license; violators who fix their problems in a timely manner automatically have their rights reinstated. There is no "death penalty" which could possibly shut violators down forever; leaving this provision behind was something that the Samba team was happy to do.

Android phones are capable devices, but they still tend not to be shipped with Samba servers installed. They do, however, contain the Linux kernel, which happens to be a GPLv2-licensed body of code with thousands of owners. Those who find it in their interest to create fear, uncertainty, and doubt around Android have been happy to seize on the idea that a GPL violation will force a vendor to locate and kowtow before all of those owners before they can ship the kernel again. There can be no doubt that this is a scary prospect.

One should look, though, at the history of how GPL violations have been resolved in the past. There is a fair amount of case history - and a much larger volume of "quietly resolved" cases - where coming into compliance has been enough. Those who have pursued GPL violations in the courts have asked for organizational changes (the appointment of a GPL compliance officer, perhaps), payment of immediate expenses, and, perhaps, a small donation to a worthy project. But the point has been license compliance, not personal gain or disruption of anybody's business; that is especially true of the kernel.

Harald Welte and company won their first GPL court case in 2004; the practice of quietly bringing violators into compliance had been going on for quite some time previously. Never, in any of these cases, has a copyright-holding third party come forward and claimed that a former infringer lacks a license and is, thus, still in violation. The community as a whole has not promised that licenses for violators will be automatically restored when the guilty parties come back into compliance, but it has acted that way with great consistency for many years. Whether a former violator could use that fact to build a defense based on estoppel is a matter for lawyers and judges, but the possibility cannot be dismissed out of hand. Automatic reinstatement is not written into the license, but it's how things have really worked.

There is an interesting related question: how extensive is the termination of rights? Each kernel release is a different work; the chances that any given piece of code has been modified in a new release are pretty high. One could argue that each kernel release comes with its own license; the termination of one does not necessarily affect rights to other releases. Switching to a different release would obviously not affect any ongoing violations, but it might suffice to leave holdovers from previous violations behind. Should this naive, non-lawyerly speculation actually hold water, the death penalty becomes a minor issue at worst.

So Android vendors probably have bigger worries than post-compliance hassles from kernel copyright owners. Until they get around to that little detail of becoming a former violator, the question isn't even relevant, of course. Afterward, software patents still look like a much bigger threat.

That said, your editor has, in the past, heard occasional worries about the prospect of "copyright trolls." It's not too hard to imagine that somebody with a trollish inclination might come into possession of the copyright on some kernel code; that somebody could then go shaking down former violators with threats of lawsuits for ongoing infringement. This is not an outcome which would be beneficial to our community, to say the least.

One would guess that a copyright troll with a small ownership would succeed mostly in getting his or her code removed from the kernel in record time. Big holders could pose a bigger threat. Imagine a company like IBM, for example; IBM owns the copyright on a great deal of kernel code. IBM also has the look of one of those short-lived companies that doesn't hang around for long. As this flash-in-the-pan fades, its copyright portfolio could be picked up by a troll which would then proceed to attack prior infringers. Writing IBM's code out of the kernel would not be an easy task, so some other sort of solution would have to be found. It is not a pretty scenario.

It is also a relatively unlikely scenario. Companies that have built up ownership of large parts of the kernel have done so because they are committed to its success. It is hard to imagine them turning evil in such a legally uncertain way. But it's not a possibility which can be ignored entirely. The "death penalty" is written into the license; someday, somebody may well try to take advantage of that to our detriment.

What would happen then? Assuming that the malefactor is not simply lawyered out of existence, other things would have to come into play. Remember that the development community is currently adding more than one million lines of code to the kernel every year. Even a massive rewrite job could be done relatively quickly if the need were to arise. If things got really bad, the kernel could conceivably follow Samba's example and move to GPLv3 - though that move, clearly, would not affect the need to remove problematic code. One way or another, the problem would be dealt with. Copyright trolls probably do not belong at the top of the list of things we lose sleep over at the moment.


Developments in Mozilla-land

August 17, 2011

This article was contributed by Nathan Willis

Mozilla announced a new rapid-release cycle for its flagship applications earlier this year, but it has not slowed down on other fronts in the interim. New work in recent weeks includes the Boot to Gecko instant-on project, collaboration with Google developers on WebRTC and Web Intents, a new security review process, and an initiative aimed at meeting the distinct needs of enterprise IT departments.

To the cloud

Boot to Gecko (B2G) was announced on July 27; it aims to build "a complete, standalone operating system for the open web." Much like ChromeOS, the idea is to build a low-resource-usage operating system for portable devices (e.g., tablets and phones) that focuses on web-delivered applications instead of locally-installed software. Notably, however, the initial announcement and the main project page both discuss the web's ability to displace proprietary "single vendor control" of application execution environments.

Obviously, Mozilla has believed in the web as an OS-agnostic delivery platform for years, as its "open web app ecosystem" and Drumbeat outreach initiative demonstrate. But the project has never spearheaded the development of an actual OS offering before. When third-party developers launched the Webian Shell project — itself a Mozilla-based desktop environment — Mozilla offered guidance and technical assistance, but did not get directly involved in its development. At the time, some industry watchers speculated that Mozilla might be wary of stepping on Google's toes with its default-search-engine-placement deal coming up for renewal later this year.

B2G and Webian are very different, at least at the moment. Webian is a replacement for the desktop environment, not a complete OS, while B2G at least plans to adopt a full software stack. B2G is also still in the very early development stage, without demos to download. But the project has outlined a number of areas where it believes new APIs will need to be developed and structures will need to be put in place to build a fully Mozilla-based OS. These include APIs for accessing hardware devices not addressed by traditional browsers (telephony, Bluetooth, and USB device access, for example), and a new "privilege model" to make sure that these devices are accessible by pages and web applications without security risks.

Interestingly enough, the B2G project pages also discuss the need for an underlying OS to boot into, describing it as "a low-level substrate for an Android-compatible device." This suggests that B2G is going after the Android, not ChromeOS, class of hardware (although, when it comes to angering Google, it is doubtful that the company would be less protective of one pet project than another).

Indeed, the B2G GitHub code currently builds only with the Android SDK and is installable only on the Nexus S 4G, although the mailing list thread in mozilla.dev.platform discusses other hardware targets. The thread (which for the moment is the only official email discussion forum for B2G) includes considerable debate about what the sub-Gecko OS needs to include, exactly what web APIs deserve top priority, and the relative merits of Android, MeeGo, webOS, and other open source operating systems as a platform.

Mozilla's Mike Shaver addressed the current use of Android as more of a project-bootstrapping move than a longer-term strategy:

We intend to use as little of Android as possible, in fact. Really, we want to use the kernel + drivers, plus libc and ancillary stuff. It's not likely that we'll use the Android Java-wrapped graphics APIs, for example. It's nice to start from something that's known to boot and have access to all the devices we want to expose.

In spite of that explanation, the debate over the OS underpinnings rolls on. Wherever the project heads, it makes for educational reading.

Web APIs

Tied in deeply to the B2G discussion is a new generation of web APIs on which to build the increasingly interactive and cross-domain web applications that the B2G vision relies on. On that front, Mozilla and Google seem to be working well together.

For example, the search giant released an open source real-time communications framework called WebRTC in May. WebRTC is a collection of voice and video streaming protocols and support libraries designed to be accessed via JavaScript, rather than through the binary plug-ins common in older web-chat applications. The release includes the iSAC and iLBC voice codecs Google acquired with its purchase of Global IP Solutions, the VP8 video codec it has owned since 2010, and support libraries to perform echo cancellation, automatic gain control, noise suppression, and cross-platform hardware access.

In addition to the media streaming components, WebRTC includes libraries to handle network buffering, error correction, and connection establishment, some of which is adapted from libjingle.

In early August, Mozilla announced it was going to adopt WebRTC as a core component of its Rainbow extension for Firefox. Rainbow allows web applications to access client-side audio- and video-recording hardware (i.e., microphones and webcams). Apart from the obvious use (person-to-person chat applications), Mozilla Labs reports that developers have written karaoke, QR code scanning, and photo booth applications. Unfortunately, even the most recent Rainbow release (0.4) does not support Linux, although the team claims it is a high priority. The Rainbow README says the project ultimately wants to not depend on any external libraries; a solid offering of audio- and video-handling through WebRTC should help.

While WebRTC occupies a low-level API slot, Web Intents implements a very high level of abstraction. The concept is inter-application communication and service discovery, so that (for example) a user could use an online image editor like Picnik to open and touch up photos hosted at another online service, like Flickr. Web Intents was announced by Google in November of 2010, based on the Intents API used by Android.

Web services "register" the actions they intend to support with <intent> tags in their page's <head> sections. The prototype framework defines a handful of default intents — share, discover, edit, view, and pick — and uses MIME types to allow services to indicate the type of data they understand. An intents-aware browser could then match compatible services together and present them as options to the user. In the meantime, the project has written a JavaScript shim that application authors can use to invoke the intents offered up by other services and to get back the results.

Mozilla's proposal tackles much the same problem. It was initially referred to as Web Activities in a July blog post, then as Web Actions in August. In both cases, however, the same general protocol is used: each service advertises a set of actions that it will support from incoming applications, based on a generally-agreed-upon set of common actions.

In an August 4th blog post, Google announced that it was "working closely with Mozilla engineers to unify our two proposals into one simple, useful API." With little more than basic demos to go on, the two APIs seem strikingly similar, although Mozilla's "Web Actions" is regarded as the clearer name in several articles in the technical press. It also includes a more definite mechanism for service discovery, which remains a fuzzy notion in the Google proposal. Currently applications needing to connect to a remote service must rely on either the user or the browser to locate compatible alternatives. Mozilla's proposal uses its Open Web App manifest storage to remember previously-discovered services. Everyone seems to agree on the value of a cross-web-application communication framework, so the protocol is worth watching, but it could be quite some time before there are any services able to make use of the system.

Freshening the security blanket

In late July, the Mozilla Security Blog posted an outline for reworking and "evolving" the project's security review process. The crux of the proposal is to better integrate security review with the overall application development process: a smoother process results in less disruption for the developers and fewer hangups for the users. As Mozilla contemplates reaching a wider audience with the increased adoption of Firefox for Mobile and its messaging products, getting the process right will help the organization grow its user base.

Specifically, the goals include performing reviews and flagging bugs earlier, ensuring that reviews produce "paths" out of trouble and not just work stoppages, more transparency in the content of reviews, and a more open and transparent format for security team meetings. There is a sample outline of the new review meeting process included in the blog post, and the team has been using it for the past few months.

The experience has been successful so far; the new process preemptively caught security flaws in Firefox's CSS animation code and server-sent DOM event handling. The full schedule of security meetings is published as publicly-accessible HTML and iCalendar data, and the results are archived on the Mozilla wiki. The new approach has also resulted in some new features being added to the Mozilla Bugzilla instance and team status pages.

Ultimately, the security team says it wants to become "fleet of foot" enough that development teams will come to it to have a review done, rather than the security team needing to initiate the review process and interrupt development.

Enterprise Firefox

In late June, PC Magazine reported that enterprise IT departments were upset by Mozilla's move to a short release cycle, arguing that the change negatively affected them by drastically shortening the support lifetime of each release. When a corporate IT consultant lamented the time it would take to test and validate multiple major releases each year, Mozilla's Asa Dotzler sparked controversy by commenting "Enterprise has never been (and I'll argue, shouldn't be) a focus of ours."

A month later, Mozilla's chief of developer engagement, Stormy Peters, announced the formation of an enterprise user working group where the project can interface with IT professionals and enterprise developers. The "enterprise developers" segment includes people who develop in-house web applications for enterprises, as well as those who use Mozilla components to develop their own software (including add-ons and XUL-based applications).

The group's wiki page lists general "help each other"-style objectives, but more importantly it outlines communication mechanisms, starting with a private mailing list and monthly phone call meetings. Each meeting has a specific topic, and both outlines and minutes are posted on the wiki. Understandably, the first few meetings tackled the new release cycle and gathered input from enterprise users on deploying Firefox and how it could be improved.

The output of the meetings also seems to be archived in "resource" pages on the wiki and integrated with related information on each particular topic. Unfortunately, the minutes from the August 8th meeting on the new rapid-release cycle have not been posted yet. The working group also has its own issues in Mozilla Bugzilla, but so far the only bugs filed deal with technical matters, such as configuration and LDAP support.

Nevertheless, the working group is a positive step. The brouhaha over enterprise support in June was primarily sparked by the attitude many read in Dotzler's comments; opening an ongoing conversation with more diplomatic overtones is arguably a better fix for that kind of problem than are Bugzilla issues. It would be nice to see the enterprise working group attempt to increase its openness and transparency by making its mailing list public, but that may simply be another one of those areas where "enterprises" and those of us who are merely "consumers" do not see eye-to-eye.

Busy times

The list of recent projects undertaken at Mozilla demonstrates the organization's new-found interest in taking its mission beyond the traditional desktop browser. Certainly the new approach to security review and the enterprise working group directly affect Firefox development, but with B2G and the various Open Web Application projects, soon the oft-used term "browser maker" may fail to accurately describe Mozilla. But it is encouraging to see that the diversified interests of the project include exploring areas — like web-only operating systems — that might otherwise be ceded to commercial interests alone.



Security

Unpredictable sequence numbers

By Jake Edge
August 17, 2011

It has been known for 15 years or more that using predictable network sequence numbers is a security risk, so most implementations, including Linux, have randomized the initial sequence number (ISN) for TCP connections. Due to performance concerns, though, Linux used the MD4 cryptographic hash, with a random seed that changed every five minutes, to create the ISN. In addition, only a partial MD4 implementation was used, which effectively limited the ISNs to 24 bits of randomness. That's all changed with a recent patch that has been merged into the mainline as well as the stable and longterm kernels.

Sequence numbers are used by TCP to keep the bytes in the connection stream in order. An ISN is established at the time the connection is made, and incremented by the number of data bytes in each packet. That way, both sides of the connection can recognize when they have received out-of-order packets and ensure that the data that gets handed off to the application is properly sequenced.

Initially, TCP specified that ISNs would increment every four microseconds to avoid having multiple outstanding connections with the same sequence number. But, in the mid-90s, it was recognized that predictability in choosing ISNs could be used by attackers to potentially inject packets into the setup of a connection, or into an established session itself. That led to RFC 1948, which suggested establishing a separate sequence number space for each connection, and randomizing the ISNs based on the connection parameters.

Basically, the idea is that by using the source address/port and destination address/port as input to a cryptographic hash (the RFC suggests MD5), along with a random seed generated at boot time, an unpredictable ISN can be created. But Linux went its own way, using the partial MD4 and resetting the random seed frequently (which was meant to add some additional unpredictability).
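In rough outline, the RFC 1948 scheme adds a keyed hash of the connection parameters to the original clock-driven counter. The following is a minimal sketch of that idea, not the kernel's actual code; keyed_hash(), boot_secret, and microsecond_clock() are hypothetical stand-ins for the real primitives:

    /*
     * RFC 1948-style ISN generation, roughly: a secret, per-connection
     * offset plus the classic 4-microsecond clock.
     */
    u32 secure_isn(u32 saddr, u16 sport, u32 daddr, u16 dport)
    {
    	/* F(): a fixed offset for a given (saddr, sport, daddr, dport) */
    	u32 offset = keyed_hash(saddr, sport, daddr, dport, boot_secret);

    	/* M: the clock component, so the sequence space still advances */
    	return offset + microsecond_clock() / 4;
    }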

According to the description in David Miller's patch, Dan Kaminsky recently alerted the kernel security mailing list (i.e. security@kernel.org, which is a closed list for security discussions) that the Linux ISN generation was vulnerable to brute force attacks. Presumably, the increased speed of today's computers coupled with the higher bandwidth available means that a brute force attack against a 24-bit space is more plausible today. Also, as Miller points out, the increase in computer speed also means that the need for using MD4 for performance reasons has likely passed.

Over the years since RFC 1948, MD5 has been considerably weakened, so SHA-1 was also considered for the Linux fix. But, as Miller describes it, the performance cost was simply too high:

MD5 was selected as a compromise between performance loss and theoretical ability to be compromised. Willy Tarreau did extensive testing and SHA1 was found to harm performance too much to be considered seriously at this time.

Down the road, a sysctl knob may be added to select different modes, Miller said. That could include the "super secure" SHA-1 version, as well as a mode that turns off any hashing for networks that run in trusted environments.

While it may have made sense at the time, it is clear that using MD4 (and effectively limiting it to 24 bits of randomness) is just too risky today. Attacks against the earlier implementation may be hard to pull off, but the effects can be rather serious. The RFC describes an attack that would inject commands into a remote shell session. While rsh is not used very frequently—at all?—any more, there are other kinds of attacks that are possible too. It's good to see this particular hole get filled.


Brief items

Security quotes of the week

Turns out we have a large index of the web, so we cranked through 20 terabytes of SWF file downloads followed by 1 week of run time on 2,000 CPU cores to calculate the minimal set of about 20,000 files. Finally, those same 2,000 cores plus 3 more weeks of runtime were put to good work mutating the files in the minimal set (bitflipping, etc.) and generating crash cases. These crash cases included an interesting range of vulnerability categories, including buffer overflows, integer overflows, use-after-frees and object type confusions.
-- Google security team on fuzzing Flash at "Google scale"

Is losing your genomic privacy an excessive price to pay for surviving cancer and evading plagues?

Is compromising your sensory privacy through lifelogging a reasonable price to pay for preventing malicious impersonation and apprehending criminals?

Is letting your insurance company know exactly how you steer and hit the gas and brake pedals, and where you drive, an acceptable price to pay for cheaper insurance?

-- Charlie Stross's USENIX 2011 keynote: Network Security in the Medium Term, 2061-2561 AD


One year of Android malware

Paolo Passeri has put up a list of malevolent Android applications discovered over the last year. "Scroll down my special compilation showing the long malware trail which characterized this hard days for information security. Commenting the graph, in my opinion, probably the turning point was Android.Geinimi (end of 2010), featuring the characteristics of a primordial Botnet, but also Android.DroidDream (AKA RootCager) is worthwhile to mention because of its capability to root the phone and potentially to remotely install applications without direct user intervention."


Cox: Six years of Red Hat Enterprise Linux 4

Red Hat security team lead Mark J. Cox writes about the "Six Years of Red Hat Enterprise Linux 4" report [PDF] on his blog. It looks at the vulnerabilities that were found and fixed in RHEL 4, along with their severity. "The data we publish is interesting to get a feel for the risk of running Enterprise Linux, but isn't really useful for comparisons with other distributions, or operating systems. One important difference is that it is Red Hat policy to count vulnerabilities and allocate CVE names to all issues that we fix, including ones that are found internally. This is not true for many other vendors including folks like Microsoft and Adobe who do not count or disclose issues they fix which were found internally."


New vulnerabilities

cgit: cross-site scripting

Package(s): cgit
CVE #(s): CVE-2011-2711
Created: August 11, 2011
Updated: August 17, 2011
Description: cgit 0.9.0.2 and prior have a cross-site scripting vulnerability exploitable by authenticated users.
Alerts:
Fedora FEDORA-2011-9588 cgit 2011-07-23
Fedora FEDORA-2011-9589 cgit 2011-07-23
openSUSE openSUSE-SU-2011:0891-1 cgit 2011-08-11


firefox: multiple vulnerabilities

Package(s): firefox
CVE #(s): CVE-2011-2989 CVE-2011-2991 CVE-2011-2985 CVE-2011-2993 CVE-2011-2988 CVE-2011-2987 CVE-2011-2990 CVE-2011-2992
Created: August 17, 2011
Updated: July 23, 2012
Description: From the Ubuntu advisory:

Aral Yaman discovered a vulnerability in the WebGL engine. An attacker could potentially use this to crash Firefox or execute arbitrary code with the privileges of the user invoking Firefox. (CVE-2011-2989)

Vivekanand Bolajwar discovered a vulnerability in the JavaScript engine. An attacker could potentially use this to crash Firefox or execute arbitrary code with the privileges of the user invoking Firefox. (CVE-2011-2991)

Robert Kaiser, Jesse Ruderman, Gary Kwong, Christoph Diehl, Martijn Wargers, Travis Emmitt, Bob Clary, and Jonathan Watt discovered multiple memory vulnerabilities in the browser rendering engine. An attacker could use these to possibly execute arbitrary code with the privileges of the user invoking Firefox. (CVE-2011-2985)

Rafael Gieschke discovered that unsigned JavaScript could call into a script inside a signed JAR. This could allow an attacker to execute arbitrary code with the identity and permissions of the signed JAR. (CVE-2011-2993)

Michael Jordon discovered that an overly long shader program could cause a buffer overrun. An attacker could potentially use this to crash Firefox or execute arbitrary code with the privileges of the user invoking Firefox. (CVE-2011-2988)

Michael Jordon discovered a heap overflow in the ANGLE library used in Firefox's WebGL implementation. An attacker could potentially use this to crash Firefox or execute arbitrary code with the privileges of the user invoking Firefox. (CVE-2011-2987)

Mike Cardwell discovered that Content Security Policy violation reports failed to strip out proxy authorization credentials from the list of request headers. This could allow a malicious website to capture proxy authorization credentials. Daniel Veditz discovered that redirecting to a website with Content Security Policy resulted in the incorrect resolution of hosts in the constructed policy. This could allow a malicious website to circumvent the Content Security Policy of another website. (CVE-2011-2990)

Bert Hubert and Theo Snelleman discovered a vulnerability in the Ogg reader. An attacker could potentially use this to crash Firefox or execute arbitrary code with the privileges of the user invoking Firefox. (CVE-2011-2992)

Alerts:
openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
Mageia MGASA-2012-0176 iceape 2012-07-21
openSUSE openSUSE-SU-2012:0567-1 firefox, thunderbird, seamonkey, xulrunner 2012-04-27
Ubuntu USN-1192-3 libvoikko 2011-10-19
openSUSE openSUSE-SU-2011:0957-2 MozillaFirefox 2011-08-30
SUSE SUSE-SA:2011:037 MozillaFirefox,MozillaThunderbird,seamonkey 2011-08-29
openSUSE openSUSE-SU-2011:0957-1 seamonkey 2011-08-29
openSUSE openSUSE-SU-2011:0935-1 mozilla-nss 2011-08-23
Fedora FEDORA-2011-11084 gnome-web-photo 2011-08-18
Fedora FEDORA-2011-11084 galeon 2011-08-18
Fedora FEDORA-2011-11084 mozvoikko 2011-08-18
Fedora FEDORA-2011-11084 xulrunner 2011-08-18
Fedora FEDORA-2011-11084 perl-Gtk2-MozEmbed 2011-08-18
Fedora FEDORA-2011-11084 gnome-python2-extras 2011-08-18
Fedora FEDORA-2011-11084 firefox 2011-08-18
Ubuntu USN-1192-2 mozvoikko 2011-08-17
Fedora FEDORA-2011-11106 gnome-python2-extras 2011-08-18
Fedora FEDORA-2011-11106 mozvoikko 2011-08-18
Fedora FEDORA-2011-11106 perl-Gtk2-MozEmbed 2011-08-18
Fedora FEDORA-2011-11106 firefox 2011-08-18
Fedora FEDORA-2011-11106 xulrunner 2011-08-18
Ubuntu USN-1192-1 firefox 2011-08-17


isc-dhcp: denial of service

Package(s): isc-dhcp
CVE #(s): CVE-2011-2748 CVE-2011-2749
Created: August 11, 2011
Updated: September 23, 2011
Description: The ISC DHCP server crashes "when processing certain packets."
Alerts:
Gentoo 201301-06 dhcp 2013-01-09
CentOS CESA-2011:1160 dhcp 2011-09-22
openSUSE openSUSE-SU-2011:1021-1 dhcp 2011-09-07
Fedora FEDORA-2011-10705 dhcp 2011-08-12
Pardus 2011-113 dhcp 2011-09-05
Fedora FEDORA-2011-10740 dhcp 2011-08-12
Mandriva MDVSA-2011:128 dhcp 2011-08-18
CentOS CESA-2011:1160 dhcp 2011-08-16
Scientific Linux SL-dhcp-20110815 dhcp 2011-08-15
Red Hat RHSA-2011:1160-01 dhcp 2011-08-15
Ubuntu USN-1190-1 dhcp3, isc-dhcp 2011-08-15
Debian DSA-2292-1 isc-dhcp 2011-08-11


libmodplug: multiple vulnerabilities

Package(s): libmodplug
CVE #(s): CVE-2011-2911 CVE-2011-2912 CVE-2011-2913 CVE-2011-2914 CVE-2011-2915
Created: August 17, 2011
Updated: March 16, 2012
Description: From the Red Hat bugzilla:

A number of vulnerabilities were reported in libmodplug, which can be exploited to cause a DoS or possibly compromise an application using the library:

1) An integer overflow error exists within the "CSoundFile::ReadWav()" function (src/load_wav.cpp) when processing certain WAV files. This can be exploited to cause a heap-based buffer overflow by tricking a user into opening a specially crafted WAV file.

2) Boundary errors within the "CSoundFile::ReadS3M()" function (src/load_s3m.cpp) when processing S3M files can be exploited to cause stack-based buffer overflows by tricking a user into opening a specially crafted S3M file.

3) An off-by-one error within the "CSoundFile::ReadAMS()" function (src/load_ams.cpp) can be exploited to cause a stack corruption by tricking a user into opening a specially crafted AMS file.

4) An off-by-one error within the "CSoundFile::ReadDSM()" function (src/load_dms.cpp) can be exploited to cause a memory corruption by tricking a user into opening a specially crafted DSM file.

5) An off-by-one error within the "CSoundFile::ReadAMS2()" function (src/load_ams.cpp) can be exploited to cause a memory corruption by tricking a user into opening a specially crafted AMS file.

Alerts:
Gentoo 201203-16 libmodplug 2012-03-16
Gentoo 201203-14 audacious-plugins 2012-03-16
Debian DSA-2415-1 libmodplug 2012-02-22
CentOS CESA-2011:1264 gstreamer-plugins 2011-09-08
Scientific Linux SL-gstr-20110906 gstreamer-plugins 2011-09-06
Ubuntu USN-1255-1 libmodplug 2011-11-09
Red Hat RHSA-2011:1264-01 gstreamer-plugins 2011-09-06
Pardus 2011-112 libmodplug 2011-09-05
openSUSE openSUSE-SU-2011:0943-1 libmodplug 2011-08-25
Fedora FEDORA-2011-10503 libmodplug 2011-08-09
Fedora FEDORA-2011-10544 libmodplug 2011-08-09


libxfont: privilege escalation

Package(s): libxfont
CVE #(s): CVE-2011-2895
Created: August 12, 2011
Updated: December 19, 2011
Description: From the Debian advisory:

Tomas Hoger found a buffer overflow in the X.Org libXfont library, which may allow for a local privilege escalation through crafted font files.

Alerts:
Fedora FEDORA-2015-3948 nx-libs 2015-03-26
Fedora FEDORA-2015-3964 nx-libs 2015-03-26
Gentoo 201402-23 libXfont 2014-02-21
SUSE SUSE-SU-2012:0553-1 freetype2 2012-04-23
Red Hat RHSA-2011:1834-01 libXfont 2011-12-19
SUSE SUSE-SU-2011:1306-1 freetype2 2011-12-08
SUSE SUSE-SU-2011:1035-2 Xorg-X11 2011-12-07
openSUSE openSUSE-SU-2011:1299-1 xorg-x11-libs 2011-12-05
Mandriva MDVSA-2011:153 libxfont 2011-10-17
Mandriva MDVSA-2011:146 cups 2011-10-11
CentOS CESA-2011:1154 libXfont 2011-09-22
SUSE SUSE-SU-2011:1035-1 Xorg X11 2011-09-13
CentOS CESA-2011:1161 freetype 2011-08-16
Scientific Linux SL-free-20110815 freetype 2011-08-15
Scientific Linux SL-xorg-20110811 xorg-x11 2011-08-11
Scientific Linux SL-libX-20110811 libXfont 2011-08-11
Red Hat RHSA-2011:1161-01 freetype 2011-08-15
Ubuntu USN-1191-1 libxfont 2011-08-15
CentOS CESA-2011:1155 xorg-x11 2011-08-14
Red Hat RHSA-2011:1155-01 xorg-x11 2011-08-11
Red Hat RHSA-2011:1154-01 libXfont 2011-08-11
Debian DSA-2293-1 libxfont 2011-08-12


Mozilla products: multiple vulnerabilities

Package(s): firefox, thunderbird, seamonkey
CVE #(s): CVE-2011-0084 CVE-2011-2378 CVE-2011-2981 CVE-2011-2982 CVE-2011-2983 CVE-2011-2984
Created: August 17, 2011
Updated: September 23, 2011
Description: From the Red Hat advisory:

Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2011-2982)

A dangling pointer flaw was found in the Firefox Scalable Vector Graphics (SVG) text manipulation routine. A web page containing a malicious SVG image could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2011-0084)

A dangling pointer flaw was found in the way Firefox handled a certain Document Object Model (DOM) element. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2011-2378)

A flaw was found in the event management code in Firefox. A website containing malicious JavaScript could cause Firefox to execute that JavaScript with the privileges of the user running Firefox. (CVE-2011-2981)

A flaw was found in the way Firefox handled malformed JavaScript. A web page containing malicious JavaScript could cause Firefox to access already freed memory, causing Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2011-2983)

It was found that a malicious web page could execute arbitrary code with the privileges of the user running Firefox if the user dropped a tab onto the malicious web page. (CVE-2011-2984)

Alerts:
openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201301-01 firefox 2013-01-07
Mageia MGASA-2012-0176 iceape 2012-07-21
CentOS CESA-2011:1164 firefox 2011-09-22
CentOS CESA-2011:1164 firefox 2011-09-22
CentOS CESA-2011:1165 thunderbird 2011-09-22
openSUSE openSUSE-SU-2011:0957-2 MozillaFirefox 2011-08-30
openSUSE openSUSE-SU-2011:0935-2 MozillaThunderbird 2011-08-30
SUSE SUSE-SU-2011:0967-1 Mozilla Firefox 2011-08-30
SUSE SUSE-SA:2011:037 MozillaFirefox,MozillaThunderbird,seamonkey 2011-08-29
openSUSE openSUSE-SU-2011:0958-1 firefox 2011-08-29
openSUSE openSUSE-SU-2011:0957-1 seamonkey 2011-08-29
Ubuntu USN-1185-1 thunderbird 2011-08-26
openSUSE openSUSE-SU-2011:0935-1 mozilla-nss 2011-08-23
Fedora FEDORA-2011-11084 gnome-web-photo 2011-08-18
Fedora FEDORA-2011-11084 galeon 2011-08-18
Fedora FEDORA-2011-11084 mozvoikko 2011-08-18
Fedora FEDORA-2011-11084 xulrunner 2011-08-18
Fedora FEDORA-2011-11084 perl-Gtk2-MozEmbed 2011-08-18
Fedora FEDORA-2011-11084 gnome-python2-extras 2011-08-18
Fedora FEDORA-2011-11084 firefox 2011-08-18
Fedora FEDORA-2011-11084 thunderbird-lightning 2011-08-18
Fedora FEDORA-2011-11087 thunderbird-lightning 2011-08-18
Fedora FEDORA-2011-11084 thunderbird 2011-08-18
Fedora FEDORA-2011-11087 thunderbird 2011-08-18
Debian DSA-2297-1 icedove 2011-08-21
Ubuntu USN-1184-1 firefox, xulrunner-1.9.2 2011-08-19
Ubuntu USN-1192-2 mozvoikko 2011-08-17
Debian DSA-2296-1 iceweasel 2011-08-17
Ubuntu USN-1192-1 firefox 2011-08-17
Debian DSA-2295-1 iceape 2011-08-17
Mandriva MDVSA-2011:127 mozilla 2011-08-17
Scientific Linux SL-fire-20110816 firefox 2011-08-16
Scientific Linux SL-thun-20110816 thunderbird 2011-08-16
Scientific Linux SL-thun-20110816 thunderbird 2011-08-16
Scientific Linux SL-seam-20110816 seamonkey 2011-08-16
CentOS CESA-2011:1164 firefox 2011-08-17
CentOS CESA-2011:1165 thunderbird 2011-08-17
CentOS CESA-2011:1167 seamonkey 2011-08-17
Red Hat RHSA-2011:1166-01 thunderbird 2011-08-16
Red Hat RHSA-2011:1165-01 thunderbird 2011-08-16
Red Hat RHSA-2011:1167-01 seamonkey 2011-08-16
Red Hat RHSA-2011:1164-01 firefox 2011-08-16



Kernel development

Brief items

Kernel release status

The current development kernel is 3.1-rc2, released on August 14. "Hey, nice calm first week after the merge window. Good job. Or maybe people are just being lazy, and everybody is on vacation. Whatever. Don't tell me. I'm reasonably happy, I want to stay that way." Details can be found in the full changelog. The code name for this kernel, incidentally, has been changed to "wet seal."

Stable updates: the 2.6.32.45, 2.6.33.18, and 3.0.2 stable updates were released on August 15. They contain the usual pile of fixes. All three updates also include a change to how TCP sequence numbers are generated; a (relatively) insecure 24-bit MD4 algorithm has been replaced by 32-bit MD5. 3.0.3 was released on August 17 with another set of useful fixes.


Quotes of the week

The truth to realize is that we have grown really good at decimating our user-base every year or so.
-- Ingo Molnar

As far as long-term kernels goes, from the Android perspective we strongly prefer to snap up to the most recent released kernel on every platform/device release. I prefer to be as up to date on bugfixes and features from mainline as possible and minimize the deltas on our stack 'o patches as much as possible.
-- Brian Swetland


Possible changes to longterm kernel maintenance

Greg Kroah-Hartman has posted a proposal for some changes to how the stable and (especially) longterm kernels are maintained. The changes are being driven by users other than the enterprise distributors. "Now that 2.6.32 is over a year and a half, and the enterprise distros are off doing their thing with their multi-year upgrade cycles, there's no real need from the distros for a new longterm kernel release. But it turns out that the distros are not the only user of the kernel, other groups and companies have been approaching me over the past year, asking how they could pick the next longterm kernel, or what the process is in determining this." The core idea is to pick a new longterm kernel once a year; that kernel would then be maintained for two years thereafter. There is some discussion on Google+; it should move to the mailing list around August 15.


Kernel development news

Sharing buffers between devices

By Jonathan Corbet
August 15, 2011
CPUs may not have gotten hugely faster in recent years, but they have gained in other ways; a typical system-on-chip (SoC) device now has a number of peripherals which would qualify as reasonably powerful CPUs in their own right. More powerful devices with direct access to the memory bus can take on more demanding tasks. For example, an image frame captured from a camera device can often be passed directly to the graphics processor for display without all of the user-space processing that was once necessary. Increasingly, the CPU's job looks like that of a shop foreman whose main concern is keeping all of the other processors busy.

The foreman's job will be easier if the various devices under its control can communicate easily with each other. One useful addition in this area might be the buffer sharing patch set recently posted by Marek Szyprowski. The idea here is to make it possible for multiple kernel subsystems to share buffers under the control of user space. With this type of feature, applications could wire kernel subsystems together in problem-specific ways then get out of the way, letting the devices involved process the data as it passes through.

There are (at least) a couple of challenges which must be dealt with to make this kind of functionality safe to export to applications. One is that the application should not be able to "create" buffers at arbitrary kernel addresses. Indeed, kernel-space addresses should not be visible to user space at all, so the kernel must provide some other way for an application to refer to a specific buffer. The other is that a shared buffer must not go away until all of its users have let go of it. A buffer may be created by a specific device driver, but it must persist, even if the device is closed, until nobody else expects it to be there.

The mechanism added in this patch set (this part in particular is credited to Tomasz Stanislawski) is relatively simple - though it will probably get more complex in the future. Kernel code wanting to make a buffer available to other parts of the kernel via user space starts by filling in one of these structures:

    struct shrbuf {
    	void (*get)(struct shrbuf *);	/* take a reference to the buffer */
    	void (*put)(struct shrbuf *);	/* drop a reference; free on the last */
    	unsigned long dma_addr;		/* physical address of the buffer */
    	unsigned long size;		/* length in bytes */
    };

One could immediately raise a number of complaints about this structure: the address should be a dma_addr_t, there's no reason not to put the kernel virtual address there, only physically-contiguous buffers are allowed, etc. It also seems like there could be value in the ability to annotate the state of the buffer (filled or empty, for example) and possibly signal another thread when that state changes. But it's worth remembering that this is an explicitly proof-of-concept patch posting and a lot of things will change. In particular, the eventual plan is to pass a scatterlist around instead of a single physical address.

The get() and put() functions are important: they manage reference counts to the buffer, which must continue to exist until that count goes to zero. Any subsystem depending on a buffer's continued existence should hold a reference to that buffer. The put() function should release the buffer when the last reference is dropped.
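The patch leaves the implementation of those callbacks to the exporting driver. As a minimal sketch (the my_buffer structure and its functions are hypothetical, not part of the posted patch), the reference count could be managed with a kref:

    struct my_buffer {
    	struct shrbuf sb;
    	struct kref ref;
    };

    static void my_buffer_release(struct kref *ref)
    {
    	struct my_buffer *buf = container_of(ref, struct my_buffer, ref);

    	/* free the DMA area behind buf->sb.dma_addr, then the structure */
    	kfree(buf);
    }

    static void my_buffer_get(struct shrbuf *sb)
    {
    	struct my_buffer *buf = container_of(sb, struct my_buffer, sb);

    	kref_get(&buf->ref);
    }

    static void my_buffer_put(struct shrbuf *sb)
    {
    	struct my_buffer *buf = container_of(sb, struct my_buffer, sb);

    	kref_put(&buf->ref, my_buffer_release);
    }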

Once this structure exists, it can be passed to:

	int shrbuf_export(struct shrbuf *sb);

The return value (if all goes well) will be an integer file descriptor which can be handed to user space. This file descriptor embodies a reference to the buffer, which now will not be released before the file descriptor is closed. Other than closing it, there is very little that the application can do with the descriptor other than give it to another kernel subsystem; attempts to read from or write to it will fail, for example.

If a kernel subsystem receives a file descriptor which is purported to represent a kernel buffer, it can pass that descriptor to:

    struct shrbuf *shrbuf_import(int fd);

The return value will be the same shrbuf structure (or an ERR_PTR() error value for a file descriptor of the wrong type). A reference is taken on the structure before returning it, so the recipient should call put() at some future time to release it.
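Putting the pieces together, the flow might look like the following sketch; the ioctl() plumbing that carries the descriptor between the two drivers and the application is assumed, not shown:

    /* exporting driver: hand the buffer to user space as a descriptor */
    int fd = shrbuf_export(&buf->sb);

    /* importing driver: user space passed the descriptor back in */
    struct shrbuf *sb = shrbuf_import(fd);
    if (IS_ERR(sb))
    	return PTR_ERR(sb);

    /* program the device with sb->dma_addr and sb->size ... */

    /* drop the reference once the device is finished with the buffer */
    sb->put(sb);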

The patch set includes a new Video4Linux2 ioctl() command (VIDIOC_EXPBUF) enabling the exporting of buffers as file descriptors; a couple of capture drivers have been augmented to support this functionality. No examples of the other side (importing a buffer) have been posted yet.

There has been relatively little commentary on the patch set so far, possibly because it was posted to a couple of relatively obscure mailing lists. It has the look of functionality that could be useful beyond one or two kernel subsystems, though. It would probably make sense for the next iteration, which presumably will have more of the anticipated functionality built into it, to be distributed more widely for review.


Avoiding the OS abstraction trap

August 12, 2011

This article was contributed by Dan J. Williams

It is an innocent idea. After all, "all problems in computer science can be solved by another level of indirection." However, when the problem is developing a device driver for acceptance into the current mainline Linux kernel, OS abstraction (using a level of indirection to hide a kernel's internal API) is taking things a level too far. Seasoned Linux kernel developers will have already cringed at the premise of this article. But they are not my intended readers; instead, this text is aimed at those that find themselves in a similar position as the original authors of the isci driver: a team new to the process of getting a large driver accepted into the mainline, and tasked with enabling several environments at once. The isci driver fell into the OS abstraction trap. These are the lessons learned and attitudes your author developed about this trap while leading the effort to rework the driver for upstream acceptance.

As mentioned above, one would be hard pressed to find an experienced Linux kernel developer willing to accept OS abstraction as a general approach to driver design. So, a simplistic rule of thumb for those wanting to avoid the pain of reworking large amounts of code would be to not go it alone. Arrange for a developer with at least 100 upstream commits to be permanently assigned to the development team, and seek the advice of a developer with at least 500 commits early in the design phase. After the fact, it was one such developer, Arjan van de Ven, who set the expectation for the magnitude of rework effort. When your author was toying with ideas of Coccinelle and other automated ways to unwind the driver's abstractions Arjan presciently noted (paraphrasing): "...it isn't about the specific abstractions, it's about the wider assumptions that lead to the abstractions."

The fundamental problem with OS abstraction techniques is that they actively defeat the purpose of having an open driver in the first place. As a community we want drivers upstream so that we can refactor common code into generic infrastructure and drive uniformity across drivers of the same class. OS abstraction, in contrast, implies the development of driver-specific translations of common kernel constructs, "lowest common denominator" programming to avoid constructs that do not have a clear analogue in all environments, and overlooking the subtleties of interfaces that appear similar but have important semantic differences.

So what were the larger problematic assumptions that led to the rework effort? It comes down to the following list which, given the recurrence of OS-abstracted drivers, may well be shared by other developers new to the Linux mainline acceptance process.

  1. The programming interface of the Linux kernel is a static contract to third-party developers. The documentation is up to date, and the recourse for upper-layer bugs is to add workarounds to the driver.

  2. The "community" is external to the development team. Many conversations about Linux kernel requirements reference the "community" as an anonymous body of developers and norms external to the driver's development process.

  3. An OS abstraction layer can cleanly handle the differences between operating systems.

In the case of the isci driver these assumptions resulted in a nearly half-year effort to rework the driver as measured from the first public release until the driver was ultimately deemed acceptable.

Who fixes the platform?

The kernel community runs on trust and reciprocation. To get things done in a timely manner one needs to build up a cache of trust capital. One quick way to build this capital is to fix bugs or otherwise lower the overall maintenance burden of the kernel. Fixing a bug in a core library, or spotting some duplicated patterns that can be unified are golden opportunities to demonstrate proficiency and build trust.

The attitude of aiming to become a co-implementer of common kernel infrastructure is an inherently foreign idea for a developer with a proprietary environment background. The proprietary OS vendor provides an interface sandbox for the driver to play in that is assumed to be rigid, documented and supported. A similar assumption was carried to the isci driver; for example, it initially contained workarounds for bugs (real and perceived) in libsas and the other upper layers. The assumption behind those workarounds seems to be that the "vendor's" (maintainer's) interface is broken and the vendor is on the hook for a fix. This is, of course, the well-known "platform problem."

In the particular case of libsas, SCSI maintainer James Bottomley noted: "there's no overall maintainer, it's jointly maintained by its users." Internal kernel interfaces evolve to meet the needs of their users, but the users that engender the most trust tend to have an easier time getting their needs met. In this case, root-causing the bugs or allowing time to clarify the documentation for libsas ahead of the driver's introduction might have streamlined the acceptance process; it certainly would have saved the effort of developing local workarounds to global problems.

We the community

Just as internal kernel interfaces evolve at a different pace than their documentation, so too do the community's expectations for mainline-acceptable code evolve relative to the documented submission requirements. However, in contrast to the interface question, where the current code can be used to clarify interface details, the same cannot be done to determine the current set of requirements for mainline acceptance. The reality is that code exists in the mainline tree that would not be acceptable if it were being re-submitted for inclusion today.

A driver with an OS-abstracted core can be found in the tree, but over time the maintenance burden incurred by that architecture has precluded future drivers from taking the same approach. As a result, attempting to understand "the community" from an external position is a sure-fire way to underestimate the current set of requirements for acceptable code. The only way to acquire this knowledge is ongoing participation. Read other drivers, read the code reviews from other developers, and try to answer the question "would someone external to the development team have a chance at maintaining the driver without assistance?".

One clear "no" answer to this question from the isci driver experience came from the simple usage of c99 structure initializers. The common core was targeted for reuse in environments where there was no compiler support for this syntax. However, the state machine implementation in the driver had dozens of tables filled with, in some cases, hundreds of function pointers. Modifying such tables by counting commas and trusting comments is error prone. The conversion to c99-style struct initialization made the code safer to edit (compiler verifiable), more readable and, consequently, allowed many of those tables to be removed. These initializations were a simple example of lowest-common-denominator programming and a nuance that an "external" Linux kernel developer need not care to understand when modifying the driver, especially when the next round of cleanups are enabled by the change.

Can the OS be hidden?

OS abstraction defenders may look at that last example and propose automated ways to convert the code for different environments. The dangerous assumptions behind automated abstraction-replacement engines are the same ones listed above. The Linux kernel's internal interface is not static, so the abstraction needs to keep up with the pace of API change; that effort is better spent participating in the evolution of the Linux interfaces themselves.

More dangerous is the assumption that similar-looking interfaces from different environments can be properly captured by a unified abstraction. The prominent example from the isci driver was memory mapping: converting between virtual and physical addresses. As far as your author knows, Linux is one of the few environments that uses IOMMUs (I/O memory management units) to protect standard streaming DMA mappings requested by device drivers (via the DMA API). The isci abstraction had a virtual-to-physical abstraction that mapped to virt_to_phys() (broken, but straightforward to fix), but it also had a physical-to-virtual abstraction mapped to phys_to_virt(), which was not straightforward to fix. The assumption that physical-to-virtual translation was a viable mechanism led to an implementation that missed not only the DMA API requirements, but also the need to use kmap() when accessing pages that may be located in high memory. The lesson is that convenient interfaces in other environments can lead to a diversionary search for equivalent functionality in Linux and magnify the eventual rework effort.
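
As a sketch of the native idioms involved (dev, buf, len, and page are assumed to come from the surrounding driver context):

    /* Streaming DMA: the device-visible address comes from the DMA API,
     * which may program an IOMMU; it is not a raw physical address. */
    dma_addr_t dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

    if (dma_mapping_error(dev, dma))
        return -ENOMEM;
    /* ... hand 'dma' to the hardware; when the I/O is done ... */
    dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);

    /* CPU access to a page that may live in high memory requires
     * kmap(), not phys_to_virt(): */
    void *data = kmap(page);
    /* ... access the data ... */
    kunmap(page);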

Conclusion

The initial patch added 60,896 lines to the kernel over 159 files. Once the rework was done, the number of new files was cut down to 34, and the overall diffstat for the effort was:

    192 files changed, 23575 insertions(+), 60895 deletions(-)

There is no question that adherence to current Linux coding principles resulted in a simpler implementation of the isci driver. The community's mainline acceptance criteria are designed to maximize the portability and effectiveness of a kernel developer's skills across drivers. Any locally-developed convenience mechanisms that diminish that global portability will almost always result in requests for changes and prevent mainline acceptance. In the end, participation and getting one's hands dirty in the evolution of the native interfaces are what mainline acceptance requires, and that is very difficult to achieve through a level of indirection.

I want to thank Christoph Hellwig and James Bottomley for their review and recognize Dave Jiang, Jeff Skirvin, Ed Nadolski, Jacek Danecki, Maciej Trela, Maciej Patelczyk and the rest of the isci development team that accomplished this rework.

Comments (74 posted)

Transcendent memory in a nutshell

August 12, 2011

This article was contributed by Dan Magenheimer

The Linux kernel carefully enumerates and tracks all of its memory and, for the most part, it can individually access every byte of it. The purpose of transcendent memory ("tmem") is to provide the kernel with the capability to utilize memory that it cannot enumerate, sometimes cannot track, and cannot directly address. This may sound counterintuitive, or even silly, to core kernel developers, but as we will see it is actually quite useful; indeed it adds a level of flexibility to the kernel that allows some rather complex functionalities to be implemented and layered on a handful of tiny changes to the core kernel. The end goal is that memory can be more efficiently utilized by one kernel and/or load-balanced between multiple kernels (in a virtualized OR a non-virtualized environment), resulting in higher performance and/or lower RAM costs in a system or across a data center. This article will provide an overview of transcendent memory and how it is used in the Linux kernel.

Exactly how the kernel talks to tmem will be described in Part 2, but there are certain classes of data maintained by the kernel that are suitable. Two of these are known to kernel developers as "clean pagecache pages" and "swap pages". The patch that deals with the former is known as "cleancache"; it was merged into the 3.0 kernel. The patch that deals with swap pages is known as frontswap and is still being reviewed on the Linux kernel mailing list, with a target of linux-3.2. There may well be other classes of data that will also work well with tmem. Collectively these sources of suitable data for tmem can be referred to as "frontends" for tmem and we will detail them in Part 3.

There are multiple implementations of tmem which store data using different methods. We can refer to these data stores as "backends" for tmem, and all frontends can be used by all backends (possibly using a shim to connect them). The initial tmem implementation, known as "Xen tmem," allows Xen hypervisor memory to be used to store data for one or more tmem-enabled guest kernels. Xen tmem has been implemented in Xen for over two years and has been shipping in Xen since Xen 4.0; the in-kernel shim for Xen tmem was merged into 3.0 (for cleancache only, updated to also support frontswap in 3.1). Another Xen driver component, the Xen self-ballooning driver, which helps encourage a guest kernel to use tmem efficiently, was merged for 3.1 and also includes the "frontswap-selfshrinker". See Appendix A for more information about these.

The second tmem implementation, known as "zcache," does not involve virtualization at all; it is an in-kernel driver that stores compressed pages. Zcache essentially "doubles RAM" for any class of kernel memory that the kernel can provide via a tmem frontend (e.g. cleancache, frontswap), thus reducing memory requirements in, for example, embedded kernels. Zcache was merged as a staging driver in 2.6.39 (though it depended on the cleancache and frontswap patch sets, which were not yet upstream).

A third tmem implementation is underway; it is known as "RAMster." In RAMster, a "closely-connected" set of kernels effectively pool their RAM so that a RAM-hungry workload on one machine can temporarily and transparently utilize RAM on another machine which is presumably idle or running a non-RAM-hungry workload. RAMster has also been dubbed "peer-to-peer transcendent memory" and is intended for non-virtualized kernels, but it is also being tested with virtualized kernels. While RAMster is best suited to an environment where multiple systems are connected by a high-speed "exofabric", in which one system can directly address another system's memory, the initial prototype is built on a standard Ethernet connection.

Other tmem implementations have been proposed: for example, there has been some discussion about how useful tmem might be for KVM and/or for containers. With recent changes to zcache merged in 3.1, it may be very easy to simply implement the necessary shims and try these out; nobody has yet stepped up to do it. As another example, it has been observed that the tmem protocols may be ideal for certain kinds of RAM-like technologies, such as "phase-change" memory (PRAM); most of these technologies have certain idiosyncrasies, such as limited write cycles, that can be managed effectively through a software interface such as tmem. Discussions have begun with certain vendors of such RAM-like technologies. Yet another example is a variation of RAMster: a single machine in a cluster acts as a "memory server" and memory is added solely to that machine; the memory may be RAM, may be RAM-like, or perhaps may be a fast SSD.

The existing tmem implementations will be described in Part 4 along with some speculation about future implementations.

2: How the kernel talks to transcendent memory

The kernel "talks" to tmem through a carefully defined interface, which was crafted to provide maximum flexibility for the tmem implementation while incurring low impact on the core kernel. The tmem interface may appear odd but there are good reasons for its peculiarities. Note that in some cases the tmem interface is completely internal to the kernel and is thus an "API"; in other cases it defines the boundary between two independent software components (e.g. Xen and a guest Linux kernel) so is properly called an "ABI".

(Casual readers may wish to skip this section.)

Tmem should be thought of as another "entity" that "owns" some memory. The entity might be an in-kernel driver, another kernel, or a hypervisor/host. As previously noted, tmem cannot be enumerated by the kernel; the size of tmem is unknowable to the kernel, may change dynamically, and may at any time be "full". As a result, the kernel must "ask" tmem, for each individual page, to accept data or to retrieve data.

Tmem is not byte-addressable -- only large chunks of data (exactly or approximately a page in size) are copied between kernel memory and tmem. Since the kernel cannot "see" tmem, it is the tmem side of the API/ABI that copies the data from/to kernel memory. Tmem organizes related chunks of data in a pool; within a pool, the kernel chooses a unique "handle" to represent the equivalent of an address for the chunk of data. When the kernel requests the creation of a pool, it specifies certain attributes to be described below. If pool creation is successful, tmem provides a "pool id". Handles are unique within pools, not across pools, and consist of a 192-bit "object id" and a 32-bit "index." The rough equivalent of an object is a "file" and the index is the rough equivalent of a page offset into the file.
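
In rough C terms, a handle looks like the following sketch (illustrative only, though the in-kernel tmem code uses a similar structure):

    /* The 192-bit object id is three 64-bit words; together with a
     * pool id and a 32-bit index it names one chunk of data. */
    struct tmem_oid {
        uint64_t oid[3];
    };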

The two basic operations of tmem are "put" and "get". If the kernel wishes to save a chunk of data in tmem, it uses the "put" operation, providing a pool id, a handle, and the location of the data; if the put returns success, tmem has copied the data. If the kernel wishes to retrieve data, it uses the "get" operation and provides the pool id, the handle, and a location for tmem to place the data; if the get succeeds, on return, the data will be present at the specified location. Note that, unlike I/O, the copying performed by tmem is fully synchronous. As a result, arbitrary locks can (and, to avoid races, often should!) be held by the caller.

There are two basic pool types: ephemeral and persistent. Pages successfully put to an ephemeral pool may or may not be present later when the kernel uses a subsequent get with a matching handle. Pages successfully put to a persistent pool are guaranteed to be present for a subsequent get. (Additionally, a pool may be "shared" or "private".)

The kernel is responsible for maintaining coherency between tmem and the kernel's own data, and tmem has two types of "flush" operations to assist with this: To disassociate a handle from any tmem data, the kernel uses a "flush" operation. To disassociate all chunks of data in an object, the kernel uses a "flush object" operation. After a flush, subsequent gets will fail. A get on an (unshared) ephemeral pool is destructive, i.e. implies a flush; otherwise, the get is non-destructive and an explicit flush is required. (There are two additional coherency guarantees that are described in Appendix B.)
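
The operations described in this section can be summarized with a few prototypes; these are illustrative only, as the real API/ABI differs in detail between tmem implementations:

    int tmem_new_pool(uint32_t flags);          /* returns a pool id */
    int tmem_put(int pool, struct tmem_oid oid, uint32_t index,
                 struct page *page);            /* 0: tmem copied the data */
    int tmem_get(int pool, struct tmem_oid oid, uint32_t index,
                 struct page *page);            /* 0: page filled from tmem */
    int tmem_flush_page(int pool, struct tmem_oid oid, uint32_t index);
    int tmem_flush_object(int pool, struct tmem_oid oid);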

3: Transcendent memory frontends: frontswap and cleancache

While other frontends are possible, the two existing tmem frontends, frontswap and cleancache, cover two of the primary types of kernel memory that are sensitive to memory pressure. These two frontends are complementary: cleancache handles (clean) mapped pages that would otherwise be reclaimed by the kernel; frontswap handles (dirty) anonymous pages that would otherwise be swapped out by the kernel. When a successful cleancache_get happens, a disk read has been avoided. When a successful frontswap_put (or get) happens, a swap device write (or read) has been avoided. Together, assuming tmem is significantly faster than disk paging/swapping, substantial performance gains may be obtained in a memory-constrained environment.

Frontswap

The total amount of "virtual memory" in a Linux system is the sum of the physical RAM plus the sum of all configured swap devices. When the "working set" of a workload exceeds the size of physical RAM, swapping occurs -- swap devices are essentially used to emulate physical RAM. But, generally, a swap device is several orders of magnitude slower than RAM so swapping has become synonymous with horrible performance. As a result, wise system administrators increase physical RAM and/or redistribute workloads to ensure that swapping is avoided. But what if swapping isn't always slow?

Frontswap allows the Linux swap subsystem to use transcendent memory, when available, in place of sending data to and from a swap device. Frontswap is not in itself a swap device and, thus, requires no swap-device-like configuration. It does not change the total virtual memory in the system; it just results in faster swapping... some/most/nearly all of the time, but not necessarily always. Remember that the quantity of transcendent memory is unknowable and dynamic. With frontswap, whenever a page needs to be swapped out, the swap subsystem asks tmem if it is willing to take the page of data. If tmem rejects it, the swap subsystem writes the page, as normal, to the swap device. If tmem accepts it, the swap subsystem can request the page of data back at any time, and it is guaranteed to be retrievable from tmem. And, later, if the swap subsystem is certain the data is no longer valid (e.g. if the owning process has exited), it can flush the page of data from tmem.
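
The swap-out decision thus reduces to something like the following sketch, where both helper names are illustrative rather than the actual kernel hooks:

    /* Offer the page to tmem first; fall back to the swap device. */
    if (frontswap_put(page) == 0)
        return 0;                         /* tmem took it: no device I/O */
    return write_to_swap_device(page);    /* tmem rejected it: normal path */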

Note that tmem can reject any or every frontswap "put". Why would it? One example is if tmem is a resource shared between multiple kernels (aka tmem "clients"), as is the case for Xen tmem or for RAMster; another kernel may have already claimed the space, or perhaps this kernel has exceeded some tmem-managed quota. Another example is if tmem is compressing data as it does in zcache and it determines that the compressed page of data is too large; in this case, tmem might reject any page that isn't sufficiently compressible OR perhaps even if the mean compression ratio is growing unacceptably.

The frontswap patchset is non-invasive and does not impact the behavior of the swap subsystem at all when frontswap is disabled. Indeed, a key kernel maintainer has observed that frontswap appears to be "bolted on" to the swap subsystem. That is a good thing as the existing swap subsystem code is very stable, infrequently used (because swapping is so slow), yet critical to system correctness; dramatic change to the swap subsystem is probably unwise and frontswap only touches the fringes of it.

A few implementation notes: frontswap requires one bit of metadata per page of enabled swap. (The Linux swap subsystem until recently required 16 bits of metadata per page; it now requires eight, so frontswap increases that overhead by 12.5%.) This bit-per-page records whether the page is in tmem or on the physical swap device. Since, at any time, some pages may be in frontswap and some on the physical device, the swap subsystem's "swapoff" code also requires some modification. And, because in-use tmem is more valuable than swap device space, some additional modifications are provided by frontswap so that a "partial swapoff" can be performed. And, of course, hooks are placed in the read-page and write-page routines to divert data into tmem, and a hook is added to flush the data when it is no longer needed. All told, the patch parts that affect core kernel components add up to less than 100 lines.
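
That bit of metadata can be pictured as a simple bitmap over the swap area (an illustrative sketch, not the patch's exact data structure):

    /* One bit per page of enabled swap: set if the page is in tmem,
     * clear if it resides on the physical swap device. */
    unsigned long *frontswap_map;

    set_bit(offset, frontswap_map);       /* put succeeded: page in tmem */
    if (test_bit(offset, frontswap_map))
        /* ... retrieve with a tmem get instead of a device read ... */;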

Cleancache

In most workloads, the kernel fetches pages from a slow disk and, when RAM is plentiful, the kernel retains copies of many of these pages in memory, assuming that a disk page used once is likely to be used again. There's no sense incurring two disk reads when one will do and there's nothing else to do with that plentiful RAM anyway. If any data is written to one of those pages, the changes must be written to disk but, in anticipation of future changes, the (now clean) page continues to be retained in memory. As a result, the number of clean pages in this "page cache" often grows to fill the vast majority of memory. Eventually, when memory is nearly filled, or perhaps if the workload grows to require more memory, the kernel "reclaims" some of those clean pages; the data is discarded and the page frames are used for something else. No data is lost because a clean page in memory is identical to the same page on disk. However, if the kernel later determines that it does need that page of data after all, it must again be fetched from disk, which is called a "refault." Since the kernel can't predict the future, some pages are retained that will never be used again and some pages are reclaimed that soon result in a refault.

Cleancache allows tmem to be used to store clean page cache pages, resulting in fewer refaults. When the kernel reclaims a page, rather than discard the data, it places the data into tmem, tagged as "ephemeral", which means that the page of data may be discarded if tmem chooses. Later, if the kernel determines it needs that page of data after all, it asks tmem to give it back. If tmem has retained the page, it gives it back; if tmem hasn't retained the page, the kernel proceeds with the refault, fetching the data from the disk as usual.
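
The reclaim and refault paths then look roughly like this; the names approximately follow the 3.0 cleancache interface, and the disk-read fallback helper is purely illustrative:

    /* On reclaim of a clean page: offer the data to tmem. */
    cleancache_put_page(page);            /* tmem may keep or discard it */

    /* On refault: try tmem first, then fall back to the disk. */
    if (cleancache_get_page(page) == 0)
        return 0;                         /* filled from tmem: no disk read */
    return read_page_from_disk(page);     /* ordinary refault */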

To function properly, cleancache "hooks" are placed where pages are reclaimed and where refaults occur. The kernel is also responsible for ensuring coherency between the page cache, disk, and tmem, so hooks are also present wherever the kernel might invalidate the data. Since cleancache affects the kernel's VFS layer, and since not all filesystems use all VFS features, a filesystem must "opt in" to cleancache when it is mounted.

One interesting note about cleancache is that clean pages may be retained in tmem for a file that has no pages remaining in the kernel's page cache. Thus the kernel must provide a name ("handle") for the page which is unique across the entire filesystem. For some filesystems, the inode number is sufficient, but for modern filesystems, the 192-bit "exportfs" handle is used.
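
The key is roughly the following union, approximately as defined by the 3.0-era cleancache interface:

    #define CLEANCACHE_KEY_MAX 6

    /* A filesystem-wide unique name for a page's file: an inode number
     * where that suffices, or a 192-bit exportfs-style handle. */
    struct cleancache_filekey {
        union {
            ino_t ino;
            __u32 fh[CLEANCACHE_KEY_MAX];
            u32 key[CLEANCACHE_KEY_MAX];
        } u;
    };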

Other tmem frontends

A common question is: can user code use tmem? For example, can enterprise applications that otherwise circumvent the pagecache use tmem? Currently the answer is no, but one could implement "tmem syscalls" to allow this. Coherency issues may arise, and it remains to be seen if they could be managed in user space.

What about other in-kernel uses? Some have suggested that the kernel dcache might provide a useful source of data for tmem. This too deserves further investigation.

4: Transcendent memory backends

The tmem interface allows multiple frontends to function with different backends. Currently only one backend may be configured though, in the future, some form of layering may be possible. Tmem backends share some common characteristics: although a tmem backend might seem similar to a block device, it does not perform I/O and does not use the block I/O (bio) subsystem. In fact, a tmem backend must perform its functions fully synchronously; that is, it must not sleep and the scheduler may not be called. When a "put" completes, the kernel's page of data has been copied. And a successful "get" may not complete until the page of data has been copied to the kernel's data page. While these constraints create some difficulty for tmem backends, they also ensure that the backend meets tmem's interface requirements while minimizing changes to the core kernel.
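
Concretely, a backend plugs into a frontend by filling in a set of synchronous operations. The structure below approximately follows the 3.0-era cleancache interface (frontswap's is similar), but it should be treated as a sketch rather than a reference:

    struct cleancache_ops {
        int  (*init_fs)(size_t pagesize);
        int  (*init_shared_fs)(char *uuid, size_t pagesize);
        int  (*get_page)(int pool, struct cleancache_filekey key,
                         pgoff_t index, struct page *page);
        void (*put_page)(int pool, struct cleancache_filekey key,
                         pgoff_t index, struct page *page);
        void (*flush_page)(int pool, struct cleancache_filekey key,
                           pgoff_t index);
        void (*flush_inode)(int pool, struct cleancache_filekey key);
        void (*flush_fs)(int pool);
    };

None of these operations may sleep; each must finish its copy (or refuse the request) before returning.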

Zcache

Although tmem was conceived as a way to share a fixed resource (RAM) among a number of clients with constantly varying memory appetites, it also works nicely when the amount of RAM needed by a single kernel to store some number, N, of pages of data is less than N*PAGE_SIZE and when those pages of data need only be accessed at a page granularity. So zcache combines an in-kernel implementation of tmem with in-kernel compression code to reduce the space requirements for data provided through a tmem frontend. As a result, when the kernel is under memory pressure, zcache can substantially increase the number of clean page cache pages and swap cache pages stored in RAM and thus significantly decrease disk I/O.

The zcache implementation is currently a staging driver so it is subject to change; it handles both persistent pages (from frontswap) and ephemeral pages (from cleancache) and, in both cases, uses the in-kernel lzo1x routines to compress/decompress the data contained in the pages. Space for persistent pages is obtained through a shim to xvmalloc, a memory allocator in the zram staging driver designed to store compressed pages. Space for ephemeral pages is obtained through standard kernel get_free_page() calls, then pairs of compressed ephemeral pages are matched using an algorithm called "compression buddies". This algorithm ensures that physical page frames containing two compressed ephemeral pages can easily be reclaimed when necessary; zcache provides a standard "shrinker" routine so those whole page frames can be reclaimed when required by the kernel using the existing kernel shrinker mechanism.

Zcache nicely demonstrates one of the flexibility features of tmem: Recall that, although data may often compress nicely (i.e. by a factor of two or more), it is possible that some workloads may produce long sequences of data that compress poorly. Since tmem allows any page to be rejected at the time of put, zcache policy (adjustable with sysfs tuneables in 3.1) avoids storing this poorly compressible data, instead passing it on to the original swap device for storage, thus dynamically optimizing the density of pages stored in RAM.

RAMster

RAMster is still under development but a proof-of-concept exists today. RAMster assumes that we have a cluster-like set of systems with some high-speed communication layer, or "exofabric", connecting them. The collected RAM of all the systems in the "collective" is the shared RAM resource used by tmem. Each cluster node acts as both a tmem client and a tmem server, and decides how much of its RAM to provide to the collective. Thus RAMster is a "peer-to-peer" implementation of tmem.

Ideally this exofabric allows some form of synchronous remote DMA to allow one system to read or write the RAM on another system, but in the initial RAMster proof-of-concept (RAMster-POC), a standard Ethernet connection is used instead. As long as the exofabric is sufficiently faster than disk reads/writes, there is still a net performance win.

Interestingly, RAMster-POC demonstrates a useful dimension of tmem: Once pages have been placed in tmem, the data can be transformed in various ways as long as the pages can be reconstituted when required. When pages are put to RAMster-POC, they are first compressed and cached locally using a zcache-like tmem backend. As local memory constraints increase, an asynchronous process attempts to "remotify" pages to another cluster node; if one node rejects the attempt, another node can be used as long as the local node tracks where the remote data resides. Although the current RAMster-POC doesn't implement this, one could even remotify multiple copies to achieve higher availability (i.e. to recover from node failures).

While this multi-level mechanism in RAMster works nicely for puts, there is currently no counterpart for gets. When a tmem frontend requests a persistent get, the data must be fetched immediately and synchronously; the thread requiring the data must busy-wait for the data to arrive and the scheduler must not be called. As a result current RAMster-POC is best suited for many-core processors, where it is unusual for all cores to be simultaneously active.

Transcendent memory for Xen

Tmem was originally conceived for Xen and so the Xen implementation is the most mature. The tmem backend in Xen utilizes spare hypervisor memory to store data, supports a large number of guests, and optionally implements both compression and deduplication (both within a guest and across guests) to maximize the volume of data that can be stored. The tmem frontends are converted to Xen hypercalls using a shim. Individual guests may be equipped with "self-ballooning" and "frontswap-self-shrinking" (both in Linux 3.1) to optimize their interaction with Xen tmem. Xen tmem also supports shared ephemeral pools, so that guests co-located on a physical server that share a cluster filesystem need only keep one copy of a cleancache page in tmem. The Xen control plane also fully implements tmem: An extensive set of statistics is available; live migration and save/restore of tmem-using guests is fully supported and limits, or "weights", may be applied to tmem guests to avoid denial-of-service.

Transcendent memory for kvm

The in-kernel tmem code included in zcache has been updated in 3.1 to support multiple tmem clients. With this in place, a KVM implementation of tmem should be fairly easy to complete, at least in prototype form. As with Xen, a shim would need to be placed in the guest to convert the cleancache and frontswap frontend calls to KVM hypercalls. On the host side, these hypercalls would need to be interfaced with the in-kernel tmem backend code. Some additional control plane support would also be necessary for this to be used in a KVM distribution.

Future tmem backends

The flexibility and dynamicity of tmem suggest that it may be useful for other storage needs, and other backends have been proposed. The idiosyncrasies of some RAM-extension technologies, such as SSDs and phase-change memory (PRAM), have been observed to be a possible fit; since page-size quantities are always used, writes can be carefully controlled and accounted for, and user code never writes to tmem, memory technologies that could previously be used only as a fast I/O device could now instead be used as slow RAM. Some of these ideas are already under investigation.

Appendix A: Self-ballooning and frontswap-selfshrinking

After a system has been running for a while, it is not uncommon for the vast majority of its memory to be filled with clean pagecache pages. With some tmem backends, especially Xen, it may make sense for those pages to reside in tmem instead of in the guest. To achieve this, Xen implements aggressive "self-ballooning", which artificially creates memory pressure by driving the Xen balloon driver to claim page frames, thus forcing the kernel to reclaim pages, which sends them to tmem. The algorithm essentially uses control theory to drive towards a memory target that approximates the current "working set" of the workload using the "Committed_AS" kernel variable. Since Committed_AS doesn't account for clean, mapped pages, these pages end up residing in Xen tmem where, queueing theory assures us, Xen can manage the pages more efficiently.

If the working set increases unexpectedly and faster than the self-balloon driver is able to (or chooses to) provide usable RAM, swapping occurs, but, in most cases, frontswap is able to absorb this swapping into Xen tmem. However, because the kernel swap subsystem assumes that swapping occurs to a disk, swapped pages may sit on the "disk" for a very long time, even if the kernel knows the page will never be used again, because the disk space costs very little and can be overwritten when necessary. When such stale pages are in frontswap, however, they are taking up valuable space.

Frontswap-self-shrinking works to resolve this problem: when frontswap activity is stable and the guest kernel returns to a state where it is not under memory pressure, pressure is provided to remove some pages from frontswap, using a "partial" swapoff interface, and return them to kernel memory, thus freeing tmem space for more urgent needs, i.e. other guests that are currently memory-constrained.

Both self-ballooning and frontswap-self-shrinking provide sysfs tuneables to drive their control processes. Further experimentation will be necessary to optimize them.

Appendix B: Subtle tmem implementation requirements

Although tmem places most coherency responsibility on its clients, a tmem backend itself must enforce two coherency requirements. These are called "get-get" coherency and "put-put-get" coherency. For the former, a tmem backend guarantees that if a get fails, a subsequent get to the same handle will also fail (unless, of course, there is an intermediate put). For the latter, if a put places data "A" into tmem and a subsequent put with the same handle places data "B" into tmem, a subsequent "get" must never return "A".

This second coherency requirement results in an unusual corner-case which affects the API/ABI specification: If a put with a handle "X" of data "A" is accepted, and then a subsequent put is done to handle "X" with data "B", this is referred to as a "duplicate put". In this case, the API/ABI allows the backend implementation two options, and the frontend must be prepared for either: (1) if the duplicate put is accepted, the backend replaces data "A" with data "B" and success is returned and (2) the duplicate put may be failed, and the backend must flush the data associated with "X" so that a subsequent get will fail. This is the only case where a persistent get of a previously accepted put may fail; fortunately in this case the frontend has the new data "B" which would have overwritten the old data "A" anyway.
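
A short sequence illustrates the duplicate-put corner case (notation is informal, following the prototypes sketched earlier):

    /* put-put-get with a duplicate put: two legal backend behaviors */
    put(X, A);    /* accepted                                  */
    put(X, B);    /* option 1: accepted, B replaces A          */
                  /* option 2: rejected, X must be flushed     */
    get(X);       /* option 1: returns B; option 2: fails      */
                  /* either way, it must never return A        */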

Comments (5 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Device drivers

Filesystems and block I/O

Memory management

Security-related

Benchmarks and bugs

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Six years of RHEL 4 security

By Jake Edge
August 17, 2011

Red Hat does a good job of looking at the security problems found and fixed in its enterprise distributions. It issues periodic reports on those security problems to try to give its customers—and interested bystanders—a sense of how vulnerable their systems are over the lifetime of a particular release. The most recent report [PDF] looks at six years of security update data for RHEL 4.

The report was written by Red Hat security team lead Mark J. Cox, and looks at two broad categories of security concerns: vulnerabilities and threats. Even a cursory glance at the vulnerability information makes it clear that the most numerous flaws come in web browsers. That may not affect too many RHEL customers for a couple of reasons. Most RHEL installations are for servers where browsers are not installed by default and are probably rarely installed by administrators. In addition, browser vulnerabilities require visiting a malicious or compromised site and, even on systems where a browser is installed, one would think that administrators would be fairly careful about which sites are visited.

For desktop or workstation systems, of course, a web browser is standard fare. Over the six years, there have been nearly 200 critical flaws in Mozilla products (which include Firefox, SeaMonkey, and Thunderbird). By way of contrast, the default server install of RHEL 4 has only suffered from 20 critical flaws in that time.

Beyond the Mozilla products (which are the top three entries in the table of "worst security history" in the report), the packages that had multiple critical issues include Samba and Kerberos. Another desktop-oriented package appears in the list, HelixPlayer, which was eventually dropped from RHEL 4 because it was proprietary code that could no longer have its security problems fixed. While the kernel is number four on the list, it has had zero critical vulnerabilities during RHEL 4's lifetime (though there have been nearly 300 vulnerabilities at lower severity levels). It is clear that avoiding browsers and other desktop software will make for a system with fewer updates needed—something that's not really possible for many users, but should be for server systems.

But there clearly were flaws beyond the browser, and the report breaks out the two dozen or so critical flaws in the other packages. The list shows the CVE number (and Red Hat advisory number), whether it is a default package or not (most were), a short description of the vulnerability, and the so-called "days of risk". That measure is meant to give a rough guide to how long it took, from the public release of the vulnerability information, for Red Hat to have a fix available. Those numbers were typically zero (a fix on the same day as the disclosure), though there were some outliers, including a seven-day risk window for a 2007 GnomeMeeting (now Ekiga) bug.

On the threats side of the ledger, Cox reports on the public exploits that were found that tried to take advantage of the vulnerabilities in RHEL. The exploits described were limited to those that "have the potential to cause remote damage to the confidentiality or integrity of a system". So, denial of service exploits were not considered. For the purposes of looking at the threats, "proof of concept" exploits were counted, and that led to 80 public exploits of RHEL 4 vulnerabilities being found.

There were 15 privilege escalation exploits, 22 web browser exploits (all but three for Mozilla products, with Links, Lynx, and HelixPlayer each having one), 17 "user-complicit" exploits (where the user needs to do something to make it happen, like opening a file with a vulnerable application), and 9 exploits for PHP vulnerabilities (many of which were reported during the "PHP month of bugs"). While some of those could certainly prove to be problematic, the privilege escalations in particular, they often require a hard-to-engineer set of circumstances—at least for widespread exploitation. The most dangerous group is the 17 public exploits that were found for services, many of which would be running on a default RHEL install.

The report also noted the lack of any known Linux worms since 2005. There were two in that year, but both were exploiting PHP flaws in applications that are not shipped with RHEL 4 (though could have been installed by the administrator separately).

The full report is well worth a read for those who are interested. It does a good job reporting on the security vulnerability landscape for RHEL 4, but, more importantly, gives even those who don't run RHEL a useful look at the type and severity of Linux security problems. It would be nice to see more distributions, especially those targeting enterprises, produce similar reports.

Comments (none posted)

Brief items

Distribution quote of the week

Why use Ubuntu's Natty Narwhal? Sure, it's fun to say "Natty Netadmin Netbook," but won't any Linux work? Of course it will, and if you have a favorite, by all means use it.
-- Carla Schroder by way of ITworld

Comments (none posted)

SmartOS (based on IllumOS) released - with KVM

SmartOS is a new Solaris/IllumOS-based distribution released by Joyent. "SmartOS incorporates the four most revolutionary OS technologies of the past decade - Zones, ZFS, DTrace and KVM - into a single operating system, providing an arbitrarily observable, highly multi-tenant environment built on a reliable, enterprise-grade storage stack." Yes, they have ported the KVM virtualization facility from Linux to Solaris.

Comments (48 posted)

Debian Community celebrates its 18th birthday

The Debian Project celebrates the 18th anniversary of Ian Murdock's founding announcement. "A lot has happened to the project and its community in the past eighteen years. There have been eleven releases - most recently Debian 6.0 "Squeeze" in February 2011 - and a huge amount of free software packaged. The current "unstable" branch consists of more than 35,000 binary packages for the amd64 architecture alone - over 44GB of Free/Libre Software! Throughout this history Debian has maintained its goals of technical excellence, accountability, and above all freedom."

Full Story (comments: none)

CentOS-5.6 Continuous Release i386 and x86_64

The CentOS-5.6 Continuous Release (CR) repository is now available. "This repository contains rpms to be included in the next CentOS-5.x release. Because these include security and bugfix updates, we strongly recommend everyone using CentOS-5 install and update their system using this repository."

Full Story (comments: none)

Distribution News

Ubuntu family

Ubuntu Global Jam coming up: Sep 2-4

Ubuntu Global Jam is an event where local Ubuntu teams around the globe get together to work on Ubuntu directly. "It's a great opportunity for new contributors to learn from their peers about translating, documenting, bug triaging, testing, packaging and loads of other things related to Ubuntu."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

The 2011 Top 7 Best Linux Distributions for You (Linux.com)

Brian Proffitt presents his choices for Best Desktop Distribution (Fedora), Best Laptop Distribution (Ubuntu), Best Enterprise Desktop (SUSE Linux Enterprise Desktop), Best Enterprise Server (Red Hat Enterprise Linux), Best LiveCD (KNOPPIX), Best Security-Enhanced Distribution (BackTrack), and Best Multimedia Distribution (Ubuntu Studio).

Comments (none posted)

Interview: Kate Stewart, Ubuntu Release Manager at Canonical

Amber Graner has interviewed Kate Stewart about her work as the Ubuntu Release Manager at Canonical. "My biggest personal challenge over the last year has been learning about the interactions in the user space applications and the different flavors' user interfaces. It's very challenging to figure out what the implications of a specific change are after we freeze, and to decide if it makes the product overall better or not. I expect I'll be learning for as long as I'm in this role (since the contents of a release continue to evolve), which is one of the reasons I'm enjoying myself so much. Luckily for me, the other members of release team span a wide range of different backgrounds and have been doing releases for quite a while, and are very willing to share their knowledge."

Comments (none posted)

Page editor: Rebecca Sobol

Development

Desktop Summit: Plasma Active

By Jake Edge
August 17, 2011

The KDE project had quite a number of talks at the Desktop Summit—as would be expected—but there was definitely a unifying theme in many of the talks: Plasma Active. This relatively new sub-project is an attempt to build a cross-device interface for tablets, phones, media centers, and other embedded Linux devices. The descriptions and demos look interesting, and Plasma Active seems to break some of the molds that have arisen around how we interact with devices.

[Sebastian Kügler]

Sebastian Kügler gave an overview talk titled "Plasma Active - Conquering the Device Spectrum" that looked at the motivations behind and plans for the project. It grew out of a recognition that there are two main players in the mobile market today, iOS and Android, both of which use parts of the free desktop stack, but not openly. Both are "more or less proprietary", which leaves quite a few people who are not served by either. There are other players (e.g. RIM, WebOS), but those don't really have an ecosystem around them and lack apps. There is a need, he said, to create a community in the mobile space.

Kügler and others started thinking about what could fill the gap left over by iOS and Android. They wanted a "freedom device" where the user was in control. It would deliver what the user wants, and not what Android or the application wants the user to have. Importantly, it would also be an "elegant and beautiful" interface that would make friends say "I want that", he said. It absolutely "needs to be an open system", without lock-in to a single vendor or service.

The mission statement for the project came out of that thinking: "Create a desirable user experience encompassing a spectrum of devices". A spectrum of devices is important, Kügler said, because the device landscape is constantly changing. "Yesterday it was smartphones, now it's tablets, I don't know what it will be in the future", he said.

Plasma Active will encompass a complete software stack, which is currently based on MeeGo and openSUSE. That stack will continue to be free-software-based, because "hackability implies a free software stack", he said. The interface needs to be touch-friendly, but many devices and systems today already have that. If that were all that Plasma Active brought to the table, it would "just be another player in the market". It needs, he said, some unique features that make it stand out and make it desirable.

[Contour Shell]

One of those unique features is the Contour shell, which is envisioned as the workspace for Plasma Active. It is activity-centric, not application-centric, he said. It will adapt to the context of what the user is doing, and will try to recommend likely actions based on previous usage.

Another unique feature of the interface is the idea of "Share-Like-Connect", which integrates the social web into Contour. For each type of object, there will be options to share the object with others (e.g. a photo to Flickr, Facebook, or a free service), connect it to the current activity (or another resource), or rate the object. The options will reflect the type of the object (e.g. a file, a contact, a web page URL, etc.), and will be extendable by plugins written in JavaScript or other scripting languages. The idea is to "make hackability accessible to people who don't write C++", Kügler said.

There will also be a set of "nicely integrated apps", some of which will derive from existing KDE applications. So, Calligra Active and Kontact Touch use the existing code bases, while scaling them down and making them touch-friendly. Apps can be written in pure QML, which also helps reduce the barriers. Since it is a normal Linux stack underneath, traditional applications can be run as well, or you can run MeeGo applications, though those applications may not integrate perfectly.

For developers, Plasma Quick provides a way to speed development of applications. It builds on top of Qt Quick (which uses QML) and JavaScript using Plasma as the runtime. It makes app distribution easier as well, because you can write an app, "pop it onto a USB stick, plug it in, and run it". You can also write more traditional C++ applications, and use QML for the user interface, which is the path that Calligra Active has taken, he said.

Integrating with the operating system is where Kügler has been spending much of his time. He has been building Balsam Professional live images for easier installation and testing. Those images are based on openSUSE 11.4, but there is ongoing work on MeeGo both for x86 and ARM, he said.

There are also efforts to get hardware vendors involved. The first step is to make Plasma Active product-ready, he said. The project is "actively interacting with vendors", he said, and there are companies making commercial support available. There is also consulting for integration and development available. A "commercial ecosystem" for application developers is something being worked on as well, he said. The goal is to "get this system into the hands of real users".

Activities

Ivan Čukić dug into Plasma activities in his talk, and he called them "the helpful Big Brother". Essentially activities are used to collect up related applications, files, windows, and other parts of the interface and the user can easily switch between them using the aptly named "activity switcher". With activities, Plasma Active converts the desktop from something that is application-centric to a document-centric workflow.

Underlying activities is Nepomuk, which can "track everything that you do", he said—thus "Big Brother". That data can then be analyzed to determine which are the important documents to the user, which documents go together (because they are often used together), and so on. It is completely based on the usage patterns, rather than the contents of the documents, he said.

That allows things like "favorites" and recently used document lists to be context dependent, so that you "won't get completely useless things in your list", he said. Those lists are also not limited to just documents, but can include all sorts of other objects, like contacts (email, IRC, instant messaging, ...), web pages visited, etc. The lists become activity-specific and can thus provide more relevant information, he said.

The Nepomuk tracking is already present in current versions of KDE, but applications have not yet started reporting information on object usage. There are also multiple back-ends that can be used, including a version of Zeitgeist that pushes its information into Nepomuk. In addition, all of this infrastructure is applicable to both Plasma Workspaces on the desktop and Plasma Active for devices.

More on Plasma Active

[Marco Martin and Fania Jöck]

KDE hacker Marco Martin and interaction designer Fania Jöck shared the stage for another presentation about Plasma Active, but more directly focused on the user experience design and features. Jöck outlined the "big picture", noting that Plasma Active is really an umbrella project that encompasses pieces like Plasma Mobile, Contour, Share/Like/Connect, and active applications.

The model for Plasma Active is to make the system adapt to the user, rather than the normal situation that is the other way around. She likened it to the difference between a supermarket and a restaurant. In a supermarket, you have to search for the things that you need. Whereas, in a restaurant, the waiter will bring you what you need next. Plasma Active is looking to implement the restaurant model on mobile devices.

In the Contour shell, the idea is to take context, patterns, and activities and "squeeze them to give recommendations" to the user, she said. Context is things like geographic location, time, active files and applications, and the current activity. Everyone has different patterns, she said, and Contour will try to track those. Some will get up in the morning and check email and Facebook, but others will have different ways of starting their day—and working throughout it.

Using that information, Contour will (eventually) try to come up with recommendations, which are "propositions for actions that will dynamically change", based on various factors. While the user is at home, the activity switcher will show "relevant things you might do there". While at work or at a university, different activities will be recommended. The "recommendation overlay" will adapt to the current activity and recent action history, she said, but it is all still "a bit theoretical" right now.

After a demo of Contour and Plasma Active, Martin said that "a big part of Contour is data and a smart representation of data". KDE has the existing infrastructure with Nepomuk and the activity manager, and Contour is in some sense a visualization of the data collected.

QML is in a "way more central place" for Plasma Active, as compared to Plasma Workspaces, Martin said. QML manages and lays out the application windows as well as managing the panel and the animations used to switch activities. Though the Plasma Active packages aren't Plasmoids, they fill the same role and are often called "QML Plasmoids", Martin said.

An audience member asked about how activities are created and whether you could do something like tag all of the internet activity for the last five minutes into a new activity. Martin said that new activity creation is a very manual process at this point, but that they would like to detect major changes in what the user is doing and allow the user to define that as a new activity. It may be difficult to do in practice, and he's not sure how much of it can be done semi-automatically.

Conclusion

It's clear from the number of talks on Plasma Active and related projects that the KDE project as a whole is very excited about the possibilities that it brings. The project seems to be taking the right steps to try to build an application ecosystem and to engage hardware makers as well. While many have written off the mobile space as a two-horse race, it is likely way too early to make that determination. Once Plasma Active is fully working, and possibly pre-installed on real hardware, it could certainly "become a third player in the mobile space", as is the plan according to Kügler.

[ I would like to thank KDE e.V. and the GNOME Foundation for travel assistance to attend the Desktop Summit. ]

Comments (8 posted)

Brief items

Quote of the week

And when you let developers create arbitrarily long identifiers, some of them do crazy things. At work, it all started with

Tools/Java/StringUtilities/StringTransformer.java
Tools/Java/StringUtilities/StringCollectionTransformer.java

which are not too crazy. But they naturally spawned

Tools/Java/StringUtilities/NullRemovalStringCollectionTransformer.java

which is only slightly less insane than Tools/Java/StringUtilities/StringTransformerBackedStringCollectionTransformer.java

Now when you consider the presence of

Tools/Java/StringUtilities/ToLowerCaseStringTransformer.java

it's only natural to create

Tools/Java/StringUtilities/ToLowerCaseStringTransformerBackedStringCollectionTransformer.java

and that, of course, demands the creation of

Tools/Java/StringUtilities/TestToLowerCaseStringTransformerBackedStringCollectionTransformer.java

I'm not kidding! This stuff is for real! Who could make this up?

P.S. please don't flame Java developers. The poor souls, they really don't know any better. I actually feel sorry for them more than anything.

-- Greg Ward

Comments (12 posted)

Intel open-sources Cilk Plus

Intel has announced that its "Cilk Plus" project is now available as a GCC branch. Essentially, the project is working on extensions to the C and C++ languages to make efficient parallel programming easier. "The product includes three simple keywords and array notations that allow C and C++ developers to quickly make productive use of modern processors that contain both multiple cores and vector units." See the Cilk Plus specification for details.

Comments (11 posted)

Firefox, Thunderbird and SeaMonkey updates

Mozilla has released Firefox 3.6.20, Thunderbird 3.1.12, and SeaMonkey 2.3. These releases fix a number of security issues.

Comments (2 posted)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Some reports from the Desktop Summit (KDE.News)

KDE.News has a set of articles reporting from the recently-concluded 2011 Desktop Summit in Berlin: day 1, day 2, and day 3. "The head of the summit organizing team, Mirko Boehm, closed the conference track of the summit with a review of things we have learned in the last few days. There were too many to list fully, but highlights included the informative copyright assignment panel (and the discovery that at least one panel member sees Firefox and Chrome as wasteful duplication), the news that 30% of Qt developers discover the framework through free software, and even the trials and tribulations of building a toaster from scratch."

Comments (none posted)

GNOME-Designer Jon McCann talks about the future of GNOME3 (der Standard)

Der Standard interviews Jon McCann about GNOME 3 and related topics. "Some of the feedback is certainly valid and we are going to use that to make informed decisions in the GNOME3 cycle - remember we've only had one release so far. In couple of the talks we pointed out that it took us eight, nine years to get to where GNOME2 ended up and we've had like four months of GNOME3. So there are plenty of things we still have to do. There are a lot of holes in our story. People will look at some things and say 'Why is this there? Does this really make sense?'. And in many cases that's because we didn't get to really finish that off. And that will start to fill in, the story will become a little bit more complete as we go through this cycle. I'm not saying that all this people will be completely convinced and that's unfortunate but I think over time people will realize what we are doing has been at least thought through."

Comments (125 posted)

Page editor: Jonathan Corbet

Announcements

Brief items

Google to buy Motorola Mobility

Google has announced that it is acquiring Motorola Mobility for $12.5 billion. "The acquisition of Motorola Mobility, a dedicated Android partner, will enable Google to supercharge the Android ecosystem and will enhance competition in mobile computing. Motorola Mobility will remain a licensee of Android and Android will remain open. Google will run Motorola Mobility as a separate business." The press release says nothing about mobile-related patents, but that's obviously a big part of what's going on here.

Comments (35 posted)

MPL 2.0-rc1

The first release candidate for version 2.0 of the Mozilla Public License is now available for comment. "We've had a lot of useful feedback throughout this process, both from professional lawyers and passionate non-lawyers, but we're still seeking to write the best license we can. That means we still want even more participation and commentary from both the Mozilla community and the broader community of MPL and FLOSS users." There are some FAQ files available for those wanting information on what has changed.

Comments (none posted)

SPI board and officer elections

Software in the Public Interest (SPI) has announced the results of the recent board and officer elections. The current directors are: Bdale Garbee, Joerg Jaspert, Jonathan McDowell, Michael Schultheiss, Clint Adams, Robert Brockway, Joshua D. Drake, Jimmy Kaplowitz, and Martin Zobel-Helas.

Full Story (comments: none)

Articles of interest

The mobile patent mess may get worse

The "Unwired View" site has come to a worrisome conclusion based on Motorola's quarterly earnings call and a conference keynote by CEO Sanjay Jha: Motorola may start demanding patent royalties from other Android handset makers. "The discussion above was solely about Android, and how Motorola can differentiate from other players who are already doing better - like HTC and Samsung. One of the key points to win against competition, according to Sanjay Jha, are Motorola's patents. Used not only defensively - to avoid paying royalties on its Android handsets, but also offensively. To collect royalties from other Android device makers." Needless to say, that would not help the situation.

Comments (20 posted)

Tablet and smartphone run on Android-based Grid OS (Linux Devices)

Linux Devices reviews the upcoming tablet and phone offerings from Fusion Garage. "Grid OS is also hyped for its 'predictive intelligence' features based on the Semantic Web, the contextual World Wide Web extension set that has been championed by WWW inventor Sir Tim Berners-Lee. The predictive software anticipates 'user needs for actions and information' with features like intelligent notifications. The latter offer time-based suggestions, such as 'restaurant recommendations near a user's GPS-determined lunch location,' says the company."

Comments (8 posted)

Non-profit Group Releases Open Source Mesh WiFi Network Software (HotHardware)

HotHardware reports that Geeks Without Frontiers has released open source software, based on the 802.11s WiFi standard, that lets Linux machines form their own mesh WiFi network. "Geeks claims that the mesh networks created by open80211s will be highly secure. It uses strong authentication to allow only authorized individuals entry, and encryption to keep prying eyes from seeing the traffic."

Comments (7 posted)

Nokia's MeeGo-powered N9 not coming to the UK or US (The H)

The H has a report on the limited availability of the Nokia N9. "According to Nokia's N9 'Check Availability' page, the MeeGo phone will be making an appearance in Europe in Austria, Bulgaria, Croatia, Finland, Greece, Hungary, Portugal, Poland, Romania, Russia, Serbia, Slovenia, Sweden and Switzerland. Outside Europe, the device is due to appear in Australia, China, Hong Kong, Malaysia, New Zealand, Saudi Arabia, Singapore, United Arab Emirates and Vietnam." (Thanks to Felix Braun)

Comments (21 posted)

Calls for Presentations

PyCon Ireland: Call for Papers and Volunteers for Beginner's Track

PyCon Ireland will take place October 8-9, 2011 in Dublin. The call for proposals is still open and volunteers are needed to help out with the beginner's track.

Full Story (comments: none)

PyCon 2012 Call for Proposals

PyCon 2012 will be held in Santa Clara, California, March 7-15. Proposals will be accepted through October 12, 2011.

Full Story (comments: none)

Upcoming Events

openSUSE Conference 2011

The third openSUSE Conference (osc11) will take place September 11-14, 2011 in Nuremberg, Germany. "Under the motto RWX³ all Free and Open Source Software enthusiasts are invited to come together for four days to learn, hack and to have a lot of fun. The program will cover a variety of topics with an emphasis on interaction between participants. The event is free of charge and open to anyone!"

Full Story (comments: none)

Events: August 25, 2011 to October 24, 2011

The following event listing is taken from the LWN.net Calendar.

August 22-26: 8th Netfilter Workshop (Freiburg, Germany)
August 25-28: EuroSciPy (Paris, France)
August 25-28: GNU Hackers Meeting (Paris, France)
August 26: Dynamic Language Conference 2011 (Edinburgh, United Kingdom)
August 27-28: Kiwi PyCon 2011 (Wellington, New Zealand)
August 27: PyCon Japan 2011 (Tokyo, Japan)
August 27: SC2011 - Software Developers Haven (Ottawa, ON, Canada)
August 30 - September 1: Military Open Source Software (MIL-OSS) WG3 Conference (Atlanta, GA, USA)
September 6-8: Conference on Domain-Specific Languages (Bordeaux, France)
September 7-9: Linux Plumbers' Conference (Santa Rosa, CA, USA)
September 8: Linux Security Summit 2011 (Santa Rosa, CA, USA)
September 8-9: Italian Perl Workshop 2011 (Turin, Italy)
September 8-9: Lua Workshop 2011 (Frick, Switzerland)
September 9-11: State of the Map 2011 (Denver, Colorado, USA)
September 9-11: Ohio LinuxFest 2011 (Columbus, OH, USA)
September 10-11: PyTexas 2011 (College Station, Texas, USA)
September 10-11: SugarCamp Paris 2011 - "Fix Sugar Documentation!" (Paris, France)
September 11-14: openSUSE Conference (Nuremberg, Germany)
September 12-14: X.Org Developers' Conference (Chicago, Illinois, USA)
September 14-16: Postgres Open (Chicago, IL, USA)
September 14-16: GNU Radio Conference 2011 (Philadelphia, PA, USA)
September 15: Open Hardware Summit (New York, NY, USA)
September 16: LLVM European User Group Meeting (London, United Kingdom)
September 16-18: Creative Commons Global Summit 2011 (Warsaw, Poland)
September 16-18: Pycon India 2011 (Pune, India)
September 18-20: Strange Loop (St. Louis, MO, USA)
September 19-22: BruCON 2011 (Brussels, Belgium)
September 22-25: Pycon Poland 2011 (Kielce, Poland)
September 23-24: Open Source Developers Conference France 2011 (Paris, France)
September 23-24: PyCon Argentina 2011 (Buenos Aires, Argentina)
September 24-25: PyCon UK 2011 (Coventry, UK)
September 27-30: PostgreSQL Conference West (San Jose, CA, USA)
September 27-29: Nagios World Conference North America 2011 (Saint Paul, MN, USA)
September 29 - October 1: Python Brasil [7] (São Paulo, Brazil)
September 30 - October 3: Fedora Users and Developers Conference: Milan 2011 (Milan, Italy)
October 1-2: WineConf 2011 (Minneapolis, MN, USA)
October 1-2: Big Android BBQ (Austin, TX, USA)
October 3-5: OpenStack "Essex" Design Summit (Boston, MA, USA)
October 4-9: PyCon DE (Leipzig, Germany)
October 6-9: EuroBSDCon 2011 (Netherlands)
October 7-9: Linux Autumn 2011 (Kielce, Poland)
October 7-10: Open Source Week 2011 (Malang, Indonesia)
October 8-9: PyCon Ireland 2011 (Dublin, Ireland)
October 8-9: Pittsburgh Perl Workshop 2011 (Pittsburgh, PA, USA)
October 8: PHP North West Conference (Manchester, UK)
October 8-10: GNOME "Boston" Fall Summit 2011 (Montreal, QC, Canada)
October 8: FLOSSUK / UKUUG's 2011 Unconference (Manchester, UK)
October 9-11: Android Open (San Francisco, CA, USA)
October 11: PLUG Talk: Rusty Russell (Perth, Australia)
October 12-15: LibreOffice Conference (Paris, France)
October 14-16: MediaWiki Hackathon New Orleans (New Orleans, Louisiana, USA)
October 14: Workshop Packaging BlankOn (Jakarta, Indonesia)
October 15: Packaging Debian Class BlankOn (Surabaya, Indonesia)
October 17-18: PyCon Finland 2011 (Turku, Finland)
October 18-21: PostgreSQL Conference Europe (Amsterdam, The Netherlands)
October 19-21: 13th German Perl Workshop (Frankfurt/Main, Germany)
October 19-21: Latinoware 2011 (Foz do Iguaçu, Brazil)
October 20-22: 13th Real-Time Linux Workshop (Prague, Czech Republic)
October 21-23: PHPCon Poland 2011 (Kielce, Poland)
October 21: PG-Day Denver 2011 (Denver, CO, USA)
October 23-25: Kernel Summit (Prague, Czech Republic)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds