
LWN.net Weekly Edition for July 28, 2011

Ksplice and CentOS

By Jonathan Corbet
July 26, 2011
Ksplice first announced itself in 2008 as a project for "rebootless kernel security updates" based at MIT. The students behind the project soon graduated, and so did the project itself; a company by the same name was formed to offer commercial no-reboot patching to customers who cared deeply about uptime. Ksplice Inc. also offered free update services for a number of distributions. Much of this came to an end on July 21, when Oracle announced that it had acquired Ksplice Inc. and would incorporate its services into its own Linux support offerings. A free form of ksplice might just live on, though, with support from an interesting direction.

On the same day that Oracle announced the acquisition, CentOS developer Karanbir Singh suggested that one place the CentOS community could help out would be in the creation of a ksplice update stream. CentOS updates had been available from Ksplice Inc., on a trial basis at least; the company even somewhat snidely let it be known that it was providing updates for CentOS during the first few months of 2011, when the CentOS project itself had dropped the ball on that job. Oracle's Ksplice site still claims to support CentOS, but there is no longer even a trial service available for free; anybody wanting update service for CentOS must pay for it from the beginning. (The free service for Fedora and Ubuntu appears to still be functioning, for now - but who builds a high-availability system on those distributions?)

It is hard to blame Oracle too much for this decision. Oracle has bought a company which, it believes, will make its support offerings more attractive. Making the ksplice service available for free to CentOS users, in the process making CentOS more attractive relative to commercial enterprise offerings, would tend to undercut the rationale behind the entire acquisition. While it would certainly be a nice thing for Oracle to provide a stream of ksplice updates for CentOS users, that is not something the company is obligated to do.

So if CentOS is to have an equivalent service, it will have to roll its own. There are a few challenges to be overcome to bring this idea to fruition, starting with the ksplice code itself. That code, by some strange coincidence, disappeared from the Ksplice Inc. site just before the acquisition was announced. The Internet tends not to forget, though, so copies of this code (which was released under the GPL) were quickly located. Karanbir has posted a repository containing the ksplice 0.9.9 code as a starting place; for good measure, there are also mirrors on gitorious and github.

Getting the ksplice code is the easy part; generating the update stream will prove to be somewhat harder. Ksplice works by looking at which functions are changed by a kernel patch; it then creates a kernel module which (at runtime) patches out the affected functions and replaces them with the fixed versions. Every patch must be examined with an eye toward what effects it will have on a running kernel and, perhaps, modified accordingly. If the original patch changes a data structure, the rebootless version may have to do things quite differently, sometimes to the point of creating a shadow structure containing the new information. And, naturally, each patch in the stream must take into account whatever previous patches may have been applied to the running kernel.
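By way of illustration, here is a loose userspace analogy in Python; ksplice itself rewrites machine code inside a running kernel, so this is only an analogy. A buggy function is replaced while the program keeps running, and a side table plays the role of the shadow structure holding data that the original layout has no room for:

    # A userspace analogy of rebootless patching: swap a function at
    # runtime and attach "shadow" data to existing objects via a side table.
    import weakref

    class Conn:                           # pre-patch data structure
        def __init__(self, host):
            self.host = host

    def handle(conn):                     # pre-patch, "buggy" function
        return "GET %s (no timeout!)" % conn.host

    _shadow = weakref.WeakKeyDictionary() # shadow storage: Conn -> extra fields

    def handle_fixed(conn):               # the replacement function
        extra = _shadow.setdefault(conn, {})
        timeout = extra.setdefault("timeout", 30)
        return "GET %s (timeout=%ds)" % (conn.host, timeout)

    c = Conn("lwn.net")
    print(handle(c))                      # old behavior
    handle = handle_fixed                 # the "splice"; no restart needed
    print(handle(c))                      # fixed behavior, shadow data added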

Some more information on this process can be found in this article from late 2008. The point, though, is that the creation of these runtime patches is not always a simple or mechanical process; it requires real attention from somebody who understands what the original patches are doing. CentOS has not always been able to keep up with Red Hat's patch stream as it is; the creation of this new stream for kernel patches will make the task harder. It is not immediately obvious that the project will be able to sustain that extra effort. If it does work out, though, it would clearly make CentOS a more attractive distribution for a number of high-uptime use cases.

An interesting question (for those who are into license lawyering, anyway) is whether a patch in Oracle's ksplice stream constitutes a work derived from the kernel for which the source must be provided. Having access to the source for Oracle's runtime patches would obviously facilitate the process of creating CentOS patches.

Even if a credible patch stream can be created, there is another challenge to be aware of: software patents. The Ksplice Inc. developers did not hesitate to apply for patents on their work; a quick search turns up two applications.

The first of these has a claim reading simply:

A method comprising: identifying a portion of executable code to be updated in a running computer program; and determining whether it is safe to modify the executable code of the running computer program without having to restart the running computer program.

That is an astonishingly broad claim, even by the standards of US software patents. One should note that both of the applications listed above are exactly that: applications. Chances are that they will see modifications before an actual patent is granted - if it is granted at all. But the US patent office has not always demonstrated a great ability to filter out patents that overreach or that are clearly covered by prior art.

Once again, license lawyers could get into the game and debate whether the implied patent license in the GPL would be sufficient to protect those who are distributing and using the ksplice code. Others may want to look at Oracle's litigation history and contemplate how the company might react to a free service competing with its newly-acquired company. There are other companies holding patents in this area as well. Like it or not, this technology has a potential cloud over it.

It all adds up to a daunting set of challenges for the CentOS project if it truly chooses to offer this type of service. That said, years of watching this community has made one thing abundantly clear: one should never discount what a determined group of hackers can do if they set their minds to a task. A CentOS with no-reboot kernel updates would be an appealing option in situations where uptime needs to be maximized but there are no resources for the operation of a high-availability cluster. If the CentOS community wants this feature badly enough, it can certainly make it happen.

Comments (38 posted)

Desktop name collisions

July 27, 2011

This article was contributed by Nathan Willis

The GNOME and KDE development communities ran into a potentially confusing name collision recently when it was discovered that both were using "System Settings" to label the menu entry for their respective configuration tools. A plan for handling the redundant names was eventually hashed out, and along the way the discussion shed light on a variety of other issues surrounding system configuration on modern Linux desktops.

The debate started when Ben Cooksley, maintainer of KDE System Settings, wrote to both GNOME's desktop-devel and KDE's kde-core-devel mailing lists with what he termed a "formal complaint" about the name change in GNOME 3's unified configuration tool, from "Control Center" to "System Settings." Cooksley argued that users would be confused by the presence of both GNOME's System Settings tool and KDE's, and that GNOME "packagers" (meaning downstream distributions) would disable the KDE tool, thus leaving mixed-environment users without a way to configure important KDE application settings. Because KDE was using the term before GNOME, he ended the complaint by requesting that GNOME "immediately rename it once again to another name which is not in conflict."

Outside of the core issue, Cooksley's initial few messages were openly combative in tone, accusing the GNOME project of deliberately choosing the same name; remarks like "as KDE occupied this name first, it is ours as a result, and I will NOT be relinquishing it to satisfy your personal (selfish) desires" threatened to derail any serious discourse. A few other posters in the two threads also reacted with acrimony, but list moderators Olav Vitters (GNOME) and Ingo Klöcker (KDE) were quick to step in and warn participants to keep the discussion civil. For the most part, the discussion did calm down, and focused on the technical challenge of permitting two system configuration tools to co-exist — a challenge without an easy solution.

Configuration Junction

The root of the potential confusion is that both KDE and GNOME handle desktop-wide preferences for a range of settings that their constituent applications need: localization, mouse settings, keyboard shortcuts, preferred file-handling applications, even widget and sound themes. Many of the settings are defined in Freedesktop.org specifications, but some are unique to just one desktop environment or another.

In both cases, the name given to the settings-configuration application is generic rather than customized and unique, as it is for most desktop environment utilities. Few on either list gave much credence to the notion that KDE had "dibs" on the generic usage of the name System Settings. Generic names, after all, are by their very nature going to attract name collisions. Shaun McCance observed that "you just can't expect to own generic names across desktops."

Jeremy Bicha even pointed out that a previous name collision between the two projects happened in the other direction, with KDE duplicating the name System Monitor, which GNOME had already been using for years:

There's no evidence to believe that KDE was trying to cause a conflict then, nor is there any evidence that Gnome is doing that now. Unproven allegations like these encourage the criticized party to get defensive and start attacking back, or just not want to listen. Please look for solutions instead of conspiracies.

On an entirely-KDE or entirely-GNOME system, the name of the other environment's configuration tool theoretically should not matter, but when users install applications from the other environment, the other tool can get pulled in as a dependency, leaving users faced with two menu entries named "System Settings." As several people on the thread pointed out, simply renaming one tool or the other to "System Preferences" does not solve the problem, as in either case it is unclear which tool is associated with which environment. Niklas Hambüchen added that although "preferences" and "settings" may be two different words in the English translations of the strings, in many other languages the two tools might still end up using the same word.

GNOME uses the OnlyShowIn= key in its System Settings .desktop file to make the entry appear only in GNOME Shell and Unity, so users running KDE (but using some GNOME applications) do not see the name-colliding menu entries. But as Cooksley and Bicha pointed out, the same solution does not work for KDE, because a substantial number of KDE applications expect the KDE System Settings tool to be available in the menu, even when running under GNOME (or another environment).

McCance suggested that each configuration tool include two .desktop files (which are used by both environments to build the system menus): one for the "native" environment which would use the generic "System Settings" name, and one for the non-native environment, which would prepend "GNOME" or "KDE" to the name, for clarity. Although that approach is possible under the Freedesktop.org .desktop specification using the OnlyShowIn= and NotShowIn= keys, Cooksley said it was already too late to make the change in KDE's .desktop files because the project had already frozen for its 4.7.0 release in August.
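Concretely, McCance's scheme would give the KDE tool two entries along these lines (hypothetical file contents, trimmed to the relevant keys):

    # kde-systemsettings.desktop: the native entry, hidden elsewhere
    [Desktop Entry]
    Type=Application
    Name=System Settings
    Exec=systemsettings
    OnlyShowIn=KDE;

    # kde-systemsettings-external.desktop: shown in other environments
    [Desktop Entry]
    Type=Application
    Name=KDE System Settings
    Exec=systemsettings
    NotShowIn=KDE;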

Several others felt that supplying two .desktop files for a single application was inelegant, and that the .desktop specification needed patching to specifically support applications that provide different names in different environments. User markg85's recommendation involves adding a NativeDE= key to declare an application's home environment, plus a NameNonNative= key that would provide an alternate name elsewhere.

On the kde-core-devel list, Ambroz Bizjak offered up a slightly different proposal, in which each application would include a Name= key (as they do currently), but add a Specific-KDE-Name= key for use in KDE, and a Specific-GNOME-Name= key for GNOME, etc. The debate over the difference between those two proposals (and variations of each) is currently ongoing on kde-core-devel.
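In rough .desktop terms, with the key names taken from the proposals and the values invented for illustration, the two approaches differ like so:

    # markg85: declare the native environment plus one fallback name
    [Desktop Entry]
    Name=System Settings
    NativeDE=KDE
    NameNonNative=KDE System Settings

    # Bizjak: a generic name plus per-environment overrides
    [Desktop Entry]
    Name=System Settings
    Specific-KDE-Name=System Settings
    Specific-GNOME-Name=KDE System Settings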

KDE applications and configuration

A tangent arose in the initial discussion over tool names asking why a KDE application would depend on the external KDE System Settings tool's presence when running under GNOME. Alex Neundorf said there were many configuration issues that could only be set through KDE System Settings, such as "widget style, colors, printing, file associations etc." Cooksley added Phonon, keyboard shortcuts, date/time and localization, and theme.

Giovanni Campagna insisted that those examples should actually be classified as bugs (either in KDE or in the particular application), because the majority of the settings in question should be accessible to applications regardless of which desktop environment is running, whether through XSETTINGS, D-Bus, or other means. The KDE Wallet password-storage application mentioned by Cooksley, for example, should be used if the environment is KDE, but all KDE-based applications should follow the org.freedesktop.Secrets setting, which will direct them to gnome-keyring if the environment is GNOME. Emmanuele Bassi said that most GTK+-based applications currently do adhere to the Freedesktop standards.

Aurélien Gâteau commented that he has been patching KDE applications to do better in this regard, so that "isolated" KDE applications will more closely follow the behavior of generic Qt applications, and pick up the configuration settings set by the environment. He said that there were "very few" applications that can only be configured through a KDE Control Module (KCM), the type of component presented in KDE System Settings; all others should be completely configurable through their own Settings menus.

The effort to standardize KDE application behavior is obviously ongoing. Later in the thread the personal finance manager KMyMoney came up as another example of an application that relies on KCM components in KDE System Settings to configure its localization settings. Ryan Rix pointed out that KMyMoney could embed the localization KCM.

As for XSETTINGS support, Frédéric Crozat commented that he had written KDE support for the specification in 2007, but that the code had yet to be merged. Gâteau added that he was under the impression that the specification was still in the draft stage, and not ready for public consumption.

KWord developer Thomas Zander said that the whole situation should be treated as a "call to action":

This shows that our system settings actually is only for KDE based applications. [...] Today we realized that Gnome apps don't use our settings and KDE apps need some KCMs that have no Gnome equivalents. And thats not something to get mad about when others work around it, I would personally see that as a call to action.

[...]

The long-term response certainly is to get out of the situation where KDE apps can't be configured without KDEs system settings application. I'll personally take a look at my app; KWord. I have to figure out if Gnome (or Windows) users can configure their locale so we don't have a default of A4 for users that want a Letter sized page.

The short-, medium-, and long-term

It is still not entirely clear what the KDE developers' plan is for 4.7.0. Cooksley concurred with McCance's proposal to use OnlyShowIn= and NotShowIn= keys as a "medium" term solution. When asked why he could not make the changes to KDE System Settings' .desktop files in Subversion and have a "short" term fix ready before August, though, he replied that as per the KDE Release Schedule, only build fixes are permitted after the freeze date.

In the medium term, it does appear that KDE will take the dual-.desktop approach, and that the discussion over additions to the .desktop specification is an attempt to find a "long" term solution. The longer that discussion continued, however, the more people began to comment that the truly long-term approach would be to obviate the need for every environment to provide its own set of system settings tools, particularly when the tools control the same underlying cross-desktop specifications. Two silos of settings are bad enough, but two tools controlling the same settings is a scenario with problems of its own.

For that problem, no one has yet drafted a proposal. But it is not only the KDE camp that recognizes the issue; in his proposal to the desktop-devel list, McCance argued that working on a shared groundwork was the best path forward, saying "if a user has to set his language in two different applications just because he happens to use applications written in two different toolkits, we have failed miserably." The good news is that the KDE and GNOME teams will both be in Berlin the second week of August for the Desktop Summit. Hopefully the long-term answer will inch a little closer to the present as a result.

Comments (9 posted)

IKS: Toward smarter content management systems

July 27, 2011

This article was contributed by Koen Vervloesem

Interactive Knowledge Stack (IKS) is an open source project focused on building an open and flexible technology platform for semantically enhanced Content Management Systems (CMS). Recently, the project held a workshop in Paris, "myCMS and the Web of Data", where some IKS tools were presented and where users of the IKS framework demonstrated how they used the semantic enhancements of the project in their CMS. According to the organizers, the event attracted 90 participants.

IKS is a collaboration between academia, industry, and open source developers, co-funded with €6.58 million by the European Union. The goal is to enrich content management systems with semantic content in order to let users benefit from more intelligent extraction and linking of their information. In other words, as researcher Wernher Behrendt described it in his introduction to the workshop: "The vision of IKS is to move the CMS forwards in the domain of interactive knowledge." Anyone can participate in this vision, for instance by adding their input to the user stories page on the project's wiki.

All of the code for the various IKS projects is provided under permissive open source licenses: BSD, Apache, or MIT. This is expressly done to pave the way for commercial use of IKS. Two of the software components of the IKS stack that are already in good shape are Apache Stanbol (a Java-based software stack to provide semantic services) and VIE (Vienna IKS Editables, a solution to make RDFa-encoded semantics browser-editable).

Semantic applications


In his keynote speech "From Semantic Platforms to Semantic Applications", Stéphane Croisier emphasized some problems with current semantic technology solutions. There is a lot of development happening around Linked Data, natural language processing, entity extraction, ontologies, and reasoners, all of which promise a great deal, but these solutions are maturing slowly. Croisier investigated some of them in a so-called "one-week reality check", and he didn't like what he saw:

Many of the semantic web solutions are not ready for multi-language environments, which is especially in Europe a big problem, or they have poor scalability. Others have a steep learning curve, and this industry is also plagued a lot by fanaticism and religious wars, like we had in open source five years ago. All these factors prevent mainstream adoption of the semantic web.

But the problems are not limited to the technical level. According to Croisier, the next key challenge is improved user experience:

Current user interfaces for the semantic web are ugly and not user-friendly. One of the reasons is that the budgets go mostly to platform development, not to development of the user interface, which is probably because many semantic web projects are born in universities and have an academic approach, focusing on the technology. But it doesn't have to be this way, and if we want a breakthrough of the semantic web, we better start working on good user interfaces.

At the same time, Croisier expressed his hope that developers will move their efforts from semantic platforms to semantic applications, or in other words "migrate from the geek to the practitioner". Only geeks are excited by the platform stuff like RDF (Resource Description Framework), ontologies and REST (REpresentational State Transfer) interfaces, but the industry needs some smart content applications. However, there seems to be a barrier to overcome, as Croisier admitted: "We're all still trying to find the killer app for the semantic web."

Apache Stanbol

After Croisier's talk, a couple of early adopters showed their demos of applications built on Apache Stanbol, the open source modular software stack for semantic content management initiated by IKS. Stanbol components are meant to be accessed over RESTful interfaces to provide semantic services, and the code is written in Java and based on the OSGi modularization framework.

Stanbol has four main features to offer to applications using its services: persistence (it stores or caches semantic information and makes it searchable), lifting/enhancement (it adds semantic information to unstructured pieces of content), knowledge models and reasoning (to enhance the semantic information), and interaction (management and generation of intelligent user interfaces). If you want to take a peek at the possibilities, there's an online demo: just paste some text into the form and run the engine to look at the entities Stanbol finds. There are also some installation instructions in the documentation to run a Stanbol server yourself. Because Stanbol has a RESTful API, it's also easy to test it with a command line tool like curl.
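For example, a minimal test from Python might look like the sketch below. It assumes a Stanbol launcher running locally on port 8080 and posts to the /engines enhancer endpoint used by builds from around this time; the exact path is an assumption and has moved around between versions:

    import urllib.request

    text = b"Paris is the capital of France."
    req = urllib.request.Request(
        "http://localhost:8080/engines",   # assumed endpoint; check your version
        data=text,
        headers={"Content-Type": "text/plain",
                 "Accept": "application/rdf+xml"})
    with urllib.request.urlopen(req) as response:
        # The response body is RDF describing the entities Stanbol found
        print(response.read().decode("utf-8"))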

At the IKS workshop, some integrators showed how they integrated Stanbol into an open source CMS. For instance, the London-based company Zaizi showed an integration with the enterprise content management system Alfresco. The code for this integration is licensed under the LGPL and there's a website with some information and installation instructions. The semantic engine extracts entities from Microsoft Office, ODF, PDF, HTML, and plain text documents uploaded to Alfresco and shows the entities next to the content details. The entities can also be selected in Alfresco's interface to list all other documents classified with that entity.

Jürgen Jakobitsch from the Austrian company punkt. netServices presented its Drupal plugin to integrate Stanbol. The current version is targeted at Drupal 6, but an update for Drupal 7 is coming soon. The module enables tag recommendations as well as semi-automated semantic annotation. The open data website of the Austrian government is running this Drupal/IKS integration.

Andrea Volpini and David Riccitelli from the Italian company InsideOut10 presented WordLift, an open source plugin to enrich textual content on a WordPress blog using HTML microdata, which is easy for search engines to parse. When writing a blog post, the content is sent to Stanbol, and the entities it finds will be added in Google Rich Snippets or Schema.org format. The user can then select which of the found entities are relevant. It's all still quite experimental, but the developers' target is clear: spoon-feeding HTML microdata to the search engines using semantic web technologies. According to Volpini, the source code of the plugin will be published in a few weeks.

In addition, Olivier Grisel from the French open source ECM (enterprise content management) company Nuxeo presented their Semantic Entities module for the Nuxeo CMS and Juan A. Prieto presented the integration of Stanbol with the semantic CMS XIMDEX.

Vienna IKS Editables


The other key component of the IKS software stack is VIE (Vienna IKS Editables), presented at the workshop by the main developer, Henri Bergius. The idea is to "build a CMS, no forms allowed", as people don't like forms ("forms are only for communication with the government," according to Bergius). To make this possible, the CMS and some JavaScript code must agree on the content model, and this is what VIE offers: it understands RDFa, a semantically annotated version of HTML.

If you annotate your website (or CMS) with RDFa, suddenly JavaScript code can understand the meaning of your content. VIE is an MIT-licensed browser API for RDFa, bridging RDFa to JavaScript. It depends on Backbone.js and jQuery, and it reads all RDFa annotated entities as JavaScript objects on a page where the library is loaded. These objects can then be edited by the user in the browser, and changes are synchronized with the server and the Document Object Model (DOM) in the browser.
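As a small illustration, a blog post annotated along these lines (hypothetical markup, using the common sioc and dcterms vocabularies) would show up in VIE as an editable object identified by its about= URI:

    <div xmlns:sioc="http://rdfs.org/sioc/ns#"
         xmlns:dcterms="http://purl.org/dc/terms/"
         about="http://example.com/blog/1" typeof="sioc:Post">
      <h2 property="dcterms:title">Hello, semantic world</h2>
      <div property="sioc:content">This text can be edited in place.</div>
    </div>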

The big promise of VIE is that it is independent of the CMS: the same lines of JavaScript work on Drupal, WordPress, TYPO3, and any other CMS that provides an implementation of the Backbone.sync method. Apart from implementing this method, you only have to mark up your content with RDFa, include vie.js in your pages, and write some JavaScript code; those three tasks are all independent of the underlying CMS.

On top of VIE, there's also VIE^2 (Vienna IKS Editable Entities), which talks to services like Stanbol and OpenCalais to find related information for your content. To show what's possible with VIE and VIE^2, the IKS developers created palsu.me, an online collaborative meeting tool.

Surviving after EU funding


The IKS project was started in 2009 as a four-year EU project, but how will it survive after the project (and the funding) is done? Bertrand Delacretaz from Adobe had some advice. Apart from being a developer at Adobe, he is also a member of the board of directors of the Apache Software Foundation. Stanbol has been an Apache Incubator project since 2010, and the developers would like it to graduate to a full Apache project, preferably before the end of 2012, because that is when the IKS project (and hence the funding) stops.

There are, however, some criteria that must be met before an Apache Incubator project is granted full project status. Delacretaz gave two examples: all communication about the project has to happen on the -dev mailing list, and there have to be at least three legally independent committers (with different income sources). The latter is currently a problem for Stanbol, because too many committers are funded by the IKS project. So Delacretaz would like to see more (external) committers for Stanbol to secure its future.

In search of the killer app

At the end of the conference, the organizers announced the IKS Semantic CMS UI/X Competition. Project manager John Pereira said that the first 1.5 years of IKS were focused on infrastructure, but now the focus has shifted to the users. In the contest, the IKS project will give two awards of €40,000 to CMS developers who build "killer user experiences and user interfaces" on top of IKS technology. Anyone with an idea for a killer semantic application can enter the contest.

Of course there are some conditions. The proposed solution should reuse as many IKS components as possible, and it should ideally be easy to implement. It also should focus on providing a compelling semantic experience. Ideas can be found in the list of semantic UI/X user stories. The awards will let the winners finance the development of their proposed solutions, and in exchange the deliverables have to be released under a permissive open source license. Proposals should be submitted online before November 2011 (there is no online form yet; for the moment, proposals go by email to John Pereira). The five best proposals will be shortlisted and invited to be pitched at the J. Boye Conference in November 2011, where the two winners will be selected.

There are some striking parallels between the promises of the semantic web and the "year of the Linux desktop" meme. Since at least 2000, IT magazines and web sites have been declaring every year as the year of the Linux desktop, in the sincere hope that that year would see a breakthrough in Linux adoption by businesses and home users on desktop computers. In the same way, the press has been writing about small success stories of semantic web technology, with the expectation that a breakthrough would soon follow. However, although most of the technology under the hood is ready, it looks like we will still have to wait a while for this "year of the semantic web". What the IKS workshop made clear is that there's a lot of work to do at the user-interface level. VIE looks like an interesting component for semantic web user interfaces, but as many of the speakers made clear, the whole industry is still desperately searching for that killer app.

Comments (none posted)

Page editor: Jonathan Corbet

Security

BrowserID: A new web authentication scheme

By Jake Edge
July 27, 2011

Identity and authentication on the internet remain an unsolved problem. Some sites are delegating the problem to Facebook, Twitter, and others, but that has obvious privacy and control problems, which makes it worrisome for at least some users. OpenID has never really gained much traction, and alternate user-centric proposals, like the related OpenID Connect, haven't either. There are both technical and "social" barriers that haven't been overcome. Mozilla's recent BrowserID proposal looks toward solving a subset of the identity problem by making it easier for users to log in to web applications without having to remember (or duplicate) multiple usernames and passwords.

One of the main differences between BrowserID and the other solutions is that it decouples the identity question from that of authentication. Essentially, using the Verified Email Protocol (VEP) that underlies BrowserID will simply authenticate that a given email address corresponds to the browser that is being used to sign in. OpenID and others supplement that authentication with the idea of a verified identity that could include things like email address, real name, physical address, photo, and so on. BrowserID and VEP forgo all of that, which may or may not make it more palatable to web site operators.

For users who control their own email domain, or whose email provider implements the protocol, VEP's operation is fairly straightforward. The email provider acts as an "authority" to authenticate its email addresses. A user who wants to have a verified email address would authenticate with the authority (via a username/password in a webmail application, for example) and the authority would make a JavaScript call that tells the browser that the authentication was successful. That call would cause the browser to generate a public/private key pair, sending the public key to the authority and storing the private key locally. A user could have multiple identities, each tied to a unique email address, at one or more authorities.

When the user logs into a site that allows VEP authentication (i.e. a "relying party" or RP), they would be prompted to choose one of their email addresses to use. The browser would then create an "assertion" that listed the email address, a timestamp, and some other information, sign it with the private key, and send it to the site. The site then contacts the authority to get the public key for the user and verifies that the assertion is correctly signed. At that point, the web site can be sure that it is talking to a browser that is (or was at one point) controlled by a user with that email address.
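Mozilla's documents point to the JSON Web Tokens draft as one possible encoding for these assertions. The sketch below shows that shape in Python; it signs with an HMAC purely so that the example runs with only the standard library, whereas the real protocol would use the browser's RSA or ECDSA private key, with the authority publishing the public half:

    import base64, hashlib, hmac, json, time

    def b64url(data):
        return base64.urlsafe_b64encode(data).rstrip(b"=")

    def sign_assertion(email, audience, key):
        # Real BrowserID would use an asymmetric algorithm (RS256/ES256);
        # HS256 keeps this sketch standard-library-only.
        header = {"alg": "HS256", "typ": "JWT"}
        payload = {"email": email,
                   "aud": audience,                # the relying party
                   "exp": int(time.time()) + 120}  # short-lived assertion
        signing_input = (b64url(json.dumps(header).encode()) + b"." +
                         b64url(json.dumps(payload).encode()))
        sig = hmac.new(key, signing_input, hashlib.sha256).digest()
        return signing_input + b"." + b64url(sig)

    def verify_assertion(assertion, key):
        signing_input, _, sig = assertion.rpartition(b".")
        expected = b64url(hmac.new(key, signing_input,
                                   hashlib.sha256).digest())
        return hmac.compare_digest(expected, sig)

    key = b"demo-shared-secret"
    assertion = sign_assertion("alice@example.com",
                               "https://rp.example.net", key)
    print(assertion.decode())
    print(verify_assertion(assertion, key))        # True

With asymmetric keys substituted in, the relying party would run the verification step using the public key fetched from the authority, additionally checking the aud and exp fields against its own origin and clock.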

Obviously, email providers are not likely to be falling all over themselves to implement an in-progress protocol, and it may be years—if ever—before they do, so VEP has the concept of "secondary verifiers" that would be stand-ins for the email provider authorities. If the user trusted the secondary (to respect their privacy for example), they could establish their control of a particular email address via a link in an email sent by the secondary. That would include the browser in the transaction so that the key pair could be generated and the public key sent to the secondary. If an RP also trusted the secondary, it could retrieve the public key from there and verify the authenticity of the email-browser connection that way.

In addition, some smaller web sites might wish (or need) to farm out the verification to a verification service run by a trusted third party. As the VEP wiki page notes: "These services obviously have tremendous power and would need to be constructed with both technical and legal care."

Doing a round-trip SSL transaction to an authority (or secondary verifier) whenever a user logs in may add unacceptable latency to the login process. It would also leak information to the authority about which sites a user is visiting. One way to handle that is with an "identity certificate" that contains the user's public key and is signed by the authority. That way, a web site would only be retrieving the authority's public key, not the user's, and that authority key could be cached by the site to eliminate all but one of the retrieval round-trips. But that raises the problem of key revocation.

VEP is certainly not alone in having that particular problem. To a certain extent, all public key encryption mechanisms suffer from key revocation problems. In fact, key management is one of the hardest problems to solve for public key cryptography. As the VEP page notes, there are already problems with revocation for SSL certificates:

Just as with site-identifying certificates, the RP is required to either retrieve a revocation list or use an online status check (that is, a CRL [Certificate Revocation List] or OCSP [Online Certificate Status Protocol]) to make sure an identity certificate is still valid. These steps have proven to be problematic for the site-identifying CAs [certificate authorities] that power the SSL site-identification infrastructure, and there is little reason to think that email hosts would be any more capable of handling them at larger scale. It may be realistic to think that the internet could support identity certificate revokation at scale; perhaps we should focus our attention instead on limiting the scope of breaches, for example by encouraging short-lived identity certificates and automated certificate refresh.

The actual guts of the protocol are still being worked out, and it is interesting to see that some flexibility in the protocol is envisioned. The wiki document describes it this way:

The basic message flow that makes this system work is independent of the exact cryptographic protocols and message formats that encode the messages. For purposes of clarity, however, it is described it using a specific set of protocols. The reader is asked to understand that those choices are for illustrative purposes, and that multiple encodings of the trust relationships described herein are possible.

Specifically: The explanation contained here will assume that user data lookups occur through the Webfinger protocol, that site-level metadata is retrieved through HTTPS using the .well-known/host-meta mechanism described in IETF RFC 5785 and draft-hammer-hostmeta, that assertions are generated and signed according to the JSON Web Tokens draft, and that asymmetric cryptography is performed using either RSA or ECDSA keypairs. When reference to a public key certificate is made, it is usually assumed that this would be an X509 certificate but there is no strong requirement that it be.

There are, of course, some concerns about BrowserID, not least the fact that an enormous amount of sensitive information would be stored by the browser (i.e. any private keys the user has generated) on a user's computer or device. In some ways, though, that's not much different than the current practice of storing username/password pairs for multiple web sites. Protecting that data store is clearly of utmost importance (whether BrowserID ever takes off or not).

BrowserID itself is a JavaScript implementation of VEP that will run in "all modern browsers, including recent versions of IE, and on mobile browsers" according to the Mozilla announcement. In addition to that and the VEP documents, Mozilla has set up browserid.org as the central location for information about the protocol and implementation.

Mozilla would clearly like to see other browser makers, users, and web sites work with it to firm up BrowserID and see it headed in a direction toward deployment. It's unclear whether that will happen or whether BrowserID will be yet another failed identity experiment. It certainly does have some interesting properties, and would allow sites to gather the extra information from users that they crave (i.e. beyond just an email address). When a user sets up an account, the application could request or require much more than just the email address it needs for authentication. The lack of that extra information is part of the reason that OpenID has never really taken off (and why OpenID Connect was proposed). One thing seems sure: solutions to this problem (or related set of problems) will keep coming up until something that is easy to use and can cater to privacy-conscious users actually becomes widespread.

Comments (5 posted)

Brief items

Security quotes of the week

War texting is something that [Don] Bailey demonstrated earlier this year with personal GPS locators. He demonstrated how to hack vendor Zoombak's personal GPS devices to find, target, and impersonate the user or equipment rigged with those consumer-focused devices. Those low-cost embedded tracking devices in smartphones or those personal GPS devices that track the whereabouts of your children, car, pet, or shipment can easily be intercepted by hackers, who can then pinpoint their whereabouts, impersonate them, and spoof their physical location, he says.
-- Dark Reading looks at talk at the upcoming Black Hat conference

What he found is that the batteries are shipped from the factory in a state called "sealed mode" and that there's a four-byte password that's required to change that. By analyzing a couple of updates that Apple had sent to fix problems in the batteries in the past, [Charlie] Miller found that password and was able to put the battery into "unsealed mode."

From there, he could make a few small changes to the firmware, but not what he really wanted. So he poked around a bit more and found that a second password was required to move the battery into full access mode, which gave him the ability to make any changes he wished. That password is a default set at the factory and it's not changed on laptops before they're shipped. Once he had that, Miller found he could do a lot of interesting things with the battery.

-- Threat Post on a Black Hat talk about Apple laptop battery vulnerabilities

Stage 1 (hiding): All participants registered for the backdoor hiding game are given a set of requirements for a software program. Before the deadline, they must submit the source code for a program that fulfills these requirements plus includes a backdoor. They must also send a description explaining how to exploit the backdoor.

Stage 2 (finding): All players registered are given a bundle with the different pieces of source code. To each bundle the organizers will add a few placebos (source codes that fulfill the requirements but should not include a backdoor). Before a deadline, the players must answer for each source code if they believe it includes a backdoor or not.

-- The 2nd Open Backdoor Hiding and Finding Contest to be held at DEFCON 0x13

This archive contains 18,592 scientific publications totaling 33GiB, all from Philosophical Transactions of the Royal Society and which should be available to everyone at no cost, but most have previously only been made available at high prices through paywall gatekeepers like JSTOR.
-- Gregory Maxwell protests the charges against Aaron Swartz

Comments (7 posted)

Embedded Web Servers Exposing Organizations To Attack (Dark Reading)

Dark Reading previews another talk from the upcoming Black Hat conference, this time on embedded web servers that have been connected to the internet, probably unknowingly. "[Michael] Sutton used Amazon EC2 computing resources to constantly scan large blocks of addresses and to detect any embedded Web servers. Sharp and Ricoh copiers digitally archive past photocopies, he notes, so if that feature is enabled and the copier is sitting on the Net unsecured, an attacker could retrieve any previously photocopied documents, he says. Even the fax-forwarding feature in some HP scanners could be abused if the scanner were open to the Internet: An attacker could access any faxed documents to the user by having them forwarded to his fax machine, for example."

Comments (8 posted)

New vulnerabilities

cifs-utils: /etc/mtab file corruption

Package(s): cifs-utils  CVE #(s): CVE-2011-1678
Created: July 25, 2011  Updated: September 23, 2011
Description: From the CVE entry:

smbfs in Samba 3.5.8 and earlier attempts to use (1) mount.cifs to append to the /etc/mtab file and (2) umount.cifs to append to the /etc/mtab.tmp file without first checking whether resource limits would interfere, which allows local users to trigger corruption of the /etc/mtab file via a process with a small RLIMIT_FSIZE value, a related issue to CVE-2011-1089.

Alerts:
Gentoo 201206-22 samba 2012-06-24
Oracle ELSA-2012-0313 samba 2012-03-07
Mandriva MDVSA-2011:148 samba 2011-10-11
Ubuntu USN-1226-1 samba 2011-10-04
Ubuntu USN-1226-2 cifs-utils 2011-10-04
CentOS CESA-2011:1220 samba3x 2011-09-22
CentOS CESA-2011:1219 samba 2011-09-22
Scientific Linux SL-samb-20110829 samba3x 2011-08-29
Scientific Linux SL-samb-20110829 samba 2011-08-29
Scientific Linux SL-Samb-20110829 samba, cifs-utils 2011-08-29
CentOS CESA-2011:1219 samba 2011-08-29
Red Hat RHSA-2011:1221-01 samba, cifs-utils 2011-08-29
Red Hat RHSA-2011:1220-01 samba3x 2011-08-29
Red Hat RHSA-2011:1219-01 samba 2011-08-29
Fedora FEDORA-2011-9269 cifs-utils 2011-07-12

Comments (none posted)

freetype: arbitrary code execution

Package(s): freetype  CVE #(s): CVE-2011-0226
Created: July 21, 2011  Updated: August 31, 2011
Description: From the CVE entry:

Integer signedness error in psaux/t1decode.c in FreeType before 2.4.6, as used in CoreGraphics in Apple iOS before 4.2.9 and 4.3.x before 4.3.4 and other products, allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption and application crash) via a crafted Type 1 font in a PDF document, as exploited in the wild in July 2011.

Alerts:
Gentoo 201201-09 freetype 2012-01-23
Fedora FEDORA-2011-9525 freetype 2011-07-22
Fedora FEDORA-2011-9542 freetype 2011-07-22
Debian DSA-2294-1 freetype 2011-08-14
SUSE SUSE-SU-2011:0853-1 freetype2 2011-07-28
openSUSE openSUSE-SU-2011:0852-1 freetype 2011-07-28
Mandriva MDVSA-2011:120 freetype2 2011-07-26
Ubuntu USN-1173-1 freetype 2011-07-25
Scientific Linux SL-free-20110721 freetype 2011-07-21
Red Hat RHSA-2011:1085-01 freetype 2011-07-21

Comments (none posted)

icedtea-web: multiple vulnerabilities

Package(s): icedtea-web  CVE #(s): CVE-2011-2513, CVE-2011-2514
Created: July 25, 2011  Updated: August 2, 2011
Description: From the Red Hat bugzilla: [1, 2]

Omair Majid discovered an information disclosure flaw in the JNLP (Java Network Launching Protocol) implementation used in IcedTea and IcedTea-web. An unsigned Java Web Start application or Java Applet could use this flaw to determine a path to the cache directory (/home/<username>/.netx/cache/) used to store downloaded jars for Web Start application or Applet by querying class's ClassLoader properties. This discloses full path to user's home directory on the local system and user's login name.

Omair Majid discovered a flaw in the JNLP (Java Network Launching Protocol) implementation used in IcedTea-web. An unsigned Java Web Start application could use this flaw to manipulate content of the Security Warning dialog to show different file name than the one access to which was requested by the applications. This could confuse user to grant unintended access to local files.

Alerts:
Fedora FEDORA-2011-9523 java-1.6.0-openjdk 2011-07-22
Scientific Linux SL-iced-20110727 icedtea-web 2011-07-27
Ubuntu USN-1178-1 icedtea-web, openjdk-6, openjdk-6b18 2011-07-27
Red Hat RHSA-2011:1100-01 icedtea-web 2011-07-27
openSUSE openSUSE-SU-2011:0829-1 icedtea-web 2011-07-25
Fedora FEDORA-2011-9541 icedtea-web 2011-07-22

Comments (none posted)

kernel: denial of service

Package(s): kernel  CVE #(s): CVE-2011-1780, CVE-2011-2525, CVE-2011-2689
Created: July 21, 2011  Updated: November 21, 2011
Description: From the Red Hat advisory:

* A flaw was found in the way the Xen hypervisor implementation handled instruction emulation during virtual machine exits. A malicious user-space process running in an SMP guest could trick the emulator into reading a different instruction than the one that caused the virtual machine to exit. An unprivileged guest user could trigger this flaw to crash the host. This only affects systems with both an AMD x86 processor and the AMD Virtualization (AMD-V) extensions enabled. (CVE-2011-1780, Important)

* A flaw allowed the tc_fill_qdisc() function in the Linux kernel's packet scheduler API implementation to be called on built-in qdisc structures. A local, unprivileged user could use this flaw to trigger a NULL pointer dereference, resulting in a denial of service. (CVE-2011-2525, Moderate)

* A flaw was found in the way space was allocated in the Linux kernel's Global File System 2 (GFS2) implementation. If the file system was almost full, and a local, unprivileged user made an fallocate() request, it could result in a denial of service. Note: Setting quotas to prevent users from using all available disk space would prevent exploitation of this flaw. (CVE-2011-2689, Moderate)

Alerts:
openSUSE openSUSE-SU-2012:0206-1 kernel 2012-02-09
Oracle ELSA-2011-2037 enterprise kernel 2011-12-15
Ubuntu USN-1286-1 linux 2011-12-03
Ubuntu USN-1269-1 linux-ec2 2011-11-21
Ubuntu USN-1274-1 linux-mvl-dove 2011-11-21
Ubuntu USN-1256-1 linux-lts-backport-natty 2011-11-09
Ubuntu USN-1268-1 kernel 2011-11-21
Ubuntu USN-1241-1 linux-fsl-imx51 2011-10-25
Debian DSA-2310-1 linux-2.6 2011-09-22
CentOS CESA-2011:1065 kernel 2011-09-22
Ubuntu USN-1211-1 linux 2011-09-21
Ubuntu USN-1212-1 linux-ti-omap4 2011-09-21
Debian DSA-2303-2 linux-2.6 2011-09-10
Debian DSA-2303-1 linux-2.6 2011-09-08
Scientific Linux SL-kern-20110823 kernel 2011-08-23
Red Hat RHSA-2011:1189-01 kernel 2011-08-23
Red Hat RHSA-2011:1163-01 kernel 2011-08-16
Red Hat RHSA-2011:1065-01 kernel 2011-07-21

Comments (none posted)

kernel: multiple vulnerabilities

Package(s): kernel  CVE #(s): CVE-2011-1020, CVE-2011-2183, CVE-2011-2491, CVE-2011-2496
Created: July 25, 2011  Updated: December 27, 2011
Description: From the SUSE advisory:

CVE-2011-1020: The proc filesystem implementation in the Linux kernel did not restrict access to the /proc directory tree of a process after this process performs an exec of a setuid program, which allowed local users to obtain sensitive information or cause a denial of service via open, lseek, read, and write system calls.

CVE-2011-2183: Fixed a race between ksmd and other memory management code, which could result in a NULL ptr dereference and kernel crash.

CVE-2011-2491: A local unprivileged user able to access a NFS filesystem could use file locking to deadlock parts of an nfs server under some circumstance.

CVE-2011-2496: The normal mmap paths all avoid creating a mapping where the pgoff inside the mapping could wrap around due to overflow. However, an expanding mremap() can take such a non-wrapping mapping and make it bigger and cause a wrapping condition.

Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
Oracle ELSA-2012-0150 kernel 2012-03-07
Red Hat RHSA-2012:0116-01 kernel 2012-02-15
Debian DSA-2389-1 linux-2.6 2012-01-15
Oracle ELSA-2012-0007 kernel 2012-01-12
Scientific Linux SL-kern-20120112 kernel 2012-01-12
CentOS CESA-2012:0007 kernel 2012-01-11
Red Hat RHSA-2012:0007-01 kernel 2012-01-10
Scientific Linux SL-Kern-20111206 kernel 2011-12-06
Oracle ELSA-2011-2037 enterprise kernel 2011-12-15
Red Hat RHSA-2011:1813-01 kernel 2011-12-13
Red Hat RHSA-2011:1530-03 kernel 2011-12-06
Ubuntu USN-1286-1 linux 2011-12-03
Ubuntu USN-1285-1 linux 2011-11-29
Ubuntu USN-1281-1 linux-ti-omap4 2011-11-24
Ubuntu USN-1280-1 linux-ti-omap4 2011-11-24
Ubuntu USN-1279-1 linux-lts-backport-natty 2011-11-24
Ubuntu USN-1278-1 linux-lts-backport-maverick 2011-11-24
Ubuntu USN-1269-1 linux-ec2 2011-11-21
Ubuntu USN-1274-1 linux-mvl-dove 2011-11-21
Ubuntu USN-1271-1 linux-fsl-imx51 2011-11-21
Ubuntu USN-1272-1 linux 2011-11-21
Ubuntu USN-1256-1 linux-lts-backport-natty 2011-11-09
openSUSE openSUSE-SU-2011:1222-1 kernel 2011-11-08
Ubuntu USN-1268-1 kernel 2011-11-21
Ubuntu USN-1244-1 linux-ti-omap4 2011-10-25
Ubuntu USN-1241-1 linux-fsl-imx51 2011-10-25
Scientific Linux SL-kern-20111020 kernel 2011-10-20
CentOS CESA-2011:1386 kernel 2011-10-21
Red Hat RHSA-2011:1386-01 kernel 2011-10-20
Scientific Linux SL-kern-20111005 kernel 2011-10-05
Red Hat RHSA-2011:1350-01 kernel 2011-10-05
Ubuntu USN-1218-1 linux 2011-09-29
Ubuntu USN-1216-1 linux-ec2 2011-09-26
CentOS CESA-2011:1212 kernel 2011-09-22
Debian DSA-2310-1 linux-2.6 2011-09-22
Ubuntu USN-1211-1 linux 2011-09-21
SUSE SUSE-SU-2011:1058-1 kernel 2011-09-21
Ubuntu USN-1212-1 linux-ti-omap4 2011-09-21
SUSE SUSE-SA:2011:040 kernel 2011-09-20
Ubuntu USN-1208-1 linux-mvl-dove 2011-09-14
Ubuntu USN-1205-1 linux-lts-backport-maverick 2011-09-13
Ubuntu USN-1204-1 linux-fsl-imx51 2011-09-13
Ubuntu USN-1203-1 linux-mvl-dove 2011-09-13
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Ubuntu USN-1201-1 linux 2011-09-13
Red Hat RHSA-2011:1253-01 kernel-rt 2011-09-12
Debian DSA-2303-2 linux-2.6 2011-09-10
Scientific Linux SL-kern-20110906 kernel 2011-09-06
Debian DSA-2303-1 linux-2.6 2011-09-08
Red Hat RHSA-2011:1212-01 kernel 2011-09-06
Scientific Linux SL-kern-20110823 kernel 2011-08-23
Red Hat RHSA-2011:1189-01 kernel 2011-08-23
Fedora FEDORA-2011-11103 kernel 2011-08-18
Ubuntu USN-1189-1 kernel 2011-08-19
SUSE SUSE-SU-2011:0899-1 kernel 2011-08-12
SUSE SUSE-SA:2011:034 kernel 2011-08-12
openSUSE openSUSE-SU-2011:0861-1 kernel 2011-08-02
openSUSE openSUSE-SU-2011:0860-1 kernel 2011-08-02
SUSE SUSE-SU-2011:0832-1 kernel 2011-07-25
SUSE SUSE-SA:2011:031 kernel 2011-07-25

Comments (none posted)

libsndfile: arbitrary code execution

Package(s): libsndfile  CVE #(s): CVE-2011-2696
Created: July 21, 2011  Updated: December 18, 2013
Description: From the Red Hat advisory:

An integer overflow flaw, leading to a heap-based buffer overflow, was found in the way the libsndfile library processed certain Ensoniq PARIS Audio Format (PAF) audio files. An attacker could create a specially-crafted PAF file that, when opened, could cause an application using libsndfile to crash or, potentially, execute arbitrary code with the privileges of the user running the application.

Alerts:
Gentoo 201312-14 libsndfile 2013-12-17
Fedora FEDORA-2011-9319 libsndfile 2011-07-15
Pardus 2011-103 libsndfile 2011-08-04
openSUSE openSUSE-SU-2011:0855-1 libsndfile 2011-08-01
openSUSE openSUSE-SU-2011:0854-1 libsndfile 2011-07-29
Debian DSA-2288-1 libsndfile 2011-07-28
Ubuntu USN-1174-1 libsndfile 2011-07-25
Mandriva MDVSA-2011:119 libsndfile 2011-07-25
Fedora FEDORA-2011-9325 libsndfile 2011-07-15
Scientific Linux SL-libs-20110720 libsndfile 2011-07-20
Red Hat RHSA-2011:1084-01 libsndfile 2011-07-20

Comments (none posted)

logrotate: symlink and hard link attacks

Package(s): logrotate  CVE #(s): CVE-2011-1548
Created: July 21, 2011  Updated: July 27, 2011
Description: From the CVE entry:

The default configuration of logrotate on Debian GNU/Linux uses root privileges to process files in directories that permit non-root write access, which allows local users to conduct symlink and hard link attacks by leveraging logrotate's lack of support for untrusted directories, as demonstrated by /var/log/postgresql/.

Alerts:
Ubuntu USN-1172-1 logrotate 2011-07-21

Comments (none posted)

mapserver: multiple vulnerabilities

Package(s): mapserver  CVE #(s): CVE-2011-2703, CVE-2011-2704
Created: July 26, 2011  Updated: October 30, 2012
Description: From the Debian advisory:

CVE-2011-2703: Several instances of insufficient escaping of user input, leading to SQL injection attacks via OGC filter encoding (in WMS, WFS, and SOS filters).

CVE-2011-2704: Missing length checks in the processing of OGC filter encoding that can lead to stack-based buffer overflows and the execution of arbitrary code.

Alerts:
Fedora FEDORA-2012-16028 mapserver 2012-10-30
Debian DSA-2285-1 mapserver 2011-07-26

Comments (none posted)

opensaml2: XML signature wrapping attack

Package(s): opensaml2  CVE #(s): CVE-2011-1411
Created: July 25, 2011  Updated: September 27, 2011
Description: From the Debian advisory:

Juraj Somorovsky, Andreas Mayer, Meiko Jensen, Florian Kohlar, Marco Kampmann and Joerg Schwenk discovered that Shibboleth, a federated web single sign-on system is vulnerable to XML signature wrapping attacks. More details can be found in the Shibboleth advisory at http://shibboleth.internet2.edu/security-advisories.html.

Alerts:
Fedora FEDORA-2011-12890 opensaml 2011-09-18
Debian DSA-2284-1 opensaml2 2011-07-25

Comments (none posted)

opie: privilege escalation/code execution

Package(s): opie  CVE #(s): CVE-2011-2489, CVE-2011-2490
Created: July 21, 2011  Updated: July 27, 2011
Description: From the Debian advisory:

Sebastian Krahmer discovered that opie, a system that makes it simple to use One-Time passwords in applications, is prone to a privilege escalation (CVE-2011-2490) and an off-by-one error, which can lead to the execution of arbitrary code (CVE-2011-2489).

Alerts:
openSUSE openSUSE-SU-2011:0848-1 opie 2011-07-27
Debian DSA-2281-1 opie 2011-07-21

Comments (none posted)

phpmyadmin: multiple vulnerabilities

Package(s): phpmyadmin  CVE #(s): CVE-2011-2505, CVE-2011-2506, CVE-2011-2507, CVE-2011-2508, CVE-2011-2642
Created: July 27, 2011  Updated: August 15, 2011
Description: From the Debian advisory:

CVE-2011-2505: Possible session manipulation in Swekey authentication.

CVE-2011-2506: Possible code injection in setup script, in case session variables are compromised.

CVE-2011-2507: Regular expression quoting issue in Synchronize code.

CVE-2011-2508: Possible directory traversal in MIME-type transformation.

CVE-2011-2642: Cross site scripting in table Print view when the attacker can create crafted table names.

Alerts:
Gentoo 201201-01 phpmyadmin 2012-01-04
Mandriva MDVSA-2011:124 phpmyadmin 2011-08-14
Debian DSA-2286-1 phpmyadmin 2011-07-26
Fedora FEDORA-2011-9725 phpMyAdmin 2011-07-26
Fedora FEDORA-2011-9734 phpMyAdmin 2011-07-26

Comments (none posted)

qemu-kvm: privilege escalation

Package(s): qemu-kvm  CVE #(s): CVE-2011-2527
Created: July 25, 2011  Updated: August 20, 2012
Description: From the Debian advisory:

Andrew Griffiths discovered that group privileges were insufficiently dropped when started with -runas option, resulting in privilege escalation.

Alerts:
Mageia MGASA-2012-0222 qemu 2012-08-18
Fedora FEDORA-2012-8604 qemu 2012-06-07
openSUSE openSUSE-SU-2012:0207-1 kvm 2012-02-09
Scientific Linux SL-qemu-20111206 qemu-kvm 2011-12-06
Red Hat RHSA-2011:1531-03 qemu-kvm 2011-12-06
Ubuntu USN-1177-1 qemu-kvm 2011-07-27
Debian DSA-2282-1 qemu-kvm 2011-07-25

Comments (none posted)

rgmanager: privilege escalation

Package(s): rgmanager  CVE #(s): CVE-2010-3389
Created: July 21, 2011  Updated: December 9, 2011
Description: From the Red Hat advisory:

The rgmanager package contains the Red Hat Resource Group Manager, which provides the ability to create and manage high-availability server applications in the event of system downtime.

It was discovered that certain resource agent scripts set the LD_LIBRARY_PATH environment variable to an insecure value containing empty path elements. A local user able to trick a user running those scripts to run them while working from an attacker-writable directory could use this flaw to escalate their privileges via a specially-crafted dynamic library.

Alerts:
Gentoo 201412-09 racer-bin, fmod, PEAR-Mail, lvm2, gnucash, xine-lib, lastfmplayer, webkit-gtk, shadow, PEAR-PEAR, unixODBC, resource-agents, mrouted, rsync, xmlsec, xrdb, vino, oprofile, syslog-ng, sflowtool, gdm, libsoup, ca-certificates, gitolite, qt-creator 2014-12-11
Scientific Linux SL-reso-20111206 resource-agents 2011-12-06
Red Hat RHSA-2011:1580-03 resource-agents 2011-12-06
Gentoo 201110-18 rgmanager 2011-10-22
CentOS CESA-2011:1000 rgmanager 2011-09-22
Red Hat RHSA-2011:1000-01 rgmanager 2011-07-21
Scientific Linux SL-rgma-20110721 rgmanager 2011-07-21

Comments (none posted)

ruby: predictable random numbers

Package(s):ruby CVE #(s):CVE-2011-2686 CVE-2011-2705
Created:July 26, 2011 Updated:January 31, 2012
Description: From the Red Hat bugzilla:

It was found that Ruby did not properly reinitialize the random number generator when forking a new Ruby process. A local attacker could use this flaw to more easily predict random numbers.

Alerts:
Debian-LTS DLA-235-1 ruby1.9.1 2015-05-30
Ubuntu USN-1377-1 ruby1.8 2012-02-27
openSUSE openSUSE-SU-2012:0228-1 Ruby 2012-02-09
Scientific Linux SL-ruby-20120130 ruby 2012-01-30
Oracle ELSA-2012-0070 ruby 2012-01-31
Oracle ELSA-2012-0070 ruby 2012-01-31
CentOS CESA-2012:0070 ruby 2012-01-30
CentOS CESA-2012:0070 ruby 2012-01-30
Red Hat RHSA-2012:0070-01 ruby 2012-01-30
Scientific Linux SL-ruby-20111206 ruby 2011-12-06
Red Hat RHSA-2011:1581-03 ruby 2011-12-06
Fedora FEDORA-2011-9374 ruby 2011-07-16
Fedora FEDORA-2011-9359 ruby 2011-07-16
Pardus 2011-101 ruby 2011-08-03

Comments (none posted)

samba: multiple vulnerabilities

Package(s):samba CVE #(s):CVE-2011-2522 CVE-2011-2694
Created:July 27, 2011 Updated:September 23, 2011
Description: From the Mandriva advisory:

All current released versions of Samba are vulnerable to a cross-site request forgery in the Samba Web Administration Tool (SWAT). By tricking a user who is authenticated with SWAT into clicking a manipulated URL on a different web page, it is possible to manipulate SWAT (CVE-2011-2522).

All current released versions of Samba are vulnerable to a cross-site scripting issue in the Samba Web Administration Tool (SWAT). On the Change Password field, it is possible to insert arbitrary content into the user field (CVE-2011-2694).

Alerts:
SUSE SUSE-SU-2012:0348-1 Samba 2012-03-09
Oracle ELSA-2012-0313 samba 2012-03-07
CentOS CESA-2011:1220 samba3x 2011-09-22
CentOS CESA-2011:1219 samba 2011-09-22
openSUSE openSUSE-SU-2011:0998-1 samba 2011-09-05
Pardus 2011-110 samba 2011-09-05
Scientific Linux SL-samb-20110829 samba3x 2011-08-29
Scientific Linux SL-samb-20110829 samba 2011-08-29
Scientific Linux SL-Samb-20110829 samba, cifs-utils 2011-08-29
CentOS CESA-2011:1219 samba 2011-08-29
Red Hat RHSA-2011:1221-01 samba, cifs-utils 2011-08-29
Red Hat RHSA-2011:1220-01 samba3x 2011-08-29
Red Hat RHSA-2011:1219-01 samba 2011-08-29
Fedora FEDORA-2011-10367 samba 2011-08-05
Fedora FEDORA-2011-10341 samba 2011-08-05
Debian DSA-2290-1 samba 2011-08-07
Slackware SSA:2011-210-03 samba 2011-08-01
Mandriva MDVSA-2011:121 samba 2011-07-27
Ubuntu USN-1182-1 samba 2011-08-02

Comments (none posted)

squirrelmail: multiple vulnerabilities

Package(s):squirrelmail CVE #(s):CVE-2011-2023 CVE-2010-4555 CVE-2010-4554
Created:July 25, 2011 Updated:August 15, 2011
Description: From the CVE entries:

Cross-site scripting (XSS) vulnerability in functions/mime.php in SquirrelMail before 1.4.22 allows remote attackers to inject arbitrary web script or HTML via a crafted STYLE element in an e-mail message. (CVE-2011-2023)

Multiple cross-site scripting (XSS) vulnerabilities in SquirrelMail 1.4.21 and earlier allow remote attackers to inject arbitrary web script or HTML via vectors involving (1) drop-down selection lists, (2) the > (greater than) character in the SquirrelSpell spellchecking plugin, and (3) errors associated with the Index Order (aka options_order) page. (CVE-2010-4555)

functions/page_header.php in SquirrelMail 1.4.21 and earlier does not prevent page rendering inside a frame in a third-party HTML document, which makes it easier for remote attackers to conduct clickjacking attacks via a crafted web site. (CVE-2010-4554)

Alerts:
Scientific Linux SL-squi-20120208 squirrelmail 2012-02-08
Oracle ELSA-2012-0103 squirrelmail 2012-02-09
Oracle ELSA-2012-0103 squirrelmail 2012-02-09
CentOS CESA-2012:0103 squirrelmail 2012-02-08
CentOS CESA-2012:0103 squirrelmail 2012-02-08
Red Hat RHSA-2012:0103-01 squirrelmail 2012-02-08
Mandriva MDVSA-2011:123 squirrelmail 2011-08-13
Debian DSA-2291-1 squirrelmail 2011-08-08
Fedora FEDORA-2011-9309 squirrelmail 2011-07-13
Fedora FEDORA-2011-9311 squirrelmail 2011-07-13

Comments (none posted)

systemtap: privilege escalation

Package(s):systemtap CVE #(s):CVE-2011-2502 CVE-2011-2503
Created:July 26, 2011 Updated:September 23, 2011
Description: From the Red Hat advisory:

It was found that SystemTap did not perform proper module path sanity checking if a user specified a custom path to the uprobes module, used when performing user-space probing ("staprun -u"). A local user who is a member of the stapusr group could use this flaw to bypass intended module-loading restrictions, allowing them to escalate their privileges by loading an arbitrary, unsigned module. (CVE-2011-2502)

A race condition flaw was found in the way the staprun utility performed module loading. A local user who is a member of the stapusr group could use this flaw to modify a signed module while it is being loaded, allowing them to escalate their privileges. (CVE-2011-2503)

Alerts:
Debian DSA-2348-1 systemtap 2011-11-17
CentOS CESA-2011:1089 systemtap 2011-09-22
Scientific Linux SL-syst-20110725 systemtap 2011-07-25
Fedora FEDORA-2011-9739 systemtap 2011-07-26
Fedora FEDORA-2011-9722 systemtap 2011-07-26
Scientific Linux SL-syst-20110725 systemtap 2011-07-25
Red Hat RHSA-2011:1089-01 systemtap 2011-07-25
Red Hat RHSA-2011:1088-01 systemtap 2011-07-25

Comments (none posted)

wireshark: denial of service

Package(s):wireshark CVE #(s):CVE-2011-2597
Created:July 25, 2011 Updated:August 10, 2011
Description: From the CVE entry:

The Lucent/Ascend file parser in Wireshark 1.2.x before 1.2.18, 1.4.x through 1.4.7, and 1.6.0 allows remote attackers to cause a denial of service (infinite loop) via malformed packets.

Alerts:
Oracle ELSA-2013-1569 wireshark 2013-11-26
CentOS CESA-2012:0509 wireshark 2012-04-24
Oracle ELSA-2012-0509 wireshark 2012-04-23
Scientific Linux SL-wire-20120423 wireshark 2012-04-23
Red Hat RHSA-2012:0509-01 wireshark 2012-04-23
openSUSE openSUSE-SU-2011:1263-1 wireshark 2011-11-21
SUSE SUSE-SU-2011:1262-1 wireshark 2011-11-21
openSUSE openSUSE-SU-2011:1142-1 wireshark 2011-10-18
Gentoo 201110-02 wireshark 2011-10-09
Fedora FEDORA-2011-9638 wireshark 2011-07-23
Fedora FEDORA-2011-9640 wireshark 2011-07-23
Pardus 2011-107 wireshark 2011-08-04
Mandriva MDVSA-2011:118 wireshark 2011-07-24

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The 3.0 kernel is out, released on July 21. Linus said:

As already mentioned several times, there are no special landmark features or incompatibilities related to the version number change, it's simply a way to drop an inconvenient numbering system in honor of twenty years of Linux. In fact, the 3.0 merge window was calmer than most, and apart from some excitement from RCU I'd have called it really smooth.

Beyond the numbering scheme change, this kernel includes POSIX alarm timer support, a just-in-time compiler for BPF packet filters, a new sendmmsg() system call, ICMP sockets, the merging of the Xen backend driver (completing the long process of getting Xen Dom0 support into the kernel), namespace file descriptors, and more. See the KernelNewbies 3.0 page for lots of details.

Stable updates: no stable updates have been released in the last week. The 2.6.35.14 update is in the review process as of this writing.

Comments (2 posted)

Quotes of the week

I am quite at ease not participating in netfilter/iptables anymore while the discussion about IPv6 NAT becomes an issue again: I always indicated "over my dead body", and now that I am no longer in charge, nobody will have to kill me ;)
-- Harald Welte

Working on an update kernel for Fedora 15, rebasing from 2.6.38 to 3.0. As we know a bunch of userspace packages need updating to deal with the 2.6 -> 3.x transition, we made a decision to ship 3.0, but call it 2.6.40 rather than ship a ton of updates, and risk breaking other code that we don't ship.

I look forward to the "OMG, RED HAT FORKS LINUX" posts on slashdot.

-- OMG Dave Jones FORKS LINUX!

Thanks to git send-email I know exactly what networking patches every Linux vendor is backporting into their kernel.
-- David Miller

Comments (42 posted)

Garrett: Further adventures in EFI booting

Matthew Garrett continues his investigation into the subtleties of booting Linux with EFI. "GPT, or the GUID Partition Table, is the EFI era's replacement for MBR partitions. It has two main advantages over MBR - firstly it can cover partitions larger than 2TB without having to increase sector size, and secondly it doesn't have the primary/logical partition horror that still makes MBR more difficult than it has any right to be. The format is pretty simple - you have a header block 1 logical block into the media (so 512 bytes on a typical USB stick), and then a pointer to a list of partitions. There's then a secondary table one block from the end of the disk, which points at another list of partitions. Both blocks have multiple CRCs that guarantee that neither the header nor the partition list have been corrupted. It turns out to be a relatively straightforward modification of isohybrid to get it to look for a secondary EFI image and construct a GPT entry pointing at it. This works surprisingly well, and media prepared this way will boot EFI machines if burned to a CD or written to a USB stick."
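
For readers who want to follow along, the on-disk header Garrett describes has roughly the following shape; this is a sketch based on the field layout in the UEFI specification, not code from isohybrid itself:

    /* GPT header, normally found at LBA 1; a second copy lives near the
     * end of the disk.  All fields are little-endian; the struct is
     * packed.  Sketch per the UEFI specification. */
    #include <stdint.h>

    struct gpt_header {
        uint8_t  signature[8];            /* "EFI PART" */
        uint32_t revision;
        uint32_t header_size;
        uint32_t header_crc32;            /* CRC32 of this header */
        uint32_t reserved;
        uint64_t current_lba;             /* where this copy lives */
        uint64_t backup_lba;              /* where the other copy lives */
        uint64_t first_usable_lba;
        uint64_t last_usable_lba;
        uint8_t  disk_guid[16];
        uint64_t partition_entries_lba;   /* pointer to the partition list */
        uint32_t num_partition_entries;
        uint32_t partition_entry_size;    /* usually 128 */
        uint32_t partition_entries_crc32; /* CRC32 of the partition list */
    } __attribute__((packed));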

Comments (9 posted)

Kernel development news

3.1 merge window part 1

By Jonathan Corbet
July 27, 2011
As of this writing, almost 5,400 non-merge changesets have been pulled into the mainline repository for the 3.1 development cycle. It's a wide-ranging set of changes, but many of them are cleanups - almost 600 of those changes have the word "remove" in the title, and the total growth of the kernel is less than 5,000 lines. A number of trees remain unpulled, though, so there is plenty of scope for the kernel to grow yet.

User-visible changes merged for 3.1 include:

  • Xen has gained a couple of new guest memory management techniques called "self-ballooning" and "frontswap-selfshrinking." Both use transcendent memory to try to improve memory performance and smooth out usage spikes.

  • The Xen PCI backend driver - allowing the kernel to export PCI devices to guests - has been merged.

  • The Xen balloon driver now supports memory hotplug.

  • There are a number of enhancements to the IPset subsystem, including a mechanism to store network addresses and interface names together as named pairs, adjustable timeouts for SET targets, and more.

  • The BATMAN-adv protocol (covered here in February) has gained a better roaming mechanism, improved client announcement, and some performance improvements.

  • The networking layer has a new "fanout" feature; using setsockopt(), packets captured from an AF_PACKET socket can be divided among multiple processes. A number of policies describing how packets are "fanned out" are supported. (See the sketch following this list.)

  • The BPF JIT compiler now supports the PowerPC architecture.

  • The ptrace() system call has been augmented with some new commands, starting with PTRACE_SEIZE, which is like PTRACE_ATTACH but does not trap the traced process or change its signal state. PTRACE_INTERRUPT will stop a traced process without creating confusion with signals. PTRACE_LISTEN allows the traced process to receive certain events even though it is in a stopped state. All of these options are considered to be under development; a special PTRACE_SEIZE_DEVEL flag must be provided by user space to acknowledge an understanding that things might change.

  • The lseek() system call now implements SEEK_HOLE and SEEK_DATA; these operations can be used to locate extended blocks of zeroes within files. (See the example following this list.)

  • Architecture support for the OpenRISC CPU has been added.

  • A number of writeback-improvement changes have gone in, including dynamic estimation of backing store bandwidth and a determined attempt to make use of most of that bandwidth.

  • The iwlagn driver now has WoWLAN (wakeup on wireless LAN) support.

  • New drivers:

    • Processors and systems: CSR SiRFSoC PRIMA2 ARM Cortex A9 boards, Xilinx Zynq ARM Cortex A9 boards, Wolfson Cragganmore 6410 CPU modules, and Marvell PXA168 GuruPlug Display (gplugD) boards. Also, low-level support for the OLPC XO-1 laptop has finally been merged.

    • Audio: Analog Devices ADAU1701 SigmaDSP codecs, Analog Devices ADAV801 and ADAV803 audio codecs, ST STA32x 2.1-channel digital audio systems, Wolfson WM8983 codecs, and Creative CA0132 codecs.

    • Block: Brocade-1860 fabric adapters.

    • Input: Speedlink VAD Cezanne mice.

    • Miscellaneous: Cirrus Logic EP93xx M2P/M2M DMA controllers, SMSC SCH5636 Super I/O hardware monitor chips, AMS369FG06 AMOLED LCD controllers, FSA9480 micro USB switches, Microwire 93XX46 EEPROM controllers, Qualcomm PMIC8XXX realtime clock modules, Analog Devices AD5686R/AD5685R/AD5684R digital to analog converters, and Analog Devices AD7792 and AD7793 analog to digital converters.

    • Network: Low-level CAIF-over-HSI network devices, Faraday FTGMAC100 Gigabit Ethernet adapters, and NXP PN533 near-field communication adapters.

    • USB: PLX NET2272 controllers.
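
For the curious, here is a minimal sketch of how the new fanout feature might be used from user space. The helper name join_fanout_group() is invented for illustration; PACKET_FANOUT and PACKET_FANOUT_HASH are the constants the 3.1 headers provide:

    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>

    /* Each worker process opens its own AF_PACKET socket and joins the
     * same fanout group; the kernel then divides captured packets among
     * the group's members according to the chosen policy. */
    static int join_fanout_group(int group_id)
    {
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        /* Low 16 bits: group ID; high 16 bits: fanout policy. */
        int fanout_arg = group_id | (PACKET_FANOUT_HASH << 16);

        if (fd < 0)
            return -1;
        if (setsockopt(fd, SOL_PACKET, PACKET_FANOUT,
                       &fanout_arg, sizeof(fanout_arg)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

Likewise, a small sketch shows how SEEK_HOLE and SEEK_DATA can be used to walk the data extents of a sparse file; the fallback definitions are included because C libraries of this vintage may not yet define the two constants:

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    #ifndef SEEK_DATA
    #define SEEK_DATA 3     /* values used by the 3.1 kernel */
    #define SEEK_HOLE 4
    #endif

    int main(int argc, char **argv)
    {
        int fd;
        off_t data = 0, hole;

        if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
            return 1;
        /* lseek() fails with ENXIO once no data remains before EOF. */
        while ((data = lseek(fd, data, SEEK_DATA)) >= 0) {
            hole = lseek(fd, data, SEEK_HOLE);  /* end of this extent */
            printf("data: %lld..%lld\n", (long long)data, (long long)hole);
            data = hole;
        }
        close(fd);
        return 0;
    }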

Changes visible to kernel developers include:

  • A general-purpose CRC8 generation library has been added. (See the sketch following this list.)

  • The networking layer has gained generic support for near-field communication (NFC) devices. See Documentation/networking/nfc.txt for details.

  • The power management callbacks found in struct dev_pm_ops have been augmented with a whole set of "noirq" versions. The power domains subsystem uses these callbacks for system-wide power transitions.

  • The cleanup of the ARM tree continues, with a lot of code duplication resolved and the removal of some unused machine types.

  • The check_acl() inode operation has been replaced by get_acl(), whose job is to simply fetch the access control list from disk. Actual checking of ACLs is now done in the core VFS code.

  • The checkpatch.pl script has a new --ignore option to turn off various types of messages.
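
As a quick illustration of the CRC8 library, a driver might use it as in the sketch below; this assumes the interface exported by <linux/crc8.h> (crc8_populate_msb(), crc8(), and the CRC8_INIT_VALUE seed), and the 0x07 polynomial is just an example:

    #include <linux/types.h>
    #include <linux/crc8.h>

    static u8 crc_table[CRC8_TABLE_SIZE];

    static void my_crc_init(void)
    {
        /* Build the lookup table once, here for the 0x07 polynomial. */
        crc8_populate_msb(crc_table, 0x07);
    }

    static u8 my_checksum(u8 *buf, size_t len)
    {
        /* Feed the buffer through, starting from the standard seed. */
        return crc8(crc_table, buf, len, CRC8_INIT_VALUE);
    }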

It is not clear when this merge window will close; Linus is about to go on vacation, and, as he has noted, connectivity tends to be poor when one is under water in scuba gear. If he is unable to get everything merged while he is traveling, the merge window may be extended a little past the normal two weeks. Or he could decide he has pulled enough and close things early. Stay tuned for an update next week.

Comments (1 posted)

Per-CPU variables and the realtime tree

By Jonathan Corbet
July 26, 2011
One of the problems with relying on out-of-tree kernel code is that one can never be sure when that code might be updated for newer kernels. Keeping up with the kernel can be painful even for maintainers of small patches; it's much more so for those who maintain a large, invasive patch series. It is probably safe to say that, if the realtime preemption developers do not keep their patches current, there are very few other developers who are in a position to take on that work. So it was certainly discouraging for some realtime users to watch multiple kernel releases go by while the realtime patch series remained stuck at 2.6.33.

The good news is that the roadblock has been overcome and there is now a new realtime tree for the 3.0 kernel. Even better news is that the realtime developers may have come up with a solution for one of the most vexing problems keeping the realtime code out of the mainline. The only potential down side is that this approach relies on an interesting assumption about how per-CPU data is used; this assumption will have to be verified with a lot of testing and, likely, a number of fixes throughout the kernel.

Symmetric multiprocessing systems are nice in that they offer equal access to memory from all CPUs. But taking advantage of the feature is a guaranteed way to create a slow system. Shared data requires mutual exclusion to avoid concurrent access; that means locking and the associated bottlenecks. Even in the absence of lock contention, simply moving cache lines between CPUs can wreck performance. The key to performance on SMP systems is minimizing the sharing of data, so it is not surprising that a great deal of scalability work in the kernel depends on the use of per-CPU data.

A per-CPU variable in the Linux kernel is actually an array with one instance of the variable for each processor. Each processor works with its own copy of the variable; this can be done with no locking, and with no worries about cache line bouncing. For example, some slab allocators maintain per-CPU lists of free objects and/or pages; these allow quick allocation and deallocation without the need for locking to exclude any other CPUs. Without these per-CPU lists, memory allocation would scale poorly as the number of processors grows.

Safe access to per-CPU data requires a couple of constraints, though: the thread working with the data cannot be preempted and it cannot be migrated while it manipulates per-CPU variables. If the thread is preempted, the thread that replaces it could try to work with the same variable; migration to another CPU could cause confusion for fairly obvious reasons. To avoid these hazards, access to per-CPU variables is normally bracketed with calls to get_cpu_var() and put_cpu_var(); the get_cpu_var() call, along with providing the address for the processor's version of the variable, disables preemption. So code which obtains a reference to per-CPU data will not be scheduled out of the CPU until it releases that reference. Needless to say, any such code must be atomic.
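
The pattern looks something like the following sketch, where my_counter is a hypothetical per-CPU variable:

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, my_counter);

    static void count_event(void)
    {
        unsigned long *p;

        p = &get_cpu_var(my_counter);   /* disables preemption */
        (*p)++;                         /* no other task can run here */
        put_cpu_var(my_counter);        /* re-enables preemption */
    }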

The conflict with realtime operation should be obvious: in the realtime world, anything that disables preemption is a possible source of unwanted latency. Realtime developers want the highest-priority process to run at all times; they have little patience for waiting while a low-priority thread gets around to releasing a per-CPU variable reference. In the past, this problem has been worked around by protecting per-CPU variables with spinlocks. These locks keep the code preemptable, but they wreck the scalability that per-CPU variables were created to provide and complicate the code. It has been clear for some time that a different solution would need to be found.

With the 3.0-rc7-rt0 announcement, Thomas Gleixner noted that "the number of sites which need to be patched is way too large and the resulting mess in the code is neither acceptable nor maintainable." So he and Peter Zijlstra sat down to come up with a better solution for per-CPU data. The solution they came up with is surprisingly simple: whenever a process acquires a spinlock or obtains a CPU reference with get_cpu(), the scheduler will refrain from migrating that process to any other CPU. That process remains preemptable - code holding spinlocks can be preempted in the realtime world - but it will not be moved to another processor.

Disabling migration avoids one clear source of trouble: a process which is migrated in the middle of manipulating a per-CPU variable will end up working with the wrong CPU's instance of that variable. But what happens if a process is preempted by another process that needs to access the same variable? If preemption is no longer disabled, this unfortunate event seems like a distinct possibility.

After puzzling over this problem for a bit, the path to enlightenment became clear: just ask Thomas what they are thinking with this change. What they are thinking, it turns out, is that any access to per-CPU data needs to be protected by some sort of lock. If need be, the lock itself can be per-CPU, so the locking need not reintroduce the cache line bouncing that the per-CPU variable is intended to prevent. In many cases, that locking is already there for other purposes.
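
In other words, the developers expect per-CPU data to be used in a pattern along these lines; this is a sketch with invented names, and each CPU's lock would need to be initialized with spin_lock_init() at boot:

    #include <linux/percpu.h>
    #include <linux/spinlock.h>

    struct my_pcpu_state {
        spinlock_t lock;        /* protects this CPU's instance only */
        unsigned long events;
    };

    static DEFINE_PER_CPU(struct my_pcpu_state, pcpu_state);

    static void record_event(void)
    {
        struct my_pcpu_state *s;

        s = &get_cpu_var(pcpu_state);   /* under -rt: blocks migration only */
        spin_lock(&s->lock);            /* serializes with preempting tasks */
        s->events++;
        spin_unlock(&s->lock);
        put_cpu_var(pcpu_state);
    }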

The realtime developers are making the bet that this locking is already there in almost every place where per-CPU data is manipulated, and that the exceptions are mostly for data like statistics used for debugging where an occasional error is not really a problem. When it comes to locking, though, a gut feeling that things are right is just not good enough; locking problems have a way of lurking undetected for long periods of time until some real damage can be done. Fortunately, this is a place where computers can help; the realtime tree will probably soon acquire an extension to the locking validator that checks for consistent locking around per-CPU data accesses.

Lockdep is very good at finding subtle locking problems which are difficult or impossible to expose with ordinary testing. So, once this extension has been implemented and the resulting problem reports investigated and resolved, the assumption that all per-CPU accesses are protected by locking will be supportable. That process will likely take some time and, probably, a number of fixes to the mainline kernel. For example, there may well be bugs now where per-CPU variables are manipulated in interrupt handlers but non-interrupt code does not disable interrupts; the resulting race will be hard to hit, but possibly devastating when it happens.

So, as has happened before, the realtime effort is likely to result in fixes which improve things for non-realtime users as well. Some churn will be involved, but, once it is done, there should be a couple of significant benefits: the realtime kernel will be more scalable on multiprocessor systems, and the realtime patches should be that much closer to being ready for merging into the mainline.

Comments (7 posted)

3.0 and RCU: what went wrong

July 27, 2011

This article was contributed by Paul McKenney

My goal has always been for my code to go in without so much as a ripple. Although I don't always meet that goal, I can't recall any recent failure quite as spectacular as RCU in v3.0. My v3.0 code didn't just cause a few ripples, it bellyflopped. It is therefore worthwhile to review what happened and why it happened in order to avoid future bellyflops and trainwrecks.

This post-mortem will cover the following topics:

  1. Overview of preemptible RCU read-side code
  2. Steaming towards the trainwreck
  3. Fixes
  4. Current status
  5. Preventing future bellyflops and trainwrecks

It will end with the obligatory answers to the quick quizzes.

Overview of preemptible RCU read-side code

Understanding the trainwreck requires reviewing a small amount of TREE_PREEMPT_RCU's read-side code. First, let's look at __rcu_read_lock(), which, in preemptible RCU, does the real work for rcu_read_lock():

  1 void __rcu_read_lock(void)
  2 {
  3   current->rcu_read_lock_nesting++;
  4   barrier();
  5 }

This is quite straightforward: line 3 increments the per-task ->rcu_read_lock_nesting counter and line 4 ensures that the compiler does not bleed code from the following RCU read-side critical section out before the __rcu_read_lock(). In short, __rcu_read_lock() does nothing more than to increment a nesting-level counter.

The __rcu_read_unlock() function, which, in preemptible RCU, does the real work for rcu_read_unlock(), is only slightly more complex:

  1 void __rcu_read_unlock(void)
  2 {
  3   struct task_struct *t = current;
  4 
  5   barrier();
  6   --t->rcu_read_lock_nesting;
  7   barrier();
  8   if (t->rcu_read_lock_nesting == 0 &&
  9       unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
 10     rcu_read_unlock_special(t);
 11 }

Line 5 prevents the compiler from bleeding code from the RCU read-side critical section out past the __rcu_read_unlock(), and line 6 decrements the per-task nesting-level counter; thus far, __rcu_read_unlock() is simply the inverse of __rcu_read_lock().

However, if the value of the nesting counter is now zero, we need to check to see if anything unusual happened during the just-ended RCU read-side critical section, which is the job of lines 8 and 9. Line 7 prevents the compiler from moving this check to precede the decrement on line 6, because otherwise something unusual might happen just after the check but before the decrement, which would in turn mean that __rcu_read_unlock() would fail to clean up after that unusual something. The "unusual somethings" are:

  1. The RCU read-side critical section might have blocked or been preempted. In this case, the per-task ->rcu_read_unlock_special variable will have the RCU_READ_UNLOCK_BLOCKED bit set.

  2. The RCU read-side critical section might have executed for more than a jiffy or two. In this case, the per-task ->rcu_read_unlock_special variable will have the RCU_READ_UNLOCK_NEED_QS bit set.

In either case, the per-task ->rcu_read_unlock_special will be non-zero, so that __rcu_read_unlock() will invoke rcu_read_unlock_special(), which we look at next:

  1 static void rcu_read_unlock_special(struct task_struct *t)
  2 {
  3   int empty;
  4   int empty_exp;
  5   unsigned long flags;
  6   struct rcu_node *rnp;
  7   int special;
  8 
  9   if (in_nmi())
 10     return;
 11   local_irq_save(flags);
 12   special = t->rcu_read_unlock_special;
 13   if (special & RCU_READ_UNLOCK_NEED_QS) {
 14     rcu_preempt_qs(smp_processor_id());
 15   }
 16   if (in_irq()) {
 17     local_irq_restore(flags);
 18     return;
 19   }
 20   if (special & RCU_READ_UNLOCK_BLOCKED) {
 21     t->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_BLOCKED;
 22 
 23     /* Clean up after blocking. */
 24 
 25   }
 26 }

Lines 9 and 10 take an early exit if we are executing in non-maskable interrupt (NMI) context. The reason for this early exit is that NMIs cannot be interrupted or preempted, so there should be no rcu_read_unlock_special() processing required. Otherwise, line 11 disables interrupts and line 12 takes a snapshot of the per-task ->rcu_read_unlock_special variable. Line 13 then checks to see if the just-ended RCU read-side critical section ran for too long, and, if so, invokes rcu_preempt_qs() to immediately record a quiescent state. Recall that any point in the code that is not in an RCU read-side critical section is potentially a quiescent state. Therefore, since someone is waiting, report the quiescent state immediately.

Lines 16 through 18 take an early exit if we are executing in a hardware interrupt handler. This is appropriate given that hardware interrupt handlers cannot block, so it is not possible to preempt or to block within an RCU read-side critical section running within a hardware interrupt handler. (Of course, threaded interrupt handlers are another story altogether.)

Finally, line 20 checks to see if we blocked or were preempted within the just-ended RCU read-side critical section, clearing the corresponding bit and cleaning up after blockage or preemption if so. The exact details of the cleanup are not important (and are therefore omitted from the code fragment above), although curious readers are referred to kernel.org. The important thing is what happens if this RCU read-side critical section was the last one blocking an expedited RCU grace period or if the just-ended RCU read-side critical section was priority-boosted. Either situation requires that RCU interact with the scheduler, which may require the scheduler to acquire its runqueue and priority-inheritance locks.

Because the scheduler disables interrupts when acquiring the runqueue and the priority-inheritance locks, an RCU read-side critical section that lies entirely within one of these locks' critical sections cannot be interrupted, preempted, or blocked. Therefore, such an RCU read-side critical section should not enter rcu_read_unlock_special(), and should thus avoid what would otherwise be an obvious self-deadlock scenario.

Quick Quiz 1: But what about RCU read-side critical sections that begin before a runqueue lock is acquired and end within that lock's critical section? Answer

As we will see later, a number of self-deadlock scenarios can be avoided via the in_irq() early exit from rcu_read_unlock_special(). Keep the critical importance of this early exit firmly in mind as we steam down the tracks towards the RCU/scheduler/threaded-irq trainwreck.

Steaming towards the trainwreck

Before we leave the station, please keep in mind that in_irq() can return inaccurate results because it consults the preempt_count() bitmask, which is updated in software. At the start of the interrupt, there is therefore a period of time before preempt_count() is updated to record the start of the interrupt, during which time the interrupt handler has started executing, but in_irq() returns false. Similarly, at the end of the interrupt, there is a period of time after preempt_count() is updated to record the end of the interrupt, during which time the interrupt handler has not completed executing, but again in_irq() returns false. This last is most emphatically the case when the end-of-interrupt processing kicks off softirq handling.
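
For reference, in kernels of this era in_irq() reduces to a test of the hardirq bits of preempt_count(); this is a simplified rendering of the definitions in include/linux/hardirq.h:

    /* in_irq() is true only while the hardirq bits of preempt_count()
     * are non-zero; those bits are updated in software around the
     * handler proper, hence the windows of inaccuracy described above. */
    #define hardirq_count() (preempt_count() & HARDIRQ_MASK)
    #define in_irq()        (hardirq_count())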

With that background, the sequence of commits leading to the trainwreck is as follows:

  1. In March of 2009, commit a18b83b7ef added the first known rcu_read_unlock() to be called while holding a runqueue lock.

    Quick Quiz 2: Suppose that an RCU read-side critical section is enclosed within a runqueue-lock critical section. Why couldn't that RCU read-side critical section be the last RCU read-side critical section blocking a TREE_PREEMPT_RCU expedited grace period? Answer

    Quick Quiz 3: Why can't we avoid this whole mess by treating interrupt-disabled segments of code as if they were RCU read-side critical sections? Answer

  2. In December of 2009, commit d9a3da069 added synchronize_rcu_expedited() to TREE_PREEMPT_RCU, which causes the last reader blocking an expedited grace period to call wake_up() from within rcu_read_unlock(). Of course, the wake_up() acquires the runqueue locks.

Although this appears to open the door to an obvious deadlock scenario where the RCU read-side critical section under the runqueue lock is the last one blocking a preemptible-RCU expedited grace period, this cannot happen as long as the runqueue lock is held across the entire duration of the RCU read-side critical section.

Continuing down the tracks toward the trainwreck:

  1. In June of 2010, commit f3b577dec1 added an RCU read-side critical section in wake_affine(). Given that I was blissfully unaware of the true nature of in_irq(), I raised no objection to this patch. Quite the opposite, in fact, as can be seen by a quick glance at this commit.

    Quick Quiz 4: Exactly what vulnerability did commit f3b577dec1 expose? Answer

  2. The addition of threaded interrupt handlers meant that almost all hardware interrupts started invoking the scheduler in order to awaken the corresponding interrupt kthread, which in turn increased the likelihood that rcu_read_unlock_special() would become confused by the return value from in_irq().

  3. Many more RCU read-side critical sections were added within runqueue and priority-inheritance critical sections, further increasing the interaction cross-section between RCU and the scheduler.

  4. RCU_BOOST introduced an incorrect cross-task write to the per-task ->rcu_read_unlock_special variable. This could result in this variable being corrupted, resulting in all manner of deadlocks. This was fixed by commit 7765be2fe.

  5. In addition, RCU_BOOST introduced another call from RCU into the scheduler in the form of a rt_mutex_unlock().

All of these changes set the stage for a number of potential failures; one possible sequence of events is as follows:

  1. An RCU read-side critical section is preempted, then resumes. This causes the per-task ->rcu_read_unlock_special variable to have the RCU_READ_UNLOCK_BLOCKED bit set.

  2. This task remains preempted for so long that RCU priority boosting is invoked.

  3. The RCU read-side critical section ends by invoking rcu_read_unlock(), which in turn invokes the __rcu_read_unlock() function shown above.

  4. An interrupt arrives just after __rcu_read_unlock() reaches line 7.

  5. The interrupt handler runs to completion, at which point irq_exit() is invoked and decrements the irq nesting-level count to zero.

  6. irq_exit() then invokes invoke_softirq(), which determines that ksoftirqd must be awakened.

  7. The scheduler is invoked to awaken ksoftirqd, which acquires a runqueue lock and then enters an RCU read-side critical section.

  8. When the interrupt handler leaves the RCU read-side critical section, line 9 of __rcu_read_unlock() will find that the per-task ->rcu_read_unlock_special variable is non-zero, and will therefore invoke rcu_read_unlock_special().

  9. Because in_irq() returns false, line 16 of rcu_read_unlock_special() does not take an early exit. Therefore, rcu_read_unlock_special() sees the RCU_READ_UNLOCK_BLOCKED bit set in ->rcu_read_unlock_special, and also notes that the task has been priority boosted. It therefore invokes the scheduler to unboost itself.

  10. The scheduler will therefore attempt to acquire a runqueue lock. Because this task already holds a runqueue lock, deadlock can (and sometimes did) result.

There were a number of other failure scenarios, but this one is a representative specimen. Needless to say, figuring all this out was a bit of a challenge for everyone involved, as was the question of how to fix the problem.

Fixes

The fixes applied to the RCU trainwreck are as follows:

  1. b0d30417 (rcu: Prevent RCU callbacks from executing before scheduler initialized), which does what its name says. This addressed a few boot-time hangs.

  2. 131906b0 (rcu: decrease rcu_report_exp_rnp coupling with scheduler), which causes RCU to drop one of its internal locks before invoking the scheduler, thereby eliminating one set of deadlock scenarios involving expedited grace periods.

  3. 7765be2f (rcu: Fix RCU_BOOST race handling current->rcu_read_unlock_special), which allocates a separate task_struct field to indicate that a task has been priority boosted. This change meant that the ->rcu_read_unlock_special field returned to its earlier (and correct) status of being manipulated only by the corresponding task. This prevented a number of scenarios where an instance of __rcu_read_unlock() invoked from interrupt context would incorrectly invoke rcu_read_unlock_special(), which would again result in deadlocks.

  4. be0e1e21 (rcu: Streamline code produced by __rcu_read_unlock()), which was an innocent bystander brought along due to dependencies among patches.

  5. 10f39bb1 (rcu: protect __rcu_read_unlock() against scheduler-using irq handlers), which rearranges __rcu_read_unlock()'s manipulation of ->rcu_read_lock_nesting so as to prevent interrupt-induced recursion in __rcu_read_unlock()'s invocation of rcu_read_unlock_special(), which in turn prevents another class of deadlock scenarios. This commit was inspired by an earlier patch by Steven Rostedt.

  6. c5d753a5 (sched: Add irq_{enter,exit}() to scheduler_ipi() by Peter Zijlstra), which informs RCU that the scheduler is running. This is especially important when the IPI interrupts dyntick-idle mode: Without this patch, RCU would simply ignore any RCU read-side critical sections in scheduler_ipi().

  7. ec433f0c (softirq,rcu: Inform RCU of irq_exit() activity by Peter Zijlstra), which informs RCU of scheduler activity that occurs from hardware interrupt level, but after irq_exit() has cleared the preempt_count() indication that in_irq() relies on. It is quite possible that 10f39bb1 makes this change unnecessary, but proving that would have delayed 3.0 even more.

  8. a841796f (signal: align __lock_task_sighand() irq disabling and RCU) fixes one case where an RCU read-side critical section is preemptible, but its rcu_read_unlock() is invoked with interrupts disabled. As noted earlier, there might be a broad-spectrum solution that renders this patch unnecessary, but that solution was not appropriate for 3.0.

So, where are we now?

Current status

The Linux 3.0 version of RCU finally seems stable, but the following potential vulnerabilities remain:

  1. In RCU_BOOST kernels, if an RCU read-side critical section has at any time been preemptible, then it is illegal to invoke its rcu_read_unlock() with interrupts disabled. There is an experimental patch that removes this restriction, but at the cost of lengthening the real-time mutex acquisition code path. Work continues to find a solution with better performance characteristics.

  2. In all preemptible-RCU kernels, if an RCU read-side critical section has at any time been preemptible, then it is illegal to invoke its rcu_read_unlock() while holding a runqueue or a priority-inheritance lock. Although there are some possible cures for this condition, all currently known cures are worse than the disease.

    Quick Quiz 5: How could you remove the restriction on possibly-preempted RCU read-side critical sections ending with runqueue or priority-inheritance locks held? Answer

  3. TINY_PREEMPT_RCU might well contain similar vulnerabilities.

So, what should be done to prevent this particular bit of history from repeating itself?

Preventing future bellyflops and trainwrecks

Prevention is better than cure, so what preventative measures should be taken?

The most important preventative measure is to do a full review of the RCU code, documenting it as I go. In the past, I documented new RCU functionality as a matter of course, before that functionality was accepted into the kernel. However, over the past few years, I have gotten out of that habit. Although some of the bugs would probably have escaped me, I would likely have spotted a significant fraction. In addition, the documentation might have helped others better understand RCU, which in turn might have helped some of them to spot the bugs.

Although no one has yet reported similar bugs in TINY_PREEMPT_RCU, that does not mean that similar bugs do not exist. Therefore, when inspecting the code, I need to pay special attention to the corresponding portions of TINY_PREEMPT_RCU.

Another important preventative measure is to question long-held assumptions. My unquestioning faith in in_irq() was clearly misplaced. Although in_irq() was “good enough” for RCU for quite some time, it suddenly was not. In short, when you are working on something as low-level as RCU, you shouldn't be taking things like this for granted.

Dealing with the trainwreck also exposed some shortcomings in my test setup, which emphasizes thoroughness over fast turnaround. Although there is no substitute for a heavy round of testing on a number of different configurations, it would be good to be able to validate debug patches and experimental fixes much more quickly. I have therefore started setting up an RCU testing environment using KVM. This testing environment also has the great advantage of working even when I don't have Internet access. Additionally, use of KVM looks like it will shorten the edit-compile-debug cycle, which is quite important when chasing bugs that I actually can reproduce.

Finally, I need to update my test configurations. Some of the bugs reproduce more quickly when threaded interrupt handlers are enabled, so I need to add these to my test regime. Another bug was specific to 32-bit kernels, which I currently don't test, but which KVM makes easy to test. In fact, on my current laptop, 32-bit kernels are all that KVM is capable of testing.

Hopefully these changes will avoid future late-in-cycle RCU trainwrecks.

Acknowledgments

I am grateful to Steven Rostedt, Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Ben Greear, Julie Sullivan, and Ed Tomlinson for finding bugs, creating patches, and lots of testing. I owe thanks to Jim Wasko for his support of this effort.

Answers to Quick Quizzes

Quick Quiz 1: But what about RCU read-side critical sections that begin before a runqueue lock is acquired and end within that lock's critical section?

Answer: That would be very bad. The scheduler is therefore forbidden from doing this.

Back to Quick Quiz 1.

Quick Quiz 2: Suppose that an RCU read-side critical section is enclosed within a runqueue-lock critical section. Why couldn't that RCU read-side critical section be the last RCU read-side critical section blocking a TREE_PREEMPT_RCU expedited grace period?

Answer: It cannot. To see why, note that the TREE_PREEMPT_RCU variant of synchronize_rcu_expedited() is implemented in two phases. The first phase invokes synchronize_sched_expedited(), which forces a context switch on each CPU. The second phase waits for any RCU read-side critical sections that were preempted in phase 1. Because acquiring runqueue locks disables interrupts, it is not possible to preempt an RCU read-side critical section that is totally enclosed in a runqueue-lock critical section, and therefore synchronize_rcu_expedited() will never wait on such an RCU read-side critical section, which in turn means that the corresponding rcu_read_unlock() cannot have a need to invoke the scheduler, thus avoiding the deadlock.

Of course, the last link in the above chain of logic was broken by a later bug, but read on...

Back to Quick Quiz 2.

Quick Quiz 3: Why can't we avoid this whole mess by treating interrupt-disabled segments of code as if they were RCU read-side critical sections?

Answer: For two reasons:

  1. The fact that interrupt-disable sections of code act as RCU read-side critical sections is a property of the current implementation. Later implementations are likely to need to do quiescent-state processing off-CPU in order to reduce OS jitter, and such implementations will not be able to treat interrupt-disable sections of code as RCU read-side critical sections. This property is important to a number of users, so much so that there is an out-of-tree RCU implementation that provides it (see here and here for more recent versions). Therefore, we should be prepared for the mainline Linux kernel's RCU implementation to treat interrupt-disable sections of code as the quiescent states that they really are.

  2. Having multiple very different things that provide read-side protection makes the code more difficult to maintain, with RCU-sched being a case in point.

Back to Quick Quiz 3.

Quick Quiz 4: Exactly what vulnerability did commit f3b577dec1 expose?

Answer:

Suppose that an RCU read-side critical section is the last one blocking an expedited grace period, and that its __rcu_read_unlock() is interrupted just after it decrements the nesting count to zero. The current->rcu_read_unlock_special bitmask will therefore be non-zero, indicating that special processing is required (in this case, waking up the task that kicked off the expedited grace period). Suppose further that softirq processing is kicked off at the end of the interrupt, and that there are so many softirqs pending that they need to be handed off to ksoftirqd. Therefore wake_up() is invoked, which acquires the needed runqueue locks. But because wake_affine() is invoked, there is an RCU read-side critical section whose __rcu_read_unlock() will see that current->rcu_read_unlock_special is nonzero. At this point, in_irq() will be returning false, so the resulting call to rcu_read_unlock_special() won't know to take the early exit. It will therefore invoke wake_up(), which will again attempt to acquire the runqueue lock, resulting in deadlock.

Back to Quick Quiz 4.

Quick Quiz 5: How could you remove the restriction on possibly-preempted RCU read-side critical sections ending with runqueue or priority-inheritance locks held?

Answer: Here are some possibilities:

  1. Enclose all runqueue and priority-inheritance critical sections in RCU read-side critical sections. This would mean that any rcu_read_unlock() that executed with one of these locks held would be inside the enclosing RCU read-side critical section, and thus would be guaranteed not to invoke rcu_read_unlock_special(). However, this approach would add overhead to the scheduler's fastpaths and would require yet another odd hand-crafted handoff at context-switch time.

  2. Keep some per-task state indicating that at least one scheduler lock is held. Then rcu_read_unlock_special() could set another per-task variable indicating that cleanup is required. The scheduler could check this flag when releasing its locks. I hope that the maintainability challenges of this approach are self-evident.

  3. Your idea here.

Back to Quick Quiz 5.

Comments (10 posted)

Patches and updates

Kernel trees

  • Thomas Gleixner: 3.0-rt1 . (July 22, 2011)
  • Thomas Gleixner: 3.0-rt2 . (July 23, 2011)
  • Thomas Gleixner: 3.0-rt3 . (July 25, 2011)

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Networking

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Debian debates systemd

By Jake Edge
July 27, 2011

Wherever systemd goes, arguments about it seem to follow. The latest episode involves Debian "discussing" the pros and cons of the init replacement, with many of the same arguments we have seen elsewhere on both sides. But there is a difference for Debian because, unlike most distributions, it supports both Linux and FreeBSD kernels and may start supporting Hurd in 7.0 ("Wheezy"). That makes switching to systemd more difficult for Debian—if it is even desirable—but it also brings up an interesting question for the distribution: should the needs of the smaller sub-distributions (GNU/kFreeBSD, GNU/Hurd) hold back progress on Debian GNU/Linux?

Perhaps unaware of the firestorm he was about to set off, Juliusz Chroboczek posted some observations about systemd to the debian-devel mailing list. In it, he offered his opinion on the good and the bad with respect to systemd and tried to make it clear that he wasn't trying to push the decision in any particular direction, just recording his observations. Overall, it is a fairly even-handed look at systemd that notes multiple advantages and disadvantages.

Of course, systemd advocates will argue that some of the disadvantages cited are wrong, as Debian systemd maintainer Tollef Fog Heen did, but overall there weren't many big complaints about the posting itself—except from systemd developer Lennart Poettering. His response was forwarded to the list by Chroboczek and was characteristically combative, which, to some, completely justified one of the original posting's complaints: "Systemd's author is annoying".

Undoubtedly Poettering is tired of defending systemd against what he sees as "amazingly badly informed" criticisms. Given that the overall tone of Chroboczek's post was fairly positive, though, it's a little surprising to see the animosity with which Poettering responds. One of the main problems that some in the Debian community (including Chroboczek) have identified with systemd is its "Linux-only" attitude. Poettering addresses that, like he has many times before, but includes a long list of non-POSIX features that systemd uses, concluding: "There's a reason why systemd is more powerful than other init systems: we don't limit ourselves to POSIX, we actually want to give the user/administrator the power that Linux can offer you."

But that power also limits the environments where systemd can run, of course. In addition, the systemd developers have made it clear that they are not interested in taking patches to make it portable to non-Linux systems. In fact, Poettering calls it "practically impossible. about every line of it is non-portable code" in an IRC conversation summary posted to the thread by Matthias Klumpp. All of that makes it difficult for the Debian FreeBSD port, as well as the Hurd effort (and would presumably hinder a humorously suggested Plan 9 version of Debian too).

The Linux versions of Debian (including the various architectures and embedded Linux versions) are by far the most popular, so there is a question of how much minority Debians should be able to hold back progress. As Uoti Urpala puts it:

I think the important question is whether portability to other kernels is or should be a "project's goal", and how much else you're willing to lose for the sake of that goal. I know I would personally be a lot happier with a Debian that supports systemd functionality than with a Debian that can run on a BSD kernel.

Wouter Verhelst, on the other hand, is adamant that GNU/kFreeBSD is going to continue as part of Debian, and that systemd is not welcome if it will make it harder for that variation to operate:

Whatever its features, if we have to jump through a large heap of hoops to get it to work at all, or to make life for maintainers of daemon packages not a complete nightmare, it's not likely to become the default in Debian any time soon.

As might be guessed, Urpala was not convinced that supporting FreeBSD was enough of a reason to stop the eventual adoption of systemd:

But the attitude that it's OK for kFreeBSD to set limits on Linux development (or that developers working on Linux must handle the BSD porting/compatibility to be "permitted" to adopt a new technology) smells of trying to hold the project hostage, and I doubt it can have positive effects for the project overall.

In addition, he and others believe that moving away from the traditional System V shell-script-based init would be a net benefit for package maintainers: "I think the life of many maintainers of daemon packages is a 'complete nightmare' now with sysvinit, compared to what it would be with systemd." Because systemd can use existing init scripts, there is no need for a mass translation of packages to support systemd. However, an eventual switch to use systemd by default would undoubtedly cause various Debian packages to drop their init scripts in favor of systemd unit files that are far simpler, but wouldn't be usable directly by the System V init system that is currently the default. All is not lost, however, as Russell Coker describes:

If a daemon supports socket activation then there would need to be separate work done to write a systemd unit and a sysvinit script.

If a daemon doesn't support socket activation then IMHO the ideal situation would be to have a program that takes a systemd unit file as input and creates a sysvinit script. That would reduce the amount of effort and reduce the amount of low quality sysvinit scripts that are out there (and I've written my share of such bad scripts).
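
For readers who have not seen one, a unit file for a hypothetical daemon might look like the sketch below (all names invented); the equivalent init script typically runs to dozens of lines of boilerplate:

    [Unit]
    Description=Example daemon (hypothetical)
    After=network.target

    [Service]
    ExecStart=/usr/sbin/exampled --no-daemon
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target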

Another possibility would be for Debian to directly support both System V and another init (i.e. systemd or Upstart) but many think that idea is a non-starter. Maintainers would have to support both styles of initialization (or ignore the benefits of the newer systems) as Russ Allberry noted:

Unless you're willing to write init scripts and cripple systemd by making it use init scripts, it's a huge pain, since you have to maintain two parallel init setups for every package requiring something to run at boot, one of which will probably never be tested by the maintainer.

The same issue applies with upstart, of course. Both systems support old-style init scripts, but one of the huge motivations for switching init systems is to *stop using* old-style init scripts, since they support a tiny fraction of the capabilities of systemd or upstart and are massively annoying and tricky to maintain in a bug-free fashion.

There is a potential support problem for upstream projects, however, as Gergely Nagy points out: "[...] even if systemd can be made portable enough for Debian's needs, or Debian can find a way to work around systemds unportability, upstreams who need to support other systems will still have yet another extra burden to carry." Of course, whether Debian switches to systemd, Upstart, or stays put, the problem for upstreams doesn't really change. None of the init systems is likely to disappear anytime soon, so either upstreams or distributions will have to support all of them in one way or another. As is often the case, Debian project leader Stefano Zacchiroli finds some middle ground:

But what I find surprising in this discussion (with notable exception, luckily) is the feeling that portability is boolean: it is not. It is rather a trade-off among the work that needs to be done / code that needs to be maintained and the distro-wide technical choices that we make. In that respect, the fact that systemd upstream might decide not to integrate upstream our [changes] is sad, but it's not the end of the world: it won't be the first nor the last upstream not willing to integrate some of our changes.

Zacchiroli's post—worth reading in full—manages to express support for most of the positions taken in the thread, while also pointing out a clear path forward for any change to the init system for Debian. While there hasn't been a large contingent pushing Upstart in the thread, it is clearly on the radar as a possibility. Any change is likely to be a ways off in any case, so a long thread arguing the merits of systemd is premature. Whenever such a decision is made, though, the general sense from those participating in the thread is that the decision will be made on technical grounds separate from the issue of how to support non-Linux versions of Debian. That problem will be solved too, but there is no reason to hold back progress on Linux for other kernels (or vice versa).

Comments (89 posted)

Brief items

Distribution quote of the week

As you can clearly see, you can see nothing. Yes, nothing! As of 18:55:52 UTC+2 the NEW queue, which at some times was well over 500, sometimes even 600 packages is now empty. Completely empty.

To the best of my (and Ganneff's knowledge) the last time the NEW queue was empty was at least five years ago.

Interesting enough, that triggered an yet undiscovered bug in dak, which refused to scan an empty directory...

-- Alexander Reichle-Schmehl

Comments (none posted)

Release for CentOS-6.0 LiveCD i386 and x86_64

The CentOS 6.0 LiveCD is available. "The CentOS-6.0 LiveCD is meant to be a Linux environment suited to be run directly from either CD media or USB storage devices. It does not need any persistent storage on a machine, which also makes it a suitable recovery environment."

Full Story (comments: 1)

Fedora 15 for IBM System z 64bit official release

The Fedora IBM System z (s390x) Secondary Arch team has announced the Fedora 15 for IBM System z 64bit official release.

Full Story (comments: none)

IPFire 2.9 Core Update 50 released

IPFire is a server distribution intended for use as a firewall. IPFire 2.9 Core Update 50 has been released. "Since 44 months and 50 core updates, IPFire is working better than on the first day. The developers keep working on little updates that improve the base system and addons, but also bring major updates on the way. That is why the system runs very great on recent hardware and keeps up with new technology. A very special attention is paid to safety-critical problems. Many security issues of third party packages have been patched, tested and delivered only within a couple of hours."

Full Story (comments: none)

Mandriva 2011 RC2

The second release candidate for Mandriva 2011 has been released. "In this release candidate we fixed more than 300 bugs and added or changed about 700 packages."

Comments (none posted)

openSUSE 12.1 milestone 3

The openSUSE project has released the third milestone for openSUSE 12.1. "Just a few days ago the third of six milestones on the road to openSUSE 12.1 has been made available for testing before it goes to final release November 11th, 2011. (Yes, 11-11-11!)"

Comments (none posted)

Red Hat Enterprise Linux 5.7 Now Available

Red Hat has announced the release of Red Hat Enterprise Linux 5.7. "Today's update adds features that enhance the flexibility, security, and stability of Red Hat Enterprise Linux 5 environments, and includes a number of features incorporated from Red Hat Enterprise Linux 6. Application interface consistency is maintained between Red Hat Enterprise Linux 5.7 and prior updates, allowing systems to be updated easily without application re-certification." More information can be found in the release notes and the technical notes.

Comments (none posted)

Ubuntu 10.04.3 (Lucid Lynx) LTS released!

The Ubuntu team has announced the release of Ubuntu 10.04.3 LTS. "This release includes updated server, desktop, alternate installation CDs and DVDs for the i386 and amd64 architectures." Kubuntu 10.04.3 is also available.

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Debian 7 'Wheezy' to introduce multiarch support

The introduction of "multiarch support" is now a release goal for the coming Debian release 7 "Wheezy" (targeted for a 2013 release). "Multiarch is a radical rethinking of the filesystem hierarchy with respect to library and header paths, to make programs and libraries of different hardware architectures easily installable in parallel on the very same system."

Full Story (comments: 18)

openSUSE

openSUSE Conference travel sponsorship program

The openSUSE conference team has announced its travel sponsorship program, which provides financial support for community members attending the conference (September 11-14 in Nuremberg, Germany). The application deadline is August 5.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Fedora 16 to have Grub2, GNOME 3.2 and KDE 4.7 (The H)

The H takes a look at some of the features planned for Fedora 16. "The feature list contains 40 items, including GNOME 3.2 and KDE Plasma Workspaces 4.7. The developers are planning to switch to using Grub2 for the boot loader. Having switched to systemd, as an alternative to sysvinit and upstart, in Fedora 15, the project plans to replace further sysv init scripts with systemd units in version 16. Furthermore, Fedora is to offer everything that's required for Xen virtualisation, as version 3.0 of the Linux kernel, which is now expected to be released on Friday, will include all the necessary components."

Comments (none posted)

Ten reasons for giving Slackware Linux a go (ZDNet)

Jack Wallen presents his top 10 reasons to use Slackware. "1. Stability Even for an operating system known for its stability, you'll be hard-pressed to find a more reliable Linux distribution than Slackware. It's been around for 20 years and has long enjoyed a reputation for being solid. In my time using it - and I have installed the most recent version as well as having used versions throughout my time with Linux - I can honestly say its reputation is entirely justified. Whether on a server or a desktop, it is remarkably stable."

Comments (4 posted)

Page editor: Rebecca Sobol

Development

Google's Native Client forges ahead

July 27, 2011

This article was contributed by Nathan Willis

In June, Google released an update to its Native Client (NaCl) framework, an open source project that lets web developers deploy faster applications by running native binary code in a sandboxed environment within the browser. The new release incorporates API changes and updates to the SDK and toolchain, but the technology remains disabled by default in the Chrome browser. NaCl has been listed as "experimental" since its inception, but the company is beginning to shift its message, trying to attract developers to the platform and other browser makers to the framework.

NaCl is essentially a plugin in which "untrusted" native code can be executed in a secure, sandboxed environment within the browser. Native code in this context means machine language — compiled binaries, delivered as self-contained modules. They do not have access to OS subsystems or toolkits, only to a minimal support library provided by NaCl. Most other browser plugins (Java, Flash, etc.) are already native code, of course, and like them NaCl modules can only interact with the containing page's contents through JavaScript and a restrictive API. Of course, the mere mention of Java and Flash raises warning flags about security and performance, to which Google is doing its best to respond.

The project has been in development since 2008, and originally ran only on 32-bit x86 architectures, although ARM and 64-bit x86 implementations are now under development as well. Google describes the goal of NaCl as enabling developers to leverage existing software components and legacy applications, and to develop more compute-intensive web applications that would run too slowly in JavaScript or HTML5 — all without compromising security.

Shaking it out

The NaCl plugin isolates code in the sandbox by using the memory segmentation available in the processor, thus providing a contiguous, private address space for each component — currently 256MB in size. It also attempts to detect insecure code (and refuses to run it) by restricting each component to a set of "safe" instructions and enforcing structural rules to prevent code obfuscation techniques — such as jumping to a location in the middle of an instruction. Loaded modules are also read-only in memory, to prevent self-modifying code.
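
Those structural rules come down to simple arithmetic on control-flow targets. The short C sketch below illustrates the kind of check involved; the 32-byte instruction-bundle size and the 256MB segment match the published NaCl design for x86, but the code itself is purely illustrative and is not taken from the NaCl source.

    /* Illustrative sketch, not NaCl source: how a verifier might
     * confirm that an indirect-jump target stays inside the module's
     * private address space and lands on an instruction-bundle
     * boundary, so control can never transfer into the middle of an
     * instruction. */
    #include <stdbool.h>
    #include <stdint.h>

    #define BUNDLE_SIZE   32u            /* x86 NaCl bundle size */
    #define SEGMENT_SIZE  (256u << 20)   /* 256MB private segment */

    static bool jump_target_is_safe(uint32_t target)
    {
        if (target >= SEGMENT_SIZE)      /* outside the sandbox */
            return false;
        return (target % BUNDLE_SIZE) == 0;
    }

    /* Sandboxed code enforces the same property at runtime by masking
     * the target register (target &= ~(BUNDLE_SIZE - 1)) immediately
     * before the jump. */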

In addition to the "inner sandbox" dedicated to isolating native code modules, NaCl also implements an "outer sandbox" that intercepts system calls. Furthermore, code modules are isolated from each other; they can communicate only through NaCl's inter-module communication (IMC) mechanism. IMC is a bi-directional datagram service designed to resemble Unix domain sockets. IMC is also used for communication between modules and the document object model (DOM) object that created them (e.g. a web page or JavaScript application). The DOM object, of course, can pass messages between native modules or provide them with access to shared storage.
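
IMC itself is internal to NaCl, but the Unix pattern it is modeled on is easy to show. The self-contained sketch below is plain POSIX code, not NaCl code; it creates exactly that kind of bi-directional datagram channel with socketpair():

    /* The Unix-domain datagram pattern that IMC is designed to
     * resemble: two connected endpoints exchanging discrete messages
     * in either direction.  Plain POSIX, not NaCl code. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char buf[64];

        /* SOCK_DGRAM preserves message boundaries, unlike a stream. */
        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds) < 0) {
            perror("socketpair");
            return 1;
        }

        write(fds[0], "ping", 4);        /* either end may send */
        ssize_t n = read(fds[1], buf, sizeof(buf));
        printf("received %zd-byte datagram: %.*s\n", n, (int)n, buf);

        close(fds[0]);
        close(fds[1]);
        return 0;
    }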

NaCl also provides two higher-level mechanisms built on top of IMC: the Simple Remote Procedure Call (SRPC) facility, and an implementation of the traditional Netscape Plugin API (NPAPI). SRPC can be used to access native module routines from other modules or directly from JavaScript, while the NPAPI implementation provides access to the same browser facilities and information open to other browser plugins.
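
The actual SRPC binding macros are documented in the NaCl SDK; the sketch below instead uses invented names (rpc_method, dispatch, and so on) purely to illustrate the shape of the mechanism: a native routine is registered under a string name and a type signature, and callers, including JavaScript, invoke it by name.

    /* Hypothetical sketch of the SRPC idea; these names are invented
     * for illustration and are not the real NaCl SRPC API.  A module
     * exposes C routines in a table keyed by name and type signature,
     * and incoming requests are dispatched by name. */
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    typedef void (*rpc_handler)(const char *args, char *result, size_t len);

    struct rpc_method {
        const char *name;        /* e.g. "helloworld" */
        const char *signature;   /* e.g. ":s": no inputs, one string out */
        rpc_handler handler;
    };

    static void hello_world(const char *args, char *result, size_t len)
    {
        (void)args;
        snprintf(result, len, "hello from native code");
    }

    static const struct rpc_method methods[] = {
        { "helloworld", ":s", hello_world },
    };

    /* Conceptually invoked when an SRPC request arrives over IMC. */
    static int dispatch(const char *name, const char *args,
                        char *result, size_t len)
    {
        for (size_t i = 0; i < sizeof(methods) / sizeof(methods[0]); i++) {
            if (strcmp(methods[i].name, name) == 0) {
                methods[i].handler(args, result, len);
                return 0;
            }
        }
        return -1;  /* unknown method */
    }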

Each NaCl module also runs as its own OS process (although at the moment, the NaCl plugin itself is run in the browser's process). NaCl cannot provide secure, cross-platform exception handling for modules to recover from hardware exceptions. As a result, a module that triggers a hardware exception will be shut down by the OS, but, by running each module in its own process, other modules should be unaffected.

Developing NaCl modules

For application developers, the project is also introducing a native code API named Pepper, which is currently provided in C and C++ form. Pepper evolved out of Google's earlier efforts to expand on NPAPI, and is thus sometimes referred to in NaCl documentation as the Pepper Plugin API (PPAPI).

Pepper includes interfaces for NaCl's messaging systems and the existing NPAPI functionality, but also provides interfaces for image handling, 2D drawing, and audio, plus memory management, timing, threads, strongly typed variables, and managing module instances.
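
For a sense of what the module side of that API looks like, here is a minimal Pepper C skeleton. The entry points shown follow the Pepper C headers (ppapi/c/ppp.h) as they later stabilized; the interface was still evolving at the time of the 0.4 SDK, so treat this as an illustrative sketch rather than SDK-exact code.

    /* Minimal skeleton of a Pepper (PPAPI) C module.  Entry points
     * follow ppapi/c/ppp.h as later stabilized; illustrative only. */
    #include "ppapi/c/pp_errors.h"
    #include "ppapi/c/pp_module.h"
    #include "ppapi/c/ppb.h"
    #include "ppapi/c/ppp.h"

    static PPB_GetInterface get_browser_interface;

    /* Called once when the browser loads the module; the callback
     * lets the module look up browser-side (PPB_*) interfaces. */
    PP_EXPORT int32_t PPP_InitializeModule(PP_Module module,
                                           PPB_GetInterface get_browser)
    {
        (void)module;
        get_browser_interface = get_browser;
        return PP_OK;
    }

    /* Called as the module is unloaded. */
    PP_EXPORT void PPP_ShutdownModule(void)
    {
    }

    /* The browser queries the module for its (PPP_*) interfaces,
     * such as instance handling and input events, by string name. */
    PP_EXPORT const void *PPP_GetInterface(const char *interface_name)
    {
        (void)interface_name;
        return NULL;  /* a real module returns its PPP_Instance, etc. */
    }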

June's 0.4 release of the NaCl SDK includes minor changes to the C interfaces, and introduces a new method for including an NaCl module in an HTML page: linking to it with the src= attribute inside an <embed> tag. There are more substantial changes in the build system, however. It has migrated to the Python-based SCons build tool in place of GNU make, Cygwin has been removed from the Windows toolchain, and experimental support for Valgrind on 64-bit Linux has been added.

The toolchain itself is built on top of customized versions of GCC and GNU binutils that implement the constraints of the NaCl sandbox. Thus re-compilation is necessary, even for the "existing software components" and "legacy applications" use cases. The NaCl plugin provides a C library derived from Newlib.

As discussed earlier, the current SDK can build binary modules for x86-32, x86-64, and ARM, and there are mechanisms for web developers to provide all three varieties of a module within an application. Google intends to expand the processor support further, however, by adapting the build tools to produce a "portable" binary instead of processor-specific code. Portable NaCl (PNaCl) compiles source to an intermediate LLVM bytecode format, which is then translated at runtime into the relevant machine code.

Google maintains a gallery of NaCl examples, including a Monte Carlo pi calculator, an audio synthesizer, and Conway's Game of Life. The NaCl white papers also describe internal efforts to port Quake, Bullet, and an H.264 decoder to NaCl, and claim performance "indistinguishable" from normal executables, although that code has evidently not been released to the public.

The view outside the Googleplex

From a security standpoint, most of the ideas implemented by NaCl are not new. Rather than using code signing to provide a measure of security, as ActiveX does for its binary modules, NaCl uses a static verifier to check all modules before they are allowed to run, and terminates any that pass that check but still manage to make an unsafe system call. The fault-isolation methods used by the code verifier are also well known. On the development side, the modified GCC and binutils act as a "trusted" compiler, in theory ensuring that no unsafe code is generated in the first place; code that does not conform to the structural and alignment requirements the toolchain is designed to emit will be rejected by the verifier.

Reaction from other browser vendors has been decidedly negative, however. Although NaCl is marketed as an open source project open to any browser developer, both Mozilla and Opera have said they have no interest in the technology, viewing it as conflicting with the goal of promoting open standards like HTML5 as the unified, cross-platform target for web application developers.

In addition, both browser vendors have focused attention on refuting Google's claim that NaCl enables substantially faster applications in the first place, citing the increased performance of modern JavaScript engines. Last year, Mozilla's Chris Blizzard demonstrated a JavaScript version of Google's own NaCl photo-editing demo running at comparable speeds — although video of the session does not appear to be online, so it is unclear on which version of Firefox the demo ran.

The specific version could make a difference; Mozilla introduced TraceMonkey (a JavaScript optimizer that compiles certain JavaScript loops down to native code) with the release of Firefox 3.5 in 2009. Firefox 4.0 then introduced a second optimizer named JaegerMonkey, further improving performance. JaegerMonkey is a "just in time" (JIT) compiler that also compiles JavaScript to machine code, and is similar to the optimizer employed by Chrome. Mozilla claims that Firefox achieves better JavaScript performance through the fail-over combination of TraceMonkey and JaegerMonkey than JIT-only solutions do. Its successor, IonMonkey, is projected to perform better still.

Of course, NaCl lines up with Google's interest in promoting the ChromeOS platform. If NaCl can squeeze additional performance out of netbook CPUs with code delivered in the browser, the need for locally-installed applications is reduced. But that interest may not line up with increasing the performance of standards-based web applications that run in every browser. The NaCl project itself is not on a standardization path, although the FAQ hints at interest in pursuing one.

If Google remains unsuccessful at persuading the other browser makers to include support for NaCl, it might attempt to build NaCl plugins for the other browsers (as it did in years past, although that approach was deprecated due to the limitations of having only the NPAPI interface). But it may have a harder time convincing a significant number of developers to re-engineer their applications around NaCl. As tantalizing as "native speed" sounds from afar, the double-sandbox security restrictions, limited execution environment, and current need to develop for three separate processor architectures do not sound as exciting up close. As for PNaCl's promise to eliminate the architecture problem by targeting an intermediate byte-code representation instead: that platform starts to sound more and more like client-side Java. Perhaps it does hold the key to a performance increase, but it is not going to be an easy sales pitch.

Comments (19 posted)

Brief items

GCC front end paper

Andi Hellmund has announced the publication of a white paper on the GCC front end [PDF]. It's a work in progress, and he is interested in comments from readers.

Comments (none posted)

GDB 7.3 released

Version 7.3 of the GDB debugger is available. New features include OpenCL language support, better Python support, better debugging of threaded programs, Blackfin CPU support, and more.

Full Story (comments: none)

"Drawing Comics with Krita" DVD available for pre-order

A 6-hour training DVD on drawing comics with Krita is now available for pre-order. Also available is a 20-page printed comic book that includes two comics created in Krita. Proceeds go to fund further Krita development. "Drawing Comics with Krita helps you learn how to draw, color, assemble, and publish comics yourself using Krita, the free and open source digital painting suite. The DVD and comic book combo shows you, step by step, how to use the most important of Krita's flexible painting tools. These are skills that can be used in any drawing or painting project. Better yet, each purchase helps fund getting Creative Commons training out there to help bring more digital artists to Krita, free culture, and free software in general." (Thanks to Armijn Hemel.)

Comments (none posted)

Mozilla to develop a stand-alone operating system

The Mozilla project has announced a project called "Boot to Gecko" which appears to be a sort of competitor to ChromeOS and/or Android. "Mozilla believes that the web can displace proprietary, single-vendor stacks for application development. To make open web technologies a better basis for future applications on mobile and desktop alike, we need to keep pushing the envelope of the web to include — and in places exceed — the capabilities of the competing stacks in question." The associated repository contains only a README file thus far.

Comments (34 posted)

PowerDNS Authoritative Server 3.0 released

The PowerDNS 3.0 release is out. "The largest news in 3.0 is of course the advent of DNSSEC. Not only does PowerDNS now (finally) support DNSSEC, we think that our support of this important protocol is among the easiest to use available." Other new features include TSIG support, a MyDNS-compatible backend, Lua-based incoming zone editing, a native Oracle backend, and more.

Full Story (comments: none)

spectmorph 0.2.0 released

Spectmorph is an audio tool "which allows one to analyze samples of musical instruments, and to combine them (morphing). It can be used to construct hybrid sounds, for instance a sound between a trumpet and a flute; or smooth transitions, for instance a sound that starts as a trumpet and then gradually changes to a flute." The 0.2.0 release - the first to actually support morphing - is now available. Other new features include a BEAST plugin, JACK support, a graphical instrument inspector, and more.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Bencina: Real-time audio programming 101: time waits for nothing

Ross Bencina has put up an introduction to glitch-free audio programming. "The main problems I'm concerned with here are with code that runs with unpredictable or unbounded execution time. That is, you're unable to predict in advance how long a function or algorithm will take to complete. Perhaps this is because the algorithm you chose isn't appropriate, or perhaps it's because you don't understand the temporal behavior of the code you're calling. Whatever the cause, the result is the same: sooner or later your code will take longer than the buffer period and your audio will glitch."
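
The standard defensive pattern that follows from this argument is to keep the audio callback's execution time bounded: no locks, no memory allocation, no blocking I/O, with data fed in through a pre-allocated lock-free queue. The C sketch below illustrates that discipline; the callback signature is loosely modeled on PortAudio-style APIs and is an assumption, not code from the article.

    /* Sketch of a bounded-time audio callback fed by a pre-allocated
     * single-producer/single-consumer ring buffer (C11 atomics).
     * Illustrative only; the callback signature is an assumption. */
    #include <stdatomic.h>
    #include <stddef.h>

    #define RING_SIZE 1024           /* power of two, fixed at startup */

    static float ring[RING_SIZE];
    static atomic_size_t head;       /* advanced by the producer thread */
    static atomic_size_t tail;       /* advanced by the audio callback */

    /* Non-real-time thread: this side may block or allocate. */
    int ring_push(float sample)
    {
        size_t h = atomic_load_explicit(&head, memory_order_relaxed);
        size_t t = atomic_load_explicit(&tail, memory_order_acquire);
        if (h - t == RING_SIZE)
            return 0;                /* full; the caller can retry */
        ring[h % RING_SIZE] = sample;
        atomic_store_explicit(&head, h + 1, memory_order_release);
        return 1;
    }

    /* Audio callback: one bounded pass over the output buffer, with
     * no locks, no malloc(), and no calls that might block. */
    void audio_callback(float *out, size_t frames)
    {
        size_t t = atomic_load_explicit(&tail, memory_order_relaxed);
        size_t h = atomic_load_explicit(&head, memory_order_acquire);
        for (size_t i = 0; i < frames; i++)
            out[i] = (t < h) ? ring[t++ % RING_SIZE] : 0.0f;
        /* On underrun, emit silence rather than waiting for data. */
        atomic_store_explicit(&tail, t, memory_order_release);
    }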

Comments (14 posted)

Super Collision At Studio Dave: The New World of SuperCollider3, Part 1 (Linux Journal)

Dave Phillips begins a three-part review of SuperCollider at Linux Journal. "SuperCollider is composer/programmer James McCartney's gift to the world of open-source audio synthesis/composition environments. In its current manifestation, SuperCollider3 includes capabilities for a wide variety of sound synthesis and signal processing methods, cross-platform integrated GUI components for designing interfaces for interactive performance, support for remote control by various external devices, and a rich set of tools for algorithmic music and sound composition. And yes, there's more, much more."

Comments (none posted)

The Robustness Principle Reconsidered (CACM)

Eric Allman takes another look at Postel's law ("be conservative in what you send, liberal in what you accept") in the Communications of the ACM. "For many years the Robustness Principle was accepted dogma, failing more when it was ignored rather than when practiced. In recent years, however, that principle has been challenged. This isn't because implementers have gotten more stupid, but rather because the world has become more hostile. Two general problem areas are impacted by the Robustness Principle: orderly interoperability and security."

Comments (22 posted)

Page editor: Jonathan Corbet

Announcements

Brief items

Oracle acquires Ksplice

Ksplice is a mechanism for applying patches to running kernels without the need to bring the system down; LWN covered it back in 2008. Now the company that was built around this technology has been acquired by Oracle, which plans to offer the service with its enterprise distribution. "The addition of Ksplice's technology will increase the security, reliability and availability of Oracle Linux by enabling customers to apply security updates, diagnostics patches and critical bug fixes without rebooting."

Comments (24 posted)

The Open Cloud Initiative launches

The Open Cloud Initiative has announced its existence. "Its purpose is to provide a legal framework within which the greater cloud computing community of users and providers can reach consensus on a set of requirements for Open Cloud, as described in the Open Cloud Principles (OCP) document, and then apply those requirements to cloud computing products and services, again by way of community consensus." Comments are sought on the draft open cloud principles.

Comments (4 posted)

Microsoft/Novell agreement renewed

Microsoft has announced that the controversial patent deal with Novell has been renewed for a few more years. "This relationship will extend through Jan. 1, 2016, with Microsoft committed to invest $100 million in new SUSE Linux Enterprise certificates for customers receiving Linux support from SUSE."

Comments (7 posted)

DebConf video streams available

DebConf 11 (July 24-30) is underway in Banja Luka, Bosnia and Herzegovina. Streaming videos of the conference are available for those who would like to follow along at home. There is also an IRC channel to allow remote participants to comment and ask questions about the ongoing sessions.

Full Story (comments: 2)

Canonical Takes Ubuntu for Business Into the Channel with New Support Program

Canonical has announced the launch of its new Ubuntu Advantage (UA) partner program, "designed to help resellers bring a new set of support services for Ubuntu server, desktop and cloud installations direct to businesses. The program is launching with global partners, including CSS in the US, Asia and EMEA, Ashisuto in Japan, RedPill Linpro in Scandinavia and Alterway in France."

Full Story (comments: none)

Articles of interest

When Patents Attack! (This American Life)

National Public Radio [US] recently aired an episode of "This American Life" which took a critical look at the patent system. From the transcript: "Why would a company rent an office in a tiny town in East Texas, put a nameplate on the door, and leave it completely empty for a year? The answer involves a controversial billionaire physicist in Seattle, a 40 pound cookbook, and a war waging right now, all across the software and tech industries. We take you inside this war, and tell the fascinating story of how an idea enshrined in the US constitution to promote progress and innovation, is now being used to do the opposite." The episode is available at the "This American Life" website. (Thanks to Jack Davis and Daniel Morsing)

Comments (none posted)

Shuttleworth: The responsibilities of ownership

Mark Shuttleworth's push for copyright assignment agreements takes an interesting turn with this lengthy post suggesting that contributors owe a project their copyrights since they are dumping a maintenance load on that project. "So, one of the reasons I'm happy to donate (fully and irreversibly) a patch to a maintainer, and why Canonical generally does assign patches to upstreams who ask for it, is that I think the rights and responsibilities of ownership should be matched. If I want someone else to handle the work - the responsibility - of maintenance, then I'm quite happy for them to carry the rights as well. That only seems balanced. In the common case, that maintenance turns out to be as much work as the original crafting of the patch, and frankly, it's the 'boring work' part, while the fun part was solving the problem immediately at hand."

Comments (72 posted)

O'Reilly: Sexual Harassment at Technical Conferences: A Big No-No

Responding to a considerable amount of pressure to adopt an anti-harassment policy for OSCON, Tim O'Reilly has posted a statement on inappropriate behavior at O'Reilly events. "While we're still trying to understand exactly what might have happened at Oscon or other O'Reilly conferences in the past, it's become clear that this is a real, long-standing issue in the technical community. And we do know this: we don't condone harassment or offensive behavior, at our conferences or anywhere. It's counter to our company values. More importantly, it's counter to our values as human beings."

Comments (38 posted)

Märdian: Openmoko GTA04 "Phoenux"

The Openmoko community has teamed up with German Openmoko distributor Golden Delicious Computers to develop the GTA04, an open smartphone. "Golden Delicious Computers and the enthusiasts from the Openmoko community started off with the idea of stuffing a BeagleBoard into a Neo Freerunner case and connecting a USB UMTS dongle to it — this was the first prototype GTA04A1, announced in late 2010 and presented at OHSW 2010 and FOSDEM 2011." (Thanks to Neil Brown)

Comments (7 posted)

Linux Foundation Releases New White Paper on FOSS Compliance for Suppliers (Linux.com)

The Linux Foundation has announced the availability of a white paper on compliance practices for free/open source software. "It examines compliance practices needed when software supplied by a third party vendor is brought into the code baseline of a product to be distributed externally. The white paper discusses requirements a company should impose upon its suppliers to disclose FOSS in their deliverables and to provide what's needed to achieve compliance. The paper also discusses steps a company can take to review and validate the FOSS disclosures made by its suppliers. In addition to those topics, the white paper addresses measures a company can undertake to assess its suppliers' compliance capabilities." Registration is required to view the paper.

Comments (none posted)

Linux Foundation Monthly Newsletter: July 2011

The July edition of the LF monthly covers additions to LinuxCon NA, new members, 20th anniversary events, and several other topics.

Full Story (comments: none)

Humble Indie Bundle 3: Pay What You Want for Linux Games (Linux.com)

Joe "Zonker" Brockmeier covers the release of the third Humble Indie Bundle. "The Humble Bundle sales, and Humble Indie Bundle sales, are an experiment in letting users set their own price for games. Yes, you read that right — users can set their own price for games. Better yet, the games are DRM-free, so you can download and install them without worrying about managing a key or having the DVD in the drive to play a game."

Comments (none posted)

FSFE: Fellowship interview with Bernhard Reiter

Guido Arnold interviews Bernhard Reiter on behalf of the Free Software Foundation Europe. "Bernhard is founder and Executive Director of Intevation GmbH, a company with exclusively Free Software products and services since 1999. He played a crucial role in the establishment of FSFE as one of its founders, and architect of the original German team. Besides that, he participated in setting up three important Free Software organisations: FreeGIS.org, FFII, and FossGIS."

Comments (none posted)

New Books

CoffeeScript: Accelerated JavaScript Development--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released "CoffeeScript: Accelerated JavaScript Development" by Trevor Burnham.

Full Story (comments: none)

Contests and Awards

Sourcefabric CMS theme contest

Sourcefabric has launched a new global theme contest, in which designers worldwide are invited to submit themes for Newscoop, "the open source CMS for news organisations." "Sourcefabric's Newscoop Theme Contest gives aspiring designers a chance to submit themes for newspaper sites like El Faro. Two winning entries will receive all-expenses paid trips to Prague for Sourcecamp 2011, the annual get-together of the Sourcefabric community."

Full Story (comments: none)

Upcoming Events

Events: August 4, 2011 to October 3, 2011

The following event listing is taken from the LWN.net Calendar.

July 30-August 6: Linux Beer Hike (LinuxBierWanderung), Lanersbach, Tux, Austria
August 4-7: Wikimania 2011, Haifa, Israel
August 6-12: Desktop Summit, Berlin, Germany
August 10-12: USENIX Security ’11: 20th USENIX Security Symposium, San Francisco, CA, USA
August 10-14: Chaos Communication Camp 2011, Finowfurt, Germany
August 13-14: OggCamp 11, Farnham, UK
August 15-16: KVM Forum 2011, Vancouver, BC, Canada
August 15-17: YAPC::Europe 2011 “Modern Perl”, Riga, Latvia
August 17-19: LinuxCon North America 2011, Vancouver, Canada
August 20-21: PyCon Australia, Sydney, Australia
August 20-21: Conference for Open Source Coders, Users and Promoters, Taipei, Taiwan
August 22-26: 8th Netfilter Workshop, Freiburg, Germany
August 23: Government Open Source Conference, Washington, DC, USA
August 25-28: EuroSciPy, Paris, France
August 25-28: GNU Hackers Meeting, Paris, France
August 26: Dynamic Language Conference 2011, Edinburgh, United Kingdom
August 27-28: Kiwi PyCon 2011, Wellington, New Zealand
August 27: PyCon Japan 2011, Tokyo, Japan
August 27: SC2011 - Software Developers Haven, Ottawa, ON, Canada
August 30-September 1: Military Open Source Software (MIL-OSS) WG3 Conference, Atlanta, GA, USA
September 6-8: Conference on Domain-Specific Languages, Bordeaux, France
September 7-9: Linux Plumbers' Conference, Santa Rosa, CA, USA
September 8: Linux Security Summit 2011, Santa Rosa, CA, USA
September 8-9: Italian Perl Workshop 2011, Turin, Italy
September 8-9: Lua Workshop 2011, Frick, Switzerland
September 9-11: State of the Map 2011, Denver, Colorado, USA
September 9-11: Ohio LinuxFest 2011, Columbus, OH, USA
September 10-11: PyTexas 2011, College Station, Texas, USA
September 10-11: SugarCamp Paris 2011 - "Fix Sugar Documentation!", Paris, France
September 11-14: openSUSE Conference, Nuremberg, Germany
September 12-14: X.Org Developers' Conference, Chicago, Illinois, USA
September 14-16: Postgres Open, Chicago, IL, USA
September 14-16: GNU Radio Conference 2011, Philadelphia, PA, USA
September 15: Open Hardware Summit, New York, NY, USA
September 16: LLVM European User Group Meeting, London, United Kingdom
September 16-18: Creative Commons Global Summit 2011, Warsaw, Poland
September 16-18: PyCon India 2011, Pune, India
September 18-20: Strange Loop, St. Louis, MO, USA
September 19-22: BruCON 2011, Brussels, Belgium
September 22-25: PyCon Poland 2011, Kielce, Poland
September 23-24: Open Source Developers Conference France 2011, Paris, France
September 23-24: PyCon Argentina 2011, Buenos Aires, Argentina
September 24-25: PyCon UK 2011, Coventry, UK
September 27-30: PostgreSQL Conference West, San Jose, CA, USA
September 27-29: Nagios World Conference North America 2011, Saint Paul, MN, USA
September 29-October 1: Python Brasil [7], São Paulo, Brazil
September 30-October 3: Fedora Users and Developers Conference: Milan 2011, Milan, Italy
October 1-2: WineConf 2011, Minneapolis, MN, USA
October 1-2: Big Android BBQ, Austin, TX, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds