Weekly Edition for October 4, 2012

With Madison it's not just a bill

By Nathan Willis
October 3, 2012

US House Representative Darrell Issa has been an active participant in "open government" advocacy in the United States over the past few years; among other things, he co-founded the OpenGov Foundation, which is dedicated to increasing access to government data. Free software advocates will also remember Issa's participation in the opposition to SOPA and PIPA in December 2011. That effort produced an online "legislative markup" application called Project Madison. The Madison source code has now been released on GitHub under GPLv3, for immediate use by DIY-legislators and armchair founding fathers — and potentially by other communities interested in collaborative editing and criticism.

In its original form, Madison allowed critics to log their complaints about the SOPA/PIPA legislation, and to help crowd-source an alternative bill known as the OPEN Act. Issa told the O'Reilly Radar blog in July that he was working to get the OpenGov Foundation (which is not to be confused with the Sunlight Foundation's similarly named project) registered as a 501(c)(3) nonprofit in the US, and that the Madison code would be released under an open source license as part of that effort. The source code release was announced on September 28 on the OpenGov blog. A live Madison installation is running at the KeepTheWebOpen site, which currently hosts nine bills and related documents for public commentary and improvement (including the commentary recorded for SOPA and PIPA).

Obviously other tools exist for collaboratively editing documents (EtherPad derivatives perhaps being the most well-known). But Madison is designed to preserve the canonical form of the document while still enabling feedback. The stated goal of Madison is to permit such feedback in a way that makes contributing easy, but also enables administrators to sort through potentially thousands of comments in a meaningful fashion. Madison presents a document in structured form, divided into paragraph-based sections (it appears that legislation is often drafted in one-sentence-per-paragraph style, so this is in fact quite granular). Users can attach separate comments to each section, as well as propose re-wording suggestions. Both types of feedback are presented in a sidebar to the document, but suggestions and comments are displayed in separate boxes in the interface.
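The sectioning model described above can be sketched in a few lines. This is a hypothetical illustration, not Madison's actual code: the `Section` class and `split_into_sections` function are invented names, and the paragraph-splitting rule is an assumption based on the article's description.

```python
# Hypothetical sketch of Madison-style sectioning: split a document into
# paragraph-based sections and key each one so feedback can be attached
# without altering the canonical text.

from dataclasses import dataclass, field

@dataclass
class Section:
    number: int          # 1-based position in the document
    text: str            # canonical wording, never edited in place
    comments: list = field(default_factory=list)
    suggestions: list = field(default_factory=list)

def split_into_sections(document: str) -> list[Section]:
    """Each non-empty paragraph becomes one commentable section."""
    paragraphs = [p.strip() for p in document.split("\n\n") if p.strip()]
    return [Section(number=i + 1, text=p) for i, p in enumerate(paragraphs)]

bill = "Section 1. Short title.\n\nSection 2. Definitions."
sections = split_into_sections(bill)
```

Because legislation is often drafted one sentence per paragraph, even this naive split yields the fine granularity the article describes.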

Users can also register "likes" and "dislikes" on each feedback submission posted, as well as flag inappropriate comments. The interface tracks likes and dislikes, plus users' Facebook "likes" and Twitter tweets spawned by the submission. For each section, the interface sorts user contributions by an aggregate of these community metrics so as to allow popular ideas to bubble up to the top for easy consumption by the administrator. If the administrator chooses to incorporate a user-contributed change into the main document, that change is highlighted in the document interface with a different background color. Anonymous comments are not supported; the application supports both individual and group user accounts (although group accounts must be requested and approved by administrators). Facebook login via OAuth 2 is also supported.
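The ranking behavior described above might look like the following sketch. The weighting is an assumption for illustration; Madison's actual aggregation formula is not documented in the article.

```python
# Hedged sketch of ranking feedback by an aggregate of community metrics
# (likes, dislikes, Facebook likes, tweets) so that popular submissions
# bubble to the top of each section's sidebar.

def aggregate_score(item: dict) -> int:
    # Assumed weighting: each metric counts equally, dislikes negatively.
    return (item.get("likes", 0) - item.get("dislikes", 0)
            + item.get("fb_likes", 0) + item.get("tweets", 0))

def rank_feedback(items: list[dict]) -> list[dict]:
    return sorted(items, key=aggregate_score, reverse=True)

feedback = [
    {"id": 1, "likes": 2, "dislikes": 5},
    {"id": 2, "likes": 40, "dislikes": 3, "tweets": 12},
    {"id": 3, "likes": 10, "dislikes": 1, "fb_likes": 4},
]
top = rank_feedback(feedback)  # item 2 ranks first with a score of 49
```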

Under the hood, Madison is straightforward PHP and MySQL. However, a major limitation is that each document presented for public commentary needs to be manually added to the database, with one database row per document section. There is also no installer to set up the database tables and create administrator accounts, which makes getting started more complicated. But the Madison team is well aware of the hardship these limitations impose, and has posted a roadmap on the GitHub page that outlines plans for these and other features. Also on the list, for example, is support for larger, multi-part documents, in addition to general improvements like additional third-party-account support (Twitter, Reddit, Google Plus, etc.).
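The one-row-per-section storage scheme is simple enough to sketch. Madison uses MySQL; sqlite3 is substituted here only to keep the example self-contained, and the table and column names are invented for illustration.

```python
# Illustration of the manual setup described above: each document section
# occupies one database row, inserted by hand before the document can be
# opened for commentary.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sections (
    doc_id INTEGER, section_no INTEGER, body TEXT)""")

sections = ["Section 1. Short title.", "Section 2. Definitions."]
conn.executemany(
    "INSERT INTO sections (doc_id, section_no, body) VALUES (?, ?, ?)",
    [(1, i + 1, text) for i, text in enumerate(sections)])
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM sections").fetchone()[0]
```

Automating this import (and the initial table creation) is exactly the kind of installer work the roadmap promises.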

World of commentary

Madison was written during a Congressional hackathon in December 2011. The source code release follows on the heels of the White House's August release of its online petition application We The People. We The People is implemented as a Drupal module, however; hopefully Madison will evolve into a component more easily integrated with existing sites, because there are clear use cases for such an application that have nothing to do with politics.

For example, online document mark-up and commentary have become an integral part of the free software community's license revision process. The FSF commissioned its Stet mark-up tool for use in crafting GPLv3, and the COMT application was used by Mozilla during the comment period for MPL2. Madison offers some features not supported in Stet or COMT, such as the user-voting mechanism, and differentiating between individual and group accounts. However it is still a bit behind on the feature front overall; the other tools support things like version comparison that are valuable whenever wording is expected to change.

Madison also has user interface issues to be resolved, and the administrative interface lacks some tools that would be helpful in a lengthy comment-gathering process (such as the ability to reclassify a comment as a suggestion, or vice-versa, and the ability to indicate when a suggestion has been acted upon). But there is no denying Madison's real-world suitability. According to The Atlantic Wire, the application successfully managed more than 200,000 visitors during a 12-hour SOPA editing marathon. At that scale, Madison's ability to help administrators separate the wheat from the chaff could be the difference between a useful comment period and chaos. One could also argue that the demise of SOPA and PIPA is in part an indicator of Madison's success; although there were clearly more important factors, improving access to the text of the bill and encouraging citizens to delve into the details certainly helped the public's grasp of the issues.

A good commenting and annotation system is critical to the creation of any document that needs public acceptance. In recent years, the free software world has seen several such processes, and frequently those that are seen as doing a poor job listening to or responding to public comments attract criticism (such as Project Harmony). Madison is a noteworthy release because it represents progress in the advancement of open government principles, but it is also valuable for enabling anyone to collect document feedback and contributions from the public, which is a principle that the free software community holds dear.

Comments (2 posted)

XDC2012: Programming languages for X application development

By Michael Kerrisk
October 3, 2012
2012 X.Org Developers' Conference

To what extent is the choice of programming language important in terms of making it easier to build desktop applications? This was the question asked by Bart Massey in his talk on day two of the 2012 X.Org Developers' Conference.

Bart observed that we've now reached the point where it's harder to write a desktop application than it is to write a mobile or web application. Furthermore, the quality of desktop applications is often worse than for mobile and web applications. In Bart's opinion, the situation should be the other way round, and he is puzzling over why it is not.

Some factors that make programming desktop applications harder are beyond the X developers' control, he noted. For example, desktop applications may be much bigger than mobile and web applications. They may also have requirements that are not present, or are simply not addressed on mobile and web platforms. For example, desktop applications may need to manage quite sophisticated user interactions, and provide full internationalization support (which is omitted from many mobile and web applications). It's hard for the X developers to control these factors, but Bart noted one area where they can have some influence in making the development of desktop applications easier: the choice of programming language.

When it comes to programming languages, how are mobile and web applications different? Bart pointed out that, unlike desktop applications, they are almost exclusively not written in C or C++. Objective C is used in the mobile space, but Bart noted that that language is special (because it provides some features not found in C/C++ that are found in the other languages used for mobile applications). In the mobile and web space, nearly everyone is using languages that have automated memory management; that eliminates problems with memory leaks and mishandled pointers. These programming languages commonly support late binding and dynamic loading. Languages that provide these sorts of features are attractive because they allow developers to be much more productive. That's especially important because the mobile and web spaces are fast moving and programmers need to be able to put applications together quickly. By contrast, developing for the desktop—and Bart made it clear that here he was speaking about desktops in general, not just X—requires a much greater initial effort before a programmer is productive.

On the X desktop, the common programming languages have been C and C++, and Bart noted a number of ways in which these languages have been a source of pain for application developers. There are many reasons to use multiple threads in GUI programs, most notably because a windowed interface in a GUI environment is naturally concurrent. However, threading support in C and C++ (typically via POSIX threads) is not easy to use for the average application developer. The frameworks used in C/C++ often involve callbacks and other complicated flows of control, which are likewise challenging for programmers. Manual memory management is tricky for programmers as well: monitoring memory usage of desktop applications on any platform (not just X), it's easy to find cases of applications that leak memory. And while it's true that things can go wrong with automated memory management, in practice those problems are not as bad.

In Bart's view, the choice of programming language (and the accompanying frameworks that that choice implies) is one of the things getting in the way when it comes to developing X applications. The X toolkits have a widget mentality, and while he feels that is probably a good mentality, the toolkits make widget creation a lot of work: too much boilerplate code and "weird interaction" is required for building widgets. "The problem is that the language doesn't give you convenient ways to express what you want to express."

We use C and C++ on the X desktop because they've always been there, Bart noted. But, what are the alternatives? Bart estimated that there are some 400 programming languages in wide use, but noted that there are obvious attractions to using a mainstream language, in terms of the pool of developers and available support. Java is an obvious choice. C# is another possibility, but he thought that there were no advantages that made it clearly compelling over Java. If one widens the net, other alternatives might include SML, OCaml, or Haskell, but he felt sure that if he asked the room what language should be the future of desktop development, he'd get fifteen answers.

Bart then considered the toolkits used with X a little more deeply. He noted that low-level libraries such as XCB are written in C/C++. There are various reasons for that choice, among them efficiency and the ease of providing foreign-function interfaces (bindings) for other languages. The choice of C/C++ at this level of the stack is okay, he thinks. But, as we go up the stack, the next thing we come to is the toolkits, and the choices there are Qt and GTK. Both are old, entrenched, and written in C/C++. Because of that, the applications created above them tend to be written in the same language. It's time to think hard about whether these are the right toolkits for applications built in 2012. The problems of building desktop applications are amplified by using these large, complex libraries, and they don't provide the automated support that is provided in mobile and web frameworks.

Bart noted some summary points from his work during the last ten years. If someone asks him what language to use to write a graphical desktop application, he is typically going to respond: Java. This is ironic, because he, an X developer, is recommending a language that will not produce a native X application. But, he noted that the overhead of writing a Java GUI application is simply lower (than C/C++), even when developing for the Linux/X desktop. Another reasonable alternative that he sees is Python with a toolkit (for example, wxWidgets). As a teacher (at Portland State University), Bart recommends those alternatives to his students, rather than the C/C++-based frameworks available for X.

Is programming language even the right thing to be thinking about, when considering the problem of building desktop applications? Bart said that the audience might be able to convince him otherwise. But, he wanted to know, if programming language is not a part of the problem, then just how weird does a programming language need to be before it is a problem? He then mentioned a few factors that may or may not matter with respect to the choice of programming languages and toolkits. One of the more interesting of these was portability of applications across desktop, mobile, and web. It would be nice to have that, though it is "really hard." But, he asked, what can we do in terms of choice of programming language to facilitate that possibility?

On the other hand perhaps we are stuck with GTK and Qt, Bart said. He noted that it was hard enough to replace Xlib with XCB. The task has taken ten years, and Xlib is likely still to be with us for another decade. Perhaps replacing the toolkit frameworks is too difficult, and when it comes to developing desktop applications, we're consequently stuck with C/C++ forever. He concluded, however, that "if we're going to save the desktop—and don't fool yourself, it is in trouble—then I think programming language is part of the problem."

A lively discussion followed the formal part of Bart's presentation. In response to some comments from Supreet Pal Singh about the satisfactions of doing X application development in C/C++, Bart elaborated further on his point. He noted that there is no ramp to being an X developer—it's a cliff. Developers have to master many technologies before they can do anything. He acknowledged that he too has a different feeling when developing X applications in C++, "but it's not a happy one: I feel smart for being able to do this in C++, but not smart for having done it."

Another audience member noted that Qt Creator can be used for rapid development of X applications. Bart observed that code-generation tools solve some problems, but replace them with other problems as one reaches the limits of what the code generator can do. At that point, the developer then has the problem of trying to understand what is going on under the hood of the generator and possibly tweaking its output, which can complicate code maintenance. He also noted that a GUI application created with a code generator tends to behave less well for the user. For example, he suggested, try using a GUI tool to build an application designed for a small window and then scale that window up and watch what happens. The result is very different from an application where the layouts and layout policy have been hand-crafted so that it works well for all window sizes.

Peter Hutterer pointed out that it may be because of the relative youth of the web and mobile application spaces that they are not suffering some of the same problems as desktop X applications. X has been around for well over 20 years; in 10 years' time, web and mobile applications may have some of the same issues. He mentioned Android fragmentation as a possible example of the kinds of problems to come. Bart agreed that the passage of time may bring legacy problems to mobile and web application development, but noted that many of his points were independent of legacy problems: the need to manage pointers and memory, wacky untyped (void *) interfaces, and complex linking interactions. He constantly sees his students struggling with these problems.

Matthias Hopf pointed out one language that was missing from Bart's discussion, but noted that he was hesitant to propose it as a candidate. That language is of course JavaScript, which started out in the web space, but is making its way to the desktop in the form of node.js. Bart responded that he wasn't against JavaScript as a development language. Some aspects of the language make him suspicious, "but the very fact that people are successfully developing web and mobile applications in JavaScript and HTML5 says that it's worth considering [for the desktop]. It would be interesting to consider what a set of JavaScript bindings for X would look like."

Other members in the audience noted that the combination of QML, JavaScript, and C++ is an effective framework for rapid development of X applications, though Keith Packard interjected that he didn't want to have to write an application in three languages. Although Bart acknowledged that some of the information about QML was new to him, by the end of the session, he still didn't seem to be convinced away from his thesis that the current X programming languages and the frameworks were part of the barrier to developing X applications.

Decades of experience have shown C and C++ are powerful languages for system programming, but few would deny that they provide application developers with too many opportunities to shoot themselves in the foot. Bart was facing a rather atypical audience: a room full of C/C++ experts. Nevertheless, there was some sympathy for his thesis, and one suspects that if a more typical cross section of application developers were asked, there would be rather more agreement with his position.

The X.Org wiki has a pointer to the video of this session.

Comments (62 posted)

Tent pitches a new social networking protocol

By Nathan Willis
October 3, 2012

The free software community produces a constant stream of ideas about how to displace the proprietary network services that dominate so much online interaction. In mid-2012, Tent became the latest entrant in the conversation, heralding an "open, decentralized, and built for the future" social networking solution that "changes everything." Beyond the project's manifesto, however, there was scant detail, particularly on how Tent related to other distributed social networking efforts like OStatus, the protocol used by StatusNet. September 21 brought the first look at something more concrete, courtesy of a reference Tent server and initial documentation of the system's protocols.

Staking out the turf

Tent's general idea is familiar enough: a functional replacement for proprietary social networking services like Twitter, Flickr, and Facebook, but built with free software and designed so that individual users retain control over their data (including not handing over personal information to a third party). A key part of making such functionality possible is devising a mechanism for distinct installations to interoperate, thus allowing users to converse, share content, and subscribe to content posted by others — without demanding any permanent ties to other users' software.

This goal is much the same as that of the OStatus community, which led a number of people to open issues on the Tent bug tracker asking what justifies starting the new project. The Tent FAQ (on the project home page) says that "the architects of Tent investigated existing protocols for the distributed social web. Each of them lacked several critical features necessary for modern users." Elsewhere the FAQ comments that OStatus and Diaspora were "first steps" but does not go into detail about what they lack. On the issue tracker, however, developer Jonathan Rudenberg lists three features not covered by existing federated social networking projects: support for private messages, a server-to-client API, and the lack of a "social graph" specification (e.g., existing "friend" or "following" relationships) enabling users to export their user-to-user connections for portability between services. Developer Daniel Siders reiterated those issues in a Hacker News discussion about Tent.

Several commenters found those features to be weak justification for writing entirely new protocols, however. Dave Wilkinson II argued that OStatus does not address private messaging because it does not attempt to address identity management, but that the related standards Webfinger and PubSubHubbub together can be used to implement private messaging. He also said that migrating social graph information is trivial in OStatus precisely because OStatus does not bind to the user's identity. OStatus co-author Evan Prodromou said private messages were in development for PubSubHubbub 0.4 (and subsequently OStatus), and pointed to ActivityPub as an effort to develop a generic server-to-client API.

The Tent project's documentation also sees its definition of "decentralized" web services as being fundamentally different from OStatus's definition of "federated" services. Prodromou suggested on the issue tracker that this distinction was inaccurate, and that what Tent describes is no different than the federation functionality of Status.Net and Diaspora. Siders replied that:

To us, a service is federated instead of decentralized when first class features are not specified in the federation protocol. Federation protocols provide a least common denominator for the transport of messages and may not have 1:1 mapping with the internal services of each node.

In other words, he continued, Tent differs from federated social networking systems because it combines the server-to-server and client-to-server communication protocols, akin to unifying SMTP and IMAP. "Tent is not a federation protocol because it provides end to end communication between users, not just servers."

The 0.1 specification

Abstract principles aside, the Tent team released version 0.1 of its protocol documentation on September 21, as well as tentd, a demonstration server written in Ruby that implements a Twitter-like service. The documentation outlines the basics of Tent's messaging and network design, the server-to-server protocol, the server-to-client API, and descriptions of post and profile data fields. In general, Tent uses JSON to format all messages, with OAuth 2 authentication for applications and HTTP MAC access authentication to cryptographically verify individual requests and responses.

Every Tent user (or "entity") is associated with a separate server, which is expected to always be online and accessible over HTTPS. Servers are meant to be found through HTTP Link headers and HTML link tags that point to the user's profile URL. Requesting the profile URL returns the user's profile data as a JSON object. Currently the server-to-server protocol addresses only Twitter-style "follower" relationships; user A can follow user B by POST-ing a request that includes user A's own entity URL, the flavors of post the user wishes to subscribe to, the licenses acceptable to user A, and a URL to which user B should send posts. Assuming user B approves the follow-request (which is not addressed in the documentation), user B's server sends its MAC to user A so that subsequent posts can be authenticated.
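A follow request of the kind described above might look like the JSON body below. The field names are illustrative, based on the article's summary of the 0.1 documentation rather than quoted from the spec itself.

```python
# Hypothetical follow-request body: user A POSTs its own entity URL, the
# post types it wants, the licenses it will accept, and a URL where user
# B's server should deliver posts.

import json

follow_request = {
    "entity": "https://alice.example.com",
    "types": ["all"],  # or specific post types such as "status"
    "licenses": ["http://creativecommons.org/licenses/by/3.0/"],
    "notification_url": "https://alice.example.com/notifications",
}
body = json.dumps(follow_request)
```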

The documentation does not address private messages directly, other than their usage of MAC authentication. However, the post documentation mentions a permissions object that, in the examples, can be marked as public or list specific entities or groups that are allowed access. Groups and their representation are not currently defined. The notion of "acceptable licenses" is not explained in detail, either; it seems to place the burden on the publishing server to filter out content that individual subscribers do not find acceptable on licensing grounds (the only license example used is Creative Commons Attribution 3.0). The server-to-server API also defines methods for requesting another user's list of followers, the entities that the user is following, canceling or altering a follow-request, and fetching another user's posts (either in bulk or by querying parameters like publication date).
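A permissions check along the lines the examples suggest might look like this. The field names (`public`, `entities`) mirror the general shape of the documentation's examples, and group handling is omitted because groups are not yet defined; treat all of it as an assumption.

```python
# Sketch of the permissions object described above: a post is either
# public, or readable only by the entities it explicitly lists.

def can_read(post: dict, entity: str) -> bool:
    perms = post.get("permissions", {})
    if perms.get("public", False):
        return True
    return entity in perms.get("entities", [])

public_post = {"permissions": {"public": True}}
private_post = {"permissions": {"public": False,
                                "entities": ["https://bob.example.com"]}}
```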

Each of these methods has a corresponding method in the client API; in practice user A's front-end software would relay these requests to the user's Tent server, which would in turn handle the nitty-gritty of subscribing or querying user B's server. As it stands, the scheme is quite simple; there are seven post types defined: status (a short, 256-character message), essay (a longer, unlimited-length text entry), photo, album (for a collection of photos), repost (a pointer to another user's post), profile (a notification of changes to a user profile), and delete (a notification that another post has been expunged).
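A validator for the post types just listed is easy to sketch. The 256-character limit on status posts comes from the article; the function name and error handling are invented for the example.

```python
# Sketch of validating Tent 0.1 post types; unknown types and over-length
# status posts are rejected.

POST_TYPES = {"status", "essay", "photo", "album",
              "repost", "profile", "delete"}

def validate_post(post: dict) -> bool:
    if post.get("type") not in POST_TYPES:
        return False
    if post["type"] == "status" and len(post.get("content", "")) > 256:
        return False
    return True

ok = validate_post({"type": "status", "content": "hello"})
too_long = validate_post({"type": "status", "content": "x" * 300})
```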

Currently unaddressed are activity-style posts (e.g., Facebook-style "likes," geographic check-ins, or any number of other actions). There was some discussion on the Tent issue tracker about adopting the ActivityStreams format for these post types. User profiles are defined; there are three required fields (entity URL, the licenses under which the entity publishes content, and the canonical API root URL needed to interact with the entity's server). A handful of other properties are defined as well (avatar image, bio, location, etc.).

Interested users can sign up for a free Tent account at the project's hosted service. The site runs the TentStatus application, although free accounts are only permitted to send status-type posts; access to essays and photos costs $12 per month.

Tent invented here

Tent 0.1 is bare-bones, to say the least. Several of the key features that are supposed to differentiate it from OStatus, Diaspora, and other systems are simply not present, which makes it difficult to assess fairly. For example, there is no way to export one's social graph to import it into a separate account. There are several existing standards for social graph-like information, such as the RDF-based FOAF and XML-based XFN. The Tent team has been critical of using anything other than JSON in its debates on the issue tracker; it would be interesting to see how they implement the social graph functionality.

But there are also aspects of Tent 0.1 that simply need stress-testing. The fact that subscribers tell publishers which licenses they find acceptable for future posts is puzzling, and it will be interesting to see whether that scales well in practice. The diagrams on the Tent site appear to indicate that each publisher sends a separate copy of each status update to each subscriber. When multimedia content is allowed, that could be problematic (and it is one of the problems PubSubHubbub was created to address, regardless of whether or not one finds it an acceptable solution).

At a more fundamental level, though, several commenters in the issue tracker and other discussions are unconvinced that Tent's decision to associate a user identity with a URL is wise. The prevailing wisdom is that users (particularly non-developers) associate URLs with content, not with individuals. Many consider OpenID's requirement that every user have an OpenID URL as an identifier to be one of its greatest flaws. As long as there is only one Tent site (and tentd the only Tent implementation), the URL identity question will remain unexplored because all users exist in the same namespace.

Of course, until there are other Tent servers and applications, none of the federation/decentralization features can really be put to the test, either — not to mention shaking out Siders's assertion that Tent is not "federated" because it connects users rather than servers ... even though every user is required to have a separate server.

In short, the interesting bits are still theoretical. One only hopes that we will get to examine these other bits before too long. It is not immediately convincing that Tent's approach of bundling identity, server-to-server, and client-to-server into a single API is a strength. But it is clear that by starting over from scratch on all of these topics the Tent team has carved out a much larger task for itself than it would have if it had attempted to implement private messaging in OStatus.

Toward the end of discussions like the OStatus issue tracker thread, a lot of the reasons for design decisions seem to boil down to personal preference: JSON versus XML, HTTP versus Webfinger, and so on. There is certainly nothing wrong with building an application to suit one's own preferences, but in the long run it is a difficult way to establish a standard. The Tent FAQ ends with a statement affirming the project's commitment to "open governance models leading to a ratified standard." But, as Steve Klabnik observed, "Working with existing standards is way less fun than just building your own." That said, the OStatus suite of protocols is indeed slow-moving and feature-incomplete; perhaps Tent can spur that community on — it has certainly reinvigorated the discussion already.

Comments (9 posted)

Page editor: Jonathan Corbet


LSS: Security modules and RPM

By Jake Edge
October 3, 2012
2012 Kernel Summit

The RPM package format and tools have long supported SELinux, so that policies are configured and files get labeled correctly at installation time. But support for other security solutions, Smack for example, is lacking in RPM. Elena Reshetova presented some ideas for rectifying that in her presentation at the 2012 Linux Security Summit (LSS). By adding hooks into RPM processing, more Linux Security Modules (LSMs) or other security components could be supported.

Reshetova began with an overview of RPM. The format is used by multiple distributions, beyond just the Red Hat distributions where it began. SUSE/openSUSE, Mageia, Tizen, and others all use RPM.

RPM package installation has the notion of a "transaction", which encompasses all of the packages to be installed or removed in a single operation. Inside these transactions are the individual packages or "transaction elements". Dependency checking is done at the transaction level, so it is only done once. Scripts to run before the transaction starts and after it ends can be configured in a package specification.

[RPM flowchart]

Installing each package entails a series of steps inside the transaction, starting with the optional signature verification. If that passes (or is turned off), then the "pre" script is run, the files are unpacked from the archive and installed, and the "post" script is run. As might be expected, there are a few other steps (e.g. initialization, cleanup), but, as depicted in a flowchart (seen at right), pre-unpack-post makes up the bulk of the processing.

When SELinux handling was added to RPM, it was done to set up and install the policies and label the files that get installed. That work was mostly done in the sepolicy RPM plugin using the existing RPM hooks. But some SELinux support is in the RPM core, including running the maintainer scripts (pre, post, and a few others) and doing some labeling tasks. The maintainer scripts are run using rpm_execcon() to set a particular security context before their execution.

When Reshetova and others working on Tizen started looking into adding Smack support for RPM, they realized it needed a more generalized security plugin interface. Smack requires setting up access control domains and rules on a per-package basis, but there are other security mechanisms that have needs as well. The security policy for a system or device might trust certain application repositories and only allow packages from those sources to access "sensitive services". Integrity measurements may need to be bootstrapped, container configuration established, or seccomp() restrictions enabled, all of which could be handled by a generalized security plugin.

Currently, there are just a few hooks available in RPM: two before the pre-transaction script is run, one before the pre script, one after the post script, and a cleanup hook. Reshetova would like to work with the LSM developers to create an expanded set of hooks that will serve all of the LSMs (as well as the other uses). Making the hooks symmetrical, so that there are hooks both before and after transactions and package installation/removal, might be the starting point. Adding a hook to wrap script execution for setting up the proper security context is another.

Currently, the verification step only allows specifying which keys to use and what should happen if the package does not verify. Adding a hook for verification would allow additional checking, such as ensuring that the package was signed by the right key (the one corresponding to the repository it came from, rather than any installed key, which is all RPM checks today), and making security policy decisions based on the originating repository.

The other hooks that Reshetova proposed are associated with the individual files in a package. Those would allow things like security labeling or calculating hashes on the file contents (for integrity purposes). The last hook she proposed is to handle conflicts. If a package wants to install a file that another package has already installed, the hook could install a conflict file recording the problem; later hooks could use that file to make decisions depending on the attributes of the two packages involved. If one package is from a more trusted repository, its version could be chosen, for example.

In addition, some environments may have non-native applications that use their own installer. Those have all of the same problems with handling security contexts, labeling, and so on. It would be nice to have the security plugin functionality available as a standalone library that could be used by non-native application installers, Reshetova said.

Once those hooks (or a similar set that is agreed upon) are available, the SELinux-specific pieces of RPM could be moved out of the core. A unified layer of security hooks would be beneficial for a wide variety of use cases, she said. More information is available on the Tizen wiki and a GitHub repository contains the proposed changes for RPM.

Dan Walsh asked what the RPM maintainers thought about the changes; Reshetova said they are interested in seeing a unified solution. They want to make sure that there is agreement between the LSM developers, which is one of the motivations for her presentation. The intent would be to cause no disruption for the SELinux parts in RPM when moving that to the new hooks, she said in answer to another question. Walsh said that there really isn't anyone who is the "SELinux/RPM person", but he and others wouldn't oppose a patch to move SELinux out of the core; "don't break anything and I'm fine" with it, he said, though he did caution that performance might be an issue.

Since the summit, Reshetova has started a wider discussion of the hooks on the SELinux mailing list. It would seem likely that we will have a more generalized solution for RPM in the not-too-distant future.

Comments (4 posted)

Brief items

Security quotes of the week

Clearly, one MUST configure the webserver to NOT permit off-site access to the credentials and configuration file: wp-config.php but I'll be darned if I can see instructions on the WordPress site, showing a novice administrator how to do this. In a shared hosting environment without 'root' level control, it is probably not even doable.
-- Russ Herrold

Whenever possible, when the law is ambiguous or silent on the issue at bar, the courts should let those who want to market new technologies carry the burden of persuasion that a new exception to the broad rights enacted by Congress should be established. That is especially so if that technology poses grave dangers to the exclusive rights that Congress has given copyright owners. Commercial exploiters of new technologies should be required to convince Congress to sanction a new delivery system and/or exempt it from copyright liability. That is what Congress intended.
-- Ralph Oman [PDF], former US Register of Copyrights (by way of Techdirt)

Taking pictures in your private space may be embarrassing and may expose your mistress or illegal pot plants to the world, but as far as burglars go, it is irrelevant: they can tell easily whether your house is worth breaking into from the outside. And the idea that a bunch of dim-wit burglars are using poor quality 3D models to plan their heist wouldn't even fly as a movie plot.

This project strengthens the ludicrous idea in people's heads that photography is somehow a significant threat to safety or security. Photographic documentation is an extremely important part of modern democracy, and projects like these threaten the ability of people to take pictures.

-- Slashdot commenter kenorland (Thanks to Paul Wise.)

When China starts looking like a Free Speech haven, something is really wrong with the United States.
-- Nina Paley (Also thanks to Paul Wise.)

Comments (3 posted)

Mozilla "Persona" beta release

Mozilla has announced the beta release of its "Persona" authentication system. "For the past year Mozilla has been working on an experimental login system that completely eliminates passwords on websites while being safe, secure, and easy to use. Today we’re casting off the 'experimental' label and announcing the first 'beta' release of Persona." LWN looked at this system in 2011, when it was still known as "BrowserID."

Comments (30 posted)

New vulnerabilities

apache: cross-site scripting

Package(s): apache  CVE #(s): CVE-2012-2687
Created: October 2, 2012  Updated: April 5, 2013
Description: From the CVE entry:

Multiple cross-site scripting (XSS) vulnerabilities in the make_variant_list function in mod_negotiation.c in the mod_negotiation module in the Apache HTTP Server 2.4.x before 2.4.3, when the MultiViews option is enabled, allow remote attackers to inject arbitrary web script or HTML via a crafted filename that is not properly handled during construction of a variant list.
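Since the flaw is only reachable with MultiViews enabled, administrators who cannot update immediately could disable content negotiation for affected directories; a sketch (the directory path is illustrative, and upgrading to 2.4.3 remains the real fix):

```apache
# Illustrative mitigation: the XSS is only exposed when MultiViews is on.
<Directory "/var/www/html">
    Options -MultiViews
</Directory>
```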

openSUSE openSUSE-SU-2014:1647-1 apache2 2014-12-15
openSUSE openSUSE-SU-2013:0632-1 apache2 2013-04-05
CentOS CESA-2013:0512 httpd 2013-03-09
openSUSE openSUSE-SU-2013:0629-1 apache2 2013-04-05
Scientific Linux SL-http-20130228 httpd 2013-02-28
Oracle ELSA-2013-0512 httpd 2013-02-25
Red Hat RHSA-2013:0512-02 httpd 2013-02-21
Fedora FEDORA-2013-1661 httpd 2013-02-12
openSUSE openSUSE-SU-2013:0248-1 apache2 2013-02-05
openSUSE openSUSE-SU-2013:0243-1 apache2 2013-02-05
openSUSE openSUSE-SU-2013:0245-1 apache2 2013-02-05
Scientific Linux SL-http-20130116 httpd 2013-01-16
Oracle ELSA-2013-0130 httpd 2013-01-12
Ubuntu USN-1627-1 apache2 2012-11-08
Mageia MGASA-2012-0280 apache 2012-10-06
Mandriva MDVSA-2012:154-1 apache 2012-10-01

Comments (none posted)

inn: man-in-the-middle attack

Package(s): inn  CVE #(s): CVE-2012-3523
Created: October 2, 2012  Updated: January 22, 2014
Description: From the Mandriva advisory:

The STARTTLS implementation in INN's NNTP server for readers, nnrpd, before 2.5.3 does not properly restrict I/O buffering, which allows man-in-the-middle attackers to insert commands into encrypted sessions by sending a cleartext command that is processed after TLS is in place, related to a plaintext command injection attack, a similar issue to CVE-2011-0411 (CVE-2012-3523).

Gentoo 201401-24 inn 2014-01-21
Mandriva MDVSA-2012:156 inn 2012-10-02
Mageia MGASA-2012-0305 inn 2012-10-29

Comments (none posted)

kernel: information leak / denial of service

Package(s): kernel  CVE #(s): CVE-2012-3510
Created: October 3, 2012  Updated: October 24, 2012
Description: From the Red Hat advisory:

A use-after-free flaw was found in the xacct_add_tsk() function in the Linux kernel's taskstats subsystem. A local, unprivileged user could use this flaw to cause an information leak or a denial of service.

Mageia MGASA-2013-0016 kernel-rt 2013-01-24
Mageia MGASA-2013-0011 kernel-tmb 2013-01-18
Mageia MGASA-2013-0010 kernel 2013-01-18
Mageia MGASA-2013-0012 kernel-vserver 2013-01-18
Mageia MGASA-2013-0009 kernel-linus 2013-01-18
SUSE SUSE-SU-2012:1391-1 Linux kernel 2012-10-24
Oracle ELSA-2012-1323 kernel 2012-10-04
Oracle ELSA-2012-1323 kernel 2012-10-03
Scientific Linux SL-kern-20121003 kernel 2012-10-03
CentOS CESA-2012:1323 kernel 2012-10-03
Red Hat RHSA-2012:1323-01 kernel 2012-10-02

Comments (none posted)

mod_rpaf: denial of service

Package(s): mod_rpaf  CVE #(s): CVE-2012-3526
Created: September 28, 2012  Updated: October 3, 2012

From the Gentoo advisory:

An error has been found in the way mod_rpaf handles X-Forwarded-For headers. Please review the CVE identifier referenced below for details.

A remote attacker could send a specially crafted HTTP header, possibly resulting in a Denial of Service condition.

Gentoo 201209-20 mod_rpaf 2012-09-27

Comments (none posted)

moodle: multiple vulnerabilities

Package(s): moodle  CVE #(s): CVE-2012-4400 CVE-2012-4408 CVE-2012-4402 CVE-2012-4403
Created: September 27, 2012  Updated: October 3, 2012

From the Red Hat Bugzilla entries [1, 2, 3]:

CVE-2012-4400: A possibility to bypass the file upload size constraint was found in the way the webservice script, called from the filepicker front end of Moodle, a course management system, performed sanitization of the 'maxbytes' variable. A remote attacker could issue a specially crafted request that, when processed, could allow the attacker to upload a file larger than the specified constraint.

CVE-2012-4402, CVE-2012-4403: Users with permission to access multiple services were able to use a token from one service to access another. An attacker could use this flaw to gain unauthorized access to the content of an external service.

CVE-2012-4408: A security flaw was found in the way the Moodle course management system performed the permission check on the course reset page (the course reset link was protected by a correct permission, but the reset page itself was checked for a different permission). A remote attacker could use this flaw to reset a particular course in an unauthorized way.

Fedora FEDORA-2012-14348 moodle 2012-09-27
Fedora FEDORA-2012-14295 moodle 2012-09-27

Comments (none posted)

moodle: multiple vulnerabilities

Package(s): moodle  CVE #(s): CVE-2012-4401 CVE-2012-4407
Created: September 27, 2012  Updated: October 3, 2012

From the Red Hat bugzilla entries [1, 2]:

CVE-2012-4401: A security flaw was found in the way the Moodle course management system performed user permission validation for course topic management. A remote attacker with course editing capabilities, but without the ability to show / hide topics or set the current topic for a particular course, could use this flaw to complete these actions under certain circumstances.

CVE-2012-4407: A security flaw was found in the way the file serving functionality of the Moodle course management system enforced file access restrictions on blog posts. A remote attacker could use this flaw to retrieve files embedded in a blog post without the publication state being checked properly.

Fedora FEDORA-2012-14348 moodle 2012-09-27

Comments (none posted)

postfixadmin: multiple vulnerabilities

Package(s): postfixadmin  CVE #(s): CVE-2012-0811 CVE-2012-0812
Created: September 27, 2012  Updated: March 28, 2014

From the Gentoo advisory:

Multiple SQL injection vulnerabilities (CVE-2012-0811) and cross-site scripting vulnerabilities (CVE-2012-0812) have been found in Postfixadmin.

Debian DSA-2889-1 postfixadmin 2014-03-28
Gentoo 201209-18 postfixadmin 2012-09-27

Comments (none posted)

software-properties: man-in-the-middle attack

Package(s): software-properties  CVE #(s):
Created: October 2, 2012  Updated: October 3, 2012
Description: From the Ubuntu advisory:

It was discovered that the apt-add-repository tool incorrectly validated PPA GPG keys when importing from a keyserver. If a remote attacker were able to perform a man-in-the-middle attack, this flaw could be exploited to install altered package repository GPG keys.

Ubuntu USN-1588-1 software-properties 2012-10-01

Comments (none posted)

tor: denial of service

Package(s): tor  CVE #(s): CVE-2012-4922
Created: October 2, 2012  Updated: February 4, 2013
Description: From the CVE entry:

The tor_timegm function in common/util.c in Tor before, and 0.2.3.x before, does not properly validate time values, which allows remote attackers to cause a denial of service (assertion failure and daemon exit) via a malformed directory object, a different vulnerability than CVE-2012-4419.

Fedora FEDORA-2012-14650 tor 2013-02-03
Gentoo 201301-03 tor 2013-01-08
openSUSE openSUSE-SU-2012:1278-1 tor 2012-10-02

Comments (none posted)

vmware-player: multiple vulnerabilities

Package(s): vmware-player  CVE #(s): CVE-2007-5671 CVE-2008-0967 CVE-2008-1340 CVE-2008-1361 CVE-2008-1362 CVE-2008-1363 CVE-2008-1364 CVE-2008-1392 CVE-2008-2098 CVE-2008-2100 CVE-2008-2101 CVE-2008-4915 CVE-2008-4916 CVE-2008-4917 CVE-2009-0909 CVE-2009-0910 CVE-2009-1244 CVE-2009-2267 CVE-2009-3707 CVE-2009-3732 CVE-2009-3733 CVE-2009-4811 CVE-2010-1137 CVE-2010-1138 CVE-2010-1139 CVE-2010-1140 CVE-2010-1141 CVE-2010-1142 CVE-2010-1143 CVE-2011-3868
Created: October 1, 2012  Updated: October 3, 2012
Description: From the Gentoo advisory:

Multiple vulnerabilities have been discovered in VMware Player, Server, and Workstation.

Local users may be able to gain escalated privileges, cause a Denial of Service, or gain sensitive information.

A remote attacker could entice a user to open a specially crafted file, possibly resulting in the remote execution of arbitrary code, or a Denial of Service. Remote attackers also may be able to spoof DNS traffic, read arbitrary files, or inject arbitrary web script to the VMware Server Console.

Furthermore, guest OS users may be able to execute arbitrary code on the host OS, gain escalated privileges on the guest OS, or cause a Denial of Service (crash the host OS).

Gentoo 201209-25 vmware-player 2012-09-29

Comments (none posted)

xdiagnose: insecure temp files

Package(s): xdiagnose  CVE #(s):
Created: October 3, 2012  Updated: October 3, 2012
Description: From the Ubuntu advisory:

Alec Warner discovered that xdiagnose improperly handled temporary files when creating user-initiated archive files. While failsafeX does not use the vulnerable code, this update removes the functionality to protect any third-party applications which import the vulnerable code. In the default Ubuntu installation, this should be prevented by the Yama link restrictions.

Ubuntu USN-1591-1 xdiagnose 2012-10-02

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The 3.6 kernel was released on September 30. In the announcement Linus said:

When I did the -rc7 announcement a week ago, I said I might have to do an -rc8, but a week passed, and things have been calm, and I honestly cannot see a major reason to do another rc. So here it is, 3.6 final.

Notable features in 3.6 include TCP small queues, the client-side TCP fast open implementation (server side has been merged for 3.7), IOMMU groups, the Btrfs send/receive feature, the VFIO device virtualization mechanism, and more. See the KernelNewbies 3.6 page for details.

Stable updates: 3.5.5, 3.4.12 and 3.0.44 were released on October 2; each contains a longer-than-usual list of important fixes.

Comments (none posted)

Quotes of the week

It's not a very advanced regular expression, but I still find this a bit alarming in the Linux kernel:

    $ git log --no-merges v3.5..v3.6 | \
	  egrep -i '(integer|counter|buffer|stack|fix) (over|under)flow' | \
	  wc -l

How many were security relevant? How many got CVEs?

Kees Cook

I chose SHA-512 because everyone knows it's 512 times more secure than SHA-1.
Rusty Russell

A familiar test case that makes 5 million random accesses to a 1GB memory area goes from 20 seconds down to 0.43 seconds with THP enabled on my SPARC T4-2 box.
— minor performance improvements from David Miller

I added "having no life" as a skill on my Linked In profile. Please endorse me!
Jon Masters

Comments (3 posted)

Linux security workgroup formed

One outcome of the recently-concluded Linux Security Summit was the decision to form a workgroup around Linux security issues. That workgroup now exists; it will be using the existing kernel-hardening list for its discussions.

The charter of the workgroup is to provide on-going security verification of Linux kernel subsystems in order to assist in securing the Linux Kernel and maintain trust and confidence in the security of the Linux ecosystem.

This may include, but is not limited to, topics such as tooling to assist in securing the Linux Kernel, verification and testing of critical subsystems for vulnerabilities, security improvements for build tools, and providing guidance for maintaining subsystem security.

The group intends to discuss a wide range of approaches including tool development, static analysis, verification efforts, and even the possibility of tightening the rules for patch signing. Interested people are encouraged to join in.

Full Story (comments: none)

Standardizing virtio

Rusty Russell has announced a proposal to standardize the virtio I/O subsystem. He says:

I believe that a documented standard (aka virtio 1.0) will increase visibility and adoption in areas outside our normal linux/kvm universe. There's been some of that already, but this is the clearest path to accelerate it. Not the easiest path, but I believe that a solid I/O standard is a Good Thing for everyone.

The plan is to start an OASIS working group which would help in the development (and standardization) of version 1.0 of the virtio specification. He is asking for comments on the idea, but few have been posted as of this writing.

Full Story (comments: none)

Questioning link restrictions

One of the headline features in the 3.6 release was the long-awaited advent of security restrictions that change the handling of hard and soft links in world-writable directories. One of the reasons this change took so long to merge was concerns about breaking programs and scripts on user systems. The case was finally made that problems would be limited to malware, and the feature was merged.

Now, a single report of trouble on the linux-kernel list has developers questioning the change — or, at least, whether it should be turned on by default. Linus fears that this report could be followed by others:

However, I suspect we'll see more. And once that happens, we're not going to keep a default that breaks peoples old scripts, and we're going to have to rely on distributions (or users) explicitly setting it.

Compatibility is just too important.

Other developers have argued for making the change as soon as the 3.6.1 stable update. Needless to say, agreement on this point is not universal; Kees Cook, the author of the change, argues that the benefits far outweigh the pain. The kernel community is committed to not breaking things that used to work, though; if this change appears to be causing problems more widely, it will probably be reversed in the near future.

Comments (none posted)

Kernel development news

3.7 Merge window part 1

By Jonathan Corbet
October 3, 2012
A mere 72 days after the beginning of the 3.6 development cycle, the process has started again with the opening of the 3.7 merge window. As of this writing, some 5540 non-merge changesets have been pulled into the mainline, with more to come. Some of the more interesting user-visible changes merged thus far include:

  • The arm64 patch set, adding support for ARM's 64-bit "AARCH64" architecture, has been merged.

  • The perf kvm tool has gained a "stat" command for analysis of event data. Extensive bash completion support for perf (for both commands and event names) has also been added.

  • The new perf trace tool is meant to function like the strace utility, but with the ability to show events beyond system calls. This tool appears to be just getting started; the commit message reads "It gets stuck sometimes, but hey, it works sometimes too!"

  • Applications on the s/390 architecture can now make use of the System zEC12 hardware transactional memory feature.

  • Support for the Intel supervisor mode access prevention feature has been added.

  • The CIFS filesystem now has complete SMB2.1 support; SMB2 is still marked as experimental, but that's a step forward from its previous "broken" status.

  • The ARM subtree cleanup continues; the Tegra subarchitecture is now fully converted to the device tree mechanism. The unloved and unused Philips Nexperia PNX4008 subarchitecture support has been removed.

  • Extended attributes are now implemented on the control directories for control groups. This is a Systemd-inspired feature allowing ancillary information to be attached to control groups.

  • If non-hierarchical control group controllers are used with nested (hierarchical) control groups, a warning will now be emitted. The behavior of those controllers in that situation might change in the future; see this article for more information.

  • The Generic Routing Encapsulation (GRE) tunneling protocol is now supported over IPv6. Network address translation (NAT) is also now available for IPv6.

  • Server-side support for the TCP fast open protocol enhancement has been merged.

  • The kernel now has support for the VXLAN tunneling protocol. See Documentation/networking/vxlan.txt for more information.

  • The IMA integrity appraisal security extension has been merged.

  • Subject to a configuration option, the "Yama" security module can be automatically stacked regardless of which security module is the "primary" module.

  • A number of changes improving support for trusted platform module (TPM) devices have gone in. There is now support for TPM modules supporting the TCG TIS 1.2 specification and Infineon's I2C 0.20 specification. IBM virtual TPMs are now supported. The "physical presence interface" mechanism is also supported, making TPM administration easier.

  • New hardware support includes:

    • Boards and processors: Broadcom BCM2835 SoCs, Raspberry Pi boards, and Micrel KS8695 SoC-based boards.

    • Block: s/390 "storage class memory" devices, Calxeda Highbank SATA controllers, and QLogic ISP83xx iSCSI host adapters.

    • Input: Sony PS3 BD remote controls.

    • Miscellaneous: Fairchild FAN53555 regulators, Maxim 8907 voltage regulators, Freescale i.MX28 LRADC analog to digital converters (ADCs), Analog Devices AD7787, AD7788, AD7789, AD7790 and AD7791 SPI ADCs, Analog Devices AD5755/AD5755-1/AD5757/AD5735/AD5737 ADCs, TI LP8788 ADCs, Maxim MAX197 ADCs, Analog Devices ADT7410 temperature monitoring chips, Samsung GPIO/pinmux controllers, Nomadik DB8540 pin controllers, Freescale IMX35 pin controllers, Avionic Design N-bit GPIO expanders, Broadcom BCM2835 GPIO units, Freescale MXS SPI controllers, and NXP SC18IS602/603 SPI controllers.

    • Networking: Silicom Bypass network interface cards, Freescale XGMAC MDIO controllers, and Microchip MRF24J40 transceivers.

    • Serial: NXP SCCNXP serial ports, NXP LPC32XX high speed serial ports, Maxim MAX3108 UARTs, and Digi Realport remote serial devices.

    • USB: Broadcom BCM63xx peripheral controllers, Marvell USB 3.0 PHY controllers, ZTE USB to serial devices, and Cambridge Electronic Design 1401 USB devices (described as "whatever that is" in the Kconfig entry).

Changes visible to kernel developers include:

  • The regulator subsystem now supports a "bypass mode" wherein the input is connected directly to the output.

  • The handling of read-copy-update grace periods has been pushed into a set of kernel threads, allowing for better preemptability and reduced power consumption; the October 11 LWN Weekly Edition will include an article on this work. RCU has also seen work to allow user-mode execution to be seen as a sort of quiescent state; this is a necessary precondition to fully tickless execution.

  • There is a new "parking" facility for kernel threads. The primary purpose is to provide a lightweight mechanism to get these threads out of the way when CPU hotplug events are processed.

  • The new TIMER_IRQSAFE timer flag causes the timer function to be executed with interrupts off. It exists to make it possible to safely wait for (and cancel) timers from within interrupt handlers.

  • There is a new sensor framework for human input devices; it registers a multifunction device for each sensor hub and enumerates the sensors found attached to it. See Documentation/hid/hid-sensor.txt for details.

  • The firmware caching API has been merged. This subsystem will pull copies of potentially interesting device firmware into memory just prior to a system suspend, thus ensuring that the firmware will be available at resume time.

  • The feature-removal.txt file is now a removed feature. Linus zapped it, saying: "There is never any reason to add stuff to this idiotic file. Either something isn't getting used, and you should just remove it, or there is no excuse for removing it in the first place. Just stop the idiocy."

  • Initial multiplatform support for the ARM architecture has been merged. This is an important step toward the "single zImage" goal, where one kernel can run on a wide variety of ARM systems, but there is still a lot of work to be done before that goal can be reached.

  • The non-reentrant workqueues patch has been merged. There are also new mod_delayed_work() and mod_delayed_work_on() functions to modify the expiration time for delayed work items.

  • The user namespace conversion work continues, meaning that the newish kuid_t and kgid_t types are appearing in more kernel subsystems.

The 3.7 merge window can be expected to stay open until approximately October 14. That said, Linus has warned the community that he will be traveling during this time; he, along with your editor, will be at the Linux Foundation's Korea Linux Forum. If the travel interferes with the merging process — which hasn't been a problem in previous merge windows — this merge window may be extended to compensate.

Comments (8 posted)

Another LSM stacking approach

By Jake Edge
October 3, 2012

Anyone who follows Linux kernel security discussions has probably heard of the "LSM stacking issue". It is a perennial topic on the mailing lists and solutions have been proposed from time to time. The basic problem is that only one Linux Security Module (LSM) can be active in a running kernel, and that single slot is often occupied by one of the "monolithic" solutions (e.g. SELinux or AppArmor) supplied by distributions. That leaves some of the smaller or more special-purpose LSMs—or users who want to use multiple approaches—out in the cold.

Back in February 2011, David Howells proposed a stacking solution for LSMs. At the time, Casey Schaufler mentioned a solution he had been working on that would be posted in a "day or two". That prediction turns out to have been overly optimistic, but his solution has surfaced—more than a year-and-a-half later. He also discussed the patches in a lightning talk at the recently held Linux Security Summit.

There are three types of LSMs available in the kernel today and there are use cases for combining them in various ways. Administrators might want to add some AppArmor restrictions on top of the distribution-supplied SELinux configuration—or use SELinux-based sandboxes on a TOMOYO system. The two "labeled" LSMs, SELinux and Smack, require that files have extended attributes (xattrs) containing labels that are used for access decisions. The two "path-based" LSMs, AppArmor and TOMOYO, both base their access decisions on the paths used to access files in the system. The only other LSM currently available is Yama, which is something of a container for discretionary access control (DAC) enhancements.

Yama is the LSM that is perhaps most likely to be stacked. It adds some restrictions to the ptrace() attach operation; Ubuntu and ChromeOS use those restrictions, and other distributions are considering them as well. In fact, Yama developer Kees Cook has proposed making the LSM unconditionally stackable via the CONFIG_SECURITY_YAMA_STACKED kernel build option (which was merged for 3.7). Over the years, though, various other security ideas have been proposed and pointed in the direction of the LSM API, so other targeted LSMs may come about down the road. Making each separately stackable is less than ideal, so a more general solution is desirable. In addition, combining labeled and path-based solutions manually can't really be done sanely.

When Howells posted his solution, he explicitly disallowed combining the two labeled LSMs because of implementation difficulties (mainly with respect to the LSM-specific secid which is used by SELinux and Smack, but none of the others). There was also a belief that mixing SELinux and Smack (or AppArmor and TOMOYO for that matter) is not a particularly sought-after feature. But Schaufler thought that was an unnecessary restriction, one that he was trying to address in his solution.

As it turns out, Schaufler ended up at the same place. His proposal also defers stacking (or "composing") SELinux and Smack, noting that it "has proven quite a challenge". But he was able to get the other combinations working—at least to the extent that the kernel would boot without complaints in the logs. The Smack tests passed as well. Performance for Smack with AppArmor, TOMOYO, and Yama enabled is "within the noise", he said.

Schaufler's version ensures that the hooks for each enabled LSM are called, which differs from Howells's approach of short-circuiting the remaining hooks once one of them denied the access. Instead, Schaufler's patches call each LSM's hooks, remembering the last non-zero return (a denial or error of some sort) as the return value for the hook. His argument is that an LSM could reasonably expect to see—and possibly record information about—each access decision, even if the access has been denied by another LSM.

Much of the "guts" of the changes are described in the infrastructure patch, which is the largest of the five patches. The others make fairly modest (if pervasive) changes to SELinux, Smack, TOMOYO, and AppArmor to support stacking. As it turns out, Yama "required no change and gets in free". The changes to the individual LSMs are optional, as they can still be used (in a non-stackable way) without them.

Stacking is governed by the CONFIG_SECURITY_COMPOSER option. If that is not chosen, all of the existing LSMs function as they do today. If stacking is built in, the security= boot parameter can then be used to control which LSMs are enabled. For example, security=selinux,apparmor will enable those two. If nothing is specified on the boot command line, all of the LSMs built into the kernel will be enabled. The /proc/PID/attr/current interface has also been changed to report information from any of the active LSMs (only SELinux, Smack, and AppArmor actually use that interface today).

Existing kernels store pointers to the hooks implemented by an LSM in a struct security_operations called security_ops. Schaufler's patch replaces that with an array of security_operations pointers called composer_ops. That array is indexed based on the order that is assigned to each LSM as it is registered. The first entry (composer_ops[0]) is reserved for the Linux capabilities hooks. Those have been manually "stacked" into the LSMs for some time, so entries in composer_ops[0] get zeroed out if one of the other LSMs implements the hook (as the capabilities checks will be done there). If there is no entry in composer_ops[0], each of the hooks in the other entries in that array are called, as described above.

The security "blobs" (private storage for each LSM) are still managed by the LSMs, but because there are blob pointers sprinkled around various kernel data structures (e.g. inodes, files, sockets, keys, etc.), a "composer blob" is used. That blob contains pointers to each of the active LSM blobs, and new calls are used to get and set the blob pointers (e.g. lsm_get_inode() or lsm_set_sock()). Most of the changes for the individual LSMs are converting to use this new interface.

So far, most of the comments have been about implementation details; Schaufler addressed those in the second version of the patch set. Notably missing, at least so far, were the concerns raised in earlier discussions about strange interactions between stacked LSMs leading to vulnerabilities. In the absence of major complaints, one would guess that more testing will be done, including gathering some additional performance numbers, before the linux-kernel gauntlet is run. The rest of the kernel developers have heard about the need for stacking LSMs often enough that it seems likely that Schaufler's patches (or something derived from them) will eventually pass muster.

Comments (4 posted)

How 3.6 nearly broke PostgreSQL

By Jonathan Corbet
October 2, 2012
In mid-September, the 3.6 kernel appeared to be stabilizing nicely. Most of the known regressions had been fixed, the patch volume was dropping, and Linus was relatively happy. Then Nikolay Ulyanitsky showed up with a problem: the pgbench PostgreSQL benchmark ran 20% slower than under 3.5. The resulting discussion shows just how hard scalability can be on contemporary hardware and how hard scheduling can be in general.

Borislav Petkov was able to reproduce the problem; a dozen or so bisection iterations later he narrowed down the problem to this patch, which was duly reverted. There is just one little problem left: the offending patch was, itself, meant to improve scheduler performance. Reverting it fixed the PostgreSQL regression, but at the cost of losing an optimization that improves things for many (arguably most) other workloads. Naturally, that led to a search to figure out what the real problem was so that the optimization could be restored without harmful effects on PostgreSQL.

What went wrong

The kernel's scheduling domains mechanism exists to optimize scheduling decisions by modeling the costs of moving processes between CPUs. Migrating a process from one CPU to a hyperthreaded sibling is nearly free; cache is shared at all levels, so the moved process will not have to spend time repopulating cache with its working set. Moving to another CPU within the same physical package will cost more, but mid-level caches are still shared, so such a move is still much less expensive than moving to another package entirely. The current scheduling code thus tries to keep processes within the same package whenever possible, but it also tries to spread runnable processes across the package's CPUs to maximize throughput.

The problem that the offending patch (by Mike Galbraith) was trying to solve comes from the fact that the number of CPUs built into a single package has been growing over time. Not too long ago, examining every processor within a package in search of an idle CPU for a runnable process was a relatively quick affair. As the number of CPUs in a package increases, the cost of that search increases as well, to the point that it starts to look expensive. The current scheduler's behavior, Mike said at the time, could also result in processes bouncing around the package excessively. The result was less-than-optimal performance.

Mike's solution was to organize CPUs into pairs; each CPU gets one "buddy" CPU. When one CPU wakes a process and needs to find a processor for that process to run on, it examines only the buddy CPU. The process will be placed on either the original CPU or the buddy; the search will go no further than that even if there might be a more lightly loaded CPU elsewhere in the package. The cost of iterating over the entire package is eliminated, process bouncing is reduced, and things run faster. Meanwhile, the scheduler's load balancing code can still be relied upon to distribute the load across the available CPUs in the longer term. Mike reported significant improvements in tbench benchmark results with the patch, and it was quickly accepted for the 3.6 development cycle.
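The difference between the two selection policies can be shown with a toy model; everything here (CPU count, load accounting, function names) is invented for illustration and is not the scheduler's actual code:

```c
/* Toy model of wake-time CPU selection: full package scan versus the
 * buddy scheme.  All names and numbers are invented for illustration. */

#define NR_CPUS 8

static int buddy[NR_CPUS];	/* each CPU's partner */
static int nr_running[NR_CPUS];	/* runnable processes per CPU */

static void pair_buddies(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu += 2) {
		buddy[cpu] = cpu + 1;
		buddy[cpu + 1] = cpu;
	}
}

/* Pre-patch behavior: examine every CPU in the package in search of
 * the least-loaded one.  O(NR_CPUS) work on every wakeup. */
static int select_cpu_scan(void)
{
	int best = 0;

	for (int cpu = 1; cpu < NR_CPUS; cpu++)
		if (nr_running[cpu] < nr_running[best])
			best = cpu;
	return best;
}

/* Buddy behavior: consider only the waking CPU and its buddy.  O(1),
 * but an idle CPU elsewhere in the package will be ignored. */
static int select_cpu_buddy(int waker)
{
	int b = buddy[waker];

	return nr_running[b] < nr_running[waker] ? b : waker;
}
```

The trade-off is visible directly: the full scan finds a completely idle CPU when one exists, while the buddy scheme may place the woken process on a CPU that is merely less loaded than the waker's, trusting longer-term load balancing to even things out.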

So what is different about PostgreSQL that caused it to slow down in response to this change? It seems to come down to the design of the PostgreSQL server and the fact that it does a certain amount of its own scheduling with user-space spinlocks. Carrying its own spinlock implementation does evidently yield performance benefits for the PostgreSQL project, but it also makes the system more susceptible to problems resulting from scheduler changes in the underlying system. In this case, restricting the set of CPUs on which a newly-woken process can run increases the chance that it will end up preempting another PostgreSQL process. If the new process needs a lock held by the preempted process, it will end up waiting until the preempted process manages to run again, slowing things down. Possibly even worse is that preempting the PostgreSQL dispatcher process — also more likely with Mike's patch — can slow the flow of tasks to all PostgreSQL worker processes; that, too, will hurt performance.
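The hazard with user-space spinlocks is easy to see in a minimal implementation (a generic sketch, not PostgreSQL's actual code): once the lock holder is preempted, every waiter can only burn CPU time, or yield and hope the holder is scheduled soon.

```c
#include <stdatomic.h>
#include <sched.h>

/* Minimal user-space spinlock of the general kind described above.
 * This is a sketch, not PostgreSQL's implementation.  If the kernel
 * preempts the lock holder, waiters spin uselessly until the holder
 * runs again and can release the lock. */

typedef struct {
	atomic_flag locked;
} uspinlock;

#define USPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void uspin_lock(uspinlock *l)
{
	int spins = 0;

	while (atomic_flag_test_and_set_explicit(&l->locked,
						 memory_order_acquire)) {
		/* The holder may have been preempted; after spinning
		 * for a while, give up the CPU so it can finish. */
		if (++spins > 1000) {
			sched_yield();
			spins = 0;
		}
	}
}

static void uspin_unlock(uspinlock *l)
{
	atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

The kernel knows nothing about this lock, so its preemption decisions cannot take the lock holder's special status into account — which is exactly why a change in wake-time CPU placement can turn into a 20% benchmark regression.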

Making things better

What is needed is a way to gain the benefits of Mike's patch without making things worse for PostgreSQL-style loads. One possibility, suggested by Linus, is to try to reduce the cost of searching for an idle CPU instead of eliminating the search outright. It appears that there is some low-hanging fruit in this area, but it is not at all clear that optimizing the search, by itself, will solve the entire problem. Mike's patch eliminates that search cost, but it also reduces movement of processes around the package; a fix that only addresses the first part risks falling short in the end.

Another possibility is to simply increase the scheduling granularity, essentially giving longer time slices to running processes. That will reduce the number of preemptions, making it less likely that PostgreSQL processes will step on each other's toes. Increasing the granularity does, indeed, make things better for the pgbench load. There may be some benefit to be had from messing with the granularity, but it is not without its risks. In particular, increasing the granularity could have an adverse effect on desktop interactivity; there is no shortage of Linux users who would consider that to be a bad trade.

Yet another possibility is to somehow teach the scheduler to recognize processes — like the PostgreSQL dispatcher — that should not be preempted by related processes if it can be avoided. Ingo Molnar suggested investigating this idea:

Add a kernel solution to somehow identify 'central' processes and bias them. Xorg is a similar kind of process, so it would help other workloads as well. That way lie dragons, but might be worth an attempt or two.

The problem, of course, is the dragons. The O(1) scheduler, used by Linux until the Completely Fair Scheduler (CFS) was merged for 2.6.23, had, over time, accumulated no end of heuristics and hacks designed to provide the "right" kind of scheduling for various types of workloads. All these tweaks complicated the scheduler code considerably, making it fragile and difficult to work with — and they didn't even work much of the time. This complexity inspired Con Kolivas's "staircase deadline scheduler" as a much simpler solution to the problem; that work led to the writing (and merging) of CFS.

Naturally, CFS has lost a fair amount of its simplicity since it was merged; contact with the real world tends to do that to scheduling algorithms. But it is still relatively free of workload-specific heuristics. Opening the door to more of them now risks driving the scheduler in a less maintainable, more brittle direction where nothing can be done without a significant chance of creating problems in unpredictable places. It seems unlikely that the development community wants to go there.

A potentially simpler alternative is to let the application itself tell the scheduler that one of its processes is special. PostgreSQL could request that its dispatcher be allowed to run at the expense of one of its own workers, even if the normal scheduling algorithm would dictate otherwise. That approach reduces complexity, but it does so by pushing some of the cost into applications. Getting application developers to accept that cost can be a challenge, especially if they are interested in supporting operating systems other than Linux. As a general rule, it is far better if things just work without the need for manual intervention of this type.

In other words, nobody yet knows how this problem will be solved. There are several interesting ideas to pursue, but none of them looks like an obvious solution. Further research is clearly called for.

One good point in all of this is that the problem was found before the final 3.6 kernel shipped. Performance regressions have a way of hiding, sometimes for years, before they emerge to bite some important workload. Eventually, tools like Linsched may help to find more of these problems early, but we will always be dependent on users who will perform this kind of testing with workloads that matter to them. Without Nikolay's 3.6-rc testing, PostgreSQL users might have had an unpleasant surprise when this kernel was released.

Comments (25 posted)

Page editor: Jonathan Corbet


Interview: Fedora's new cloud manager

By Jonathan Corbet
October 3, 2012
Longtime Fedora community member Matthew Miller recently announced that he had taken a position with Red Hat to "work on bringing some sense to the whole 'Cloud' thing" within Fedora. We asked Matthew if he would be willing to answer a few questions about just what that means; as can be seen below, he was more than willing to do so. Read on for a detailed discussion of his view of cloud computing and how Fedora fits into the cloud picture.

LWN: First of all, what, in your mind, does the term "cloud" really refer to?

MM: Most of my conversations these days seem to start with this. :)

Clearly, "Cloud" is both a marketing term and a hot business buzzword, neither of which lends itself to clarity. However, there are some actual significant changes in computing represented by the word.

On the business side, there's a trend towards centralization of resources — sometimes described as a big, constant pendulum swing, with "cloud data center" simply standing in for "mainframe" this time around — but there are actually interesting new developments which make cloud computing compelling. It may be that I've been in a university setting too long, but I like the NIST definition [PDF] which describes the essential cloud characteristics. Or, there's the "OSSM" definition [video], which goes like this:

  • On-demand: the resources are already set up and ready to be deployed.

  • Self-service: you choose what you want, when you want it.

  • Scalable: you also choose how much you want, and ramp up if necessary.

  • Measurable: metering and reporting are integrated so you know you are getting what you pay for.

With this option, if you've got a new startup, putting together your own data center is suddenly crazy. If you're an agile developer, on-demand self-service is very appealing. And for larger enterprises, it's amazing to have built-in scalability and measurability.

Small, nimble companies are already benefiting from cloud computing; bigger companies are mostly dipping their toes in. There's a lot of interest in on-premises private cloud, and especially in transparent hybrid cloud (where local and public cloud infrastructure are mixed together). All of the technology is still in flux and there are plenty of unresolved questions, but it's certainly a matter of how-much-how-quickly, not of if-at-all.

On the user and client-platform side, the important trend is mobile and tablet computing, and the movement away from general-purpose computing devices to a locked-down "app store" model — even for desktop systems. Previous attempts at making restricted computing devices always failed in the market, because while most people only do a few things with their computers, there's a long tail of different things each person wants, and any narrow selection of most-common tools was never good enough.

Developer-friendly app marketplaces and distribution channels which run right on the device get around that — even when the platform is locked down, consumer convenience is served by "there's an app for that". Consumers get access to tools, developers get a market, and vendors get lock-in and control. In proprietary operating systems, we're going to see more and more of that.

There's one key thing the NIST cloud definition covers that isn't in the OSSM one (to be pronounced "awesome", by the way). That's "broad network access", and it means you can get to your data from anywhere, from any client platform. It's the lure of cloud computing from an end-user point of view — don't worry if you lose your phone, because it's all safe in the cloud. In fact, don't worry if your house burns down, because your family photos are all safely floating out there as well.

At least, don't worry as long as there are sufficient protections in place! A number of people are saying that open source on the desktop (or any client device) doesn't even matter — that the open web is the new front, and that the battle is about making sure people have control over and access to their own remotely-stored data.

I think that's important, and I'm glad people are fighting for it, but I disagree that it's sufficient. It's not hard to imagine a future where the normal tech platform most people buy isn't able to run arbitrary code — look at the boot restrictions for Windows 8 ARM systems. Historically, the mass-market platform has also been the development platform, and that's done wonderful, magical things for the democratization of invention. But in a restricted future, the flexible development platform is a special niche product — more expensive and maybe even not available to everyone.

So, a key role for free and open source Linux distributions is to provide a viable alternative to the locked-in "consumption device" dystopia. This includes providing the convenience and functionality of cloud computing that proprietary platforms offer, whether it's through open web and open cloud initiatives, or through building in local cloud and peer-to-peer cloudlike services.

LWN: How might Fedora fit into that picture?

MM: Since I'm just stepping into this job, I hope you'll entertain a high-level answer. Although Fedora's primary product is, of course, our Linux-based operating system, our vision is much wider. The Fedora Project creates a world where

  • free culture is welcoming and widespread,
  • collaboration is commonplace, and
  • people control their content and devices.

We want these things in cloud computing as well, and over the next few months I'll be working with other people in Fedora to identify more specific outcomes to work toward in cloud computing, and from there we'll develop programs and activities focused on those outcomes.

We have a strong base in the Fedora distribution, a great worldwide user and developer community, and excellent infrastructure. And, we have a lot of cloud-related work in progress.

LWN: Where are people using Fedora in cloudy settings now? How would you like to see that change in the future?

MM: There is a lot of Fedora running in Amazon EC2. For technical reasons, Amazon users were stuck at Fedora 8 for a very long time, and that's still causing a huge spike in our statistics. We have up-to-date images now, and making those more prominent is one of my early tasks. The same goes for guest images at other major cloud infrastructure providers as well.

We're also a huge center for cloud infrastructure software packaging and development. OpenShift Origin and OpenStack Folsom are two of the big features for the upcoming Fedora 18 release, along with Eucalyptus, OwnCloud, and Heat (a cloud orchestration API for OpenStack). We've also got work in progress on OpenNebula and CloudStack.

Now, I don't think anyone has delusions that a great number of people will run production infrastructure on top of Fedora with these packages. The main two use cases are: a) developers of cloud software (including, but definitely not limited to, Red Hat) working to make sure the software is ready for future use in enterprise Linux products, and b) cloud early-adopters who are following that development. Those constituent groups are going to remain crucial to Fedora in general and Fedora cloud in specific, and I think there are some gentle adjustments we can make that will help encourage those relationships even more.

Over the years we've lost a lot of our sysadmin and server-side Linux users, as they've felt somewhat left behind by all the energy and development around desktop. Sometimes it's seemed like the only voice available to that group has been a negative one — "Hey, slow down!" — which is a frustrating side of the conversation to be on, when really we all want to work together to make better software and a better world (back to that vision statement). When you're cast in the stop-energy role, it's hard to feel listened to, let alone constructive.

So, the rise in cloud and all the exciting new development work gives that group a positive voice. That's good for Red Hat Enterprise Linux as a downstream project. It's good for developers of cloud software because we'll make sure their code works in the distribution. And it's good for Fedora, because this is a large group of technology professionals with years of experience and wisdom.

I especially want to expand what we can do for the small, agile organizations working at the leading edge of cloud technology. For this, the Fedora focus isn't so much on the cloud infrastructure software as on the cloud guest images.

As I mentioned, we've got an up-to-date EC2 image, and we're working on making that even more lightweight and on offering more variants for different use cases. We also support appliance creation with BoxGrinder (a Fedora 15 feature), and JEOS ["just enough operating system"] builds with Oz. Or if you're looking for something more complete — for example, to make a virtualized developer's desktop — it's easy to make an image from our Live CD ISO (although we need to make that several steps easier).

Producing images isn't enough by itself, though. We also need to increase our engagement with the DevOps community. At this point, that means mostly listening and being present, more than direct evangelism. Fedora has always aimed at users who might become contributors, and the DevOps world is a natural fit, with good alignment with our values of "freedom, friends, features, first".

On a technical level, we see the continuous struggle between having the most up-to-date versions of specific software (Ruby gems, for example), while having a base that you just don't have to worry about. That's a hard problem, but in order to be relevant to actual users, we need to address it, whether it's through Software Collections, with something related to OpenShift cartridges, or in some other way.

We're also looking into how we can make better use of cloud technology for Fedora developers. The Fedora Infrastructure Team has deployed and is evaluating Eucalyptus and OpenStack for package mass rebuilds, for automated (and non-automated) testing, and eventually for more.

The Fedora features process has been great for the distribution as a whole and is successful overall in helping us look at development in a goal-oriented way. I want to make it easier for Fedora cloud contributors – both new and already involved – to participate in this. As with any process in software engineering, the "meta-work" involved is often very painful to the actual implementers, sometimes simply because of the context switches required, not because the actual paperwork is particularly onerous. That wastes valuable developer time and produces less than ideal process results. We need to have people – like, for example, me – involved in the various features who can both keep up with the technical work and keep meta-work from being a burden to developers, while still reaping the benefits of good process.

For end-user and desktop cloud services, we haven't yet explored all of the possibilities. We can make it easier for moderately-savvy users to set up their own private infrastructure using SparkleShare or OwnCloud as alternatives to proprietary hosted filesharing. Our OpenStack packages will include Puppet modules making it trivial to get a fully-functional private cloud out of the box, and we can extend that to other parts of the distribution as well. We'll also continue to look at what more we can do to enable users to connect themselves to open cloud offerings in ways that align with the project mission. It'd be nice to provide a push-button process where open source web applications can be deployed either locally or pushed to a cloud provider — a tie in to Red Hat's OpenShift, say, or to any open cloud provider.

LWN: Cloud providers tend to try to project an image of solidness and stability. How well does that fit with the relatively bleeding-edge nature of the Fedora distribution?

MM: To be blunt, those companies should use Red Hat Enterprise Linux in that aspect of their operation. Since I work for them now, my suggestion may seem a little slanted, but it's really the basic fact: solidness and stability are the product.

But, those cloud providers should use Fedora for their testbeds, precisely because the field itself is on the bleeding edge and we can follow it more quickly. For example, QEMU 1.2 was pulled into Fedora right after it was released, and we track the upstream kernel closely, which means we get the latest hardware and low-level virtualization support. Also, Fedora is upstream for RHEL and is home to a lot of the exciting development activity done by Red Hat. If you have an open source / free software cloud project that you want to work in that ecosystem, Fedora is the natural path.

Meanwhile, those cloud providers should be interested in supporting Fedora as a guest OS, because it provides value to their users. It's important to stress that while Fedora is fast-moving, every actual release is production-ready and stable. I won't claim perfection, but we have an excellent quality engineering team that takes this very seriously. While we develop on a six-month development cycle, we make sure it works before we release.

LWN: Do you see Fedora's relatively short support cycle as being a problem for cloud deployments? How might any such problems be mitigated?

MM: We're looking for use cases for Fedora where Features and First are more important than longevity. The rapid cycle can be hard to keep up with, but I don't think slowing things down or investing significantly in an extended lifecycle is the right approach for us. Moving quickly may put us in more of a niche, but by focusing on the right targets we can make Fedora the best choice for those specific areas.

There's always some background talk about a rolling release, but I like the cadence of actual releases, with planning and a features process, and the quality assurance work and release engineering would be hard to replicate in the rolling model. We have ongoing work on in-place upgrades using yum, and further polish to that will reduce the pain of upgrades. The model I mentioned earlier where OpenStack configuration is shipped as a Puppet module is also a good direction, since that helps abstract away the specifics of the underlying system.

Right now, we support upgrades going two releases back, which fits the Fedora lifecycle, but we also want to make sure people who miss that mark aren't entirely left behind. One specific thing I'm working on is putting together tools and resources for the many people still running on Fedora 8 in Amazon EC2.

We've also had a continuous discussion in Fedora about the fire hose of updates during a release's lifetime. Bringing that under control is important for real-world use, but we don't want to keep users from getting new features quickly or prevent developers and packagers from getting their code out. I'm very much in support of the idea of bundling non-security updates into monthly sets which can be tested and installed in a more controlled way. I also strongly prefer to see development targeted at Rawhide and future releases, with a focus on stability for the thirteen months of a release's maintenance cycle. Or, beyond that, a focus on stability for the first twelve months and a focus on smooth upgrades for the final one.

LWN: What compelling features does Fedora bring to the cloud setting now? What kind of things do you think need to be done to make Fedora more interesting in this area?

MM: I've talked about some of the cloud software we currently have and are working on. We also offer our usual large collection of packaged server software and development tools, again almost always in up-to-date versions with the latest features. We also have great infrastructure for collaboration, and for building and securely distributing packages across an excellent global mirror network.

But, beyond software and beyond technology, we bring an important set of values and a vision for a free, open, and collaborative future which is just as vital in a cloud computing future as it is in the older local-computing model.

What needs to be done to make Fedora more interesting in the cloud setting? In addition to what I've already talked about, I think most crucially we need to increase community involvement and build up our user and contributor base. That's part of my job, too, and I want to help Fedora respond to what the community needs. I'm working hard to listen, discuss, read, and absorb as much as possible. We need input from everyone who wants to make Fedora better, and who is interested in extending the benefits of free and open source software to the cloud.

LWN: "Cloud computing" often seems like a return to centralized computing with very little end-user freedom or control. That vision seems mildly incompatible with the first of the posted Fedora "Foundations." How can the two be reconciled so that Fedora brings more freedom to cloud computing?

MM: Well, first, from the developer and business perspective – even including academic institutions at large scale – the OSSM aspects of cloud can mean more freedom and control. There's certainly a risk that proprietary, locked-in cloud offerings will dominate, but open cloud has such advantages and such energy behind it from so many different quarters that I'm confident we'll ultimately win in this area.

But, at the user level, when we're talking about software as a service and about protections for privacy with remote data, it's a legitimate worry. Fedora needs to offer better alternatives: connections to services which put privacy and freedom first, the ability to easily stand up cloud services under your own control, and, finally, a way to work which doesn't have to be cloud-dependent.

LWN: What sort of tools and/or superpowers has Red Hat given you to get this job done?

MM: Well, the Fedora Design Team made me a very nice logo, and I'm hoping to get it emblazoned on a Fedora Blue cape....

Seriously, though, the primary tool is connections with people – in the Fedora Project, at Red Hat, and in the open cloud community in general. I'm very accessible by email, including in the Fedora Cloud SIG and development mailing lists, and I'm trying to get my 1990s-era IRC habits back up to snuff (user mattdm on freenode). I'll also be visiting a lot of conferences, and I hope to see anyone who has read this far, and those of you I can't meet in person I hope to talk with in other ways.

If you're interested in making Fedora better through cloud computing, or in making the cloud better through Fedora, please join us in the Cloud SIG. There's no experience necessary and no formal join process — just jump into the mailing list and we'll get started.

Many thanks to Matthew for taking the time to answer our questions in such detail. We are most interested to see where this effort will go.

Comments (2 posted)

Brief items

Distribution quote of the week

Enthusiasm is a shockingly rare resource, anywhere. The reason enthusiasm is a rare resource is because it’s fragile; I’ve seen potentially-great ideas abandoned because the initial response was a liturgy of reasons why it won’t work. It’s not the criticism which kills, it’s the scorn.

So when someone emails or approaches you with something they’re excited about, please reply thinking “What can I do to help?”. Often I limit my commitment to an encouraging and thoughtful response: a perfectly acceptable minimum. You might want to go further and offer pointers or advice, but take care to fan that delicate flutter of enthusiasm without extinguishing it. Other forces will usually take care of that soon enough, but let it not be you.

-- Rusty Russell

Comments (7 posted)

Slackware 14 released

The Slackware 14 release is available. "We are sure you'll enjoy the many improvements. We've done our best to bring the latest technology to Slackware while still maintaining the stability and security that you have come to expect. Slackware is well known for its simplicity and the fact that we try to bring software to you in the condition that the authors intended."

Update: Slackware ARM 14.0 is also available.

Comments (31 posted)

Open webOS 1.0 Edition

Open webOS 1.0 has been released. "We now have an OpenEmbedded build that allows a full webOS experience running inside an OE emulator. We have added core applications — email & browser — while continuing to support the desktop build environment. The 1.0 release also brings support for Enyo2. You can now take apps built on one of the best cross-platform JavaScript frameworks and easily run these same apps on Open webOS or other platforms."

Comments (none posted)

openSUSE on ARM Release Candidate 1

The first release candidate for openSUSE 12.2 on ARM is available. "This RC1 release is focused on ARMv7 which encompasses the Cortex-A processor profile from the Cambridge, UK based chip designer. Due to the current nature of the existing ARM landscape it doesn’t mean that all devices that use a v7 SoC are supported though. As such openSUSE took the engineering decision to focus on a subset of devices to minimize the time it takes to bring the distribution up on the architecture. The supported SoC vendors for this release are Texas Instruments’ OMAP3 & OMAP4 and Freescale i.MX51; the supported devices running with these SoCs are the Beagleboard, Beagleboard-xM, Pandaboard, Pandaboard-ES and the EfikaMX smarttop/smartbook. There is also an image for the VersatileExpress which is suitable for use in Qemu as well as a generic root file system tarball that users and developers may use to help bring up unsupported devices."

Comments (none posted)

GNU Linux-libre 3.6-gnu sources are now available

GNU Linux-libre is a stripped down kernel (with no binary blobs and other non-Free components) suitable for use with Free Software Foundation-approved GNU/Linux distributions. The 3.6 version is available. "The GNU Linux-libre project takes a minimal-changes approach to cleaning up Linux, making no effort to substitute components that need to be removed with functionally equivalent Free ones."

Full Story (comments: none)

Updated Debian 6.0: 6.0.6 released

The Debian project has announced the sixth update of its stable distribution, Debian 6.0 aka Squeeze. "This update mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems. Security advisories were already published separately and are referenced where available."

Full Story (comments: none)

Distribution News

Debian GNU/Linux

Debian relicenses its logo

[Debian open use logo] The Debian "open use" logo has long been distributed under a non-free license, a fact that has not sat well with a lot of Debian developers. Project leader Stefano Zacchiroli has just announced that the logo is now dual-licensed under the LGPL and the Creative Commons attribution-sharealike 3.0 license. Note that the "official" logo remains restricted.

Full Story (comments: 9)

Debian's Google Summer of Code 2012 wrap-up

Ana Guerrero presents a summary of Debian's Google Summer of Code projects. "Debian participated this year with 15 projects in the Google Summer of Code, with 12 projects finishing successfully and with some of the students greatly exceeding our expectations. We would like to thank everybody involved, mentors, co-mentors and students for all the work they put this summer. We would like specially invite our students to continue being involved in making Debian better, whether by continued work on their summer project or by working in other areas of Debian. Please feel welcomed!"

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

CyanogenMod is getting its own OTA update manager (The H)

The H looks at a new feature in CyanogenMod 10. "With the new feature, CyanogenMod 10 (which is based upon Android 4.1.x "Jelly Bean") will be able to receive direct over-the-air (OTA) updates for the first time, making the update experience more akin to Google's vanilla Android version. The new updater will download ROM images directly from the project's official distribution servers. While it lacks the flexibility of ROM Manager, the new updater will be easier to use for inexperienced users as it automatically checks for updates (a feature that is only available in the paid-for version of ROM Manager) and only shows updates for the version of CyanogenMod that is actually installed. For users who want more flexibility, ROM Manager is still available as a separate application."

Comments (none posted)

Yocto-Compliant Enea Linux Reaches Version 2.0

A quick look at Enea Linux 2.0, an embedded distribution based on Yocto. "Enea's Light-Weight Runtime Threads (LWRT) technology, for instance, offers improved real-time characteristics in Linux user-space, delivering scheduling, message passing, and resource management functionality. The result is that applications can run with improved determinism and minimal overhead, thus addressing some commonly held concerns about traditional Linux solutions."

Comments (1 posted)

Page editor: Rebecca Sobol


XDC2012: OpenGL futures

By Michael Kerrisk
October 3, 2012
2012 X.Org Developers' Conference

Ian Romanick works on Mesa, an open-source implementation of the OpenGL specification. His presentation on the final day of the 2012 X.Org Developers' Conference looked at what he hoped would be the future of the OpenGL interfaces on Linux. His talk was broken into three broad areas: the current status of the OpenGL interfaces, where they should go in the future, and how to get to that future.

The current OpenGL ABI was defined in 2000. It consists of a number of pieces. The first of these is libGL, which implements three components: OpenGL 1.2, GLX up to version 1.2 (the current version of GLX is 1.4), and the ARB multi-texture extensions. Ian highlighted that libGL on its own is not sufficient for any useful applications these days. The remaining pieces of the OpenGL ABI are libEGL, and two separate libraries, libGLES_CM and libGLESv2, for versions 1.x and 2.0 of OpenGL ES.

There are many problems with the current situation. Applications that want to use graphics components beyond OpenGL 1.2 have to "jump through hoops." It's even more work to use GLES with GLX, or to use desktop OpenGL with EGL. The implementation of indirect rendering, a feature that allows OpenGL commands to be encapsulated in the X protocol stream and sent over the wire to the X server, is "completely fail": it performs poorly and supports OpenGL up to only version 1.4—or 1.5 with a lot of effort. The specification requires indirect rendering to be supported, but the number of legitimate use cases is quite small. And the presence of this rarely used feature sometimes creates problems when applications accidentally trigger indirect rendering and force OpenGL back to version 1.4, leading to user "rage" and bug reports.

Ian then went through the proposed solution. The first step is to split libGL, so that the OpenGL and GLX components are separated into different libraries, called (say) libOpenGL and libGLX. Andy Ritger has sent out a detailed proposal to the mesa-dev mailing list describing how the split could be accomplished. Splitting the libraries will allow applications to mix and match components as needed, so that, for example, GLES and GLX can be easily used together by linking with the right libraries. Using both OpenGL and EGL together would become similarly straightforward. To maintain backward compatibility for old binaries that look for libGL, it would still be necessary to provide a legacy version of libGL that glues libOpenGL and libGLX together.

Among the problems to be solved during the split are how to version the libOpenGL library and "get away from at least some of the GetProcAddress madness." That "madness" exists because the current ABI forces some applications to make calls to various "GetProcAddress" APIs (similar in concept to the GNU dynamic linker's dlsym() API) in order to obtain the addresses of multiple functions in the various libraries that constitute the OpenGL ABI. How the libOpenGL library would be versioned is an open question. Ian noted that the possibilities included ELF library versioning or embedding the version number in the library name, as is done with GLES. He also speculated about whether it would be possible to bump up the minimum OpenGL version supported by the ABI. The current implementation is required to support OpenGL versions as far back as 1.2. However, OpenGL 1.2 is now so old that it is "useless", though Ian still sees occasional bug reports for version 1.3.
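The kind of runtime symbol lookup Ian was referring to can be illustrated with a short, hypothetical Python sketch using ctypes: a function address is resolved from a shared library by name at run time, conceptually similar to what a GetProcAddress-style API does for OpenGL entry points. The C math library stands in for an OpenGL driver library here; nothing in this sketch is OpenGL-specific.

```python
# Sketch: resolving a function from a shared library by name at run time,
# conceptually similar to GetProcAddress-style lookup in the OpenGL ABI.
# The C math library is a stand-in for libGL; this is only an analogy.
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m")  # e.g. "libm.so.6" on Linux
libm = ctypes.CDLL(libm_path)

# Look up the symbol by name, then declare its C signature before calling.
cos = libm.cos
cos.restype = ctypes.c_double
cos.argtypes = [ctypes.c_double]

print(cos(0.0))  # the dynamically resolved function is now callable
```

An application using extensions beyond the guaranteed ABI must do this dance for every such entry point, which is part of the "madness" the proposed library split aims to reduce.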

Once the library is split, Ian would like to see GLX deprecated. In addition to the problems caused by indirect rendering, adding new GLX extensions is painful, because support must be added in both the client and the server. This can create problems in the cases where support is not added simultaneously on both sides: the client may end up sending unsupported protocol requests to the server and "they die in a fire." One recent fire-starter was the GLX_ARB_create_context feature: support appeared in the X server only in September, more than a year after client-side support was added to GLX. By contrast, EGL does not have this problem because support needs to be added only on the client side. In other words, getting rid of GLX will allow new features to be shipped to users much more quickly.

A prerequisite for deprecating GLX is to have Linux distributors ship EGL by default. Most distributions do provide EGL, but Ian supposed that it is not generally included in the default install. However, Martin Gräßlin said that KDE started optionally depending on EGL about two years ago, so it is now part of the default install in most distributions. Later, Ian noted that encouraging the move to EGL may require the creation of a GLX-to-EGL porting guide; while there is independent documentation for both GLX and EGL, there seems to be none that explains how to port code from one to the other. A lot of the required source code changes can be accomplished with some simple scripting, but there are also a few deeper semantic differences as well as a few GLX features that don't have direct counterparts in EGL.

Another important step is to make OpenGL ES part of the OpenGL ABI. Bart Massey's XDC2012 presentation bemoaned the fact that developers are not making applications for X. Ian said that the reason they are not is that they're too busy making applications for mobile platforms. So, by enabling the OpenGL ES ABI that every developer uses on mobile platforms, it becomes possible for developers to use X as a test bed for mobile applications; it also becomes possible to port applications from Android and iOS to X.

One final step would be to update the loader/driver interface. This interface defines the way that libGL or libEGL talks to the client-side driver that it loads, and the way that the GLX module in the X server talks to the driver to load and route GL calls to it. This will probably be the hardest step, and it may take some time to resolve the details. As a side note, Ian pointed out that if indirect rendering is dropped, it will probably make the task quite a bit easier, because it won't be necessary to support the loader/driver interface inside the X server.

Ian's presentation was followed by some audience discussion of various topics. There were some questions about compatibility with old applications. Ian thinks that compatibility requirements mean that it will probably be necessary to ship a legacy version of libGL for the indefinite future. There was some discussion on how to handle multiple versions of libOpenGL in the future. Some audience members seemed unclear on what options were available, but others were confident that ELF symbol versioning, as used in the GNU C library, would be sufficient. Later, Chad Versace expressed concerns about ensuring that any proposed solution also worked when using the Clang compiler and the gold linker. Ian noted that there will need to be some more investigation of the requirements and build environments before any final decisions are made.

Bart Massey expressed concern that indirect rendering seemed to be going away with no replacement in sight. He noted that he'd had students who had been happy users of indirect rendering for use in certain environments with limited hardware. Ian suggested that a VNC "send the pixels across the wire" type of solution might be the way to go. Later, Eric Anholt suggested that the loader could detect that it is communicating over the wire with the X server, open the driver as normal, and then transmit images over the wire with PutImage, with the application being more or less unaware of this happening.

There are still many details to be resolved regarding the proposed reworking of the OpenGL ABI. However, there seemed to be near-unanimous agreement that the proposal described by Ian was the right direction, and it seems likely that, once some design details have been resolved, the work will commence quite soon.

The X.Org wiki has a pointer to the video of this presentation.

Comments (27 posted)

Brief items

Quotes of the week

That is not how open source works, you need to do 90% of the work upfront, people only join when you have something useful.
Miguel de Icaza

When lots of people who clearly aren't complete idiots tell you something happens to them, it's probably best just to accept that it does, because arguing that you can't possibly see how it could possibly happen to them is only going to make you look churlish.
Adam Williamson

Comments (3 posted)

wlterm: the native Wayland terminal emulator

Any self-respecting graphical display system must naturally offer a proper terminal emulator. Wayland now has one in the form of "wlterm," which is meant to be both a fully capable terminal emulator and a development tool. "We need more independent clients written from scratch that try to find bugs in the wayland-client API. If we use toytoolkit all the time, we will probably not find them. wlterm draws its own decoration and does not depend on any demo-code from the weston repository. So its nice to check whether new weston features are working and how they behave."

Full Story (comments: 59)

Calibre 0.9.0 released

Version 0.9.0 of the Calibre electronic book management application is out. There is a long list of improvements here, including a better book viewer, much better Android support (including, at last, an implementation of the MTP protocol), lots of conversion engine enhancements, and more.

Comments (none posted)

Tizen 2.0 Alpha SDK and Source Code release

The Tizen project has unveiled the first release on the road to its next major release, 2.0. The 2.0 alpha includes both source for Tizen itself and a new build of the SDK. Additions include HTML5 API coverage and a "platform SDK" based on OBS. It is still an alpha, however, and the announcement notes that "there are additional components that we plan to add in the coming weeks, and we will continue to fix bugs and add additional features."

Comments (2 posted)

Mango open movie project "Tears of Steel" released

The Blender Foundation has released "Tears of Steel," the short film produced during its Mango open movie project. As with previous open movie efforts, the production process was used to develop and implement new functionality for Blender — this iteration focused on the visual effects pipeline for compositing with live action. The short is available via YouTube or direct download now (as are source files); DVDs are still to come.

Comments (7 posted)

Python 3.3.0 released

The Python 3.3.0 release is out. It adds a new yield from syntax for subgenerators, an improved string representation, a reworked module importer, and a great many library improvements; see the "what's new in 3.3" document for details.
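The new delegation syntax lets a generator hand part of its operation off to a subgenerator, with yielded values and the subgenerator's return value flowing through transparently. A minimal sketch (the function names are purely illustrative):

```python
# Minimal illustration of Python 3.3's "yield from" delegation (PEP 380).
def inner():
    yield 1
    yield 2
    return "done"    # in 3.3, a generator's return value becomes the
                     # value of the "yield from" expression in the caller

def outer():
    result = yield from inner()   # delegate iteration to the subgenerator
    yield result

print(list(outer()))  # [1, 2, 'done']
```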

Full Story (comments: 8)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Joomla 3.0: Major version jump for the open source CMS (The H)

The H takes a look at the recent 3.0 release of the Joomla web content management system. Changes include a new administrative interface, use of Twitter's Bootstrap framework, and significant changes under the hood. "Thanks to jQuery, Joomla now supports drag and drop, finally allowing items in the backend interface to be sorted using the mouse. The Mootools JavaScript framework is still included for backwards compatibility, but it is scheduled to be removed from Joomla eventually."

Comments (none posted)

An Interview with Brian Kernighan (InformIT)

InformIT has an interview with Brian Kernighan, mostly about the C language. "One could argue that Unix's influence on operating systems is easier to assess: aside from Windows in its many variants, most operating systems today are Unix systems. If one counts cellphones, there may well be more *nix systems running than anything else."

Comments (27 posted)

Page editor: Nathan Willis


Brief items

The Document Foundation celebrates its second anniversary

The Document Foundation celebrated its second anniversary. The project was announced September 28, 2010. "During the last 12 months, the foundation was legally established in Berlin, the Board of Directors and the Membership Committee were elected by TDF members, where membership is based on meritocracy and not on invitation, Intel became a supporter, and LibreOffice 3.5 and 3.6 families were announced. In addition, TDF has shown the prototypes of a cloud and a tablet version of LibreOffice, which will be available sometime in late 2013 or early 2014."

The Foundation has also started a fundraising campaign. "'So far, volunteers have provided most of the work necessary to sustain the project, but after two years it is mandatory to start thinking really big', says Italo Vignoli, the dean of the Board of Directors. 'We had a dream, and now that thousands around the world made that dream come true we want to get to the major league of software development and advocacy. By donating during the fourth quarter of 2012, donors will define the budget we have available for 2013'."

Full Story (comments: none)

Articles of interest

Free Software Supporter -- Issue 54

The Free Software Supporter is the Free Software Foundation's monthly news digest and action update. Topics in the September issue include software patents, updating the Free Software Directory, a matching pledge from Dreamhost, Apple v. Samsung, GPL violations, trademarks, and much more.

Full Story (comments: none)

Education and Certification

LPI Announces Linux Essentials for North America

The Linux Professional Institute (LPI) has announced the availability of its Linux Essentials program in North America. LPI's Linux Essentials program is designed to measure foundational knowledge in Linux and Open Source Software. "Linux Essentials was initially released as a pilot program in Europe, the Middle East and Africa but due to popular demand has now expanded to North America."

Full Story (comments: none)

Upcoming Events

PyData NYC 2012 Speakers and Talks announced

The schedule has been announced for PyData, which takes place October 26-28, 2012 in New York City, NY. "We're thrilled with the exciting lineup of workshops, hands-on tutorials, and talks about real-world uses of Python for data analysis."

Full Story (comments: none)

OpenOffice track at ApacheCon EU 2012

ApacheCon will take place November 5-8, 2012 in Sinsheim, Germany. The conference will feature an Apache OpenOffice track. "Several Apache OpenOffice (incubating) contributors will give talks on different topics around Apache OpenOffice and its ecosystem. They will be available for further discussions. General community sessions are also part of the schedule."

Full Story (comments: none)

Announcing the CloudStack Collaboration Conference

The CloudStack Collaboration Conference will take place November 30-December 2, 2012 in Las Vegas, Nevada. "The Collaboration Conference will give CloudStack users an opportunity to learn about improvements in the upcoming Apache CloudStack 4.0 release, and best practices for deploying and managing CloudStack. Users will also be able to attend sessions on projects and tools that work well with CloudStack for configuration management, storage, monitoring, creating a Platform-as-a-Service (PaaS) on top of CloudStack, and more."

Full Story (comments: none)

LCA2013 Opens Registrations

linux.conf.au 2013 (LCA) has opened registrations. Discounted 'early bird' tickets are available for a limited time. "The 2013 conference builds on a long tradition of sharing technical know-how between seasoned open source gurus and newcomers to the community. Since its inception in 1999, the conference has moved around Australia and New Zealand, most recently to Ballarat, Victoria, and Brisbane, Queensland. This year, the conference is in Canberra in celebration of our national capital's centenary year. The conference was last hosted in Canberra in 2005, and it has grown significantly since then, bringing some unique challenges to the organising team." LCA takes place January 28-February 2, 2013.

Full Story (comments: none)

SCALE 11X: Matthew Garrett to give keynote

The Southern California Linux Expo has chosen Matthew Garrett as the first of two keynote speakers for SCALE 11X in February 2013. "Garrett's keynote is entitled “The Secure Boot Journey” and details his work over the past year – technical, political and diplomatic – in getting Linux to run on UEFI Secure Boot systems. He will outline the scenario where Linux users could not only be assured that they can run Linux out of the box in UEFI-based systems, but also how Secure Boot can be used to enhance security."

Full Story (comments: none)

Events: October 4, 2012 to December 3, 2012

The following event listing is taken from the Calendar.

October 2-4: Velocity Europe, London, England
October 4-5: PyCon South Africa 2012, Cape Town, South Africa
October 5-6: T3CON12, Stuttgart, Germany
October 6-8: GNOME Boston Summit 2012, Cambridge, MA, USA
October 11-12: Korea Linux Forum 2012, Seoul, South Korea
October 12-13: Open Source Developer's Conference / France, Paris, France
October 13: 2012 Columbus Code Camp, Columbus, OH, USA
October 13-14: Debian BSP in Alcester, Alcester, Warwickshire, UK
October 13-14: PyCon Ireland 2012, Dublin, Ireland
October 13-14: Debian Bug Squashing Party in Utrecht, Utrecht, Netherlands
October 13-15: FUDCon:Paris 2012, Paris, France
October 15-18: OpenStack Summit, San Diego, CA, USA
October 15-18: Linux Driver Verification Workshop, Amirandes, Heraklion, Crete
October 17-19: LibreOffice Conference, Berlin, Germany
October 17-19: MonkeySpace, Boston, MA, USA
October 18-20: 14th Real Time Linux Workshop, Chapel Hill, NC, USA
October 20-21: Gentoo miniconf, Prague, Czech Republic
October 20-21: PyCon Ukraine 2012, Kyiv, Ukraine
October 20-21: PyCarolinas 2012, Chapel Hill, NC, USA
October 20-21: LinuxDays, Prague, Czech Republic
October 20-23: openSUSE Conference 2012, Prague, Czech Republic
October 22-23: PyCon Finland 2012, Espoo, Finland
October 23-26: PostgreSQL Conference Europe, Prague, Czech Republic
October 23-25: Dommeldange, Luxembourg
October 25-26: Droidcon London, London, UK
October 26-27: Firebird Conference 2012, Luxembourg, Luxembourg
October 26-28: PyData NYC 2012, New York City, NY, USA
October 27-28: Technical Dutch Open Source Event, Eindhoven, Netherlands
October 27: Linux Day 2012, Hundreds of cities, Italy
October 27: Central PA Open Source Conference, Harrisburg, PA, USA
October 27: pyArkansas 2012, Conway, AR, USA
October 29-November 2: Linaro Connect, Copenhagen, Denmark
October 29-November 1: Ubuntu Developer Summit - R, Copenhagen, Denmark
October 29-November 3: PyCon DE 2012, Leipzig, Germany
October 30: Ubuntu Enterprise Summit, Copenhagen, Denmark
November 3-4: OpenFest 2012, Sofia, Bulgaria
November 3-4: MeetBSD California 2012, Sunnyvale, California, USA
November 5-9: Apache OpenOffice Conference-Within-a-Conference, Sinsheim, Germany
November 5-7: Embedded Linux Conference Europe, Barcelona, Spain
November 5-7: LinuxCon Europe, Barcelona, Spain
November 5-8: ApacheCon Europe 2012, Sinsheim, Germany
November 7-9: KVM Forum and oVirt Workshop Europe 2012, Barcelona, Spain
November 7-8: LLVM Developers' Meeting, San Jose, CA, USA
November 8: NLUUG Fall Conference 2012, ReeHorst in Ede, Netherlands
November 9-11: Free Society Conference and Nordic Summit, Göteborg, Sweden
November 9-11: Mozilla Festival, London, England
November 9-11: Python Conference - Canada, Toronto, ON, Canada
November 10-16: SC12, Salt Lake City, UT, USA
November 12-14: Qt Developers Days, Berlin, Germany
November 12-16: 19th Annual Tcl/Tk Conference, Chicago, IL, USA
November 12-17: PyCon Argentina 2012, Buenos Aires, Argentina
November 16: PyHPC 2012, Salt Lake City, UT, USA
November 16-19: Linux Color Management Hackfest 2012, Brno, Czech Republic
November 20-24: 8th Brazilian Python Conference, Rio de Janeiro, Brazil
November 24-25: Mini Debian Conference in Paris, Paris, France
November 24: London Perl Workshop 2012, London, UK
November 26-28: Computer Art Congress 3, Paris, France
November 29-30: Lua Workshop 2012, Reston, VA, USA
November 29-December 1: FOSS.IN/2012, Bangalore, India
November 30-December 2: Open Hard- and Software Workshop 2012, Garching bei München, Germany
November 30-December 2: CloudStack Collaboration Conference, Las Vegas, NV, USA
December 1-2: Konferensi BlankOn #4, Bogor, Indonesia
December 2: Foswiki Association General Assembly, online and Dublin, Ireland

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds