US House Representative Darrell Issa has been an active participant in "open government" advocacy in the United States over the past few years; among other things, he co-founded the OpenGov Foundation, which is dedicated to increasing access to government data. Free software advocates will also remember Issa's participation in the opposition to SOPA and PIPA in December 2011. That effort produced an online "legislative markup" application called Project Madison. The Madison source code has now been released on GitHub under GPLv3, for immediate use by DIY-legislators and armchair founding fathers — and potentially by other communities interested in collaborative editing and criticism.
In its original form, Madison allowed critics to log their complaints about the SOPA/PIPA legislation, and to help crowd-source an alternative bill known as the OPEN Act. Issa told the O'Reilly Radar blog in July that he was working to get the OpenGov Foundation (which is not to be confused with the Sunlight Foundation's OpenGovernment.org) registered as a 501(c)(3) nonprofit in the US, and that the Madison code would be released under an open source license as part of that effort. The source code release was announced on September 28 on the OpenGov blog. A live Madison installation is running at the KeepTheWebOpen site, which currently hosts nine bills and related documents for public commentary and improvement (including the commentary recorded for SOPA and PIPA).
Obviously other tools exist for collaboratively editing documents (EtherPad derivatives perhaps being the most well-known). But Madison is designed to preserve the canonical form of the document while still enabling feedback. The stated goal of Madison is to permit such feedback in a way that makes contributing easy, but also enables administrators to sort through potentially thousands of comments in a meaningful fashion. Madison presents a document in structured form, divided into paragraph-based sections (it appears that legislation is often drafted in one-sentence-per-paragraph style, so this is in fact quite granular). Users can attach separate comments to each section, as well as propose re-wording suggestions. Both types of feedback are presented in a sidebar to the document, but suggestions and comments are displayed in separate boxes in the interface.
Users can also register "likes" and "dislikes" on each feedback submission posted, as well as flag inappropriate comments. The interface tracks likes and dislikes, plus users' Facebook "likes" and Twitter tweets spawned by the submission. For each section, the interface sorts user contributions by an aggregate of these community metrics so as to allow popular ideas to bubble up to the top for easy consumption by the administrator. If the administrator chooses to incorporate a user-contributed change into the main document, that change is highlighted in the document interface with a different background color. Anonymous comments are not supported; the application supports both individual and group user accounts (although group accounts must be requested and approved by administrators). Facebook login via OAuth 2 is also supported.
Under the hood, Madison is straightforward PHP and MySQL. However, a major limitation is that each document presented for public commentary needs to be manually added to the database, with one database row per document section. There is also no installer to set up the database tables and create administrator accounts, which makes getting started more complicated. But the Madison team is well aware of the hardship these limitations impose, and has posted a roadmap on the GitHub page that outlines plans for these and other features. Also on the list, for example, is support for larger, multi-part documents, in addition to general improvements like additional third-party-account support (Twitter, Reddit, Google Plus, etc.).
Madison was written during a Congressional hackathon in December 2011. The source code release follows on the heels of the White House's August release of its online petition application We The People. We The People is implemented as a Drupal module, however; hopefully Madison will evolve into a component more easily integrated with existing sites, because there are clear use cases for such an application that have nothing to do with politics.
For example, online document mark-up and commentary have become an integral part of the free software community's license revision process. The FSF commissioned its Stet mark-up tool for use in crafting GPLv3, and the COMT application was used by Mozilla during the comment period for MPL2. Madison offers some features not supported in Stet or COMT, such as the user-voting mechanism, and differentiating between individual and group accounts. However it is still a bit behind on the feature front overall; the other tools support things like version comparison that are valuable whenever wording is expected to change.
Madison also has user interface issues to be resolved, and the administrative interface lacks some tools that would be helpful in a lengthy comment-gathering process (such as the ability to reclassify a comment as a suggestion, or vice-versa, and the ability to indicate when a suggestion has been acted upon). But there is no denying Madison's real-world suitability. According to The Atlantic Wire, the application successfully managed more than 200,000 visitors during a 12-hour SOPA editing marathon. At that scale, Madison's ability to help administrators separate the wheat from the chaff could be the difference between a useful comment period and chaos. One could also argue that the demise of SOPA and PIPA is in part an indicator of Madison's success; although there were clearly more important factors, improving access to the text of the bill and encouraging citizens to delve into the details certainly helped the public's grasp of the issues.
A good commenting and annotation system is critical to the creation of any document that needs public acceptance. In recent years, the free software world has seen several such processes, and frequently those that are seen as doing a poor job listening to or responding to public comments attract criticism (such as Project Harmony). Madison is a noteworthy release because it represents progress in the advancement of open government principles, but it is also valuable for enabling anyone to collect document feedback and contributions from the public, which is a principle that the free software community holds dear.
To what extent is the choice of programming language important in terms of making it easier to build desktop applications? This was the question asked by Bart Massey in his talk on day two of the 2012 X.Org Developers' Conference.
Bart observed that we've now reached the point where it's harder to write a desktop application than it is to write a mobile or web application. Furthermore, the quality of desktop applications is often worse than for mobile and web applications. In Bart's opinion, the situation should be the other way round, and he is puzzling over why it is not.
Some factors that make programming desktop applications harder are beyond the X developers' control, he noted. For example, desktop applications may be much bigger than mobile and web applications. They may also have requirements that are not present, or are simply not addressed on mobile and web platforms. For example, desktop applications may need to manage quite sophisticated user interactions, and provide full internationalization support (which is omitted from many mobile and web applications). It's hard for the X developers to control these factors, but Bart noted one area where they can have some influence in making the development of desktop applications easier: the choice of programming language.
When it comes to programming languages, how are mobile and web applications different? Bart pointed out that, unlike desktop applications, they are almost exclusively not written in C or C++. Objective-C is used in the mobile space, but Bart noted that that language is special (because it provides some features not found in C/C++ that are found in the other languages used for mobile applications). In the mobile and web space, nearly everyone is using languages that have automated memory management; that eliminates problems with memory leaks and mishandled pointers. These programming languages commonly support late binding and dynamic loading. Languages that provide these sorts of features are attractive because they allow developers to be much more productive. That's especially important because the mobile and web spaces are fast moving and programmers need to be able to put applications together quickly. By contrast, developing for the desktop—and Bart made it clear that here he was speaking about desktops in general, not just X—requires a much greater initial effort before a programmer is productive.
On the X desktop, the common programming languages have been C and C++, and Bart noted a number of ways in which these languages have been a source of pain for application developers. There are many reasons to use multiple threads in GUI programs, most notably because a windowed interface in a GUI environment is naturally concurrent. However, threading support in C and C++ (typically via POSIX threads) is not easy to use for the average application developer. The frameworks used in C/C++ often involve callbacks and other complicated flows of control, which are likewise challenging for programmers. Manual memory management is tricky for programmers as well: monitoring memory usage of desktop applications on any platform (not just X), it's easy to find cases of applications that leak memory. And while it's true that things can go wrong with automated memory management, in practice those problems are not as bad.
In Bart's view, the choice of programming language (and the accompanying frameworks that that choice implies) is one of the things getting in the way when it comes to developing X applications. The X toolkits have a widget mentality, and while he feels that that is probably a good mentality, the toolkits make widget creation a lot of work: too much boiler-plate code and "weird interaction" is required for building widgets. "The problem is that the language doesn't give you convenient ways to express what you want to express."
We use C and C++ on the X desktop because they've always been there, Bart noted. But, what are the alternatives? Bart estimated that there are some 400 programming languages in wide use, but noted that there are obvious attractions to using a mainstream language, in terms of the pool of developers and available support. Java is an obvious choice. C# is another possibility, but he thought that there were no advantages that made it clearly compelling over Java. If one widens the net, other alternatives might include SML, OCaml, or Haskell, but he felt sure that if he asked the room what language should be the future of desktop development, he'd get fifteen answers.
Bart then considered the toolkits used with X a little more deeply. He noted that low-level libraries such as XCB are written in C/C++. There are various reasons for that choice, among them efficiency and the ease of providing foreign-function interfaces (bindings) for other languages. The choice of C/C++ at this level of the stack is okay, he thinks. But, as we go up the stack, the next thing we come to is the toolkits, and the choices there are Qt and GTK. Both are old, entrenched, and written in C/C++. Because of that, the applications created above them tend to be written in the same language. It's time to think hard about whether these are the right toolkits for applications built in 2012. The problems of building desktop applications are amplified by using these large, complex libraries, and they don't provide the automated support that is provided in mobile and web frameworks.
Bart noted some summary points from his work during the last ten years. If someone asks him what language to use to write a graphical desktop application, he is typically going to respond: Java. This is ironic, because he, an X developer, is recommending a language that will not produce a native X application. But, he noted that the overhead of writing a Java GUI application is simply lower (than C/C++), even when developing for the Linux/X desktop. Another reasonable alternative that he sees is Python with a toolkit (for example, wxWidgets). As a teacher (at Portland State University), Bart recommends those alternatives to his students, rather than the C/C++-based frameworks available for X.
Is programming language even the right thing to be thinking about, when considering the problem of building desktop applications? Bart said that the audience might be able to convince him otherwise. But, he wanted to know, if programming language is not a part of the problem, then just how weird does a programming language need to be before it is a problem? He then mentioned a few factors that may or may not matter with respect to the choice of programming languages and toolkits. One of the more interesting of these was portability of applications across desktop, mobile, and web. It would be nice to have that, though it is "really hard." But, he asked, what can we do in terms of choice of programming language to facilitate that possibility?
On the other hand perhaps we are stuck with GTK and Qt, Bart said. He noted that it was hard enough to replace Xlib with XCB. The task has taken ten years, and Xlib is likely still to be with us for another decade. Perhaps replacing the toolkit frameworks is too difficult, and when it comes to developing desktop applications, we're consequently stuck with C/C++ forever. He concluded, however, that "if we're going to save the desktop—and don't fool yourself, it is in trouble—then I think programming language is part of the problem."
A lively discussion followed the formal part of Bart's presentation. In response to some comments from Supreet Pal Singh about the satisfactions of doing X application development in C/C++, Bart elaborated further on his point. He noted that there is no ramp to being an X developer—it's a cliff. Developers have to master many technologies before they can do anything. He acknowledged that he too has a different feeling when developing X applications in C++, "but it's not a happy one: I feel smart for being able to do this in C++, but not smart for having done it."
Another audience member noted that Qt Creator can be used for rapid development of X applications. Bart observed that code-generation tools solve some problems, but replace them with other problems as one reaches the limits of what the code generator can do. At that point, the developer then has the problem of trying to understand what is going on under the hood of the generator and possibly tweaking its output, which can complicate code maintenance. He also noted that a GUI application created with a code generator tends to behave less well for the user. For example, he suggested, try using a GUI tool to build an application designed for a small window and then scale that window up and watch what happens. The result is very different from an application where the layouts and layout policy have been hand-crafted so that it works well for all window sizes.
Peter Hutterer pointed out that it may be because of the relative youth of the web and mobile application spaces that they are not suffering some of the same problems as desktop X applications. X has been around for well over 20 years; in 10 years' time, web and mobile applications may have some of the same issues. He mentioned Android fragmentation as a possible example of the kinds of problems to come. Bart agreed that the passage of time may bring legacy problems to mobile and web application development, but noted that many of his points were independent of legacy problems: the need to manage pointers and memory, wacky untyped (void *) interfaces, and complex linking interactions. He constantly sees his students struggling with these problems.
Decades of experience have shown that C and C++ are powerful languages for system programming, but few would deny that they provide application developers with too many opportunities to shoot themselves in the foot. Bart was facing a rather atypical audience: a room full of C/C++ experts. Nevertheless, there was some sympathy for his thesis, and one suspects that if a more typical cross section of application developers were asked, there would be rather more agreement with his position.
The X.Org wiki has a pointer to the video of this session.
The free software community produces a constant stream of ideas about how to displace the proprietary network services that dominate so much online interaction. In mid-2012, Tent became the latest entrant in the conversation, heralding an "open, decentralized, and built for the future" social networking solution that "changes everything." Beyond the project's manifesto, however, there was scant detail, particularly on how Tent related to other distributed social networking efforts like OStatus, the protocol used by StatusNet and Identi.ca. September 21 brought the first look at something more concrete, courtesy of a reference Tent server and initial documentation of the system's protocols.
Tent's general idea is familiar enough: a functional replacement for proprietary social networking services like Twitter, Flickr, and Facebook, but built with free software and designed so that individual users retain control over their data (including not handing over personal information to a third party). A key part of making such functionality possible is devising a mechanism for distinct installations to interoperate, thus allowing users to converse, share content, and subscribe to content posted by others — without demanding any permanent ties to other users' software.
This goal is much the same as that of the OStatus community, which led a number of people to open issues on the Tent bug tracker asking what justifies starting the new project. The Tent FAQ (on the project home page) says that "the architects of Tent investigated existing protocols for the distributed social web. Each of them lacked several critical features necessary for modern users." Elsewhere the FAQ comments that OStatus and Diaspora were "first steps" but does not go into detail about what they lack. On the issue tracker, however, developer Jonathan Rudenberg lists three features missing from existing federated social networking projects: support for private messages, a server-to-client API, and a "social graph" specification (e.g., existing "friend" or "following" relationships) that would let users export their user-to-user connections for portability between services. Developer Daniel Siders reiterated those issues in a Hacker News discussion about Tent.
Several commenters found those features to be weak justification for writing entirely new protocols, however. Dave Wilkinson II argued that OStatus does not address private messaging because it does not attempt to address identity management, but that the related standards Webfinger and PubSubHubbub together can be used to implement private messaging. He also said that migrating social graph information is trivial in OStatus precisely because OStatus does not bind to the user's identity. OStatus co-author Evan Prodromou said private messages were in development for PubSubHubbub 0.4 (and subsequently OStatus), and pointed to ActivityPub as an effort to develop a generic server-to-client API.
The Tent project's documentation also sees its definition of "decentralized" web services as being fundamentally different from OStatus's definition of "federated" services. Prodromou suggested on the issue tracker that this distinction was inaccurate, and that what Tent describes is no different from the federation functionality of Status.Net and Diaspora. Siders replied that:
In other words, he continued, Tent differs from federated social networking systems because it combines the server-to-server and client-to-server communication protocols, akin to unifying SMTP and IMAP. "Tent is not a federation protocol because it provides end to end communication between users, not just servers."
Abstract principles aside, the Tent team released version 0.1 of its protocol documentation on September 21, as well as tentd, a demonstration server written in Ruby that implements a Twitter-like service. The documentation outlines the basics of Tent's messaging and network design, the server-to-server protocol, the server-to-client API, and descriptions of post and profile data fields. In general, Tent uses JSON to format all messages, with OAuth 2 authentication for applications and HTTP MAC access authentication to cryptographically verify individual requests and responses.
Every Tent user (or "entity") is associated with a separate server, which is expected to always be online and accessible over HTTPS. Servers are meant to be found through HTTP Link headers and HTML link tags that point to the user's profile URL. Requesting the profile URL returns the user's profile data as a JSON object. Currently the server-to-server protocol addresses only Twitter-style "follower" relationships; user A can follow user B by POST-ing a request that includes user A's own entity URL, the flavors of post the user wishes to subscribe to, the licenses acceptable to user A, and a URL to which user B should send posts. Assuming user B approves the follow-request (which is not addressed in the documentation), user B's server sends its MAC to user A so that subsequent posts can be authenticated.
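To make the follow-request exchange concrete, here is a minimal Python sketch of how user A's server might assemble the JSON body described above. The field names and type URIs are assumptions based on this description, not the canonical 0.1 schema:

```python
import json

def build_follow_request(entity_url, post_types, licenses, notification_url):
    """Assemble the JSON body for a Tent follow request.

    The keys are illustrative guesses at the 0.1 schema: the
    requester's own entity URL, the flavors of post to subscribe
    to, the licenses the requester will accept, and the URL to
    which new posts should be delivered.
    """
    return {
        "entity": entity_url,
        "types": post_types,
        "licenses": licenses,
        "notification_url": notification_url,
    }

# User A asks to follow user B's status posts
payload = build_follow_request(
    "https://alice.example.net",
    ["https://tent.io/types/post/status/v0.1.0"],
    ["http://creativecommons.org/licenses/by/3.0/"],
    "https://alice.example.net/notifications",
)
body = json.dumps(payload)  # would be POSTed to user B's server
```

If user B's server approves the request, it returns its MAC credentials, which are then used to authenticate the posts delivered to the notification URL.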
The documentation does not address private messages directly, other than their usage of MAC authentication. However, the post documentation mentions a permissions object that, in the examples, can be marked as public or list specific entities or groups that are allowed access. Groups and their representation are not currently defined. The notion of "acceptable licenses" is not explained in detail, either; it seems to place the burden on the publishing server to filter out content that individual subscribers do not find acceptable on licensing grounds (the only license example used is Creative Commons Attribution 3.0). The server-to-server API also defines methods for requesting another user's list of followers, the entities that the user is following, canceling or altering a follow-request, and fetching another user's posts (either in bulk or by querying parameters like publication date).
Each of these methods has a corresponding method in the client API; in practice user A's front-end software would relay these requests to the user's Tent server, which would in turn handle the nitty-gritty of subscribing or querying user B's server. As it stands, the scheme is quite simple; there are seven post types defined: status (a short, 256-character message), essay (a longer, unlimited-length text entry), photo, album (for a collection of photos), repost (a pointer to another user's post), profile (a notification of changes to a user profile), and delete (a notification that another post has been expunged).
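A hypothetical status post, with the permissions object mentioned earlier, might look like the following Python sketch; the field names loosely mirror the 0.1 examples and should not be read as the canonical schema:

```python
MAX_STATUS_LEN = 256  # status posts are capped at 256 characters

# Hypothetical status post with a public permissions object; a
# restricted post would instead list the entities or groups that
# are allowed to read it.
status_post = {
    "type": "https://tent.io/types/post/status/v0.1.0",
    "content": {"text": "Hello from my own Tent server"},
    "permissions": {"public": True},
    "licenses": ["http://creativecommons.org/licenses/by/3.0/"],
}

def status_is_valid(post):
    """Enforce the length limit on status-type posts."""
    return len(post["content"]["text"]) <= MAX_STATUS_LEN
```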
Currently unaddressed are activity-style posts (e.g., Facebook-style "likes," geographic check-ins, or any number of other actions). There was some discussion on the Tent issue tracker about adopting the ActivityStreams format for these post types. User profiles are defined; there are three required fields (entity URL, the licenses under which the entity publishes content, and the canonical API root URL needed to interact with the entity's server). A handful of other properties are defined as well (avatar image, bio, location, etc.).
Interested users can sign up for a free Tent account at tent.is. The site runs the TentStatus application, although free accounts are only permitted to send status-type posts; access to essays and photos costs $12 per month.
Tent 0.1 is bare-bones, to say the least. Several of the key features that are supposed to differentiate it from OStatus, Diaspora, and other systems are simply not present, which makes it difficult to assess fairly. For example, there is no way to export one's social graph to import it into a separate tent.is account. There are several existing standards for social graph-like information, such as the RDF-based FOAF and XML-based XFN. The Tent team has been critical of using anything other than JSON in its debates on the issue tracker; it would be interesting to see how they implement the social graph functionality.
But there are also aspects of Tent 0.1 that simply need stress-testing. The fact that subscribers tell publishers which licenses they find acceptable for future posts is puzzling, and it will be interesting to see whether that scales well in practice. The diagrams on the Tent site appear to indicate that each publisher sends a separate copy of each status update to each subscriber. When multimedia content is allowed, that could be problematic (and it is one of the problems PubSubHubbub was created to address, regardless of whether or not one finds it an acceptable solution).
At a more fundamental level, though, several commenters in the issue tracker and other discussions are unconvinced that Tent's decision to associate a user identity with a URL is wise. The prevailing wisdom is that users (particularly non-developers) associate URLs with content, not with individuals. Many consider OpenID's requirement that every user have an OpenID URL as an identifier to be one of its greatest flaws. As long as tent.is remains the only Tent site (and tentd the only Tent implementation), the URL identity question will remain unexplored because all users exist in the same namespace.
Of course, until there are other Tent servers and applications, none of the federation/decentralization features can really be put to the test, either — not to mention shaking out Siders's assertion that Tent is not "federated" because it connects users rather than servers ... even though every user is required to have a separate server.
In short, the interesting bits are still theoretical. One only hopes that we will get to examine these other bits before too long. It is not immediately convincing that Tent's approach of bundling identity, server-to-server, and client-to-server into a single API is a strength. But it is clear that by starting over from scratch on all of these topics the Tent team has carved out a much larger task for itself than it would have if it had attempted to implement private messaging in OStatus.
Toward the end of discussions like the OStatus issue tracker thread, a lot of the reasons for design decisions seem to boil down to personal preference: JSON versus XML, HTTP versus Webfinger, and so on. There is certainly nothing wrong with building an application to suit one's own preferences, but in the long run it is a difficult way to establish a standard. The Tent FAQ ends with a statement affirming the project's commitment to "open governance models leading to a ratified standard." But, as Steve Klabnik observed, "Working with existing standards is way less fun than just building your own." That said, the OStatus suite of protocols is indeed slow-moving and feature-incomplete; perhaps Tent can spur that community on — it has certainly reinvigorated the discussion already.
The RPM package format and tools have long supported SELinux, so that policies are configured and files get labeled correctly at installation time. But support for other security solutions, such as Smack, is lacking in RPM. Elena Reshetova presented some ideas for rectifying that in her presentation at the 2012 Linux Security Summit (LSS). By adding hooks into RPM processing, more Linux Security Modules (LSMs) or other security components could be supported.
Reshetova began with an overview of RPM. The format is used by multiple distributions, beyond just the Red Hat distributions where it began. SUSE/openSUSE, Mageia, Tizen, and others all use RPM.
RPM package installation has the notion of a "transaction", which encompasses all of the packages to be installed or removed in a single operation. Inside these transactions are the individual packages or "transaction elements". Dependency checking is done at the transaction level, so it is only done once. Scripts to run before the transaction starts and after it ends can be configured in a package specification.
Installing each package entails a series of steps inside the transaction, starting with the optional signature verification. If that passes (or is turned off), then the "pre" script is run, the files are unpacked from the archive and installed, and the "post" script is run. As might be expected, there are a few other steps (e.g. initialization, cleanup), but, as depicted in a flowchart, pre-unpack-post makes up the bulk of the processing.
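The flow above can be summarized with a toy model (plain Python, not real rpmlib code) showing how the per-package steps nest inside the transaction-level ones:

```python
# Toy model of RPM's install processing, as described above:
# dependency checking and the pre-transaction script run once,
# then each package goes through verify -> %pre -> unpack -> %post.
def install_package(pkg, log, verify_signature=True):
    """Append the per-package install steps to the transaction log."""
    if verify_signature:
        log.append(f"verify {pkg}")
    log.append(f"%pre {pkg}")
    log.append(f"unpack {pkg}")
    log.append(f"%post {pkg}")

def run_transaction(packages):
    """Run a whole transaction and return the ordered step log."""
    log = ["check dependencies (once per transaction)", "%pretrans"]
    for pkg in packages:
        install_package(pkg, log)
    log.append("%posttrans")
    return log
```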
When SELinux handling was added to RPM, it was done to set up and install the policies and label the files that get installed. That work was mostly done in the sepolicy RPM plugin using the existing RPM hooks. But some SELinux support is in the RPM core, including running the maintainer scripts (pre, post, and a few others) and doing some labeling tasks. The maintainer scripts are run using rpm_execcon() to set a particular security context before their execution.
When Reshetova and others working on Tizen started looking into adding Smack support for RPM, they realized it needed a more generalized security plugin interface. Smack requires setting up access control domains and rules on a per-package basis, but there are other security mechanisms that have needs as well. The security policy for a system or device might trust certain application repositories and only allow packages from those sources to access "sensitive services". Integrity measurements may need to be bootstrapped, container configuration established, or seccomp() restrictions enabled, all of which could be handled by a generalized security plugin.
Currently, there are just a few hooks available in RPM: two before the pre-transaction script is run, one before the pre script, one after the post script, and a cleanup hook. Reshetova would like to work with the LSM developers to create an expanded set of hooks that will serve all of the LSMs (as well as the other uses). Making the hooks symmetrical, so that there are hooks both before and after transactions and package installation/removal, might be the starting point. Adding a hook to wrap script execution for setting up the proper security context is another.
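A symmetric hook set of the sort described might look roughly like this sketch; the class and method names here are invented for illustration and do not correspond to RPM's actual plugin API:

```python
class SecurityPlugin:
    """Hypothetical symmetric plugin interface: paired hooks around
    the transaction and around each package, plus a wrapper for
    running maintainer scripts in a security context (the role
    rpm_execcon() plays for SELinux today)."""
    def pre_transaction(self, packages): pass
    def post_transaction(self, packages): pass
    def pre_package(self, pkg): pass
    def post_package(self, pkg): pass
    def exec_script(self, pkg, script, run):
        # A plugin would set up the proper security context here,
        # run the script, then tear the context down again.
        return run()

class LoggingPlugin(SecurityPlugin):
    """Trivial plugin that just records which hooks fired."""
    def __init__(self):
        self.events = []
    def pre_package(self, pkg):
        self.events.append(("pre", pkg))
    def post_package(self, pkg):
        self.events.append(("post", pkg))

def install_with_plugin(plugin, packages):
    """Drive the hooks in the symmetric order proposed above."""
    plugin.pre_transaction(packages)
    for pkg in packages:
        plugin.pre_package(pkg)
        # ... unpack files, label them, run scripts, etc. ...
        plugin.post_package(pkg)
    plugin.post_transaction(packages)
```

A Smack plugin could install access rules in pre_package and an integrity plugin could hash installed files in post_package, all without touching the RPM core.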
Currently, the verification step only allows specifying which keys to use and what should happen if the package does not verify. Adding a hook for verification would allow for additional checking, such as that the package was signed by the right key (corresponding to the repository it came from, as opposed to any installed key, as RPM checks for currently), and to make security policy checks based on the originating repository.
The other hooks that Reshetova proposed are associated with the individual files in a package. Those would allow things like security labeling or calculating hashes on the file contents (for integrity purposes). The last hook she proposed is to handle conflicts. If a package wants to install a file that another package has already installed, the hook could install a conflict file recording the problem; later hooks could use that file to make decisions depending on the attributes of the two packages involved. If one package is from a more trusted repository, its version could be chosen, for example.
In addition, some environments may have non-native applications that use their own installer. Those have all of the same problems with handling security contexts, labeling, and so on. It would be nice to have the security plugin functionality available as a standalone library that could be used by non-native application installers, Reshetova said.
Once those hooks (or a similar set that is agreed upon) are available, the SELinux-specific pieces of RPM could be moved out of the core. A unified layer of security hooks would be beneficial for a wide variety of use cases, she said. More information is available on the Tizen wiki and a GitHub repository contains the proposed changes for RPM.
Dan Walsh asked what the RPM maintainers thought about the changes; Reshetova said they are interested in seeing a unified solution. They want to make sure that there is agreement between the LSM developers, which is one of the motivations for her presentation. The intent would be to cause no disruption for the SELinux parts in RPM when moving that to the new hooks, she said in answer to another question. Walsh said that there really isn't anyone who is the "SELinux/RPM person", but he and others wouldn't oppose a patch to move SELinux out of the core; "don't break anything and I'm fine" with it, he said, though he did caution that performance might be an issue.
Since the summit, Reshetova has started a wider discussion of the hooks on the SELinux mailing list. It would seem likely that we will have a more generalized solution for RPM in the not-too-distant future.
This project strengthens the ludicrous idea in people's heads that photography is somehow a significant threat to safety or security. Photographic documentation is an extremely important part of modern democracy, and projects like these threaten the ability of people to take pictures.
Created: October 2, 2012; updated: April 5, 2013
Description: From the CVE entry:
Multiple cross-site scripting (XSS) vulnerabilities in the make_variant_list function in mod_negotiation.c in the mod_negotiation module in the Apache HTTP Server 2.4.x before 2.4.3, when the MultiViews option is enabled, allow remote attackers to inject arbitrary web script or HTML via a crafted filename that is not properly handled during construction of a variant list.
Created: October 2, 2012; updated: January 22, 2014
Description: From the Mandriva advisory:
The STARTTLS implementation in INN's NNTP server for readers, nnrpd, before 2.5.3 does not properly restrict I/O buffering, which allows man-in-the-middle attackers to insert commands into encrypted sessions by sending a cleartext command that is processed after TLS is in place, related to a plaintext command injection attack, a similar issue to CVE-2011-0411 (CVE-2012-3523).
Created: October 3, 2012; updated: October 24, 2012
Description: From the Red Hat advisory:
A use-after-free flaw was found in the xacct_add_tsk() function in the Linux kernel's taskstats subsystem. A local, unprivileged user could use this flaw to cause an information leak or a denial of service.
Created: September 28, 2012; updated: October 3, 2012
From the Gentoo advisory:
An error has been found in the way mod_rpaf handles X-Forwarded-For headers. Please review the CVE identifier referenced below for details.
A remote attacker could send a specially crafted HTTP header, possibly resulting in a Denial of Service condition.
Package(s): moodle; CVE #(s): CVE-2012-4400 CVE-2012-4408 CVE-2012-4402 CVE-2012-4403
Created: September 27, 2012; updated: October 3, 2012
CVE-2012-4400: A way to bypass the file upload size constraint was found in how the webservice script, called from the filepicker front end of Moodle, a course management system, sanitized the 'maxbytes' variable. A remote attacker could issue a specially crafted request that, when processed, could allow the attacker to upload a file larger than the specified constraint.
CVE-2012-4402, CVE-2012-4403: Users with permission to access multiple services were able to use a token from one service to access another. An attacker could use this flaw, in an unauthorized way, to access content of an external service.
CVE-2012-4408: A security flaw was found in the way the Moodle course management system performed permission checks on the course reset page (the course reset link was protected by a correct permission, but the reset page itself was checked for a different permission). A remote attacker could use this flaw to reset a particular course in an unauthorized way.
Package(s): moodle; CVE #(s): CVE-2012-4401 CVE-2012-4407
Created: September 27, 2012; updated: October 3, 2012
CVE-2012-4401: A security flaw was found in the way the Moodle course management system validated user permissions for course topic management. A remote attacker with course editing capabilities, but without the ability to show or hide topics or set the current topic for a particular course, could use this flaw to complete these actions under certain circumstances.
CVE-2012-4407: A security flaw was found in the way the file serving functionality of the Moodle course management system enforced file access restrictions on blog posts. A remote attacker could use this flaw to access files embedded as part of a blog post without the publication state being checked properly.
Package(s): postfixadmin; CVE #(s): CVE-2012-0811 CVE-2012-0812
Created: September 27, 2012; updated: March 28, 2014
From the Gentoo advisory:
Multiple SQL injection vulnerabilities (CVE-2012-0811) and cross-site scripting vulnerabilities (CVE-2012-0812) have been found in Postfixadmin.
Created: October 2, 2012; updated: October 3, 2012
Description: From the Ubuntu advisory:
It was discovered that the apt-add-repository tool incorrectly validated PPA GPG keys when importing from a keyserver. If a remote attacker were able to perform a man-in-the-middle attack, this flaw could be exploited to install altered package repository GPG keys.
Created: October 2, 2012; updated: February 4, 2013
Description: From the CVE entry:
The tor_timegm function in common/util.c in Tor before 0.2.2.39, and 0.2.3.x before 0.2.3.22-rc, does not properly validate time values, which allows remote attackers to cause a denial of service (assertion failure and daemon exit) via a malformed directory object, a different vulnerability than CVE-2012-4419.
Package(s): vmware-player; CVE #(s): CVE-2007-5671 CVE-2008-0967 CVE-2008-1340 CVE-2008-1361 CVE-2008-1362 CVE-2008-1363 CVE-2008-1364 CVE-2008-1392 CVE-2008-2098 CVE-2008-2100 CVE-2008-2101 CVE-2008-4915 CVE-2008-4916 CVE-2008-4917 CVE-2009-0909 CVE-2009-0910 CVE-2009-1244 CVE-2009-2267 CVE-2009-3707 CVE-2009-3732 CVE-2009-3733 CVE-2009-4811 CVE-2010-1137 CVE-2010-1138 CVE-2010-1139 CVE-2010-1140 CVE-2010-1141 CVE-2010-1142 CVE-2010-1143 CVE-2011-3868
Created: October 1, 2012; updated: October 3, 2012
Description: From the Gentoo advisory:
Multiple vulnerabilities have been discovered in VMware Player, Server, and Workstation.
Local users may be able to gain escalated privileges, cause a Denial of Service, or gain sensitive information.
A remote attacker could entice a user to open a specially crafted file, possibly resulting in the remote execution of arbitrary code, or a Denial of Service. Remote attackers also may be able to spoof DNS traffic, read arbitrary files, or inject arbitrary web script to the VMware Server Console.
Furthermore, guest OS users may be able to execute arbitrary code on the host OS, gain escalated privileges on the guest OS, or cause a Denial of Service (crash the host OS).
Created: October 3, 2012; updated: October 3, 2012
Description: From the Ubuntu advisory:
Alec Warner discovered that xdiagnose improperly handled temporary files in welcome.py when creating user-initiated archive files. While failsafeX does not use the vulnerable code, this update removes this functionality to protect any third-party applications that import the vulnerable code. In the default Ubuntu installation, this should be prevented by the Yama link restrictions.
Page editor: Jake Edge
Brief items

In the announcement, Linus said:
Notable features in 3.6 include TCP small queues, the client-side TCP fast open implementation (server side has been merged for 3.7), IOMMU groups, the Btrfs send/receive feature, the VFIO device virtualization mechanism, and more. See the KernelNewbies 3.6 page for details.
$ git log --no-merges v3.5..v3.6 | \
      egrep -i '(integer|counter|buffer|stack|fix) (over|under)flow' | \
      wc -l
31
How many were security relevant? How many got CVEs?
This may include, but is not limited to, topics such as tooling to assist in securing the Linux Kernel, verification and testing of critical subsystems for vulnerabilities, security improvements for build tools, and providing guidance for maintaining subsystem security.
The group intends to discuss a wide range of approaches including tool development, static analysis, verification efforts, and even the possibility of tightening the rules for patch signing. Interested people are encouraged to join in.

A separate proposal concerns the virtio I/O subsystem. The plan is to start an OASIS working group which would help in the development (and standardization) of version 1.0 of the virtio specification. The proposal's author is asking for comments on the idea, but few have been posted as of this writing.

The 3.6 kernel also brought security restrictions that change the handling of hard and soft links in world-writable directories. One of the reasons this change took so long to merge was concern about breaking programs and scripts on user systems. The case was finally made that problems would be limited to malware, and the feature was merged.
Now, a single report of trouble on the linux-kernel list has developers questioning the change — or, at least, whether it should be turned on by default. Linus fears that this report could be followed by others:
Compatibility is just too important.
Other developers have argued for turning the feature off by default as soon as the 3.6.1 stable update. Needless to say, agreement on this point is not universal; Kees Cook, the author of the change, argues that the benefits far outweigh the pain. The kernel community is committed to not breaking things that used to work, though; if this change appears to be causing problems more widely, it will probably be reversed in the near future.
Kernel development news
Changes visible to kernel developers include:
The 3.7 merge window can be expected to stay open until approximately October 14. That said, Linus has warned the community that he will be traveling during this time; he, along with your editor, will be at the Linux Foundation's Korea Linux Forum. If the travel interferes with the merging process — which hasn't been a problem in previous merge windows — this merge window may be extended to compensate.
Anyone who follows Linux kernel security discussions has probably heard of the "LSM stacking issue". It is a perennial topic on the mailing lists and solutions have been proposed from time to time. The basic problem is that only one Linux Security Module (LSM) can be active in a running kernel, and that single slot is often occupied by one of the "monolithic" solutions (e.g. SELinux or AppArmor) supplied by distributions. That leaves some of the smaller or more special-purpose LSMs—or users who want to use multiple approaches—out in the cold.
Back in February 2011, David Howells proposed a stacking solution for LSMs. At the time, Casey Schaufler mentioned a solution he had been working on that would be posted in a "day or two". That prediction turns out to have been overly optimistic, but his solution has surfaced—more than a year-and-a-half later. He also discussed the patches in a lightning talk at the recently held Linux Security Summit.
There are three types of LSMs available in the kernel today and there are use cases for combining them in various ways. Administrators might want to add some AppArmor restrictions on top of the distribution-supplied SELinux configuration—or use SELinux-based sandboxes on a TOMOYO system. The two "labeled" LSMs, SELinux and Smack, require that files have extended attributes (xattrs) containing labels that are used for access decisions. The two "path-based" LSMs, AppArmor and TOMOYO, both base their access decisions on the paths used to access files in the system. The only other LSM currently available is Yama, which is something of a container for discretionary access control (DAC) enhancements.
Yama is the LSM that is perhaps most likely to be stacked. It adds some restrictions to the ptrace() attach operation that Ubuntu and ChromeOS use, and other distributions are considering it as well. In fact, Yama developer Kees Cook has proposed making the LSM unconditionally stackable via the CONFIG_SECURITY_YAMA_STACKED kernel build option (which was merged for 3.7). Over the years, though, various other security ideas have been proposed and pointed in the direction of the LSM API, so other targeted LSMs may come about down the road. Making each separately stackable is less than ideal, so a more general solution is desirable. In addition, manually combining labeled and path-based solutions can't really be done sanely.
When Howells posted his solution, he explicitly disallowed combining the two labeled LSMs because of implementation difficulties (mainly with respect to the LSM-specific secid which is used by SELinux and Smack, but none of the others). There was also a belief that mixing SELinux and Smack (or AppArmor and TOMOYO for that matter) is not a particularly sought-after feature. But Schaufler thought that was an unnecessary restriction, one that he was trying to address in his solution.
As it turns out, Schaufler ended up at the same place. His proposal also defers stacking (or "composing") SELinux and Smack, noting that it "has proven quite a challenge". But he was able to get the other combinations working—at least to the extent that the kernel would boot without complaints in the logs. The Smack tests passed as well. Performance for Smack with AppArmor, TOMOYO, and Yama enabled is "within the noise", he said.
Schaufler's version ensures that the hooks for each enabled LSM are called, which is different from Howells's approach, which short-circuited the remaining hooks as soon as one denied the access. Instead, Schaufler's patches call each LSM's hooks, remembering the last non-zero return (a denial or error of some sort) as the return value for the hook. His argument is that an LSM could reasonably expect to see—and possibly record information about—each access decision, even if the access has been denied by another LSM.
Much of the "guts" of the changes are described in the infrastructure patch, which is the largest of the five patches. The others make fairly modest (if pervasive) changes to SELinux, Smack, TOMOYO, and AppArmor to support stacking. As it turns out, Yama "required no change and gets in free". The changes to the individual LSMs are optional, as they can still be used (in a non-stackable way) without them.
Stacking is governed by the CONFIG_SECURITY_COMPOSER option. If that is not chosen, all of the existing LSMs function as they do today. If stacking is built in, the security= boot parameter can then be used to control which LSMs are enabled. For example, security=selinux,apparmor will enable those two. If nothing is specified on the boot command line, all of the LSMs built into the kernel will be enabled. The /proc/PID/attr/current interface has also been changed to report information from any of the active LSMs (only SELinux, Smack, and AppArmor actually use that interface today).
Existing kernels store pointers to the hooks implemented by an LSM in a struct security_operations called security_ops. Schaufler's patch replaces that with an array of security_operations pointers called composer_ops. That array is indexed based on the order that is assigned to each LSM as it is registered. The first entry (composer_ops[0]) is reserved for the Linux capabilities hooks. Those have been manually "stacked" into the LSMs for some time, so entries in composer_ops[0] get zeroed out if one of the other LSMs implements the hook (as the capabilities checks will be done there). If there is no entry in composer_ops[0], each of the hooks in the other entries in that array is called, as described above.
The security "blobs" (private storage for each LSM) are still managed by the LSMs, but because there are blob pointers sprinkled around various kernel data structures (e.g. inodes, files, sockets, keys, etc.), a "composer blob" is used. That blob contains pointers to each of the active LSM blobs, and new calls are used to get and set the blob pointers (e.g. lsm_get_inode() or lsm_set_sock()). Most of the changes for the individual LSMs are converting to use this new interface.
So far, most of the comments have been about implementation details; Schaufler addressed those in the second version of the patch set. Notably missing, at least so far, were some of the concerns about strange interactions between stacked LSMs leading to vulnerabilities that have come up in earlier discussions. But, without any major complaints, one would guess that some more testing will be done, including gathering additional performance numbers, before the linux-kernel gauntlet is run. The rest of the kernel developers have heard about the need for stacking LSMs enough times that it seems likely that Schaufler's patches (or something derived from them) will eventually pass muster.

During 3.6-rc testing, Nikolay reported a problem: the pgbench PostgreSQL benchmark ran 20% slower than under 3.5. The resulting discussion shows just how hard scalability can be on contemporary hardware and how hard scheduling can be in general.
Borislav Petkov was able to reproduce the problem; a dozen or so bisection iterations later he narrowed down the problem to this patch, which was duly reverted. There is just one little problem left: the offending patch was, itself, meant to improve scheduler performance. Reverting it fixed the PostgreSQL regression, but at the cost of losing an optimization that improves things for many (arguably most) other workloads. Naturally, that led to a search to figure out what the real problem was so that the optimization could be restored without harmful effects on PostgreSQL.
The kernel's scheduling domains mechanism exists to optimize scheduling decisions by modeling the costs of moving processes between CPUs. Migrating a process from one CPU to a hyperthreaded sibling is nearly free; cache is shared at all levels, so the moved process will not have to spend time repopulating cache with its working set. Moving to another CPU within the same physical package will cost more, but mid-level caches are still shared, so such a move is still much less expensive than moving to another package entirely. The current scheduling code thus tries to keep processes within the same package whenever possible, but it also tries to spread runnable processes across the package's CPUs to maximize throughput.
The problem that the offending patch (by Mike Galbraith) was trying to solve comes from the fact that the number of CPUs built into a single package has been growing over time. Not too long ago, examining every processor within a package in search of an idle CPU for a runnable process was a relatively quick affair. As the number of CPUs in a package increases, the cost of that search increases as well, to the point that it starts to look expensive. The current scheduler's behavior, Mike said at the time, could also result in processes bouncing around the package excessively. The result was less-than-optimal performance.
Mike's solution was to organize CPUs into pairs; each CPU gets one "buddy" CPU. When one CPU wakes a process and needs to find a processor for that process to run on, it examines only the buddy CPU. The process will be placed on either the original CPU or the buddy; the search will go no further than that even if there might be a more lightly loaded CPU elsewhere in the package. The cost of iterating over the entire package is eliminated, process bouncing is reduced, and things run faster. Meanwhile, the scheduler's load balancing code can still be relied upon to distribute the load across the available CPUs in the longer term. Mike reported significant improvements in tbench benchmark results with the patch, and it was quickly accepted for the 3.6 development cycle.
So what is different about PostgreSQL that caused it to slow down in response to this change? It seems to come down to the design of the PostgreSQL server and the fact that it does a certain amount of its own scheduling with user-space spinlocks. Carrying its own spinlock implementation does evidently yield performance benefits for the PostgreSQL project, but it also makes the system more susceptible to problems resulting from scheduler changes in the underlying system. In this case, restricting the set of CPUs on which a newly-woken process can run increases the chance that it will end up preempting another PostgreSQL process. If the new process needs a lock held by the preempted process, it will end up waiting until the preempted process manages to run again, slowing things down. Possibly even worse is that preempting the PostgreSQL dispatcher process — also more likely with Mike's patch — can slow the flow of tasks to all PostgreSQL worker processes; that, too, will hurt performance.
What is needed is a way to gain the benefits of Mike's patch without making things worse for PostgreSQL-style loads. One possibility, suggested by Linus, is to try to reduce the cost of searching for an idle CPU instead of eliminating the search outright. It appears that there is some low-hanging fruit in this area, but it is not at all clear that optimizing the search, by itself, will solve the entire problem. Mike's patch eliminates that search cost, but it also reduces movement of processes around the package; a fix that only addresses the first part risks falling short in the end.
Another possibility is to simply increase the scheduling granularity, essentially giving longer time slices to running processes. That will reduce the number of preemptions, making it less likely that PostgreSQL processes will step on each other's toes. Increasing the granularity does, indeed, make things better for the pgbench load. There may be some benefit to be had from messing with the granularity, but it is not without its risks. In particular, increasing the granularity could have an adverse effect on desktop interactivity; there is no shortage of Linux users who would consider that to be a bad trade.
Yet another possibility is to somehow teach the scheduler to recognize processes — like the PostgreSQL dispatcher — that should not be preempted by related processes if it can be avoided. Ingo Molnar suggested investigating this idea:
The problem, of course, is the dragons. The O(1) scheduler, used by Linux until the Completely Fair Scheduler (CFS) was merged for 2.6.23, had, over time, accumulated no end of heuristics and hacks designed to provide the "right" kind of scheduling for various types of workloads. All these tweaks complicated the scheduler code considerably, making it fragile and difficult to work with — and they didn't even work much of the time. This complexity inspired Con Kolivas's "staircase deadline scheduler" as a much simpler solution to the problem; that work led to the writing (and merging) of CFS.
Naturally, CFS has lost a fair amount of its simplicity since it was merged; contact with the real world tends to do that to scheduling algorithms. But it is still relatively free of workload-specific heuristics. Opening the door to more of them now risks driving the scheduler in a less maintainable, more brittle direction where nothing can be done without a significant chance of creating problems in unpredictable places. It seems unlikely that the development community wants to go there.
A potentially simpler alternative is to let the application itself tell the scheduler that one of its processes is special. PostgreSQL could request that its dispatcher be allowed to run at the expense of one of its own workers, even if the normal scheduling algorithm would dictate otherwise. That approach reduces complexity, but it does so by pushing some of the cost into applications. Getting application developers to accept that cost can be a challenge, especially if they are interested in supporting operating systems other than Linux. As a general rule, it is far better if things just work without the need for manual intervention of this type.
So, in other words, nobody really knows how this problem will be solved at this time. There are several interesting ideas to pursue, but none that seem like an obvious solution. Further research is clearly called for.
One good point in all of this is that the problem was found before the final 3.6 kernel shipped. Performance regressions have a way of hiding, sometimes for years, before they emerge to bite some important workload. Eventually, tools like Linsched may help to find more of these problems early, but we will always be dependent on users who will perform this kind of testing with workloads that matter to them. Without Nikolay's 3.6-rc testing, PostgreSQL users might have had an unpleasant surprise when this kernel was released.
Page editor: Jonathan Corbet
Distributions

Matthew recently announced that he had taken a position with Red Hat to "work on bringing some sense to the whole 'Cloud' thing" within Fedora. We asked Matthew if he would be willing to answer a few questions about just what that means; as can be seen below, he was more than willing to do so. Read on for a detailed discussion of his view of cloud computing and how Fedora fits into the cloud picture.
LWN: First of all, what, in your mind, does the term "cloud" really refer to?
Clearly, "Cloud" is both a marketing term and a hot business buzzword, neither of which lends itself to clarity. However, there are some actual significant changes in computing represented by the word.
On the business side, there's a trend toward centralization of resources — sometimes described as a big, constant pendulum swing, with "cloud data center" simply standing in for "mainframe" this time around — but there are actually interesting new developments which make cloud computing compelling. It may be that I've been in a university setting too long, but I like the NIST definition [PDF], which describes the essential cloud characteristics. Or, there's the "OSSM" definition [video]: a cloud service should be On-demand, Self-service, Scalable, and Measurable.
With this option, if you've got a new startup, putting together your own data center is suddenly crazy. If you're an agile developer, on-demand self-service is very appealing. And for larger enterprises, it's amazing to have built-in scalability and measurability.
Small, nimble companies are already benefiting from cloud computing; bigger companies are mostly dipping their toes in. There's a lot of interest in on-premises private cloud, and especially in transparent hybrid cloud (where local and public cloud infrastructure are mixed together). All of the technology is still in flux and there are plenty of unresolved questions, but it's certainly a matter of how-much-how-quickly, not of if-at-all.
On the user and client-platform side, the important trend is mobile and tablet computing, and the movement away from general-purpose computing devices to a locked-down "app store" model — even for desktop systems. Previous attempts at making restricted computing devices always failed in the market, because while most people only do a few things with their computers, there's a long tail of different things each person wants, and any narrow selection of most-common tools was never good enough.
Developer-friendly app marketplace and distribution channels which run right on the device get around that — even when the platform is locked down, consumer convenience is served by "there's an app for that". Consumers get access to tools, developers get a market, and vendors get lock-in and control. In proprietary operating systems, we're going to see more and more of that.
There's one key thing the NIST cloud definition covers that isn't in the OSSM one (to be pronounced "awesome", by the way). That's "broad network access", and it means you can get to your data from anywhere, from any client platform. It's the lure of cloud computing from an end-user point of view — don't worry if you lose your phone, because it's all safe in the cloud. In fact, don't worry if your house burns down, because your family photos are all safely floating out there as well.
At least, don't worry as long as there are sufficient protections in place! A number of people are saying that open source on the desktop (or any client device) doesn't even matter — that the open web is the new front, and that the battle is about making sure people have control over and access to their own remotely-stored data.
I think that's important, and I'm glad people are fighting for it, but I disagree that it's sufficient. It's not hard to imagine a future where the normal tech platform most people buy isn't able to run arbitrary code — look at the boot restrictions for Windows 8 ARM systems. Historically, the mass-market platform has also been the development platform, and that's done wonderful, magical things for the democratization of invention. But in a restricted future, the flexible development platform is a special niche product — more expensive and maybe even not available to everyone.
So, a key role for free and open source Linux distributions is to provide a viable alternative to the locked-in "consumption device" dystopia. This includes providing the convenience and functionality of cloud computing that proprietary platforms offer, whether it's through open web and open cloud initiatives, or through building in local cloud and peer-to-peer cloudlike services.
LWN: How might Fedora fit into that picture?
We want these things in cloud computing as well, and over the next few months I'll be working with other people in Fedora to identify more specific outcomes to work toward in cloud computing, and from there we'll develop programs and activities focused on those outcomes.
We have a strong base in the Fedora distribution, a great worldwide user and developer community, and excellent infrastructure. And, we have a lot of cloud-related work in progress.
LWN: Where are people using Fedora in cloudy settings now? How would you like to see that change in the future?
We're also a huge center for cloud infrastructure software packaging and development. OpenShift Origin and OpenStack Folsom are two of the big features for the upcoming Fedora 18 release, along with Eucalyptus, OwnCloud, and Heat (a cloud orchestration API for OpenStack). We've also got work in progress on OpenNebula and CloudStack.
Now, I don't think anyone has delusions that a great number of people will run production infrastructure on top of Fedora with these packages. The main two use cases are: a) developers of cloud software (including, but definitely not limited to, Red Hat) working to make sure the software is ready for future use in enterprise Linux products, and b) cloud early-adopters who are following that development. Those constituent groups are going to remain crucial to Fedora in general and Fedora cloud in particular, and I think there are some gentle adjustments we can make that will help encourage those relationships even more.
Over the years we've lost a lot of our sysadmin and server-side Linux users, as they've felt somewhat left behind by all the energy and development around desktop. Sometimes it's seemed like the only voice available to that group has been a negative one — "Hey, slow down!" — which is a frustrating side of the conversation to be on, when really we all want to work together to make better software and a better world (back to that vision statement). When you're cast in the stop-energy role, it's hard to feel listened to, let alone constructive.
So, the rise in cloud and all the exciting new development work gives that group a positive voice. That's good for Red Hat Enterprise Linux as a downstream project. It's good for developers of cloud software because we'll make sure their code works in the distribution. And it's good for Fedora, because this is a large group of technology professionals with years of experience and wisdom.
I especially want to expand what we can do for the small, agile organizations working at the leading edge of cloud technology. For this, the Fedora focus isn't so much on the cloud infrastructure software as on the cloud guest images.
As I mentioned, we've got an up-to-date EC2 image, and we're working on making that even more lightweight and on offering more variants for different use cases. We also support appliance creation with BoxGrinder (a Fedora 15 feature), and JEOS ["just enough operating system"] builds with Oz. Or if you're looking for something more complete — for example, to make a virtualized developer's desktop — it's easy to make an image from our Live CD ISO (although we need to make that several steps easier).
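Oz builds those JEOS images from a small XML file in its "template description language" (TDL), which names the OS, version, architecture, and an install tree to pull packages from. The following is only a rough sketch of what such a template looks like — the template name and URL below are hypothetical placeholders, and the authoritative schema ships with Oz itself:

```xml
<template>
  <!-- hypothetical template name -->
  <name>fedora-jeos</name>
  <os>
    <name>Fedora</name>
    <version>17</version>
    <arch>x86_64</arch>
    <!-- placeholder URL: point this at a real Fedora install tree -->
    <install type='url'>
      <url>http://example.com/fedora/17/x86_64/os/</url>
    </install>
  </os>
  <description>Minimal "just enough OS" Fedora guest</description>
</template>
```

Feeding a file like this to Oz's install tool then produces a minimal disk image that can be uploaded to a cloud provider or run locally under KVM.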
Producing images isn't enough by itself, though. We also need to increase our engagement with the DevOps community. At this point, that means mostly listening and being present, more than direct evangelism. Fedora has always aimed at users who might become contributors, and the DevOps world is a natural fit, with good alignment with our values of "freedom, friends, features, first".
On a technical level, we see the continuous struggle between having the most up-to-date versions of specific software (Ruby gems, for example) and having a base that you just don't have to worry about. That's a hard problem, but in order to be relevant to actual users, we need to address it, whether it's through Software Collections, with something related to OpenShift cartridges, or in some other way.
We're also looking into how we can make better use of cloud technology for Fedora developers. The Fedora Infrastructure Team has deployed and is evaluating Eucalyptus and OpenStack for package mass rebuilds, for automated (and non-automated) testing, and eventually for more.
The Fedora features process has been great for the distribution as a whole and is successful overall in helping us look at development in a goal-oriented way. I want to make it easier for Fedora cloud contributors – both new and already involved – to participate in this. As with any process in software engineering, the "meta-work" involved is often very painful to the actual implementers, sometimes simply because of the context switches required, not because the actual paperwork is particularly onerous. That wastes valuable developer time and produces less than ideal process results. We need to have people – like, for example, me – involved in the various features who can both keep up with the technical work and keep meta-work from being a burden to developers, while still reaping the benefits of good process.
For end-user and desktop cloud services, we haven't yet explored all of the possibilities. We can make it easier for moderately-savvy users to set up their own private infrastructure using SparkleShare or OwnCloud as alternatives to proprietary hosted filesharing. Our OpenStack packages will include Puppet modules making it trivial to get a fully-functional private cloud out of the box, and we can extend that to other parts of the distribution as well. We'll also continue to look at what more we can do to enable users to connect themselves to open cloud offerings in ways that align with the project mission. It'd be nice to provide a push-button process where open source web applications can be deployed either locally or pushed to a cloud provider — a tie in to Red Hat's OpenShift, say, or to any open cloud provider.
LWN: Cloud providers tend to try to project an image of solidness and stability. How well does that fit with the relatively bleeding-edge nature of the Fedora distribution?
But, those cloud providers should use Fedora for their testbeds, precisely because the field itself is on the bleeding edge and we can follow it more quickly. For example, QEMU 1.2 was pulled into Fedora right after it was released, and we track the upstream kernel closely, which means we get the latest hardware and low-level virtualization support. Also, Fedora is upstream for RHEL and is home to a lot of the exciting development activity done by Red Hat. If you have an open source / free software cloud project that you want to work in that ecosystem, Fedora is the natural path.
Meanwhile, those cloud providers should be interested in supporting Fedora as a guest OS, because it provides value to their users. It's important to stress that while Fedora is fast-moving, every actual release is production-ready and stable. I won't claim perfection, but we have an excellent quality engineering team that takes this very seriously. While we develop on a six-month development cycle, we make sure it works before we release.
LWN: Do you see Fedora's relatively short support cycle as being a problem for cloud deployments? How might any such problems be mitigated?
There's always some background talk about a rolling release, but I like the cadence of actual releases, with planning and a features process, and the quality assurance work and release engineering would be hard to replicate in the rolling model. We have ongoing work on in-place upgrades using yum, and further polish to that will reduce the pain of upgrades. The model I mentioned earlier where OpenStack configuration is shipped as a Puppet module is also a good direction, since that helps abstract away the specifics of the underlying system.
Right now, we support upgrades going two releases back, which fits the Fedora lifecycle, but we also want to make sure people who miss that mark aren't entirely left behind. One specific thing I'm working on is putting together tools and resources for the many people still running on Fedora 8 in Amazon EC2.
We've also had a continuous discussion in Fedora about the fire hose of updates during a release's lifetime. Bringing that under control is important for real-world use, but we don't want to keep users from getting new features quickly or prevent developers and packagers from getting their code out. I'm very much in support of the idea of bundling non-security updates into monthly sets which can be tested and installed in a more controlled way. I also strongly prefer to see development targeted at Rawhide and future releases, with a focus on stability for the thirteen months of a release's maintenance cycle. Or, beyond that, a focus on stability for the first twelve months and a focus on smooth upgrades for the final one.
LWN: What compelling features does Fedora bring to the cloud setting now? What kind of things do you think need to be done to make Fedora more interesting in this area?
But, beyond software and beyond technology, we bring an important set of values and a vision for a free, open, and collaborative future which is just as vital in a cloud computing future as it is in the older local-computing model.
What needs to be done to make Fedora more interesting in the cloud setting? In addition to what I've already talked about, I think most crucially we need to increase community involvement and build up our user and contributor base. That's part of my job, too, and I want to help Fedora respond to what the community needs. I'm working hard to listen, discuss, read, and absorb as much as possible. We need input from everyone who wants to make Fedora better, and who is interested in extending the benefits of free and open source software to the cloud.
LWN: "Cloud computing" often seems like a return to centralized computing with very little end-user freedom or control. That vision seems mildly incompatible with the first of the posted Fedora "Foundations." How can the two be reconciled so that Fedora brings more freedom to cloud computing?
But, at the user level, when we're talking about software as a service and about protections for privacy with remote data, it's a legitimate worry. Fedora needs to offer better alternatives: connections to services which put privacy and freedom first, the ability to easily stand up cloud services under your own control, and, finally, a way to work which doesn't have to be cloud-dependent.
LWN: What sort of tools and/or superpowers has Red Hat given you to get this job done?
Seriously, though, the primary tool is connections with people – in the Fedora Project, at Red Hat, and in the open cloud community in general. I'm very accessible by email, including in the Fedora Cloud SIG and development mailing lists, and I'm trying to get my 1990s-era IRC habits back up to snuff (user mattdm on freenode). I'll also be visiting a lot of conferences, and I hope to see anyone who has read this far, and those of you I can't meet in person I hope to talk with in other ways.
If you're interested in making Fedora better through cloud computing, or in making the cloud better through Fedora, please join us in the Cloud SIG. There's no experience necessary and no formal join process — just jump into the mailing list and we'll get started.
Many thanks to Matthew for taking the time to answer our questions in such detail. We are most interested to see where this effort will go.
So when someone emails or approaches you with something they’re excited about, please reply thinking “What can I do to help?” Often I limit my commitment to an encouraging and thoughtful response: a perfectly acceptable minimum. You might want to go further and offer pointers or advice, but take care to fan that delicate flutter of enthusiasm without extinguishing it. Other forces will usually take care of that soon enough, but let it not be you.
Newsletters and articles of interest
Page editor: Rebecca Sobol
Ian Romanick works on Mesa, an open-source implementation of the OpenGL specification. His presentation on the final day of the 2012 X.Org Developers' Conference looked at what he hoped would be the future of the OpenGL interfaces on Linux. His talk was broken into three broad areas: the current status of the OpenGL interfaces, where they should go in the future, and how to get to that future.
The current OpenGL ABI was defined in 2000. It consists of a number of pieces. The first of these is libGL, which implements three components: OpenGL 1.2, GLX up to version 1.2 (the current version of GLX is 1.4), and the ARB multi-texture extensions. Ian highlighted that libGL on its own is not sufficient for any useful applications these days. The remaining pieces of the OpenGL ABI are libEGL, and two separate libraries, libGLES_CM and libGLESv2, for versions 1.1 and 2.0 of OpenGL ES.
There are many problems with the current situation. Applications that want to use graphics components beyond OpenGL 1.2 have to "jump through hoops." It's even more work to use GLES with GLX, or to use desktop OpenGL with EGL. The implementation of indirect rendering, a feature that allows OpenGL commands to be encapsulated in the X protocol stream and sent over the wire to the X server, is "completely fail": it performs poorly and supports OpenGL up to only version 1.4—or 1.5 with a lot of effort. The specification requires indirect rendering to be supported, but the number of legitimate use cases is quite small. And the presence of this rarely used feature sometimes creates problems when applications accidentally trigger indirect rendering and force OpenGL back to version 1.4, leading to user "rage" and bug reports.
Ian then went through the proposed solution. The first step is to split libGL, so that the OpenGL and GLX components are separated into different libraries, called (say) libOpenGL and libGLX. Andy Ritger has sent out a detailed proposal to the mesa-dev mailing list describing how the split could be accomplished. Splitting the libraries will allow applications to mix and match components as needed, so that, for example, GLES and GLX can be easily used together by linking with the right libraries. Using both OpenGL and EGL together would become similarly straightforward. To maintain backward compatibility for old binaries that look for libGL, it would still be necessary to provide a legacy version of libGL that glues libOpenGL and libGLX together.
Among the problems to be solved during the split are how to version the libOpenGL library and "get away from at least some of the GetProcAddress madness." That "madness" exists because the current ABI forces some applications to make calls to various "GetProcAddress" APIs (similar in concept to the GNU dynamic linker's dlsym() API) in order to obtain the addresses of multiple functions in the various libraries that constitute the OpenGL ABI. How the libOpenGL library would be versioned is an open question. Ian noted that the possibilities included ELF library versioning or embedding the version number in the library name, as is done with GLES. He also speculated about whether it would be possible to bump up the minimum OpenGL version supported by the ABI. The current implementation is required to support OpenGL versions as far back as 1.2. However, OpenGL 1.2 is now so old that it is "useless", though Ian still sees occasional bug reports for version 1.3.
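The "GetProcAddress madness" is, at bottom, runtime symbol lookup: instead of calling GL functions directly, the application asks the library for a function pointer by name and casts it to the right type. As a rough illustration of that dlsym()-style mechanism — using libm as a stand-in, since an OpenGL stack may not be installed — the same pattern can be sketched in Python with ctypes:

```python
import ctypes
import ctypes.util

# Load a shared library at runtime, analogous to the way a GL client
# ends up talking to whatever libGL the loader finds.  libm stands in
# for libGL here so the sketch runs anywhere glibc does.
libname = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libname)

# Look up an entry point by name and declare its signature -- the
# moral equivalent of calling glXGetProcAddress() and casting the
# returned void pointer to the correct function type.
cos = libm.cos
cos.restype = ctypes.c_double
cos.argtypes = [ctypes.c_double]

print(cos(0.0))  # -> 1.0
```

Doing this once is harmless; the "madness" Ian describes comes from having to repeat it for dozens of entry points, across several libraries, before any modern GL code can run.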
Once the library is split, Ian would like to see GLX deprecated. In addition to the problems caused by indirect rendering, adding new GLX extensions is painful, because support must be added in both the client and the server. This can create problems in the cases where support is not added simultaneously on both sides: the client may end up sending unsupported protocol requests to the server and "they die in a fire." One recent fire-starter was the GLX_ARB_create_context feature: support appeared in the X server only in September, more than a year after client-side support was added to GLX. By contrast, EGL does not have this problem because support needs to be added only on the client side. In other words, getting rid of GLX will allow new features to be shipped to users much more quickly.
A prerequisite for deprecating GLX is to have Linux distributors ship EGL by default. Most distributions do provide EGL, but Ian supposed that it is not generally included in the default install. However, Martin Gräßlin said that KDE started optionally depending on EGL about two years ago, so it is now part of the default install in most distributions. Later, Ian noted that encouraging the move to EGL may require the creation of a GLX-to-EGL porting guide; while there is independent documentation for both GLX and EGL, there seems to be none that explains how to port code from one to the other. A lot of the required source code changes can be accomplished with some simple scripting, but there are also a few deeper semantic differences as well as a few GLX features that don't have direct counterparts in EGL.
Another important step is to make OpenGL ES part of the OpenGL ABI. Bart Massey's XDC2012 presentation bemoaned the fact that developers are not making applications for X. Ian said that the reason they are not is that they're too busy making applications for mobile platforms. So, by enabling the OpenGL ES ABI that every developer uses on mobile platforms, it becomes possible for developers to use X as a test bed for mobile applications; it also becomes possible to port applications from Android and iOS to X.
One final step would be to update the loader/driver interface. This interface defines the way that libGL or libEGL talks to the client-side driver that it loads, and the way that the GLX module in the X server talks to the driver to load and route GL calls to it. This will probably be the hardest step, and it may take some time to resolve the details. As a side note, Ian pointed out that if indirect rendering is dropped, it will probably make the task quite a bit easier, because it won't be necessary to support the loader/driver interface inside the X server.
Ian's presentation was followed by some audience discussion of various topics. There were some questions about compatibility with old applications. Ian thinks that compatibility requirements mean that it will probably be necessary to ship a legacy version of libGL for the indefinite future. There was some discussion on how to handle multiple versions of libOpenGL in the future. Some audience members seemed unclear on what options were available, but others were confident that ELF symbol versioning, as used in the GNU C library, would be sufficient. Later, Chad Versace expressed concerns about ensuring that any proposed solution also worked when using the Clang compiler and the gold linker. Ian noted that there will need to be some more investigation of the requirements and build environments before any final decisions are made.
Bart Massey expressed concern that indirect rendering seemed to be going away with no replacement in sight. He noted that he'd had students who had been happy users of indirect rendering for use in certain environments with limited hardware. Ian suggested that a VNC "send the pixels across the wire" type of solution might be the way to go. Later, Eric Anholt suggested that the loader could detect that it is communicating over the wire with the X server, open the driver as normal, and then transmit images over the wire with PutImage, with the application being more or less unaware of this happening.
There are still many details to be resolved regarding the proposed reworking of the OpenGL ABI. However, there seemed to be near-unanimous agreement that the proposal described by Ian was the right direction, and it seems likely that, once some design details have been resolved, the work will commence quite soon.
The X.Org wiki has a pointer to the video of this presentation.
The Tizen project has unveiled the first alpha on the road to its next major release, 2.0. The 2.0 alpha includes both source for Tizen itself and a new build of the SDK. Additions include HTML5 API coverage and a "platform SDK" based on OBS. It is still an alpha, however, and the announcement notes that "there are additional components that we plan to add in the coming weeks, and we will continue to fix bugs and add additional features."
The Blender Foundation has released "Tears of Steel," the short film produced during its Mango open movie project. As with previous open movie efforts, the production process was used to develop and implement new functionality for Blender — this iteration focused on the visual effects pipeline for compositing with live action. The short is available via YouTube or direct download now (as are source files); DVDs are still to come.
Newsletters and articles
Page editor: Nathan Willis
Brief items
"During the last 12 months, the foundation was legally established in Berlin, the Board of Directors and the Membership Committee were elected by TDF members, where membership is based on meritocracy and not on invitation, Intel became a supporter, and LibreOffice 3.5 and 3.6 families were announced. In addition, TDF has shown the prototypes of a cloud and a tablet version of LibreOffice, which will be available sometime in late 2013 or early 2014."
The Foundation has also started a fundraising campaign. "'So far, volunteers have provided most of the work necessary to sustain the project, but after two years it is mandatory to start thinking really big', says Italo Vignoli, the dean of the Board of Directors. 'We had a dream, and now that thousands around the world made that dream come true we want to get to the major league of software development and advocacy. By donating during the fourth quarter of 2012, donors will define the budget we have available for 2013'."
Articles of interest
Education and Certification
"Linux Essentials was initially released as a pilot program in Europe, the Middle East and Africa but due to popular demand has now expanded to North America."
Upcoming Events
"We're thrilled with the exciting lineup of workshops, hands-on tutorials, and talks about real-world uses of Python for data analysis."
"Several Apache OpenOffice (incubating) contributors will give talks on different topics around Apache OpenOffice and its ecosystem. They will be available for further discussions. General community sessions are also part of the schedule."
"The Collaboration Conference will give CloudStack users an opportunity to learn about improvements in the upcoming Apache CloudStack 4.0 release, and best practices for deploying and managing CloudStack. Users will also be able to attend sessions on projects and tools that work well with CloudStack for configuration management, storage, monitoring, creating a Platform-as-a-Service (PaaS) on top of CloudStack, and more."
"The 2013 conference builds on a long tradition of sharing technical know-how between seasoned open source gurus and newcomers to the community. Since its inception in 1999, the conference has moved around Australia and New Zealand, most recently to Ballarat, Victoria, and Brisbane, Queensland. This year, the conference is in Canberra in celebration of our national capital's centenary year. The conference was last hosted in Canberra in 2005, and it has grown significantly since then, bringing some unique challenges to the organising team." LCA takes place January 28-February 2, 2013. Garrett's keynote is entitled “The Secure Boot Journey” and details his work over the past year – technical, political and diplomatic – in getting Linux to run on UEFI Secure Boot systems. "He will outline the scenario where Linux users could not only be assured that they can run Linux out of the box in UEFI-based systems, but also how Secure Boot can be used to enhance security."
Velocity Europe – London, England
PyCon South Africa 2012 – Cape Town, South Africa
GNOME Boston Summit 2012 – Cambridge, MA, USA
Korea Linux Forum 2012 – Seoul, South Korea
Open Source Developer's Conference / France – Paris, France
October 13: 2012 Columbus Code Camp – Columbus, OH, USA
Debian BSP in Alcester – Alcester, Warwickshire, UK
PyCon Ireland 2012 – Dublin, Ireland
Debian Bug Squashing Party in Utrecht – Utrecht, Netherlands
FUDCon:Paris 2012 – Paris, France
OpenStack Summit – San Diego, CA, USA
Linux Driver Verification Workshop – Amirandes, Heraklion, Crete
LibreOffice Conference – Berlin, Germany
MonkeySpace – Boston, MA, USA
14th Real Time Linux Workshop – Chapel Hill, NC, USA
Gentoo miniconf – Prague, Czech Republic
PyCon Ukraine 2012 – Kyiv, Ukraine
PyCarolinas 2012 – Chapel Hill, NC, USA
LinuxDays – Prague, Czech Republic
openSUSE Conference 2012 – Prague, Czech Republic
PyCon Finland 2012 – Espoo, Finland
PostgreSQL Conference Europe – Prague, Czech Republic
Droidcon London – London, UK
Firebird Conference 2012 – Luxembourg, Luxembourg
PyData NYC 2012 – New York City, NY, USA
Technical Dutch Open Source Event – Eindhoven, Netherlands
October 27: Linux Day 2012 – Hundreds of cities, Italy
October 27: Central PA Open Source Conference – Harrisburg, PA, USA
October 27: pyArkansas 2012 – Conway, AR, USA
Linaro Connect – Copenhagen, Denmark
Ubuntu Developer Summit - R – Copenhagen, Denmark
PyCon DE 2012 – Leipzig, Germany
October 30: Ubuntu Enterprise Summit – Copenhagen, Denmark
OpenFest 2012 – Sofia, Bulgaria
MeetBSD California 2012 – Sunnyvale, California, USA
Apache OpenOffice Conference-Within-a-Conference – Sinsheim, Germany
Embedded Linux Conference Europe – Barcelona, Spain
LinuxCon Europe – Barcelona, Spain
ApacheCon Europe 2012 – Sinsheim, Germany
KVM Forum and oVirt Workshop Europe 2012 – Barcelona, Spain
LLVM Developers' Meeting – San Jose, CA, USA
November 8: NLUUG Fall Conference 2012 – ReeHorst in Ede, Netherlands
Free Society Conference and Nordic Summit – Göteborg, Sweden
Mozilla Festival – London, England
Python Conference - Canada – Toronto, ON, Canada
SC12 – Salt Lake City, UT, USA
Qt Developers Days – Berlin, Germany
19th Annual Tcl/Tk Conference – Chicago, IL, USA
PyCon Argentina 2012 – Buenos Aires, Argentina
November 16: PyHPC 2012 – Salt Lake City, UT, USA
Linux Color Management Hackfest 2012 – Brno, Czech Republic
8th Brazilian Python Conference – Rio de Janeiro, Brazil
Mini Debian Conference in Paris – Paris, France
November 24: London Perl Workshop 2012 – London, UK
Computer Art Congress 3 – Paris, France
Lua Workshop 2012 – Reston, VA, USA
Open Hard- and Software Workshop 2012 – Garching bei München, Germany
CloudStack Collaboration Conference – Las Vegas, NV, USA
Konferensi BlankOn #4 – Bogor, Indonesia
December 2: Foswiki Association General Assembly – online and Dublin, Ireland
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds