LWN.net Weekly Edition for September 27, 2012
LinuxCon: The tragedy of the commons gatekeepers
During the 2012 LinuxCon North America conference, Richard Fontana, legal counsel at Red Hat, began a philosophical talk with what seemed to be an equally philosophical question: how do we decide what is free and open source software (FOSS), or rather, how do the organizations that have taken on this task make these decisions? However, he immediately pointed out that this is in fact a practical problem: if we can't define FOSS, then it becomes difficult to reason and make decisions about it.
Many users and organizations need to make practical decisions based on the definition of FOSS. Individual users may have an ideological preference for FOSS. Software projects may need to know the status of software as FOSS for legal or policy reasons. Some of those projects may want to exclude non-free software; some Linux distributions may want to confine non-free software to a separate repository. Many governments nowadays have software procurement policies that are based on free software. Acknowledging the presence of Bradley Kuhn, executive director of the Software Freedom Conservancy (SFC), in the audience, Richard noted that the SFC requires the projects that it supports to be under a free software license. Some project-hosting web sites likewise have hosting policies predicated on a definition of FOSS. (Examples of such policies include those of SourceForge and Oregon State University's Open Source Lab.) Finally, some corporations have policies governing the use of open source software. All of these organizations care in a quite practical way about the definition of FOSS.
Deferring to authority
Richard didn't explicitly explain the origin of his talk title, but with a little reflection it became clear. The "commons" is of course the body of software that the community considers to be free. "Gatekeeping" is the process of admitting software to the category "free". What then is the "tragedy"? For Richard, it is the extent to which a freedom-loving community has surrendered the decision about what constitutes FOSS; instead, we commonly defer to authorities who make the decision for us. When people do consider the question of what is free software, they often say "the OSI [Open Source Initiative] has this figured out". Or they take the same approach with the FSF (Free Software Foundation).
Sometimes, people or organizations do consider this question more deeply, but they ultimately arrive at a justification to defer to an authority. Richard mentioned the example of the UK-based OSS Watch. OSS Watch recognizes that there are many definitions of open source, but for the purposes of their mission to advocate open source software in higher education, they've made the decision to accept the set of OSI-certified licenses as their definition. OSS Watch's justification for deferring to the OSI is that it is a quick way to accept that the code is open and "accepted by a large community"; but, as Richard put it, "if you've ever seen the OSI license list, you'll realize that is ridiculous." On the other hand, Fedora rejects the OSI as an authority for the definition of free software, and instead adopts the FSF's definition, on the basis that the FSF has the competence to make this definition. (Richard somewhat humorously expressed the Fedora approach as "What would RMS [Richard Stallman] do?")
Three organizations have tried to define FOSS: the FSF, the OSI, and the Debian project. These organizations have taken both a legislative and a judicial role, and Richard observed that this raises a separation-of-powers issue. He quoted Bradley's statement that "the best outcome for the community is for the logical conjunction of the OSI's list and the FSF's list to be considered the accepted list of licenses". The point here is that even though Bradley often disagrees with the OSI, he clearly sees that it's in the best interests of the community that no single group acts as legislator and judge when it comes to defining FOSS. Richard then turned to examining each of these three authorities, looking at their history and processes, and offering some criticism.
The Free Software Foundation (FSF)
The FSF has had a definition of software freedom as far back as 1986. By 1999 that definition had evolved into the well-known statement of the four software freedoms:
- The freedom to run the program, for any purpose.
- The freedom to study how the program works, and change it so it does your computing as you wish.
- The freedom to redistribute copies so you can help your neighbor.
- The freedom to distribute copies of your modified versions to others.
Richard pointed out that this is a very compact definition of software freedom that covers many bases. It includes a legal definition (explaining at a very high level what permissions the software gives the user), technical criteria (source code must be available), policy justifications (freedom is important because it's important to be able to share), and "autonomousness" (it's important to control your own computing).
Since 1999, the FSF has maintained a list of free and non-free software licenses, with (often brief) rationales for the categorization of the licenses. Richard noted that the license list is accompanied by an evolving explanatory text that is rather useful. The FSF even gives a rule of construction clarifying that it applies its criteria expansively when deciding whether a license is free.
Richard then outlined some criticisms of the FSF, but emphasized that they were all mild. There seems to be a lot of inconsistency in the FSF's decisions about what is or is not a free software license. He likened the issue to Anglo-Saxon judicial systems, where the rationale for reaching a decision derives not just from the law but also from past legal decisions; an analogous process seems to happen in the FSF's categorization of software licenses. Furthermore, sometimes the rationale for decisions about particular licenses is too limited to be useful. Here, he mentioned the Perl Artistic License, version 1, which the FSF categorizes as non-free with an explanation that is humorous, but not very helpful.
Another criticism that Richard raised is that the FSF is sometimes too formalist in its analysis of licenses, ignoring factors that are external to the license. Here, he mentioned the example of the Pine license. The Pine email client, developed at the University of Washington, had a BSD-style license for many years. But, at a certain point, and contrary to widespread understanding of such licenses, the university claimed that the license did not give permission to redistribute modified versions. The FSF saw this as a textual problem, hinging on how particular words should be interpreted. But, the real problem was that "the University of Washington was being a [legal] bully and was giving an unreasonable interpretation of [the] license".
Richard's final criticism of the FSF was that there was an appearance of bias. The FSF has multiple roles—steward of the GPL, maintainer of the free software definition, sponsor of the GNU project, and adjudicator on licenses—that can potentially conflict. "Could you imagine the FSF ever saying that a version of GPL is a non-free license?" Here, he gave an example relating to the GPLv2. Section 8 of that license allows the licensor to impose geographic restrictions on distribution for patent reasons. (The GPLv3 does not have such a clause.) In Richard's opinion, invoking that clause today would make the GPLv2 non-free (here, the implication was, non-free according to the FSF's own definition), "but I can't conceive of the FSF reaching that view".
Debian
Richard spent some time talking about Debian, beginning with some details of the Debian Social Contract (DSC). The DSC was written in 1997 by Bruce Perens. The Debian Free Software Guidelines (DFSG) form part of the DSC. The DFSG divides the software that Debian distributes into free and non-free parts, and this distinction has taken on a somewhat ideological dimension in the Debian community today. Originally, however, the main focus was on being a high-quality noncommercial distribution modeled on the Linux kernel project. One of the intentions was to be the upstream for successful commercial redistributors, and the reason for dividing software packages into "free" and "non-free" was to signal to downstreams that there might be a problem with some software; in other words, the DFSG is a packaging policy. Over time, the Debian perspective became more ideological, as Bruce Perens increasingly stressed the free software ideal. By now, the DFSG has taken on a life of its own, becoming something of a constitutional document for the Debian project.
Richard talked a bit about the process of how software comes to be defined as free in Debian. Essentially, this is a packaging decision made by a group of elite packagers—the FTP Masters—who, guided by the DFSG, determine whether software packages end up in "main" or "non-free". He criticized a few aspects of this process. The FTP Masters typically don't provide rationales for their licensing decisions (the rationale for the AGPLv3 was an exception that he noted approvingly). And though there is a process for review of their decisions, the FTP Masters have something approaching absolute power in these matters (but he emphasized that this was not much different from the situation with the FSF).
The Open Source Initiative (OSI)
The OSI's Open Source Definition (OSD) was crafted in 1998 by Eric Raymond working with Bruce Perens, using the DFSG as a basis. Richard characterized this as a somewhat strange approach, because the DFSG is very specific to the problems that a 1990s noncommercial distribution would face if it wanted to classify the licenses of packaged software in order to assist downstream commercial redistributors. By contrast, the OSD was intended to be a general definition of open source. Some parts of the reused text work, but some do not. For example, there is a clause in the OSD that refers to "distribution on [a] medium" that makes sense in the context of Debian packaging, but is out of place in what is supposed to be a general definition of open source. These problems probably spring from the fact that the authors wanted to draft the OSD quickly, and there was something near at hand in the form of the DFSG. Notwithstanding some oddities inherited from the DFSG, the OSD did improve some things, such as the definition of "source code".
Richard described OSI's license-certification process positively, noting first of all that it has a greater degree of transparency than the FSF and Debian processes. There is discussion on a public mailing list, and though the OSI board makes the final certification decision, there is evidence that they do take serious account of the mailing list discussions when making their decisions. He did, however, express doubts that the board pays much attention to the OSD, because "as I've said, it's a very strange document".
The OSI has faced a number of problems in its history, Richard said. Early on, it was accused of worsening the problem of license proliferation (which was ironic, as OSI had been one of the first groups to call attention to the problem). This was a consequence of the OSI's attempts to encourage businesses to use open source. There was indeed a lot of enthusiasm from some businesses to do so, but several of them wanted to do what Netscape had already done: write their own license. Several of these licenses were approved by the OSI, and the decisions in some cases seem to have been hasty.
In 2007, the OSI faced a strong challenge to its authority in the form of what Richard called the "badgeware crisis". A number of companies were using a modified version of the Mozilla Public License that added a badgeware clause. This clause allowed licensors to require licensees to prominently display logos on program start-up. Although the licenses were not approved by the OSI, these companies posed a challenge to the OSI by calling their licenses "open source." (In the end, the OSI even approved a badgeware license.) "As dastardly as these companies were, I sort of admire them for challenging the idea that they should just defer to OSI as being authoritative."
Richard sees two problems that remain with the OSI to this day. One of these is OSI's categorization of certain licenses as "popular and widely used or with strong communities". In part, the goal of this categorization is to address the proliferation issue, by recommending a subset of the OSI-approved licenses. The membership of this category is somewhat arbitrary, and the fact that the licenses of several OSI board members are on the list has led to suggestions of cronyism and the criticism that the list rewards entrenched interests. A further problem with the idea that people should use "popular" licenses is that it discourages experimentation with new licenses, and "eventually we will need new licenses".
The second problem that Richard noted was inconsistency in the way that license approvals are considered. He cited two contrasting examples. In 2009, Carlo Piana submitted the MXM license on behalf of a client. The license included a rather limited patent grant, and because of that, it met strong opposition in the resulting mailing list discussions. Later, Creative Commons submitted the CC0 license. That license included a clause saying no patent rights were granted. Despite this, it initially received a positive response in mailing list discussions. It was only when Richard started raising some questions about the inconsistency that the tide started to turn against the CC0 license. Why did the two licenses receive such different initial responses? Carlo Piana suggested that it was the identity of the entity submitting the license that made the difference: Creative Commons was viewed positively, but the organization behind MXM was at best viewed neutrally.
Are software licenses enough to define FOSS?
Going off on a related tangent, Richard considered the rise of an idea that he termed "license insufficiency"—the idea that licenses alone are not sufficient to define open source. This idea is often posed as a suggestion that the definition of open source should be expanded to include normative statements about a project's community and development model. In other words, it's not enough to have a FOSS license and availability of source code. One must consider other questions as well. Is there a public code repository? Is the development process transparent? Is it possible to submit a patch? Is the project diverse? Does it use a license whereby commercial entities are contributing patent licenses? In this context he mentioned Andy Oliver's "patch test" for defining open source. (Simon Phipps, who is now president of the OSI, has also written about some of these ideas, using the label "open-by-rule".) Richard said, "I don't agree with all of that, but I think it's an interesting idea".
Conclusions
Richard concluded his talk with a few observations and recommendations. The first of these is that the historical tendency in the community to defer to institutions for the definition of FOSS is a problem, because those institutions have issues of accountability, bias, and transparency. People should be ready to question the authority of these institutions.
He observed that the FSF could learn from OSI's participatory approach to the license approval process. Conversely, the OSI should drop the Open Source Definition in favor of something more like the FSF's Free Software Definition, which is far more appropriate than a definition based on the Debian Free Software Guidelines.
The FSF does the best job of providing rationale for its licensing decisions, but all three of the institutions that he talked about could do better at this.
Richard thought that the idea of defining FOSS based on open development criteria ("license insufficiency" above) is based on correct intuitions. We need to look beyond licenses alone when defining software freedom.
Finally, Richard said that software projects can work together in developing and policing definitions of FOSS. He has seen distributors working together to share opinions on how they view licenses. Distributors are also in a unique role for policing software freedom, since they can sometimes pressure upstream projects to change their licenses. There is potential for this sort of collaborative approach to be generalized to the task of defining and policing the definition of FOSS.
[Michael would like to thank the Linux Foundation for supporting his travel to San Diego for LinuxCon.]
[2013-01-09 update: a recording of Richard's talk can be found on the Free as in Freedom web site.]
ALS: First signs of actual code
Left unchecked, talks about supply chains and long-term industry shifts could easily dominate a business-focused event like the Automotive Linux Summit, but they were balanced out in the 2012 schedule by several sessions that dealt with actual code. Leading the charge at the September 19-20 event in Gaydon, UK was the GENIVI Alliance, which announced three new automotive software projects that will be accessible to those outside GENIVI's member companies. There were also presentations from Yocto and Intel, along with some advice on where automotive Linux still needs contributors. In most cases, the actual code remains just out of reach, but it is still progress.
GENIVI announcements
GENIVI, of course, is a collaboration of more than 150 companies, including automakers, equipment suppliers, silicon merchants, and software consultancies. Its purpose is to hash out a common Linux-based platform for in-vehicle infotainment (IVI) systems, which the various members can build products on with a minimum of duplicated effort. But GENIVI operates behind closed doors; apart from the block diagrams found in slides and blog posts there has not historically been any access to the actual specification for those people not working with GENIVI itself. Moreover, GENIVI has an atypical approach to being an "open source platform": it is committed to using software available under open source licenses, but it does not make that software available to non-members.
The lack of a public specification document and the unavailability of the software have real implications for the Linux community, because GENIVI has long maintained that it would draw upon existing projects wherever possible — but new work would also be necessary to fill in gaps in the stack. At ALS, Pavel Konopelko estimated that the GENIVI platform would consist of 80% existing "community" code, 15% community code extended by GENIVI to meet specific requirements, and 5% purely original work. Some of that work has already seen the light of day, such as the AF_BUS patches, but several other pieces have remained absent.
On the first day of ALS, though, GENIVI announced [PDF] three specific projects that it will open up for public consumption. They are an IVI audio management layer, an IVI screen layer manager, and a logging and tracing framework for use with diagnostic messages. The three projects are set to be hosted on Linux Foundation infrastructure, although so far the sites and code repositories have not appeared. There is a description of each of the components available now on the GENIVI web site, which sheds a bit more light on their scope — although the explanations are not always crystal clear.
The audio manager, for example, implements an API for routing audio that is independent of the hardware and software routing frameworks underneath. That would appear to place it above PulseAudio in the typical Linux stack, while providing the same API if a hardware audio routing mechanism is available instead. The GENIVI specification does not make PulseAudio mandatory; it only mandates (as an "abstract component") that an audio router be provided. The audio-routing problem in IVI includes use cases not encountered in desktop setups, such as alarms (triggered by bumper proximity sensors, for example) that interrupt any other audio streams, and routing sound from a single media source to multiple rear-seat entertainment (RSE) units. The hardware-or-software approach described for the audio manager suggests that there are GENIVI members intent on producing vehicles where such audio routing is handled by onboard hardware.
Similarly, the screen layer manager is described as handling hardware-accelerated compositing, but by implementing an API that can deal both with software video sources like applications and with direct hardware sources like reverse-view cameras. The description of this component also observes that existing IVI implementations tend to build such layer management functionality directly into their GUI front-end (which, in IVI circles, is usually referred to as a Human-Machine Interface or HMI). Since HMI is generally reserved as one of the vendor-specific "differentiating components" in a product, a standard screen layer manager will presumably reduce duplication.
The last component of the three is the Diagnostic Log and Trace (DLT) project, which is described as an abstraction layer for several different diagnostic logging protocols. It is said to support system- and user-level logging, predefined control messages, and callbacks, and to connect to syslog or other existing logging systems.
At this stage, all three projects are (so to speak) "announcement-ware," but assuming that the code and infrastructure follows, they represent a major step forward for GENIVI. If one looks at the GENIVI platform block diagram (for example, the version on slide 9 of Konopelko's presentation [PDF]), there are quite a few components still designated placeholders or abstract requirements. It is hard to see how the missing pieces fit into the 80-15-5 percentages cited, but at least the availability of some GENIVI-authored components should help bring the whole picture into clearer view for those not part of the GENIVI Alliance.
Yocto, Intel, and others
There are indirect ways in which one can explore a GENIVI system already, however, by downloading some member company's GENIVI-compliant operating system. There are a few free options, such as Ubuntu's IVI remix and Tizen IVI. Holger Behrens from Wind River presented another possibility, the Yocto project's meta-ivi layer. Meta-ivi is a Yocto component that will pull in dependencies for GENIVI compliance.
It is designed to be used with Poky, the Yocto build environment, and pulls in the mandatory components of the latest GENIVI reference releases, plus the meta-systemd layer, a separate Yocto component that adds systemd. The current release of meta-ivi dates from May 16, 2012, and is based on the GENIVI 2.0 specification and Yocto 1.2 (an update is due in mid-October to bump the code up to Yocto 1.3 and GENIVI 3.0). It builds and configures the GENIVI and systemd layers, plus a few standard components to fill in GENIVI's optional components (e.g., PulseAudio and GStreamer).
Currently, building a meta-ivi system requires login credentials for GENIVI, because it pulls from the alliance's Git repository. Behrens said repeatedly that this requirement is likely to go away as GENIVI opens up access to outsiders, but for the moment there is no way around it. A bigger limitation, he said, was that currently meta-ivi is designed only for ARM's Versatile Express A9 boards. This is strictly a developer-power issue, he added, imploring interested parties to contribute with "board support, board support, and board support".
Luckily, there were some software options available today, as well. Intel's Kevron Rees presented his work on the Automotive Message Broker (AMB), a vehicle communication abstraction layer. The project is an extension of his previous effort, Nobdy. It provides a source/sink model for applications to connect to vehicle data sources (from OBD-II diagnostic messages to sensor output) without worrying about the mechanics of the underlying data source. It allows multiple sinks to subscribe to messages from the same source, and the message routing engine (which Rees said was modeled on GStreamer) allows for intermediate nodes that could perform transformations on the data, such as filtering or message throttling.
The current version of AMB supports GPS, OBD-II, and CAN bus sources (the latter of which he demonstrated using a video gaming "steering wheel" controller). Only two sinks are implemented at the moment, D-Bus and Web Sockets. The D-Bus output, he explained, was an obvious choice because it provides a property and signaling system for free, and allows Smack-based security policies. The lack of security in Nobdy was one of the principal reasons he decided to undertake a rewrite. The demonstration was short but entertaining; it utilized a dashboard display application called GhostCluster to report mock speed and direction information from the game controller, and allowed access to faux rear-view cameras, which were implemented with webcams.
Jeremiah Foster of Pelagicore also discussed the paucity of software available to interested developers in a session examining progress between the automotive industry and the open source community. Foster is the baseline integration team leader at GENIVI, but as he explained, he spent quite some time beforehand working on the community side as the Maemo "debmaster." The talk included several points about how the automotive industry and traditional open source differed, such as the long-term partnerships in place between automakers and tier-one suppliers. Some of the disconnects are changing already, he said, such as the automotive industry's understanding of how to work with software licenses, but others remain unclear, such as the lines of legal responsibility in cases where software contributes to an accident.
A key point, he said, is that automakers do recognize that rewriting software stacks for every new product is incredibly wasteful, and there are opportunities for developers and agile software companies to do big business during the transition. He then outlined a number of areas where interested developers could work on automotive-related problems.
The first was fast boot, which is required by regulations (such as requiring that a rear-view camera start showing live video to the display less than two seconds after startup). GENIVI has adopted systemd to tackle this requirement, he said, though it is not yet complete. Another systemd-derived feature is "last user context" (LUC), in which a car remembers and restores environmental settings for multiple drivers (such as audio and temperature preferences, plus physical options like mirror and seat adjustment). LUC remains a subject where considerable work is required.
There are also several standard Linux components that automakers and software vendors frequently replace with proprietary components because the open source versions are incomplete, he said. These include BlueZ, ConnMan, and Ofono. All three are missing features and require testing in more configurations. Similarly, IVI systems require some mechanism for data persistence, such as remembering recently-accessed files or playlists. Existing solutions like SQLite have not proven popular with IVI vendors, who would be happy to see additional work.
Finally, he said, there remains a lot of work to be done porting and packaging existing automotive software for the distributions used by developers. The existing IVI distributions (such as Ubuntu and Tizen's IVI flavors) tend to start with a minimalist system and add automotive-specific packages, but this results in a system that developers cannot use for everyday work. The majority of Linux developers, he said, would rather port new software than change distributions. Consequently, bringing the IVI software to existing distributions will attract more developers than will continuing to roll out IVI-only embedded distributions. Bringing automotive packages to desktop distributions could also help the community build its own answer to the pieces that commercial vendors prefer to keep proprietary, like HMI.
Although it was good to hear that GENIVI is opening up more of its code, the three projects announced are just a beginning. GENIVI and other automotive Linux players do seem to recognize that there is a void to be bridged between the industry and the community, though. If the alliance does indeed make its Git repositories publicly accessible, that will break down a major barrier to entry for the potentially enormous talent pool of volunteer contributors.
[The author would like to thank the Linux Foundation for travel assistance to ALS.]
XDC2012: The X.Org Developers' Conference
The 2012 X.Org Developers' Conference took place September 19-21 in the charming Bavarian city of Nuremberg (Nürnberg), hosted at the headquarters of the Linux distributor SUSE.
The conference program page provides links to pages detailing the various sessions; in many cases, those pages contain links to slides and videos for the sessions. Simon Farnsworth took some rough notes from each session, and these have been placed on a "proceedings" page; that page also has links to videos of nearly all of the talks.
LWN has coverage of selected talks; these will be linked off this page as they appear.
- Status report from the X.Org Board
- Graphics stack security: what are the security problems in X11, and how can they be avoided in Wayland/Weston?
- Programming languages for X application development: to what extent is the choice of programming language important in terms of making it easier to build desktop applications?
- OpenGL futures: what are the plans for the OpenGL ABI?
[Photo: XDC2012 conference group photo]
[Photo: Kristian Høgsberg bending the laws of desktop graphics on Weston]
XDC2012: Status report from the X.Org Board
On the first day of the 2012 X.Org Developers' Conference, Bart Massey kicked off a short presentation from the Board of Directors of the X.Org Foundation, running through the current status of the foundation and its recent achievements. He began by noting that, with much assistance from the Software Freedom Law Center, the foundation has now achieved 501(c)(3) tax status as a US nonprofit. In addition, the foundation is now a member of the Open Invention Network (OIN). Although the foundation can't offer any patents to OIN (because it owns none), "we do have a lot of prior art". Much of what the X developers are doing is innovative and potentially patentable [by others], and "if you want that not to happen, you should talk to us and OIN".
X.Org did not have any Google Summer of Code (GSoC) projects approved this year, and Bart noted the need for a rethink about how to approach GSoC in the future. On the other hand, in the last year there were four successful projects (and one failed project) in X.Org's own similar "Endless Vacation of Code" (EVoC) program, and all of the successful EVoC students were funded to travel to Nuremberg for the conference. (A session on day one of the conference reviewed the status of the EVoC program, looking at the goals of the program and how its implementation could be improved; video of the session can be found here.)
In the two days immediately preceding the conference, there was a book sprint. This followed on from an earlier book sprint in March, which worked on the creation of a developer's guide that was to some extent client-side focused. The more recent sprint aimed to complete Stéphane Marchesin's Driver Development Guide. There are now 119 pages of documentation that is still rough and in need of editing, but a version should be on the wiki in a few days. He noted that one of the explicit points of adding more documentation was to attract new X developers by lowering the barriers to understanding the X system.
Bart noted that the foundation currently faces a number of challenges. The financial organization is better than it has been for a while, but the once-large budget surplus is now starting to run down, to the point where some real effort needs to be spent on fundraising. In a brief treasurer's report, Stuart Kreitman expanded on this point: at the current rate of spending (US$20k to US$30k per year), there's about three years' buffer. The old days when several large UNIX workstation vendors gave large donations are—along with those vendors—long gone. New funding sources will be needed, and X.Org may need to rely more on smaller donations.
Bart pointed out a number of other challenges that X faces. As with many projects, but perhaps especially notable because X is such a fundamental part of our day-to-day infrastructure, X needs more developers, and Bart emphasized the need for ideas on how to attract new developers. There remain some infrastructure problems to be resolved (notably, the X.Org web site was down a number of times in the lead-up to the conference). Then there is the whole "future of Wayland thing". Although the Board does not set technical directions, "it's clear that Wayland is part of the X world", and the question is how to support the transition to a potentially "Wayland world".
But, notwithstanding these and other challenges, Bart stressed that "I couldn't be more excited about what's happening", and certainly the level of interest and detail in the three days of presentations seemed to justify his excitement.
A pointer to a video that includes the status session can be found here.
Security
LSS: Kernel security subsystem reports
The morning of day two of this year's Linux Security Summit was filled with reports from various kernel security subsystem maintainers. Each spoke for 20 minutes or so, generally about progress in the last year, as well as plans for the future.
Crypto
Herbert Xu reviewed some of the changes that have come for the kernel crypto subsystem, starting with the new user-space API. Since cryptography can be done in user space, providing an API to do it in the kernel may seem a bit roundabout, but it is important so that user space can access hardware crypto accelerators. The API is targeted at crypto offload devices that were not accessible to user space before.
The interface is socket-based, so data can be sent to devices using write() or send(). For large amounts of data, splice() can be used for zero copy I/O. The API is "completely extensible". It doesn't currently handle asymmetric key cryptography, for example, but that could be easily added.
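As an illustration of how this socket-based API is used, here is a minimal sketch that computes a SHA-1 digest through the AF_ALG address family; it assumes a kernel built with the user-space hash interface enabled, and error checking is omitted for brevity:

    /* Minimal sketch: SHA-1 via the kernel crypto API's AF_ALG sockets.
     * Error checking omitted for brevity. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>

    int main(void)
    {
        struct sockaddr_alg sa = {
            .salg_family = AF_ALG,
            .salg_type   = "hash",    /* algorithm class */
            .salg_name   = "sha1",    /* specific algorithm */
        };
        const char msg[] = "hello, world";
        unsigned char digest[20];
        unsigned int i;

        int tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);  /* transform handle */
        bind(tfm, (struct sockaddr *)&sa, sizeof(sa));

        int op = accept(tfm, NULL, 0);      /* per-operation socket */
        write(op, msg, strlen(msg));        /* send() works too; splice() gives zero copy */
        read(op, digest, sizeof(digest));   /* the 20-byte SHA-1 digest comes back */

        for (i = 0; i < sizeof(digest); i++)
            printf("%02x", digest[i]);
        printf("\n");

        close(op);
        close(tfm);
        return 0;
    }

The same pattern, with salg_type set to "skcipher", applies to symmetric encryption, where a hardware accelerator can do the work while the application just moves data through the socket.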
There is also a new user-space control interface for configuring the kernel crypto algorithms. For example, there are multiple AES algorithms available that are optimized for different processors. The performance of the optimized versions may be 20-30 times better than the generic C implementation. The system can often figure out the right one to use, Xu said, but some variants are not easily chosen automatically, so there is a need for this interface.
Parallelizing the crypto algorithms using pcrypt is a case in point. In some scenarios, it may make sense to spread the crypto work around on different processors, but doing so can sometimes degrade performance. pcrypt was designed for the IPSec use case, but there needs to be an administrative interface to choose when it is used. That interface is netlink-based and allows users to select the priority of the algorithms that are used by the kernel.
Optimizations of crypto algorithms for various CPUs have also been added. The SHA-1 algorithm has been enhanced to use the SSSE3 instructions on x86 processors, and more AES-NI modes for x86 have been added. There is now SHA support on the VIA Nano processor as well. The arc4 cipher has added "block" cipher support, which means that it can be handed more than a single byte at a time (as was required before).
Support for new hardware has also been added, including picoXcell, CAAM, s5p-sss, and ux500. Those are all non-x86 crypto offload devices.
Finally, Xu noted that asymmetric key ciphers have finally been added to the kernel. He had wanted them for some time, but there were no in-kernel users. Now, "thanks to IMA and module signing", there are such users, so that code, along with hardware acceleration and a user-space interface, has been added.
AppArmor
The AppArmor access control mechanism has seen some incremental improvements over the last year, John Johansen reported. One focus has been on eliminating the out-of-tree patches to complete the AppArmor system. There are some "critical pieces" missing, particularly in the upstream version of AppArmor, he said.
Several things have landed in AppArmor, including some bug fixes and the aafs (AppArmor filesystem) introspection interface. The latter allows programs to examine the rules and policies that have been established in the system.
A larger set of changes has been made on the user-space side. The project has standardized on Python, so some tools got rewritten in that language, while others were ported to support Python 3. In addition, the policy language has been made more consistent, and some simple shortcuts have been added to make it easier to use.
The policy compiler has been improved as well, both in terms of memory usage and performance. There were some test policies that could not be compiled even on 256GB systems, but they can now be compiled on 16GB systems. The compiler runs two to four times faster and produces policies that are 30-50% smaller. Lastly, some basic LXC container integration has been added to AppArmor.
There are a number of things that are "close to landing", he said. The AppArmor mount rules, which govern the devices, filesystem types, mount points, and so on that are allowable for mounting, are being tested in Ubuntu right now. The implementation seems solid, but it would be nice to have a Linux Security Module (LSM) hook for pivot_root(). There are some "nasty things" that pivot_root() does with namespaces, and the LSM hook could help there.
The reader-writer locks used by AppArmor have been "finally" converted to use read-copy-update (RCU), and that will be pushed upstream. There are also some improvements to policy introspection, including adding a directory for each profile in a given namespace. The original introspection interface was procfs-style, but AppArmor has moved to a sysfs-style interface, which should be more acceptable.
The policy matching engine has been cleaned up and the performance has been improved. Some of that work has been in minimizing the size of the policies. A new policy templating tool has been created that will build a base policy as a starting point for administrators. There has also been work on a sandbox, similar to the SELinux sandbox, that can dynamically generate policies to create a chroot() or container-based sandbox with a nested X server to isolate processes. The last of the near-term changes is a way to mediate D-Bus access with AppArmor rules, which has been prototyped.
The final category of features that Johansen presented comprised those that are being worked on, but won't be merged soon. Converting the deterministic finite automata (DFA) used in the matching engine to an extended hybrid finite automata (eHFA) headed that list. An eHFA provides capabilities that DFAs don't have, including variable matching and back references. The latter is not something AppArmor is likely to use, but eHFAs do provide better compression and performance. Another matching engine enhancement is sharing state machines between profiles and domains, which will improve memory usage and performance.
Beyond that, there are plans to add a "learning mode", similar to SELinux's audit2allow, so that policies can be created from the actions of running programs. Adding more mediation is also being worked on, including handling environment variable filtering, inter-process communication (IPC), and networking. Internally labeling files and other objects, so that the matching engine does not need to run again for recently accessed objects, is also on the horizon.
Key management
In a short presentation, David Howells gave an update on the key management subsystem in the kernel. Over the last year, the subsystem has made better use of RCU, which will improve the scalability when using keys. In addition, the kernel keyrings have been "made more useful" by adding additional keyring operations such as invalidating keys and clearing keyrings. The latter is useful for clearing the kernel DNS resolver cache, for example.
A logon key type has been added to support CIFS multi-user mounts. That key type cannot be read from user space, so that the keys cannot be divulged to attackers (e.g. when the user is away from the system). The lockdep (kernel locking validator) support has been improved, as has the garbage collector. There is now just one garbage collector, rather than two, and a deadlock in garbage collection has been fixed as well.
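To make the new keyring operations concrete, here is a rough sketch using libkeyutils; it assumes a library recent enough to provide keyctl_invalidate(), and the key type and description are just examples:

    /* Sketch of the newer keyring operations; link with -lkeyutils.
     * Assumes keyctl_invalidate() is available. */
    #include <stdio.h>
    #include <keyutils.h>

    int main(void)
    {
        /* Add an example "user" key to the session keyring. */
        key_serial_t key = add_key("user", "example:token",
                                   "secret-payload", 14,
                                   KEY_SPEC_SESSION_KEYRING);
        if (key < 0) {
            perror("add_key");
            return 1;
        }

        /* Invalidating a key makes it vanish from searches immediately;
         * the garbage collector reclaims it later. */
        keyctl_invalidate(key);

        /* Clearing a keyring drops all of its keys at once; this is the
         * kind of operation used to flush the kernel's DNS resolver cache. */
        keyctl_clear(KEY_SPEC_SESSION_KEYRING);
        return 0;
    }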
In the future, a bug where the GNOME display manager (gdm) hangs in certain configurations will be fixed. The problem stems from a limitation in the kernel that does not allow session keyring manipulation from multithreaded programs. Support for a generic "crypto" key type will also be added to support signed kernel modules.
SELinux
Eric Paris prefaced his presentation by explaining that he works on the kernel and user-space pieces of SELinux—he is "not a policy writer"—so he would be focusing on those parts in his talk. There have been some interesting developments in the use of SELinux over the past year, including Red Hat's OpenShift project that allows multiple users to develop web applications on a single box. SELinux is used to isolate those users from each other. In addition, he noted the SELinux-based secure Linux containers work that provides a "super lightweight" sandbox using containers. "Twiddle one bit", he said, and that container-based sandbox can be converted to use KVM instead.
Historically, SELinux has focused on containing system daemons, but that is changing somewhat. There are a couple of user programs that are being contained in Fedora, including running the Nautilus thumbnailing program in a sandbox. In addition, Firefox and its plugins now have SELinux policies to contain them for desktop users.
RHEL 5 and 6 have also received Common Criteria certification for the virtualization profile using QEMU/KVM. SELinux enforcement was an important part of gaining that certification.
Paris said that systemd has become SELinux-aware in a number of ways. He likes the new init system and would like it to have more SELinux integration in the future. The socket activation mechanism makes it easy to launch a container on the first connection to a web port, for example. Systemd handles launching the service automatically, so that you don't need to run the init script directly, nor are "run-init games" needed. It is also much easier to deal with daemons that want to use TTYs, he said. Using SELinux enforcement in systemd means that an Apache server running as root would not be able to start or stop the MySQL server, or that a particular administrator would only be able to start and stop the web server, but not the database server.
The named file transitions feature (filename_trans) was "a little bit contentious" when it got added to SELinux, but it "ended up being brilliant", Paris said. The feature took ideas from AppArmor and TOMOYO and helps avoid mislabeling files. In addition to the standard SELinux labels for objects, policies can now use the file name to make decisions. It is just the name of the file, not the full path that "Al Viro says doesn't exist", but it allows proper labeling decisions to be made.
For example, the SSH daemon will create a .ssh directory when a user sends their keys to the system using something like ssh-copy-id. But, without filename_trans, SELinux would have no way to know what label to put on that directory, because it couldn't tell if it was creating .ssh or some other directory (e.g. a directory being copied from the remote host). There used to be a daemon that would fix the label, but that was a "hacky" solution. Similarly, SELinux policies can now distinguish between accesses to resolv.conf and shadow. 90% of the bugs reported for SELinux are because the label is wrong, he said, and filename_trans will help alleviate that.
There has also been a split in the SELinux policy world. The upstream maintainers of the core SELinux policies have been slower to adopt changes because they are concerned with "hard security goals". That means that it can take a lot of time to get changes upstream. So, there is now a "contrib" set of policies that affect non-core pieces. That reduces the amount of "messy policy" that Dan Walsh has to fix for Fedora and RHEL.
Shrinking the policies is another area that has been worked on. The RHEL 6 policy is 6.8MB after it is compiled down, but the Fedora 18 policy has shrunk to 4.8MB. The unconfined user policies were removed, as were some duplicate policy entries, which resulted in further space savings. There are "no real drawbacks", he said, as the new policies can do basically the same things as the old in 65% less space.
But there are also efforts to grow the policies. There are "hundreds of daemons and programs" that now have a default policy, which have been incorporated into the Fedora policies. The 65% reduction number includes "all the new stuff we added", he said.
Paris finished his talk by joking that "by far the most interesting" development in the SELinux world recently was the new SELinux stickers that he handed out to interested attendees.
Integrity
The work on the integrity subsystem started long ago, but a lot of it has been merged into the mainline over the years, Mimi Zohar said to begin her report. The integrity measurement architecture (IMA) has been merged in several pieces, starting with IMA-measurement in 2.6.30, and there is still more to come. For example, IMA-appraisal should be merged soon, and the IMA-directories patches have been posted for review. In addition, digital signature support has been added for the IMA file data measurements as well as for the extended verification module (EVM) file metadata measurements. Beyond that, there is a patch to audit the log file measurements that is currently in linux-next.
The integrity subsystem is going in two directions at once, Zohar said. It is extending Trusted Boot by adding remote attestation, while also extending Secure Boot with local integrity measurement and appraisal.
There is still more work to be done, of course. Support for signing files (including kernel modules) needs to be added to distributions, she said. There is also a need to ensure that anything that gets loaded by the kernel is signed and verified. For example, files that are loaded via the request_firmware() interface may still need to be verified.
The kernel build process also needs some work to handle signing the kernel image and modules. For users who may not be interested in maintaining a key pair but still want to sign their kernel, an ephemeral key pair can be created during the build. The private key can be used to sign the image and modules, then it can be discarded. The public key needs to be built into the kernel for module verification. There is also a need for a safe mechanism to store that public key in the UEFI key database for Secure Boot, she said.
TOMOYO
The TOMOYO LSM was added in the 2.6.30 kernel as an alternative mandatory access control (MAC) mechanism, maintainer Tetsuo Handa said. That was based on version 2.2 of TOMOYO, and the 3.2 kernel has been updated to use TOMOYO 3.5. There have been no major changes to TOMOYO since the January release of 3.2.
Handa mostly wanted to discuss adding hooks to the LSM API to protect against shellcode attacks. Those hooks would also allow TOMOYO to run in parallel with other LSMs, he said. By checking the binfmt handler permissions in those hooks, and possibly sanitizing the arguments to the handler, one could thwart some kinds of shellcode execution. James Morris and others seemed somewhat skeptical about that approach, noting that attackers would just adapt to the restrictions.
Those hooks are also useful for Handa's latest project, the CaitSith [PDF] LSM. He believes that customers are finding it too difficult to configure SELinux, so they are mostly disabling it. CaitSith is one of a number of different approaches he has tried (including TOMOYO) to attack that problem.
Smack
In a talk entitled "Smack veers mobile", Casey Schaufler looked at the improvements to the LSM, while pointing to the mobile device space as one of its main users. The security models in the computing industry are changing, he said. Distributions, users, files, and system administrators are "out", while operating systems, user experience, apps, and resources are "in". That shift is largely caused by the recent emphasis on mobile computing.
For Smack, there have been "a few new things" over the last year. There is now an interface for user space to ask Smack to do an access check, rather than wait for a denial. One can write a query to /smack/access, then read back the access decision. Support for the SO_PEERCRED option to getsockopt() for Unix domain sockets has been added. That allows programs to query the credentials of the remote end of the socket to determine what kind of privileges to give it.
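A minimal sketch of such a query follows, assuming the whitespace-separated format of the newer Smack interfaces; the label names here are invented for illustration:

    /* Sketch: asking Smack for an access decision via /smack/access.
     * "MyApp" and "SharedData" are made-up labels. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/smack/access", O_RDWR);
        if (fd < 0) {
            perror("open /smack/access");
            return 1;
        }

        /* "May a process labeled MyApp read and write objects
         * labeled SharedData?" */
        const char query[] = "MyApp SharedData rw";
        write(fd, query, sizeof(query) - 1);

        char result;                  /* '1' = allowed, '0' = denied */
        read(fd, &result, 1);
        printf("access %s\n", result == '1' ? "allowed" : "denied");

        close(fd);
        return 0;
    }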
If a parent and child process are running with two different labels, there could be situations where the child can't signal its death to the parent. That can lead to zombie processes. It's only "humane" to allow the child to notify the parent, so that has been added.
There is also a new mechanism to revoke all of the rules for a given subject label. Tizen was trying to do this in a library, but it required reading all of the rules in, then removing each. Now, using /smack/remove-subject, that can all be done in one operation.
The length of Smack labels has increased again. It started out with a seven-character limit, but that was raised earlier to 23 characters in support of labeled networking. It turns out that humans don't generally create the labels, he said, so the limit has now been raised to 255 characters to support generated label names. For example, the label might include information on the version of an app, which app store it came from, and so on. Care must be taken, as there needs to be an explicit mapping from Smack labels to network labels (which are still limited to 23 characters by the CIPSO header).
There is now a "friendlier" rule setting interface for Smack. The original /smack/load interface used a fixed-length buffer with an explicit format, which caused "complaints from time to time". The new /smack/load2 interface uses white space as a separator.
"Transmuting" directories is now recursive. Directories can get their label either from their parent or from the process that creates them, and when the label changes, those changes now propagate into the children. Schaufler originally objected to the change, but eventually "figured out that is was better" that way, he said.
The /smack/onlycap mechanism has been extended to cover CAP_MAC_ADMIN. That means that privileged daemons can still be forced to follow the Smack rules even if they have the CAP_MAC_ADMIN capability. By writing a Smack label to /smack/onlycap, the system will be configured to only allow processes with that label to circumvent the Smack rules. Previously, only CAP_MAC_OVERRIDE was consulted, which would allow processes to get around this restriction.
The Smack rules have been split into multiple lists based on the subject label. In the past, the Smack rule list could get rather long, so it took a long time to determine that there was no rule governing a particular access. By splitting the list, a 30-95% performance increase was realized on a 40,000-rule set, depending on how evenly the rules split.
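A conceptual sketch (not the kernel's actual code) shows why the split helps: if rules are bucketed by subject label, a lookup scans only the rules sharing the subject's bucket instead of the entire list:

    /* Conceptual sketch of per-subject rule lists; the kernel's real
     * structures differ, but the lookup-cost argument is the same. */
    #include <stdio.h>
    #include <string.h>

    struct rule {
        const char *subject, *object, *access;
        struct rule *next;
    };

    #define NBUCKETS 256
    static struct rule *bucket[NBUCKETS];

    static unsigned int hash_label(const char *s)
    {
        unsigned int h = 5381;
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h % NBUCKETS;
    }

    /* Scan only the subject's bucket, not the whole rule list. */
    static const char *find_rule(const char *subject, const char *object)
    {
        struct rule *r;
        for (r = bucket[hash_label(subject)]; r; r = r->next)
            if (!strcmp(r->subject, subject) && !strcmp(r->object, object))
                return r->access;
        return NULL;   /* no rule found: no access granted */
    }

    int main(void)
    {
        static struct rule r1 = { "App1", "SharedData", "rw", NULL };
        bucket[hash_label(r1.subject)] = &r1;

        const char *a = find_rule("App1", "SharedData");
        printf("App1 -> SharedData: %s\n", a ? a : "none");
        return 0;
    }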
Some cleanup has been done to remove unnecessary locking and bounds checks. In addition, Al Viro had "some very interesting things to say" about the Smack fcntl() implementation. After three months, Schaufler said, he finally settled down, reread the message, and agreed with Viro's assessment. Those problems have now been fixed.
Schaufler said that he is excited by the inclusion of Smack as the MAC solution for the Tizen distribution. He is "very much involved" in the Tizen project and looks forward to Smack being deployed in real world situations.
There are some other things coming for Smack, including better rule list searching and true list entry removal. Right now, rules that are removed are just marked, not taken out of the list, because there is a "small matter of locking" to be resolved. Beyond that, there is probably a surprise or two lurking out there for new Smack features. If someone can make the case for a feature, like the often requested multiple labels feature, it may just find its way into Smack in the future.
Yama
Kees Cook's Yama LSM was named after a Buddhist god of the underworld who is the "ruler of the departed". It started as an effort to get some symbolic link restrictions added to the kernel. Patches to implement those restrictions had been floating around since at least 1996, but had never been merged. Those restrictions are now available in the kernel in the form of the Yama LSM, but the path of getting them into the mainline was rather tortuous.
Cook outlined that history, noting that his original submission was rejected for not being an LSM in May 2010. In June of that year, he added some hardlink and ptrace() attach restrictions to the symlink changes and submitted it as the Yama LSM. In July, a process relationship API was added to allow the ptrace() restrictions to be relaxed for things like crash handlers, but Yama was reverted out of the security-next tree because it was an LSM. Meanwhile, the code was released in Ubuntu 10.10 in October and then in ChromeOS in December 2011.
Eventually, the LSM was "half merged" for the 3.4 kernel. The link restrictions were not part of that, but they have subsequently been merged into the core kernel for 3.6. Those restrictions are at least 16 years old, Cook said, which means they "can drive in the US". He was able to get the link restrictions into the core by working with Al Viro, but he has not been able to get the ptrace() restrictions into the core kernel, which is where he thinks they belong. James Morris noted that none of the core kernel developers "like security", and "some actively hate it", which makes it hard to get these kinds of changes into the core—or sometimes upstream at all.
In the future, Cook would like to see some changes in the kernel module loading path to support ChromeOS. Everyone is talking about signing modules, but ChromeOS already has a protected root partition, he said. If load_module() (or a new interface) could get information about where in the filesystem a module comes from, that would solve his problem. He also mentioned the perennial LSM stacking topic, noting that Ubuntu and other distributions are hardcoding Yama stacking to get the ptrace() restrictions, so maybe that will provide impetus for a more general stacking solution—or to move the ptrace() restrictions into the core kernel.
[ Slides for many of the subsystem reports, as well as the rest of the presentations are available on the LSS schedule page. ]
Brief items
New vulnerabilities
atheme-services: denial of service
Package(s): atheme-services
CVE #(s): CVE-2012-1576
Created: September 25, 2012
Updated: September 26, 2012

Description: From the Gentoo advisory:

The myuser_delete() function in account.c does not properly remove CertFP entries when deleting user accounts. A remote authenticated attacker may be able to cause a Denial of Service condition or gain access to an Atheme IRC Services user account.
cloud-init: unspecified vulnerabilities
Package(s): cloud-init
CVE #(s): none assigned
Created: September 26, 2012
Updated: September 26, 2012

Description: From the Red Hat bugzilla entries [1] and [2]:

[1] If the init script takes longer than 90 seconds to finish (e.g. package installation & provisioning on a slow network), it gets killed by systemd. Adding `TimeoutSec=0` to cloud-final.service seems to fix the problem.

[2] cloud-final.service needs StandardOutput=syslog+console so that final-message gets printed to the console while booting.
kernel: denial of service
Package(s): kernel    CVE #(s): CVE-2012-3552
Created: September 26, 2012    Updated: September 26, 2012
Description: From the Red Hat advisory:
A race condition was found in the way access to inet->opt ip_options was synchronized in the Linux kernel's TCP/IP protocol suite implementation. Depending on the network facing applications running on the system, a remote attacker could possibly trigger this flaw to cause a denial of service. A local, unprivileged user could use this flaw to cause a denial of service regardless of the applications the system runs.
kernel-rt: denial of service
Package(s): kernel-rt    CVE #(s): CVE-2012-4398
Created: September 20, 2012    Updated: October 16, 2013
Description: From the Red Hat advisory:
It was found that a deadlock could occur in the Out of Memory (OOM) killer. A process could trigger this deadlock by consuming a large amount of memory, and then causing request_module() to be called. A local, unprivileged user could use this flaw to cause a denial of service (excessive memory consumption).
libguac: denial of service
Package(s): libguac    CVE #(s): CVE-2012-4415
Created: September 26, 2012    Updated: September 26, 2012
Description: From the Red Hat bugzilla:
A stack-based buffer overflow flaw was found in the guac client plug-in protocol handling functionality of libguac, a common library used by all C components of Guacamole. A remote attacker could provide a specially-crafted protocol specification to the guac client plug-in that, when processed, would lead to a guac client crash (denial of service).
MRG Grid 2.2: multiple vulnerabilities
Package(s): MRG Grid 2.2    CVE #(s): CVE-2012-2680 CVE-2012-2681 CVE-2012-2683 CVE-2012-2684 CVE-2012-2685 CVE-2012-2734 CVE-2012-2735 CVE-2012-3459 CVE-2012-3491 CVE-2012-3492 CVE-2012-3493 CVE-2012-3490
Created: September 20, 2012    Updated: March 14, 2013
Description: From the Red Hat advisory:

A number of unprotected resources (web pages, export functionality, image viewing) were found in Cumin. An unauthenticated user could bypass intended access restrictions, resulting in information disclosure. (CVE-2012-2680)

Cumin could generate weak session keys, potentially allowing remote attackers to predict session keys and obtain unauthorized access to Cumin. (CVE-2012-2681)

Multiple cross-site scripting flaws in Cumin could allow remote attackers to inject arbitrary web script on a web page displayed by Cumin. (CVE-2012-2683)

An SQL injection flaw in Cumin could allow remote attackers to manipulate the contents of the back-end database via a specially-crafted URL. (CVE-2012-2684)

When Cumin handled image requests, clients could request images of arbitrary sizes. This could result in large memory allocations on the Cumin server, leading to an out-of-memory condition. (CVE-2012-2685)

Cumin did not protect against Cross-Site Request Forgery attacks. If an attacker could trick a user, who was logged into the Cumin web interface, into visiting a specially-crafted web page, it could lead to unauthorized command execution in the Cumin web interface with the privileges of the logged-in user. (CVE-2012-2734)

A session fixation flaw was found in Cumin. An authenticated user able to pre-set the Cumin session cookie in a victim's browser could possibly use this flaw to steal the victim's session after they log into Cumin. (CVE-2012-2735)

It was found that authenticated users could send a specially-crafted HTTP POST request to Cumin that would cause it to submit a job attribute change to Condor. This could be used to change internal Condor attributes, including the Owner attribute, which could allow Cumin users to elevate their privileges. (CVE-2012-3459)

It was discovered that Condor's file system authentication challenge accepted directories with weak permissions (for example, world readable, writable and executable permissions). If a user created a directory with such permissions, a local attacker could rename it, allowing them to execute jobs with the privileges of the victim user. (CVE-2012-3492)

It was discovered that Condor exposed private information in the data in the ClassAds format served by condor_startd. An unauthenticated user able to connect to condor_startd's port could request a ClassAd for a running job, provided they could guess or brute-force the PID of the job. This could expose the ClaimId which, if obtained, could be used to control the job as well as start new jobs on the system. (CVE-2012-3493)

It was discovered that the ability to abort a job in Condor only required WRITE authorization, instead of a combination of WRITE authorization and job ownership. This could allow an authenticated attacker to bypass intended restrictions and abort any idle job on the system. (CVE-2012-3491)
MRG Messaging 2.2: authentication bypass
Package(s): MRG Messaging 2.2    CVE #(s): CVE-2012-3467
Created: September 20, 2012    Updated: September 26, 2012
Description: From the Red Hat advisory:
It was discovered that qpidd did not require authentication for "catch-up" shadow connections created when a new broker joins a cluster. A malicious client could use this flaw to bypass client authentication. (CVE-2012-3467)
munin: privilege escalation
Package(s): munin    CVE #(s): CVE-2012-3512
Created: September 26, 2012    Updated: November 5, 2012
Description: From the Red Hat bugzilla:
Currently, plugins which run as root mix their state files in the same directory as non-root plugins. The state directory is owned by munin:munin and is group-writable. Because of these facts, it is possible for an attacker who operates as user munin to cause a root-run plugin to run arbitrary code as root.
qpid: denial of service
Package(s): qpid    CVE #(s): CVE-2012-2145
Created: September 20, 2012    Updated: September 26, 2012
Description: From the Red Hat advisory:
It was discovered that the Qpid daemon (qpidd) did not allow the number of connections from clients to be restricted. A malicious client could use this flaw to open an excessive amount of connections, preventing other legitimate clients from establishing a connection to qpidd. (CVE-2012-2145)
squidclamav: denial of service
Package(s): squidclamav    CVE #(s): CVE-2012-3501
Created: September 25, 2012    Updated: September 26, 2012
Description: From the CVE entry:
The squidclamav_check_preview_handler function in squidclamav.c in SquidClamav 5.x before 5.8 and 6.x before 6.7 passes an unescaped URL to a system command call, which allows remote attackers to cause a denial of service (daemon crash) via a URL with certain characters, as demonstrated using %0D or %0A.
transmission: cross-site scripting
Package(s): transmission    CVE #(s): CVE-2012-4037
Created: September 26, 2012    Updated: October 30, 2012
Description: From the Ubuntu advisory:
Justin C. Klein Keane discovered that the Transmission web client incorrectly escaped certain strings. If a user were tricked into opening a specially crafted torrent file, an attacker could possibly exploit this to conduct cross-site scripting (XSS) attacks.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.6-rc7, released on September 23. This one includes a codename change to "Terrified Chipmunk." Linus says: "So if everything works out well, and the upcoming week is calmer still, I suspect I can avoid another -rc. Fingers crossed."
Stable updates: 3.2.30 was released on September 20.
Quotes of the week
Real-time response. It is far bigger than I thought.
Kernel development news
Adding a huge zero page
The transparent huge pages feature allows applications to take advantage of the larger page sizes supported by most contemporary processors without the need for explicit configuration by administrators, developers, or users. It is mostly a performance-enhancing feature: huge pages reduce the pressure on the system's translation lookaside buffer (TLB), making memory accesses faster. It can also save a bit of memory, though, as the result of the elimination of a layer of page tables. But, as it turns out, transparent huge pages can actually increase the memory usage of an application significantly under certain conditions. The good news is that a solution is at hand; it is as easy as a page full of zeroes.

Transparent huge pages are mainly used for anonymous pages — pages that are not backed by a specific file on disk. These are the pages forming the data areas of processes. When an anonymous memory area is created or extended, no actual pages of memory are allocated (whether transparent huge pages are enabled or not). That is because a typical program will never touch many of the pages that are part of its address space; allocating pages before there is a demonstrated need would waste a considerable amount of time and memory. So the kernel will wait until the process tries to access a specific page, generating a page fault, before allocating memory for that page.
But, even then, there is an optimization that can be made. New anonymous pages must be filled with zeroes; to do anything else would be to risk exposing whatever data was left in the page by its previous user. Programs often depend on the initialization of their memory; since they know that memory starts zero-filled, there is no need to initialize that memory themselves. As it turns out, a lot of those pages may never be written to; they stay zero-filled for the life of the process that owns them. Once that is understood, it does not take long to see that there is an opportunity to save a lot of memory by sharing those zero-filled pages. One zero-filled page looks a lot like another, so there is little value in making too many of them.
So, if a process instantiates a new (non-huge) page by trying to read from it, the kernel still will not allocate a new memory page. Instead, it maps a special page, called simply the "zero page," into the process's address space. Thus, all unwritten anonymous pages, across all processes in the system, are, in fact, sharing one special page. Needless to say, the zero page is always mapped read-only; it would not do to have some process changing the value of zero for everybody else. Whenever a process attempts to write to the zero page, it will generate a write-protection fault; the kernel will then (finally) get around to allocating a real page of memory and substituting it into the process's address space at the right spot.
This behavior is easy to observe. As Kirill Shutemov described, a process executing a bit of code like this:
#define MB (1024 * 1024UL)

char *p;
posix_memalign((void **)&p, 2 * MB, 200 * MB);
for (size_t i = 0; i < 200 * MB; i += 4096)
        assert(p[i] == 0);
pause();
will have a surprisingly small resident set at the time of the pause() call. It has just worked through 200MB of memory, but that memory is all represented by a single zero page. The system works as intended.
Or, it does until the transparent huge pages feature is enabled; then that process will show the full 200MB of allocated memory. A growth of memory usage by two orders of magnitude is not the sort of result users are typically looking for when they enable a performance-enhancing feature. So, Kirill says, some sites are finding themselves forced to disable transparent huge pages in self defense.
The problem is simple enough: there is no huge zero page. The transparent huge pages feature tries to use huge pages whenever possible; when a process faults in a new page, the kernel will try to put a huge page there. Since there is no huge zero page, the kernel will simply allocate a real zero page instead. This behavior leads to correct execution, but it also causes the allocation of a lot of memory that would otherwise not have been needed. Transparent huge page support, in other words, has turned off another important optimization that has been part of the kernel's memory management subsystem for many years.
Once the problem is understood, the solution isn't that hard. Kirill's patch adds a special, zero-filled huge page to function as the huge zero page. Only one such page is needed, since the transparent huge pages feature only uses one size of huge page. With this page in place and used for read faults, the expansion of memory use simply goes away.
As always, there are complications: the page is large enough that it would be nice to avoid allocating it if transparent huge pages are not in use. So there's a lazy allocation scheme; Kirill also added a reference count so that the huge zero page can be returned if there is no longer a need for it. That reference counting slows a read-faulting benchmark by 1%, so it's not clear that it is worthwhile; in the end, the developers might conclude that it's better to just keep the zero huge page around once it has been allocated and not pay the reference counting cost. This is, after all, a situation that has come about before with the (small) zero page.
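For illustration, here is a kernel-style sketch of the lazy allocation and reference counting described above; the names, flags, and structure are assumptions for exposition (and the race on first allocation is ignored), not code from Kirill's patch:

    #include <linux/atomic.h>
    #include <linux/gfp.h>
    #include <linux/huge_mm.h>

    static struct page *huge_zero_page;
    static atomic_t huge_zero_refcount = ATOMIC_INIT(0);

    static struct page *get_huge_zero_page(void)
    {
        /* Fast path: the page already exists and has users. */
        if (atomic_inc_not_zero(&huge_zero_refcount))
            return huge_zero_page;

        /* Slow path: allocate a zero-filled huge page on first use. */
        huge_zero_page = alloc_pages(GFP_TRANSHUGE | __GFP_ZERO,
                                     HPAGE_PMD_ORDER);
        if (huge_zero_page)
            atomic_set(&huge_zero_refcount, 1);
        return huge_zero_page;
    }

    static void put_huge_zero_page(void)
    {
        /* Freeing on the last put is what costs the 1% noted above;
         * the alternative is to keep the page around once allocated. */
        if (atomic_dec_and_test(&huge_zero_refcount))
            __free_pages(huge_zero_page, HPAGE_PMD_ORDER);
    }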
There have not been a lot of comments on this patch; the implementation is relatively straightforward and, presumably, does not need a lot in the way of changes. Given the obvious and measurable benefits from the addition of a huge zero page, it should be added to the kernel sometime in the fairly near future; the 3.8 development cycle seems like a reasonable target.
Supervisor mode access prevention
Operating system designers and hardware designers tend to put a lot of thought into how the kernel can be protected from user-space processes. The security of the system as a whole depends on that protection. But there can also be value in protecting user space from the kernel. The Linux kernel will soon have support for a new Intel processor feature intended to make that possible.

Under anything but the strangest (out of tree) memory configurations, the kernel's memory is always mapped, so user-space code could conceivably read and modify it. But the page protections are set to disallow that access; any attempt by user space to examine or modify the kernel's part of the address space will result in a segmentation violation (SIGSEGV) signal. Access in the other direction is rather less controlled: when the processor is in kernel mode, it has full access to any address that is valid in the page tables. Or nearly full access; the processor will still not normally allow writes to read-only memory, but that check can be disabled when the need arises.
Intel's new "Supervisor Mode Access Prevention" (SMAP) feature changes that situation; those wanting the details can find them starting on page 408 of this reference manual [PDF]. This extension defines a new SMAP bit in the CR4 control register; when that bit is set, any attempt to access user-space memory while running in a privileged mode will lead to a page fault. Linux support for this feature has been posted by H. Peter Anvin to generally positive reviews; it could show up in the mainline as early as 3.7.
Naturally, there are times when the kernel needs to work with user-space memory. To that end, Intel has defined a separate "AC" flag that controls the SMAP feature. If the AC flag is set, access to user-space memory is allowed; when it is clear, SMAP protection is in force. Two new instructions (STAC and CLAC) set and clear that flag relatively quickly. Unsurprisingly, much of Peter's patch set is concerned with adding STAC and CLAC instructions in the right places. User-space access functions (get_user(), for example, or copy_to_user()) clearly need to have user-space access enabled. Other places include transitions between kernel and user mode, futex operations, floating-point unit state saving, and so on. Signal handling, as usual, has special requirements; Peter had to make some significant changes to allow signal delivery to happen without excessive overhead.
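As a rough illustration of the pattern (not the actual patch-set code, which hides the instructions behind the "alternatives" mechanism and adds proper fault handling):

    /* Illustrative only: how a user-space fetch is bracketed by the
     * AC-flag instructions; fault handling is omitted. */
    static inline int read_user_int(const int *uptr, int *val)
    {
        asm volatile("stac" ::: "memory");  /* set AC: permit the access */
        *val = *uptr;
        asm volatile("clac" ::: "memory");  /* clear AC: SMAP back in force */
        return 0;
    }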
Speaking of overhead, support for this feature will clearly have its costs. User-space access functions tend to be expanded inline, so there will be a lot of STAC and CLAC instructions spread around the kernel. The "alternatives" mechanism is used to patch them out if the SMAP feature is not in use (either not supported by the kernel or disabled with the nosmap boot flag), but the kernel will grow a little regardless. The STAC and CLAC instructions also require a little time to execute. Thus far, no benchmarks have been posted to quantify what the cost is; one assumes that it is small but not nonexistent.
The kernel will treat SMAP violations like it treats any other bad pointer access: the result will be an oops.
One might well ask what the value of this protection is, given that the kernel can turn it off at will. The answer is that it can block a whole class of exploits where the kernel is fooled into reading from (or writing to) user-space memory by mistake. The set of null pointer vulnerabilities exposed a few years ago is one obvious example. There have been many situations where an attacker has found a way to get the kernel to use a bad pointer, while the cases where the attacker could execute arbitrary code in kernel space (before exploiting the bad pointer) have been far less common. SMAP should block the more common attacks nicely.
The other benefit, of course, is simply finding kernel bugs. Driver writers (should) know that they cannot dereference user-space pointers directly from the kernel, but code that does so tends to work on some architectures anyway. With SMAP enabled, that kind of mistake will be found and fixed earlier, before the bad code is shipped in a mainline kernel. As is so often the case, there is real value in having the system enforce the rules that developers are supposed to be following.
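To make the bug class concrete, here is a hypothetical ioctl handler, with the incorrect direct dereference shown alongside the proper helper; this is illustrative, not code from any real driver:

    #include <linux/uaccess.h>

    static long sketch_ioctl(unsigned long arg)
    {
        int value;

        /* Wrong: a direct dereference of the user pointer. Without
         * SMAP this can appear to work on x86; with SMAP enabled it
         * oopses immediately:
         *
         *     value = *(int *)arg;
         */

        /* Right: the user-access helpers do the STAC/CLAC bracketing
         * and return an error on a bad pointer. */
        if (get_user(value, (int __user *)arg))
            return -EFAULT;
        return value;
    }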
Linus liked the patch set and nobody else has complained, so the changes have found their way into the "tip" tree. That makes it quite likely that we will see them again soon, probably once the 3.7 merge window opens. It will take a little longer, though, to get processors that support this feature; SMAP is set to first appear in the Haswell line, which should start shipping in 2013. But, once the hardware is available, Linux will be able to take advantage of this new feature.
Where the 3.6 kernel came from
As of this writing, the 3.6 development cycle is nearing its close, with the 3.6-rc7 prepatch having been released on September 23. There may or may not be a 3.6-rc8 before the final release, but, either way, the real 3.6 kernel is not far away. It thus seems like an appropriate time for our traditional look at what happened in this cycle and who the active participants were.

At the release of -rc7, Linus had pulled 10,153 non-merge changesets from 1,216 developers into the mainline. That makes this release cycle just a little slower than its immediate predecessors, but, with over 10,000 changesets committed, the development community has certainly not been idle. This development cycle is already slightly longer than 3.5 (which required 62 days) and may be as much as two weeks longer by the end, if another prepatch release is required. Almost 523,000 lines of code were added and almost 252,000 were removed this time around, for a net growth of about 271,000 lines.
Most active 3.6 developers
By changesets:

    H Hartley Sweeten        460   4.5%
    Mark Brown               175   1.7%
    David S. Miller          154   1.5%
    Axel Lin                 152   1.5%
    Johannes Berg            115   1.1%
    Al Viro                  113   1.1%
    Hans Verkuil             111   1.1%
    Lars-Peter Clausen        90   0.9%
    Sachin Kamat              84   0.8%
    Daniel Vetter             83   0.8%
    Eric Dumazet              79   0.8%
    Rafael J. Wysocki         77   0.8%
    Guenter Roeck             76   0.7%
    Alex Elder                76   0.7%
    Guennadi Liakhovetski     75   0.7%
    Sven Eckelmann            75   0.7%
    Ian Abbott                74   0.7%
    Arik Nemtsov              74   0.7%
    Dan Carpenter             72   0.7%
    Shawn Guo                 70   0.7%
By changed lines:

    Greg Kroah-Hartman    113897  18.3%
    Mark Brown             18761   3.0%
    H Hartley Sweeten      14362   2.3%
    John W. Linville       14177   2.3%
    Chris Metcalf          11419   1.8%
    Hans Verkuil            9493   1.5%
    Alex Williamson         7335   1.2%
    Pavel Shilovsky         6226   1.0%
    Sven Eckelmann          5694   0.9%
    Johannes Berg           5518   0.9%
    Alexander Block         5465   0.9%
    Kevin McKinney          5211   0.8%
    David S. Miller         4600   0.7%
    Christoph Hellwig       4512   0.7%
    Yan, Zheng              4481   0.7%
    Felix Fietkau           4433   0.7%
    Ola Lilja               4191   0.7%
    Johannes Goetzfried     4129   0.7%
    Vaibhav Hiremath        4087   0.7%
    Nicolas Royer           3989   0.6%
H. Hartley Sweeten is at the top of the changesets column this month as the result of a seemingly unending series of patches to get the Comedi subsystem ready for graduation from the staging tree. Mark Brown continues work on audio drivers and related code. David Miller naturally has patches all over the networking subsystem; his biggest contribution this time around was the long-desired removal of the IPv4 routing cache. Axel Lin made lots of changes to drivers in the regulator and MTD subsystems, among others, and Johannes Berg continues his wireless subsystem work.
Greg Kroah-Hartman pulled the CSR wireless driver into the staging tree to get to the top of the "lines changed" column, even though his 69 changesets weren't quite enough to show up in the left column. John Linville removed some old, unused drivers, making him the developer who removed the most code from the kernel this time around. Chris Metcalf added a number of new features to the Tile architecture subtree.
The list of developers credited for reporting problems is worth a look:
Top 3.6 bug reporters:

    Fengguang Wu       44   7.7%
    Martin Hundebøll   21   3.7%
    David S. Miller    19   3.3%
    Dan Carpenter      17   3.0%
    Randy Dunlap       14   2.4%
    Bjørn Mork         11   1.9%
    Al Viro            10   1.7%
    Ian Abbott          9   1.6%
    Stephen Rothwell    9   1.6%
    Eric Dumazet        8   1.4%
What we are seeing here is clearly the result of Fengguang Wu's build and boot testing work. As Fengguang finds problems, he reports them and they get fixed before the wider user community has to deal with them. Coming up with 44 bug reports in just over 60 days is a good bit of work.
Some 208 companies (that we know of) contributed to the 3.6 kernel. The most active of these were:
Most active 3.6 employers
By changesets:

    (None)                     1124  11.1%
    Red Hat                    1035  10.2%
    Intel                       884   8.7%
    (Unknown)                   828   8.2%
    Vision Engraving Systems    460   4.5%
    Texas Instruments           418   4.1%
    Linaro                      409   4.0%
    IBM                         286   2.8%
    SUSE                        282   2.8%
    (?)                         243   2.4%
    Wolfson Microelectronics    180   1.8%
    (Consultant)                167   1.6%
    Freescale                   152   1.5%
    Ingics Technology           152   1.5%
    Samsung                     143   1.4%
    Qualcomm                    135   1.3%
    Cisco                       127   1.3%
    Wizery Ltd.                 125   1.2%
    NVidia                      124   1.2%
    Oracle                      122   1.2%
By lines changed:

    Linux Foundation          122520  19.7%
    (None)                     63608  10.2%
    Red Hat                    59662   9.6%
    Intel                      37556   6.0%
    (Unknown)                  25719   4.1%
    Texas Instruments          25533   4.1%
    Wolfson Microelectronics   23020   3.7%
    Vision Engraving Systems   14876   2.4%
    (Consultant)               12830   2.1%
    Linaro                     11677   1.9%
    Tilera                     11436   1.8%
    Cisco                      11223   1.8%
    IBM                        11006   1.8%
    Freescale                   9630   1.6%
    SUSE                        9035   1.5%
    Marvell                     7984   1.3%
    Samsung                     7621   1.2%
    OMICRON Electronics         7259   1.2%
    Etersoft                    6236   1.0%
    (?)                         5673   0.9%
Greg Kroah-Hartman's move to the Linux Foundation has caused a bit of a shift in the numbers; the Foundation has moved up in the rankings at SUSE's expense. Beyond that, we see the continued growth of the embedded industry's participation, the continuing slow decline of hobbyist contributions, and an equally slow decline in contributions from "big iron" companies like Oracle and IBM.
Taking a quick look at maintainer signoffs — "Signed-off-by" tags applied by somebody other than the author — the picture is this:
Non-author Signed-off-by tags
By developer:

    Greg Kroah-Hartman      1232  14.1%
    David S. Miller          754   8.6%
    John W. Linville         376   4.3%
    Mauro Carvalho Chehab    323   3.7%
    Mark Brown               291   3.3%
    Andrew Morton            280   3.2%
    Ingo Molnar              173   2.0%
    Luciano Coelho           132   1.5%
    Johannes Berg            128   1.5%
    Gustavo Padovan          124   1.4%
By company:

    Red Hat                    2323  26.6%
    Linux Foundation           1278  14.6%
    Intel                       592   6.8%
    (?)                         428   4.9%
    (None)                      411   4.7%
    Texas Instruments           359   4.1%
    Wolfson Microelectronics    292   3.3%
    SUSE                        270   3.1%
    Samsung                     230   2.6%
    IBM                         189   2.2%
The last time LWN put up a version of this table was for 2.6.34 in May, 2010. At that time, over half the patches heading into the kernel passed through the hands of somebody at Red Hat or SUSE. That situation has changed a bit since then, though the list of developers contains mostly the same names. Once again, we are seeing the mobile and embedded industry on the rise.
All told, it looks like business as usual. There are a lot of problems to be solved in the kernel space, so vast numbers of developers are working to solve them. There appears to be little danger that Andrew Morton's famous 2005 prediction that "we have to finish this thing one day" will come true anytime in the near future. But, if we can't manage to finish the job, at least we seem to have the energy and resources to keep trying.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
ALS: Automotive Grade Linux
Using Linux in cars is a hot topic, even if the market is less visible to most developers than tablets or mobile phones. However, the Linux Foundation (LF) announced an initiative at the second Automotive Linux Summit in Gaydon, UK, that may result in a higher profile for automotive Linux development. The initiative is called Automotive Grade Linux (AGL), and its goal is to produce a distribution tuned for deployment throughout a vehicle, including in-dash systems, instrument clusters, and even safety-critical engine control units. A number of automakers and industry players are on board — which sparked some confusion at the announcement, because many of the same companies are also involved with existing Linux-based automotive efforts like GENIVI.
AGL announced
LF Executive Director Jim Zemlin announced AGL in an Automotive Linux Summit keynote on September 19. Three automakers are founding participants: Toyota, Nissan, and Jaguar Land Rover. They are joined by a number of electronics and silicon vendors, including Texas Instruments, Intel, Samsung, and Fujitsu. Officially, AGL is a "workgroup," as distinguished from a software project. Zemlin likened it to Carrier Grade Linux, a workgroup started by telecommunications companies in 2002 to address that industry's needs as it migrated its equipment to Linux from proprietary operating systems.
The AGL announcement states that the workgroup "will facilitate widespread industry collaboration that advances automotive device development, providing a community reference platform that companies can use for creating products". That reference platform, it continues, will be a Tizen-based distribution "optimized for a broad set of automotive applications ranging from Instrumentation Cluster to In-Vehicle-Infotainment (IVI) and more." The announcement specifically mentions fast boot and extended lifecycle support for automotive products as features, and says that the workgroup will support other industry efforts like GENIVI and the W3C's Web and Automotive workshop.
During the Summit, a number of people — speakers included — expressed puzzlement about AGL, specifically with regard to what its ultimate "deliverables" will be, and to how exactly it competes or cooperates with the other automotive Linux efforts like GENIVI and Tizen's IVI platform. Zemlin noted in his keynote that there is no automotive-focused equivalent to the community-based distributions like Debian and Fedora, and said that as a result it is much more difficult for interested community developers to get started working on the automotive-specific problems faced by carmakers and product vendors. There is now an AGL site live at automotive.linuxfoundation.org, which provides a bit more detail, and references that same issue on its "About" page. It compares the community-managed Debian and Fedora to the commercially-supported Ubuntu and Red Hat Enterprise Linux, and says "In a similar manner, AGL is seeking to become the upstream Linux distribution for automotive use by facilitating cooperation between multiple industries and the open-source communities."
So, then, the "product" to be produced by AGL would appear to be a full-fledged Linux distribution, rather than a suite of platform packages or a specification. As to the scope of the project, the site also says AGL is not limited to IVI systems, but also encompasses "instrument clusters, climate control, intelligent roadway instrumentation, etc." The site also sets out a project structure, including a steering committee, steering committee coordinator, and various expert groups tasked with developing specific features. The makeup of the committee and the specifics of the expert groups have not been announced; there are, however, two public mailing lists available (in addition to a private one for the steering committee).
Whither GENIVI?
Although the announcement and site both say that AGL is not a challenger to GENIVI, it is not difficult to see why some people (particularly those working on GENIVI) either perceive the projects as potential competitors or fear a duplication of effort. Both, after all, are automotive industry associations attempting to build a Linux-based platform that meets the shared requirements of car manufacturers and tier-one equipment makers (and indeed quite a few industry players are members of both efforts). Both target Linux and core services that need to be adapted from the desktop/server/mobile markets where Linux is already established, and both envision their software as some sort of "reference implementation." GENIVI's output is a baseline which is "respun" into other distributions, while AGL's is an "upstream" distribution intended to be adapted and optimized in products.
Still, as similar as that language sounds, there are some arguably important details that distinguish the two projects' goals. First, GENIVI is ultimately a compliance-driven specification: the baseline software that it creates en route is simply a means to that end. This process can be confusing, in large part because both the specification itself and the compliance process are closed to non-GENIVI members. Consequently, those on the outside primarily see the commercial products and distributions that reach compliance.
Second, GENIVI is targeting a middleware platform only. That is to say, the purpose of certifying a particular software stack as GENIVI compliant is that it offers guarantees regarding application- and service-level compatibility. As Visteon's Pavel Konopelko explained in his session, the specification includes numerous "abstract" and "placeholder" components. For example, the Bluetooth stack could be Linux's native BlueZ or a proprietary replacement; either would qualify as long as it implements the required functionality. In addition, GENIVI has not tackled lower-level topics like Controller Area Network (CAN) bus support. CAN bus is a common transport mechanism, but it sits well below the application layer.
Of course, CAN bus may be on its way out; the protocol offers no security and certainly lacks the flexibility of standard TCP/IP. But because GENIVI is also focused on IVI systems specifically, inter-device communication is a bit of a tangent. A third difference between the projects is that AGL draws a wider circle, encompassing non-IVI components. Over the course of the Summit, there were talks about other automotive computing issues, such as communicating with intelligent roadways — e.g., to automatically relay speed limit information or safety reports. Jaguar Land Rover operated an exhibit at the summit's venue, the British Motor Heritage Center, that focused on its new vehicles' automatic adjustments to suspension, braking, and handling in response to off-road conditions. Such things are certainly outside the purview of IVI and, like engine control units (ECUs), probably even more meticulously scrutinized by company lawyers.
The other side to the answer is that AGL bills itself as an open collaboration project, while GENIVI is still members-only. There appears to be movement toward additional openness from GENIVI, and several GENIVI speakers alluded to forthcoming progress on that front at the summit. Of course, AGL has yet to get rolling; it is always possible that the corporate membership will be more secretive than the volunteer free software contributor community would like as well.
Tizen, workgroups, and collaboration
Another factor worth assessing is how AGL will affect the Tizen project. Tizen's two main supporters, Intel and Samsung, are AGL members as well, and the AGL project has already announced that it will use Tizen as the basis of its distribution. On the one hand, this seems to make AGL both an "upstream distribution" to its corporate adopters and a "downstream distribution" to the Tizen Project, which otherwise appears unchanged. On the other hand, perhaps seeing Tizen used as the basis of AGL's distribution work will make Tizen's insistence that it is a "platform" and not a distribution itself a little easier to parse.
Then again, what constitutes a platform and what constitutes a distinct distribution is largely a word game (for proof of that, consider the ever-expanding litany of X-as-a-Service acronyms generated by the cloud computing sector). Tizen remains committed to offering a Linux system that consumer device makers can build on in multiple categories. Tizen (and MeeGo before it) have been advertising such flexible functionality for two years or so, but the automotive market has always seemed to be the ripest for adoption. We may not see Tizen-based phones in the near future, and TVs or set-top boxes are likely to not sport platform branding at all, so perhaps focusing on automotive Linux is the quickest path to success anyway. The difficulty will be managing AGL's insistence that it is building a distribution for IVI and non-IVI automotive computing. The Tizen and MeeGo efforts were explicitly IVI-focused, and skeptics could be forgiven for wondering if Tizen's HTML5 application platform is sufficient for safety-critical uses like dashboard instrument clusters.
One attendee at the summit joked privately that AGL was probably formed because Toyota wanted to be in the driver's seat (pardon the expression). That is a bit cynical if taken at face value, but even if it were true, the LF does exist to accommodate companies that are new to collaborating around Linux. Periodically that may mean hosting a workgroup (such as Carrier Grade Linux or the Consumer Electronics workgroup (CELF)) that seems quite a ways outside the mainstream community. What matters in the long run, however, is that most of these companies eventually become mainstream contributors to the kernel and other parts of the standard Linux stack. Those companies may have unease about working with free software, or about collaborating with their competitors, but often these industry efforts produce work that benefits the rest of the community. The Long Term Support Initiative, for example, grew out of CELF.
It was clear from the Automotive Linux Summit that the car industry is ready to migrate to Linux as quickly as it can manage the transition; the costs of developing and supporting proprietary systems add up more quickly in automotive than they do in most other fields, in no small part because of the decade-long lifecycle of the automobile. Car-buyers expect their vehicles to be serviceable (and, in fact, dealer-serviceable) for ten or more years, a situation that Matt Jones of Jaguar Land Rover said has led his company to support three unrelated IVI platforms at different times in recent years. At the moment, the launch of AGL may seem to crowd in on GENIVI, but there is no shortage of development to be done. Besides, who knows? Three or four years from now the two projects may have enough in common to work hand-in-hand or to merge — yet that will still be less than halfway through the lifespan of a typical automotive computer.
[The author would like to thank the Linux Foundation for travel assistance to ALS.]
Brief items
Distribution quotes of the week
When using grep recursively I only get local results:
grep -R fish_t /home/noob/fish_game/*
/home/noob/fish_game/fish.h: struct fish_t {
/home/noob/fish_game/fish.c: struct fish_t eric_the_fish;
or worse:
grep -R shark_t /home/noob/fish_game/*
/home/noob/fish_game/fish.h: struct shark_t {
/home/noob/fish_game/fish.c: struct shark_t_t mark_sw;
I declare this a bug for two reasons:
- The output is boring.
- The terminal has more than 2 lines!!! It's an unefficient use of my screenspace.
I believe the reason for this is that the grep command only searches locally for things I am actually looking for, I kind of expect the results I get from my codebase and as such it removes any sense of mystery or something new and exciting to spice up my dull geek existence. That's boring, grep -R should also search amazon, so I get more exciting results ...
GeeXboX 3.0 released
The GeeXboX media center distribution has announced its 3.0 release—almost exactly a year after the release of GeeXboX 2.0 (LWN review). "A shiny new GeeXboX release has arrived! GeeXboX 3.0 is a major upgrade that integrates XBMC 11 “Eden” and adds the long-requested PVR functionality. This means you can finally use GeeXboX to watch and record live TV too! In addition to our usual x86 ISOs, this release is also available for several embedded platforms, with working full HD video and graphics acceleration for most of them."
Distribution News
openSUSE
openSUSE Board Welcomes new Chairman: Vincent Untz
SUSE has appointed Vincent Untz as Chairman of the openSUSE Board. "SUSE looked for somebody with respect and trust from the company and the community as well as skills relevant for the openSUSE Board. As you can see on his openSUSE user page Vincent has been around in the project, currently active in the Membership officials team, organizing GSOC, leading member of the GNOME team and as ambassador at various events. In the past he has been on the openSUSE Boosters team, the openSUSE Conference Program committees 2010 and 2011 and the Board Election Committee. Outside of openSUSE he is of course best known for his positions as Director, Chairman and Release Manager for the GNOME Foundation. Within SUSE, he now has his head in the cloud, having been involved in delivering SUSE Cloud, SUSE’s first cloud solution powered by OpenStack."
SUSE Linux
SUSE Linux Enterprise Server 11 Service Pack 1 EOL
Support for SUSE Linux Enterprise 11 Service Pack 1 has ended. SLE 11 SP2 is still supported. "This means that SUSE Linux Enterprise Server 11 SP1, SUSE Linux Enterprise Desktop SP1 and SUSE Linux Enterprise SDK 11 SP1 are now unmaintained and further updates will only update the corresponding 11 SP2 product variants."
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 475 (September 24)
- Maemo Weekly News (September 24)
- Ubuntu Weekly Newsletter, Issue 284 (September 23)
Shuttleworth: Amazon search results in the Dash
Mark Shuttleworth explains a revenue sharing agreement between Canonical and Amazon. An early implementation has landed in Ubuntu 12.10 (due in October), where products from Amazon are displayed in the user's search results and if the user buys something, Canonical gets a cut. "We’re not putting ads in Ubuntu. We’re integrating online scope results into the home lens of the dash. This is to enable you to hit “Super” and then ask for anything you like, and over time, with all of the fantastic search scopes that people are creating, we should be able to give you the right answer. These are not ads because they are not paid placement, they are straightforward Amazon search results for your search. So the Dash becomes a super-search of any number of different kinds of data. Right now, it’s not dynamically choosing what to search, it’s just searching local scopes and Amazon, but it will get smarter over time."
New openSUSE Chairman Speaks About Future Goals (The VAR Guy)
The VAR Guy talks with Vincent Untz, the new chairman of the openSUSE board, about his goals for openSUSE. One goal is to "[strengthen] openSUSE’s relationship with sponsors. Right now, openSUSE is most obviously identified with SUSE itself. But SUSE, according to Untz, “is not the main force inside the [openSUSE] project.” It’s an important partner, but the project has other major backers as well. Untz envisions forging closer ties with all of openSUSE’s supporters so that the project assumes a more independent identity."
The inner workings of openSUSE (ITWire)
ITWire talks with Andreas Jaeger about openSUSE. "There are a number of employees of SUSE who are involved in the openSUSE project; there are also many outsiders who play a vital role. Jaeger says the project has a six-member board plus a chairman, with the latter being appointed by SUSE. The direction that the project takes is entirely determined by the project itself. "The chairman has veto power, but so far has never had to exercise it," he said. "And I hope this never happens.""
Page editor: Rebecca Sobol
Development
XDC2012: Graphics stack security
Martin Peres and Timothée Ravier's session on day one of XDC2012 looked at security in the graphics stack. They considered user expectations around security in the graphical user interface (GUI), reviewed these expectations against the implementations in X11 and Weston (the reference compositing window manager for Wayland), and covered some other ground as well. Martin began their presentation by noting that they had done quite a bit of research on the topic, and although they are Linux security engineers rather than X developers, he expected that they would nevertheless have useful things to tell the audience.
User security expectations and the X server
Martin began with a review of security on the X server that focused on the three classical areas: confidentiality, integrity, and availability. In each case, he described security weaknesses in the X server.
Starting with confidentiality issues, Martin used the example of a user entering a credit card number while shopping online. In this case, the credit card number could be stolen if a key logger was running on the machine, or another program was taking screen shots periodically. This violates the user's expectations of the GUI: applications should not be able to spy on each other's input events or output buffers. From the user's point of view, the only time that arbitrary applications should obtain information from one another is under explicit user control (for example, cut-and-paste). However, under X11, any application that can provide the magic cookie generated by the X server has full access to other applications' input and output. In other words, X11 provides isolation only between users, not between applications run by the same user. Thus, any application that is launched by the user has the potential to break confidentiality in the manner of the credit card example.
Martin's second example concerned application integrity. A user visits a bank web site, and, being a sophisticated user, carefully checks that the URL shown in the browser's address bar shows "https" plus the correct domain. However, the user is unaware that they are visiting a fake domain, and that the browser's address bar has been redrawn by a malicious application. Consequently, the user's bank information is passed to a third party. This can happen under X11's Direct Rendering Infrastructure (DRI) version 1. (This problem is addressed in DRI2, which has been the default for a few years now.) In addition, virtual keyboards can inject input to the X server; since the X server broadcasts input events, the virtual-keyboard input can reach any application (just like a real keyboard).
Martin's third point was that applications should not be able to make other applications or the entire system unavailable. Under X11, applications can, however, act as screen lockers, denying access to the system. In addition, in the past, a virtual keyboard was able to kill other applications using the XF86ClearGrab feature that was introduced in X server 1.11 (a feature that led to a high-profile security flaw that made it possible to break into a screen-locked system by typing a particular key combination).
Mitigating X server security issues
At this point, Timothée took the lead to discuss techniques that have been developed to mitigate these security problems. The first of the approaches that he described was XSELinux. XSELinux provides finer-grained control within the X server, allowing control over features such as drag-and-drop or access to the mouse. However, the level of control provided by XSELinux is still too coarse-grained to be useful: it allows per-application control of access to X features, but cannot (for example) restrict input to the currently selected application. Consequently, it is either not provided, or disabled, in most distributions. A sometimes recommended alternative for confining applications that use the X server is Xephyr, which implements sandboxing by launching an X server inside another X server. Although this provides a good degree of isolation between applications in different sandboxes, a sandboxing solution has problems of its own: it becomes complicated to share information between applications in different sandboxes.
Timothée went on to describe two projects that have tried, in different ways, to bring greater security to the X server: QubesOS and PIGA-OS. Both of these projects aim to confine applications, control which applications can access input buffers of other applications, and so on.
QubesOS groups applications into "domains" that have similar levels of security. Each domain runs its X server in a separate Xen virtual machine. For example, surfing the web in a browser would be conducted in a low-security domain, while reading corporate email is done in a separate, higher-security domain. Functionality such as cut-and-paste and drag-and-drop between domains is provided by means of a daemon that runs in the privileged dom0 virtual machine that implements mandatory access control. QubesOS provides a high degree of isolation between applications.
However, Timothée described a number of drawbacks to QubesOS. It requires many virtual machines, which results in slow performance on desktop and laptop systems. It is not feasible on mobile systems, because of heavy resource-usage requirements and the almost-mandatory requirement for hardware-assisted virtualization in order to achieve satisfactory performance. Furthermore, one can't be sure that Xen can isolate virtual machines, so there might be ways to access buffers in other virtual machines.
PIGA-OS [PDF], a system that Martin and Timothée have worked on, takes a different approach from QubesOS. Each application is placed in a separate SELinux domain and XSELinux is used to provide confinement in the X server. SELinux plus XSELinux provide many of the pieces needed for a secure X server, but some pieces are still missing. Therefore, PIGA-OS adds a daemon, PIGA-SYSTRANS, that grants rights to applications and prompts users when they switch between different domains as a consequence of their activities.
PIGA-OS has some notable advantages. It does not require virtual machines. It dynamically adjusts (under user control) the permissions of applications according to the user's activity. However, a significant downside of the PIGA-OS approach is that it requires quite some effort to set up the global SELinux policy that governs applications and activities. (This is a one-time effort, but the policy must be updated if an application acquires new features that require new types of privileged access.)
Wayland and Weston
Timothée then turned to the subject of Wayland, the display server protocol posited as a replacement for X11, and Weston, the reference implementation of the compositing window manager for Wayland. His goal was to look at how Wayland and Weston have fixed some of the problems of the X server described above and outline the problems that remain.
Timothée divided up the discussion of security somewhat differently in this part of the presentation, beginning by talking about the security of input in Wayland/Weston. On this front, the new system is in good shape. Because Weston knows where applications are on the screen, it is able to decide which application should receive input events (this differs from the X server). This defeats key logging applications. Regarding integrity of input, the kernel limits access to the two main sources (/dev/input and /dev/uinput) to the root user only. Because Wayland/Weston does not (yet) support virtual keyboards, it is not (yet) possible to forge input. (The topic of virtual keyboards was revisited later in the talk.)
On the output side, Timothée observed that Weston does have some problems with confidentiality and integrity. Weston uses the Graphics Execution Manager (GEM) to share application buffers between the compositor and applications. The problem is that GEM buffers are referenced using handles that are 32-bit integers. These handles can be guessed (or brute-forced), which means that one application can easily access GEM buffers belonging to other applications. Martin noted that this problem would be resolved if and when Weston moved to the use of DMABUF (DMA buffer sharing).
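The guessability is easy to picture with GEM's global "flink" names, which are small integers that any client of the same DRM device can try to open. Here is a hedged sketch (the device path and name range are assumptions; DRM_IOCTL_GEM_OPEN and struct drm_gem_open are the real UAPI):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>    /* header location varies by distribution */

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        unsigned int name;

        if (fd < 0)
            return 1;
        for (name = 1; name < 4096; name++) {
            struct drm_gem_open arg;

            memset(&arg, 0, sizeof(arg));
            arg.name = name;    /* global names are small, guessable integers */
            if (ioctl(fd, DRM_IOCTL_GEM_OPEN, &arg) == 0)
                printf("name %u -> handle %u, size %llu\n",
                       name, arg.handle, (unsigned long long)arg.size);
        }
        return 0;
    }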
Timothée then considered how Weston should deal with applications that need exceptional handling with respect to security. The first of these that he covered was virtual keyboards, which are pseudo-devices that are permitted to send input events to the compositor. He made the general point that virtual keyboards should be "included" in the compositor, so that the compositor knows that it can trust the input they provide. Peter Hutterer raised a potential problem: each natural language (with a unique character set) requires its own virtual keyboard, and it seems that every few months someone starts a new virtual keyboard project for another language, with the result that adding keyboards to the compositor would be a never-ending task. In response, Timothée and Martin refined what they meant by "include". The compositor must not trust just any application to be a virtual keyboard. Rather, since the compositor knows which applications it is launching, it can choose which applications it will trust as virtual keyboards. Peter agreed with this approach but noted that there may be some (solvable) complexities, since, when dealing with a multilingual user, a switch from one language to another may involve not only a switch of virtual keyboards, but also a switch of the background framework that generates input events.
Weston does not yet support screen-shot applications, but when that support is added, some care will be needed to avoid confidentiality issues. Timothée's proposal was similar to that for virtual keyboards: the compositor would allow only trusted applications that it has launched to make screen shots. Again, there must be a method for specifying which applications the compositor should trust for this task.
Global keyboard shortcuts present a further problem. For example, media players commonly use keyboard shortcuts to provide functionality such as pausing or skipping to the next track. Typically, media players are not visible on the screen, and so do not receive input events directly. Therefore the compositor needs a way of registering special keystroke combinations and passing these to applications that have registered them. The problem is that this sort of functionality can allow the implementation of a different kind of key logger: a malicious application could register itself for many or all keystroke combinations. Again, there needs to be a way of specifying which applications are allowed to register global keyboard shortcuts, and perhaps which keystroke combinations they may register. Further complicating the problem is the fact that the user may change the global keyboard shortcuts that an application uses. Timothée said they had no solution to offer for this problem.
Peter Hutterer suggested what he called a "semantic approach" to the problem, but noted that it would require a lot more code. Instead of allowing applications to register keystroke combinations, the compositor would maintain a global registry where applications would register to say that they want to be notified for events such as "undo" or "cancel", and the compositor would control the key that is assigned to the event. This has the potential advantage that shortcuts could be consistent across applications. On the other hand, there will likely be conflicts between applications over deciding which events are which. Peter noted that the GTK project is currently doing some work in this area, and it may be worth contacting people working on that project.
The situation with screen-locking applications is similar to screen-shot applications. Currently, Weston does not support screen locking, but when that support is added, there should be the notion of having a restricted set of applications that the compositor permits to lock the screen. Indeed, since the screen-locking code is typically small, and the requirements are fairly narrow (so that there is no need for multiple different implementations), it may be sensible to implement that functionality directly inside the compositor.
Timothée summarized the proposals for controlling application security in Wayland and Weston. There should be a mandatory access control (MAC) framework as in the X Access Control Extension (XACE). Suitable hooks should be placed in the code to allow control of which applications can interact with one another and interact with Wayland to perform operations such as receiving input. The MAC framework should be implemented as a library, so that access control is unified across all Wayland compositors.
Rootless Weston
Traditionally, the X server has had to run with root privileges. Because the X window system is a large body of complex—and, in many cases, ancient—code, the fact that that code must run as root creates a window for attacks on a system. For this reason, it has long been a goal to rework the system to the point where root privilege is no longer needed to run the X server. Although some progress has been made toward that goal, there is as yet no general solution to the problem, and the X server still normally requires root privileges to run. The question then is how to avoid repeating this situation going forward, so that Weston does not require root privileges.

Timothée ran through some of the factors blocking rootless Weston. One problem is that Weston needs access to /dev/input, which is accessible only to root. Root privilege is also required to send output to the screen and to support hot plugging of keyboards and screens. The solution he proposed was to isolate the code that requires root privileges into a separate small executable that is run with root privileges. In the case where Weston needed access to a privileged file, the small executable would then open the required file and pass a file descriptor via a UNIX domain socket to Weston. There was little comment on this proposal, which may signify that it seemed reasonable to everyone present.
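The descriptor-passing half of that proposal is standard UNIX machinery (SCM_RIGHTS ancillary data over a UNIX domain socket); here is a minimal sketch of the root helper's sending side, with illustrative names rather than anything from Weston:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send an already-open file descriptor (say, for /dev/input/event0)
     * to the unprivileged compositor over a connected UNIX socket. */
    static int send_fd(int sock, int fd)
    {
        char dummy = 'F';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        char control[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = control, .msg_controllen = sizeof(control),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;   /* the ancillary data is the fd */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }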
Hardware and driver security
Martin returned to the microphone to talk about hardware and driver security. He began with the simple observation that graphics drivers and hardware should not allow privilege escalation (allowing a user to gain root access) and should not allow one user to read or write graphics buffers belonging to another user. Various platforms and drivers don't live up to these requirements. For example, on the Tegra 2 platform, the GPU permits shader routines to have full read and write access to all of the video RAM or all of the graphics-hosting RAM. Another example is the NVIDIA driver, which provides unprivileged users with access to nearly all of the GPU registers.
Martin emphasized the need for a sane kernel API that isolates GPU users and doesn't expose GPU registers to unprivileged users. GPU access to RAM also needs to be restricted, in order to prevent the GPU from accessing kernel data structures. (The lack of such a restriction was the source of the recent vulnerability in the NVIDIA driver that allowed root privilege escalation.)
One approach to isolating GPU users would be to apply a virtual-memory model to video memory, so that applications cannot touch each other's memory. This approach provides the best security, but the problem is that it is not supported by all graphics hardware. In addition, implementing this approach increases context-switching times; this is a problem for DRI2, which does a lot of context switching, and for Qt5, where all applications that use QML (which is the recommended approach) have an OpenGL context. The Nouveau driver currently takes this approach, and some other modern GPUs are also capable of doing so.
An alternative approach for isolating GPU users is for the kernel to validate the commands that user space submits to the GPU, in order to ensure that they touch only memory belonging to the submitting user. This approach has the advantages that it can be implemented for any graphics hardware and that context-switching costs are lower; however, it imposes a higher CPU overhead. Currently, the Radeon and Intel drivers take this approach.
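As a toy illustration of what such validation amounts to (the real Radeon and Intel checkers are considerably more involved, and the structures below are invented):

```c
/* Toy command-stream validator: before submitting a command buffer
 * to the GPU, walk it and reject any command whose target range
 * falls outside the buffers owned by the submitting client. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct gpu_cmd  { uint64_t dst_addr; uint64_t len; };
struct buf_range { uint64_t start; uint64_t end; };   /* [start, end) */

static bool cmd_allowed(const struct gpu_cmd *c,
                        const struct buf_range *owned, int n)
{
    for (int i = 0; i < n; i++)
        if (c->dst_addr >= owned[i].start &&
            c->dst_addr + c->len <= owned[i].end)
            return true;
    return false;
}

int main(void)
{
    struct buf_range mine[] = { { 0x1000, 0x2000 } };
    struct gpu_cmd ok  = { 0x1100, 0x100 };
    struct gpu_cmd bad = { 0x3000, 0x100 };   /* someone else's memory */

    printf("ok:  %d\n", cmd_allowed(&ok,  mine, 1));   /* 1 */
    printf("bad: %d\n", cmd_allowed(&bad, mine, 1));   /* 0 */
    return 0;
}
```

The per-command walk is exactly where the extra CPU overhead mentioned above comes from: every submission must be parsed and checked before the hardware sees it.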
Moving to another topic, Martin observed that a GPU buffer is not zeroed when it is allocated, meaning that the previous user's data is visible to the new user, which creates an obvious confidentiality problem. The difficulty is that zeroing buffers has a heavy performance impact. He suggested two strategies for dealing with this: zeroing deallocated buffers while the CPU is idle, and using the GPU itself to zero buffers. Lucas Stach pointed out that even if one of these strategies were employed, the memory bandwidth required for zeroing buffers would simply be too great on embedded devices. Martin accepted the point, and noted that the goal was to allow the user to choose the trade-off between security and performance.
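A minimal sketch of the first strategy, deferring the zeroing of freed buffers to idle time, might look like the following; the data structures are ours, not Weston's, and size matching is ignored for simplicity:

```c
/* Sketch of "zero on idle": freed buffers are queued dirty, and a
 * low-priority idle task scrubs them before they can be handed to
 * a new client. Illustrative only; a real allocator could equally
 * submit the fill as a GPU job. */
#include <stdlib.h>
#include <string.h>

struct gbuf {
    void *mem;
    size_t size;
    struct gbuf *next;
};

static struct gbuf *dirty_list;   /* freed, still holding old data */
static struct gbuf *clean_list;   /* already zeroed, safe to reuse */

/* Freeing is cheap: just queue the buffer for later scrubbing. */
static void buf_free(struct gbuf *b)
{
    b->next = dirty_list;
    dirty_list = b;
}

/* Run from an idle handler, one buffer at a time, so scrubbing
 * never stalls rendering for long. */
static void scrub_one_on_idle(void)
{
    struct gbuf *b = dirty_list;
    if (!b)
        return;
    dirty_list = b->next;
    memset(b->mem, 0, b->size);
    b->next = clean_list;
    clean_list = b;
}

/* Allocation prefers pre-scrubbed buffers; if none is ready, a
 * dirty one must be zeroed on the spot so that the previous
 * user's data never leaks. */
static struct gbuf *buf_alloc(void)
{
    struct gbuf *b = clean_list;
    if (!b) {
        scrub_one_on_idle();      /* force a scrub now */
        b = clean_list;
    }
    if (b)
        clean_list = b->next;
    return b;
}

int main(void)
{
    struct gbuf b = { malloc(64), 64, 0 };

    buf_free(&b);                 /* cheap: zeroing is deferred */
    scrub_one_on_idle();          /* done when the system is idle */
    buf_alloc();                  /* reuse is now safe */
    free(b.mem);
    return 0;
}
```

Lucas's objection is visible even in this sketch: the memset costs the same memory bandwidth whenever it runs; deferral only moves the cost to a less visible moment.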
The final topic of the presentation concerned plug-ins. Since the compositor has full access to input events and application output buffers, this means that compositor plug-ins potentially have the same access, so that a (possibly malicious) plug-in could cause a security vulnerability. Martin said that they didn't have too many concrete proposals on how to deal with this, beyond noting that, whenever possible, plug-ins should not have access to user inputs and output buffers. He suggested that address-space layout randomization inside the GPU virtual memory may help, along with killing applications that access invalid virtual addresses.
In some concluding discussion at the end of the session, Matthieu Herrb remarked that most X11 applications are not ready to handle errors from failed requests to the X server, and that if the graphics server is going to implement finer-grained access control, then it's important that applications have a good strategy for handling the various errors that may occur. Martin agreed, adding "We don't want to fix X; it's not possible. We can't ask people to rewrite [applications]. But with Wayland being new, we can do things better from day one."
Summary
From the various comments made during the presentation, it's clear that the X developers are well aware of the security problems in X11. It's equally clear that they are keen to avoid making the same mistakes in Wayland and Weston.
The X.Org wiki has pointers to the slides and video for this presentation, as well as pointers to the slides and video for a follow-on security-related presentation (entitled "DRM2").
Brief items
Quotes of the week
Tent v0.1 released
The initial release of the Tent distributed social networking protocol implementation is out. From the introductory post: "Tent is decentralized, not federated or centralized. Any Tent server can connect to any other Tent server. All features are available to any server as first-class citizens. Anyone can host their own Tent server. Tent servers can also be run as Tor hidden services to create a social darknet for at-risk organizers and activists. Anyone can write applications that connect to Tent in order to display or create user content."
GStreamer 1.0 released
The GStreamer project has announced the release of GStreamer 1.0. "The 1.x series is a stable series targeted at end users. It is not API or ABI compatible with the 0.10.x series. It can, however, be installed in parallel with the 0.10.x series and will not affect an existing 0.10.x installation." LWN recently previewed this release.
REBOL to go open-source
The developer behind the REBOL language has announced his intention to release the code under (probably) the GPLv2 license. "The time has come for REBOL to be released as open source. This is the only way I can see it regaining some degree of momentum and renewed interest -- not just within our REBOL community, but for many yet-to-be users who possess the curiosity or motivation to learn new and powerful programming concepts and techniques."
GTK+ 3.6.0 released
GTK+ 3.6 has been released. Notable improvements since GTK+ 3.4 include new widgets like GtkLevelBar and GtkMenuButton, plus support for CSS animations, transitions, and blur shadows.
GNOME 3.6 released
The GNOME 3.6 release is available. It features improved notifications, an enhanced activities overview, a lot of changes to "Files" (the application formerly known as Nautilus), input source integration, and more; see the release notes for details.
WebKitGTK+ 1.10.0 released
WebKitGTK+ version 1.10.0 is available. This release includes the first implementation of a high-level GTK+ API for the rendering engine, as well as hardware accelerated compositing and GStreamer 1.0 support.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (September 25)
- What's cooking in git.git (September 21)
- What's cooking in git.git (September 24)
- Haskell Weekly News (September 19)
- Mozilla Hacks Weekly (September 20)
- OpenStack Community Newsletter (September 21)
- Perl Weekly (September 24)
- PostgreSQL Weekly News (September 24)
- Ruby Weekly (September 20)
An Introduction to GCC Compiler Intrinsics in Vector Processing (Linux Journal)
Linux Journal investigates GCC intrinsics for vector processing. "GCC offers an intermediate between assembly and standard C that can get you more speed and processor features without having to go all the way to assembly language: compiler intrinsics. This article discusses GCC's compiler intrinsics, emphasizing vector processing on three platforms: X86 (using MMX, SSE and SSE2); Motorola, now Freescale (using Altivec); and ARM Cortex-A (using Neon). We conclude with some debugging tips and references."
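To give a flavor of the style the article covers, here is a small SSE example of our own (not taken from the article); built with gcc -msse, the four additions happen in a single vector instruction:

```c
/* Adding two vectors of four floats with SSE intrinsics instead of
 * a scalar loop or hand-written assembly. */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    float a[4] = { 1, 2, 3, 4 };
    float b[4] = { 10, 20, 30, 40 };
    float r[4];

    __m128 va = _mm_loadu_ps(a);           /* load 4 floats */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(r, _mm_add_ps(va, vb));  /* 4 adds at once */

    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
    return 0;
}
```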
Proceedings of the 2012 X.Org Developer's Conference
Summaries from each session of the recently concluded X.Org developers conference are now available. There are also links to videos of the talks; slides are likely to be available soon as well.
Adamczyk: Design Principles Behind Firefox OS UX
For those who are curious about what Firefox OS ("Boot2Gecko") will look like: Patryk Adamczyk has posted a description of the system's core design principles, with a lot of screenshots. "Many applications have a distinct character visually catered to their use, with a list layout, focused on quick triage of content. Productivity applications create a sense of an office; bright with an emphasis on typographic content. Media applications create a more theatric (dark) experience with emphasis on graphical content rather than text."
Page editor: Nathan Willis
Announcements
Brief items
Ada’s Angels fundraiser
The Ada Initiative is holding the Ada’s Angels fundraising campaign. "Our goal for the Ada’s Angel campaign is attracting 1000 donors at the $16 – $32 a month level by October 31, 2012. With this level of steady funding, we can commit to long-term projects like holding AdaCamp in India and Europe, take on new projects like creating a practical workshop for overcoming Impostor Syndrome for women in open tech/culture, and significantly reduce our fundraising costs. (“Angel” is used in the non-religious sense of “angel investor” – we welcome people of all or no religious beliefs.)"
Articles of interest
ITC: How an obscure bureaucracy makes the world safe for patent trolls (ars technica)
Ars technica reports on the role of the US International Trade Commission (ITC) in the patent wars. Essentially it is much easier (and quicker) to get "exclusion orders" (effectively injunctions) against "infringing" products via the ITC. "Exclusion orders have proven particularly popular with patent trolls. The eBay [v. MercExchange] standard asks whether the patent holder has suffered an "irreparable injury" due to infringement, and whether an injunction is necessary to repair that injury. Ordinary technology companies can often satisfy this standard. But it's much harder for "non-practicing entities"—patent trolls—to do so, since they can't argue their own products have been hurt by unfair competition. The standard for obtaining exclusion orders is easier for patent trolls to satisfy, however; unsurprisingly, they have accounted for a growing share of the ITC's Section 337 cases."
Calls for Presentations
ELCE 2012 Technical Showcase - Call for demonstrations
The Embedded Linux Conference Europe (ELCE) will take place November 5-7 in Barcelona, Spain. The event staff will be organizing a Technical Showcase for the evening of the 5th. "If you are interested in showcasing something, please contact me before Oct. 19, and prepare a poster too. However, the sooner, the better, as demonstration space will be limited."
Mini DebConf in Paris
There will be a mini-DebConf in Paris, France November 24-25, 2012. The call for talks is open. "... Debian enthusiasts from far and wide will gather to talk about the latest Debian changes, the Debian community and meet up new and old friends. This Mini-DebConf will also be a great chance to talk about the upcoming Wheezy release, as well as talk about features for the release after Wheezy, Jessie!"
Upcoming Events
Events: September 27, 2012 to November 26, 2012
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| September 24–September 27 | GNU Radio Conference | Atlanta, USA |
| September 27–September 29 | YAPC::Asia | Tokyo, Japan |
| September 27–September 28 | PuppetConf | San Francisco, US |
| September 28–September 30 | Ohio LinuxFest 2012 | Columbus, OH, USA |
| September 28–September 30 | PyCon India 2012 | Bengaluru, India |
| September 28–October 1 | PyCon UK 2012 | Coventry, West Midlands, UK |
| September 28 | LPI Forum | Warsaw, Poland |
| October 2–October 4 | Velocity Europe | London, England |
| October 4–October 5 | PyCon South Africa 2012 | Cape Town, South Africa |
| October 5–October 6 | T3CON12 | Stuttgart, Germany |
| October 6–October 8 | GNOME Boston Summit 2012 | Cambridge, MA, USA |
| October 11–October 12 | Korea Linux Forum 2012 | Seoul, South Korea |
| October 12–October 13 | Open Source Developer's Conference / France | Paris, France |
| October 13–October 14 | Debian BSP in Alcester (Warwickshire, UK) | Alcester, Warwickshire, UK |
| October 13–October 14 | PyCon Ireland 2012 | Dublin, Ireland |
| October 13–October 15 | FUDCon:Paris 2012 | Paris, France |
| October 13 | 2012 Columbus Code Camp | Columbus, OH, USA |
| October 13–October 14 | Debian Bug Squashing Party in Utrecht | Utrecht, Netherlands |
| October 15–October 18 | OpenStack Summit | San Diego, CA, USA |
| October 15–October 18 | Linux Driver Verification Workshop | Amirandes, Heraklion, Crete |
| October 17–October 19 | LibreOffice Conference | Berlin, Germany |
| October 17–October 19 | MonkeySpace | Boston, MA, USA |
| October 18–October 20 | 14th Real Time Linux Workshop | Chapel Hill, NC, USA |
| October 20–October 21 | PyCon Ukraine 2012 | Kyiv, Ukraine |
| October 20–October 21 | Gentoo miniconf | Prague, Czech Republic |
| October 20–October 21 | PyCarolinas 2012 | Chapel Hill, NC, USA |
| October 20–October 23 | openSUSE Conference 2012 | Prague, Czech Republic |
| October 20–October 21 | LinuxDays | Prague, Czech Republic |
| October 22–October 23 | PyCon Finland 2012 | Espoo, Finland |
| October 23–October 25 | Hack.lu | Dommeldange, Luxembourg |
| October 23–October 26 | PostgreSQL Conference Europe | Prague, Czech Republic |
| October 25–October 26 | Droidcon London | London, UK |
| October 26–October 27 | Firebird Conference 2012 | Luxembourg, Luxembourg |
| October 26–October 28 | PyData NYC 2012 | New York City, NY, USA |
| October 27 | Central PA Open Source Conference | Harrisburg, PA, USA |
| October 27–October 28 | Technical Dutch Open Source Event | Eindhoven, Netherlands |
| October 27 | pyArkansas 2012 | Conway, AR, USA |
| October 27 | Linux Day 2012 | Hundreds of cities, Italy |
| October 29–November 3 | PyCon DE 2012 | Leipzig, Germany |
| October 29–November 2 | Linaro Connect | Copenhagen, Denmark |
| October 29–November 1 | Ubuntu Developer Summit - R | Copenhagen, Denmark |
| October 30 | Ubuntu Enterprise Summit | Copenhagen, Denmark |
| November 3–November 4 | OpenFest 2012 | Sofia, Bulgaria |
| November 3–November 4 | MeetBSD California 2012 | Sunnyvale, California, USA |
| November 5–November 7 | Embedded Linux Conference Europe | Barcelona, Spain |
| November 5–November 7 | LinuxCon Europe | Barcelona, Spain |
| November 5–November 9 | Apache OpenOffice Conference-Within-a-Conference | Sinsheim, Germany |
| November 5–November 8 | ApacheCon Europe 2012 | Sinsheim, Germany |
| November 7–November 9 | KVM Forum and oVirt Workshop Europe 2012 | Barcelona, Spain |
| November 7–November 8 | LLVM Developers' Meeting | San Jose, CA, USA |
| November 8 | NLUUG Fall Conference 2012 | ReeHorst in Ede, Netherlands |
| November 9–November 11 | Free Society Conference and Nordic Summit | Göteborg, Sweden |
| November 9–November 11 | Mozilla Festival | London, England |
| November 9–November 11 | Python Conference - Canada | Toronto, ON, Canada |
| November 10–November 16 | SC12 | Salt Lake City, UT, USA |
| November 12–November 16 | 19th Annual Tcl/Tk Conference | Chicago, IL, USA |
| November 12–November 17 | PyCon Argentina 2012 | Buenos Aires, Argentina |
| November 12–November 14 | Qt Developers Days | Berlin, Germany |
| November 16–November 19 | Linux Color Management Hackfest 2012 | Brno, Czech Republic |
| November 16 | PyHPC 2012 | Salt Lake City, UT, USA |
| November 20–November 24 | 8th Brazilian Python Conference | Rio de Janeiro, Brazil |
| November 24–November 25 | Mini Debian Conference in Paris | Paris, France |
| November 24 | London Perl Workshop 2012 | London, UK |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
