Weekly Edition for March 14, 2013

Some impressions from Linaro Connect

By Jonathan Corbet
March 13, 2013
One need only have a quick look at the LWN conference coverage index to understand that our community does not lack for opportunities to get together. A relatively recent addition to the list of Linux-related conferences is the series of "Linaro Connect" events. Recently, your editor was finally able to attend one of these gatherings: Linaro Connect Asia in Hong Kong. Various talks of interest have been covered in separate articles; this article will focus on the event itself.

Linaro is an industry consortium dedicated to improving the functionality and performance of Linux on the ARM processor; its list of members includes many of the companies working in this area. Quite a bit of engineering work is done under the Linaro banner, to the point that it was the source of 4.6% of the changes going into the 3.8 kernel. A lot of Linaro's developers are employed by member companies and assigned to Linaro, but the number of developers employed by Linaro directly has been growing steadily. All told, there are hundreds of people whose work is related to Linaro in some way.

Given that those people work for a lot of different companies and are spread across the world, it makes sense that they would all want to get together on occasion. That is the purpose of the Linaro Connect events. These conferences are open to any interested attendee, but they are focused on Linaro employees and assignees who otherwise would almost never see each other. The result is that, in some ways, Linaro Connect resembles an internal corporate get-together more than a traditional Linux conference.

So, for example, the opening session was delivered by George Grey, Linaro's CEO; he used it to update attendees on recent developments in the Linaro organization. The Linaro Enterprise Group (LEG) was announced last November; at this point there are 25 engineers working with LEG and 14 member companies. More recently, the Linaro Networking Group was announced as an initiative to support the use of ARM processors in networking equipment. This group has 12 member companies, two of which have yet to decloak and identify themselves.

Life is good in the ARM world, George said; some 8.7 billion ARM chips were shipped in 2012. There are many opportunities for expansion, not the least of which is the data center. He pointed out that, in the US, data centers are responsible for 2.2% of all energy use; ARM provides the opportunity to reduce power costs considerably. The "Internet of things" is also a natural opportunity for ARM, though it brings its own challenges, not the least of which is security: George noted that he really does not want his heart rate to be broadcast to the world as a whole. And, he said, the upcoming 64-bit ARMv8 architecture is "going to change everything."

The event resembled a company meeting in other ways; for example, one of the talks on the first day was an orientation for new employees and assignees. Others were mentoring sessions aimed at helping developers learn how to get code merged upstream. One of the sessions on the final day was for the handing out of awards to the people who have done the most to push Linaro's objectives forward. And a large part of the schedule (every afternoon, essentially) was dedicated to hacking sessions aimed at the solution of specific problems. It was, in summary, a focused, task-oriented gathering meant to help Linaro meet its goals.

There were also traditional talk sessions, though the hope was for them to be highly interactive and task-focused as well. Your editor was amused to hear the standard complaint of conference organizers everywhere: despite their attempts to set up and facilitate discussions, more and more of the sessions seem to be turning into lecture-style presentations with one person talking at the audience. That said, your editor's overall impression was of an event with about 350 focused developers doing their best to get a lot of useful work done.

If there is a complaint to be made about Linaro Connect, it would be that the event, like much in the mobile and embedded communities, is its own world with limited connections to the broader community. Its sessions offered help on how to work with upstream; your editor, in his talk, suggested that Linaro's developers might want to work harder to be the upstream. ARM architecture maintainer Russell King was recently heard to complain about Linaro Connect, saying that it works outside the community and that "It can be viewed as corporate takeover of open source." It is doubtful that many see Linaro in that light; indeed, even Russell might not really view things in such a harsh way. But Linaro Connect does feel just a little bit isolated from the development community as a whole.

In any case, that is a relatively minor quibble. It is clear that the ARM community would like to be less isolated, and Linaro, through its strong focus on getting code upstream, is helping to make that happen. Contributions from the mobile and embedded communities have been steadily increasing for the last few years, to the point that they now make up a significant fraction of the changes going into the kernel. That can be expected to increase further as ARM developers become more confident in their ability to work with the core kernel, and as ARM processors move into new roles. Chances are that, in a few years, we'll have a large set of recently established kernel developers, and that quite a few of them will have gotten their start at events like Linaro Connect.

[Your editor would like to thank Linaro for travel assistance to attend this event.]

Comments (9 posted)

LC-Asia: Facebook contemplates ARM servers

By Jonathan Corbet
March 12, 2013
By any reckoning, the ARM architecture is a big success; there are more ARM processors shipping than any other type. But, despite the talk of ARM-based server systems over the last few years, most people still do not take ARM seriously in that role. Jason Taylor, Facebook's Director of Capacity Engineering & Analysis, came to the 2013 Linaro Connect Asia event to say that it may be time for that view to change. His talk was an interesting look into how one large, server-oriented operation thinks ARM may fit into its data centers.

It should come as a surprise to few readers that Facebook is big. The company claims 1 billion users across the planet. Over 350 million photographs are uploaded to Facebook's servers every day; Jason suggested that perhaps 25% of all photos taken end up on Facebook. The company's servers handle 4.2 billion "likes," posts, and comments every day and vast numbers of users checking in. To be able to handle that kind of load, Facebook invests a lot of money in its data centers; that, in turn, has led naturally to a high level of interest in efficiency.

Facebook sees a server rack as its basic unit of computing. Those racks are populated with five standard types of server; each type is optimized for the needs of one of the top five users within the company. Basic web servers offer a lot of CPU power, but not much else, while database servers are loaded with a lot of memory and large amounts of flash storage capable of providing high I/O operation rates. "Hadoop" servers offer medium levels of CPU and memory, but large amounts of rotating storage; "haystack" servers offer lots of storage and not much of anything else. Finally, there are "feed" servers with fast CPUs and a lot of memory; they handle search, advertisements, and related tasks. The fact that these servers run Linux wasn't really even deemed worth mentioning.

There are clear advantages to focusing on a small set of server types. The machines become cheaper as a result of volume pricing; they are also easier to manage and easier to move from one task to another. New servers can be allocated and placed into service in a matter of hours. On the other hand, these servers are optimized for specific internal Facebook users; everybody else just has to make do with servers that might not be ideal for their needs. Those needs also tend to change over time, but the configuration of the servers remains fixed. There would be clear value in the creation of a more flexible alternative.

Facebook's servers are currently all built using large desktop processors made by Intel and AMD. But, Jason noted, interesting things are happening in the area of mobile processors. Those processors will cross a couple of important boundaries in the next year or two: 64-bit versions will be available, and they will start reaching clock speeds of 2.4 GHz or so. As a result, he said, it is becoming reasonable to consider the use of these processors for big, compute-oriented jobs.

That said, there are a couple of significant drawbacks to mobile processors. The number of instructions executed per clock cycle is still relatively low, so, even at a high clock rate, mobile processors cannot get as much computational work done as desktop processors. And that hurts because processors do not run on their own; they need to be placed in racks, provided with power supplies, and connected to memory, storage, networking, and so on. A big processor reduces the relative cost of those other resources, leading to a more cost-effective package overall. In other words, the use of "wimpy cores" can triple the other fixed costs associated with building a complete, working system.
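
To make that arithmetic concrete, here is a deliberately simplified sketch in Go. The figures are invented for illustration (Jason gave no specific numbers); the point is the shape of the calculation, not the values.

    package main

    import "fmt"

    // Hypothetical figures: a "big" node delivers three units of compute,
    // a "wimpy" node one unit, and every node carries the same fixed cost
    // (rack slot, power supply, network port) regardless of its CPU.
    func main() {
        const fixedPerNode = 500.0 // dollars, invented for illustration
        const totalCompute = 30.0  // arbitrary units of required throughput

        for _, node := range []struct {
            kind    string
            compute float64
        }{{"desktop-class", 3.0}, {"mobile-class", 1.0}} {
            n := totalCompute / node.compute
            fmt.Printf("%-14s %2.0f nodes, $%5.0f in fixed costs\n",
                node.kind, n, n*fixedPerNode)
        }
    }

A node with one third of the compute needs three times as many nodes, so the fixed costs triple; real costs are messier, but that is the shape of the argument.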

Facebook's solution to this problem is a server board called, for better or worse, "Group Hug." This design, being put together and published through Facebook's Open Compute Project, puts ten ARM processor boards onto a single server board; each processor has a 1Gb network interface which is aggregated, at the board level, into a single 10Gb interface. The server boards have no storage or other peripherals. The result is a server board with far more processors than a traditional dual-socket board, but with roughly the same computing power as a server board built with desktop processors.

These ARM server boards can then be used in a related initiative called the "disaggregated rack." The problem Facebook is trying to address here is the mismatch between available server resources and what a particular task may need. A particular server may provide just the right amount of RAM, for example, but the CPU will be idle much of the time, leading to wasted resources. Over time, that task's CPU needs might grow, to the point that, eventually, the CPU power on its servers may be inadequate, slowing things down overall. With Facebook's current server architecture, it is hard to keep up with the changing needs of this kind of task.

In a disaggregated rack, the resources required by a computational task are split apart and provided at the rack level. CPU power is provided by boxes with processors and little else — ARM-based "Group Hug" boards, for example. Other boxes in the rack may provide RAM (in the form of a simple key/value database service), high-speed storage (lots of flash), or high-capacity storage in the form of a pile of rotating drives. Each rack can be configured differently, depending on a specific task's needs. A rack dedicated to the new "graph search" feature will have a lot of compute servers and flash servers, but not much storage. A photo-serving rack, instead, will be dominated by rotating storage. As needs change, the configuration of the rack can change with it.
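
The "RAM box" idea, in particular, amounts to a network-attached key/value store. As a rough sketch of the concept (this is not Facebook's design; the HTTP protocol and names here are invented for illustration), such a service can be surprisingly small:

    package main

    import (
        "io/ioutil"
        "net/http"
        "sync"
    )

    // A toy network-attached memory service: PUT stores a value under a
    // key, GET retrieves it. A rack-scale service would add eviction,
    // replication, and a leaner protocol; this only sketches the idea.
    func main() {
        var mu sync.RWMutex
        store := make(map[string][]byte)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            switch r.Method {
            case "PUT":
                val, _ := ioutil.ReadAll(r.Body)
                mu.Lock()
                store[r.URL.Path] = val
                mu.Unlock()
            case "GET":
                mu.RLock()
                val, ok := store[r.URL.Path]
                mu.RUnlock()
                if !ok {
                    http.NotFound(w, r)
                    return
                }
                w.Write(val)
            }
        })
        http.ListenAndServe(":8080", nil)
    }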

All of this has become possible because the speed of network interfaces has increased considerably. With networking speeds up to 100Gb/sec within the rack, the local bandwidth begins to look nearly infinite, and the network can become the backplane for computers built at a higher level. The result is a high-performance computing architecture that allows systems to be precisely tuned to specific needs and allows individual components to be depreciated (and upgraded) on independent schedules.

Interestingly, Jason's talk did not mention power consumption — one of ARM's biggest advantages — at all. Facebook is almost certainly concerned about the power costs of its data centers, but Linux-based ARM servers are apparently of interest mostly because they can offer relatively inexpensive and flexible computing power. If the disaggregated rack experiment succeeds, it may well demonstrate one way in which ARM-based servers can take a significant place in the data center.

[Your editor would like to thank Linaro for travel assistance to attend this event.]

Comments (21 posted)

SCALE: The life and times of the AGPL

By Nathan Willis
March 13, 2013

At SCALE 11x in Los Angeles, Bradley Kuhn of the Software Freedom Conservancy presented a unique look at the peculiar origin of the Affero GPL (AGPL). The AGPL was created to solve the problem of application service providers (such as Web-delivered services) skirting copyleft while adhering to the letter of licenses like the GPL, but as Kuhn explained, it is not a perfect solution.

The history of AGPL has an unpleasant beginning, middle, and end, Kuhn said, but the community needs to understand it. Many people think of the AGPL in conjunction with the "so-called Application Service Provider loophole"—but it was not really a loophole at all. Rather, the authors of the GPLv2 did not foresee the dramatic takeoff of web applications—and that was not a failure, strictly speaking, since no one can foresee the future.

In the late 1980s, he noted, client/server applications were not yet the default, and in the early 1990s, client/server applications running over the Internet were still comparatively new. In addition, the entire "copyleft hack" that makes the GPL work is centered around distribution, as it functions in copyright law. To the creators of copyleft, making private modifications to a work has never required publishing one's changes, he said, and that is the right stance. Demanding publication in such cases would violate the user's privacy.

Nevertheless, when web applications took off, the copyleft community did recognize that web services represented a problem. In early 2001, someone at an event told Kuhn "I won’t release my web application code at all, because the GPL is the BSD license of the web." In other words, a service can be built on GPL code, but can incorporate changes that are never shared with the end user, because the end user does not download the software from the server. Henry Poole, who founded the web service company Allseer to assist nonprofits with fundraising, also understood how web applications inhibited user freedom, and observed that "we have no copyleft." Poole approached the Free Software Foundation (FSF) looking for a solution, which touched off the development of what became the AGPL.

Searching for an approach

Allseer eventually changed its name to Affero, after which the AGPL is named, but before that license was written, several other ideas to address the web application problem were tossed back and forth between Poole, Kuhn, and others. The first was the notion of "public performance," which is a concept already well-established in copyright law. If running the software on a public web server is a public performance, then perhaps, the thinking went, a copyleft license's terms could specify that such public performances would require source distribution of the software.

The trouble with this approach is that "public performance" has never been defined for software, so relying on it would be somewhat unpredictable—as an undefined term, it would not be clear when it did and did not apply. Establishing such a definition is a challenge in its own right, and without one it would be difficult to write a public performance clause into (for example) the GPL and guarantee that it was sufficiently strong to address the web application issue. Kuhn has long supported adding a public performance clause anyway, saying it would be at worst a "no op," but so far he has not persuaded anyone else.

The next idea floated was that of the Ouroboros, which in antiquity referred to a serpent eating its own tail, but in classic computer science terminology also meant a program that could generate its own source code as output. The idea is also found in programs known as quines, Kuhn said, although he only encountered the term later. Perhaps the GPL could add a clause requiring that the program be able to generate its source code as output, Kuhn thought. The GPLv2 already requires in §2(c) that an interactive program produce a copyright notice and information about obtaining the license. Thus, there was a precedent that the GPL can require adding a "feature" for the sole purpose of preserving software freedom.
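
For readers who have not run into quines before, here is the classic construction, rendered in Go purely as an illustration (it is not from Kuhn's talk); running the program prints its own source exactly:

    package main

    import "fmt"

    func main() {
        s := "package main\n\nimport \"fmt\"\n\nfunc main() {\n    s := %q\n    fmt.Printf(s, s)\n}\n"
        fmt.Printf(s, s)
    }

The %q verb prints the string in Go's quoted syntax, so the single string literal carries the entire program, including an escaped copy of itself.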

The long and winding license development path

In September 2002, Kuhn proposed adding the "print your own source code" feature as §2(d) in a new revision of the GPL, which would then be published as version 2.2 (and would serve as Poole's license solution for Affero). Once the lawyers started actually drafting the language, however, they dropped the "computer-sciencey" focus of the print-your-own-source clause and replaced it with the AGPL's now-familiar "download the corresponding source code" feature requirement instead. Poole was happy with the change and incorporated it into the AGPLv1. The initial draft was "buggy," Kuhn said, with flaws such as specifying the use of HTTP, but it was released by the FSF, and was the first officially sanctioned fork of the GPL.

The GPLv2.2 (which could have incorporated the new Affero-style source code download clause) was never released, Kuhn said, even though Richard Stallman agreed to the release in 2003. The reasons the release was never made were mostly bad ones, including Affero (the company) entering bankruptcy. But there was also internal division within the FSF team. Kuhn chose the "wrong fork," he said, and spent much of his time on license enforcement actions and technical work, which distracted him from other tasks. Meanwhile, other FSF people started working on the GPLv3, and the still-unreleased version 2.2 fell through the cracks.

Kuhn and Poole had both assumed that the Affero clause was safely part of the GPLv3, but those working on the license development project left it out. By the time he realized what had happened, Kuhn said, the first drafts of GPLv3 had appeared, and the Affero clause was gone. Fortunately, however, Poole insisted on upgrading the AGPLv1, and AGPLv3 was written to maintain compatibility with GPLv3. AGPLv3 was not released until 2007, but in the interim Richard Fontana wrote a "transitional" AGPLv2 that projects could use to migrate from AGPLv1 to the freshly-minted AGPLv3. Regrettably, though, the release of AGPLv3 was made with what Kuhn described as a "whimper." A lot of factors—and people—contributed, but ultimately the upshot is that the Affero clause did not revolutionize web development as had been hoped.

The Dark Ages

In the time that elapsed between the Affero clause's first incarnation (in 2002) and the release of AGPLv3 (in 2007), Kuhn said, the computing landscape had changed considerably. Ruby on Rails was born, for example, launching a widely popular web development platform that had no ties to the GPL community. "AJAX"—which is now known simply as JavaScript, but at the time was revolutionary—became one of the most widely-adopted ways to deliver services. Finally, he said, the possibility of venture-capital funding trained new start-ups to build their businesses on a "release everything but your secret sauce" model.

Open source had become a buzzword-compliance checkbox to tick, but the culture of web development did not pick copyleft licenses, opting instead largely for the MIT License and the three-clause BSD License. The result is what Kuhn called "trade secret software." It is not proprietary in the old sense of the word; since it runs on a server, it is not installed and the user never has any opportunity to get it.

The client side of the equation is no better; web services deliver what they call "minified" JavaScript: obfuscated code that is intentionally compressed. This sort of JavaScript should really be considered a compiled JavaScript binary, Kuhn said, since it is clearly not the "preferred form for modifying" the application. An example snippet he showed illustrated the style:

    try{function e(b){throw b;}var i=void 0,k=null;
    function aa(){return function(b){return b}}
    function m(){return function(){}}
    function ba(b){return function(a){this[b]=a}}
    function o(b){ return function(){return this[b]}}
    function p(b){return function(){return b}}var q;
    function da(b,a,c){b=b.split(".");c=c||ea;
    !(b[0]in c)&&c.execScript&&c.execScript("var "+b[0]);
    for(var d;b.length&&(d=b.shift());)
    function fa(b,a){for(var c=b.split("."),d=a||ea,f;f=c.shift();)
which is not human-readable.

Microsoft understands the opportunity in this approach, he added, noting that proprietary JavaScript can be delivered to run even on an entirely free operating system. Today, the "trade secret" server side plus "compiled JavaScript" client side has become the norm, even with services that ostensibly are dedicated to software freedom, like the OpenStack infrastructure or the git-based GitHub and Bitbucket.

In addition to the non-free software deployment itself, Kuhn worries that software freedom advocates risk turning into a "cloistered elite" akin to monks in the Dark Ages. The monks were literate and preserved knowledge, but the masses outside the walls of the monastery suffered. Free software developers, too, can live comfortably in their own world as source code "haves" while the bulk of computer users remain source code "have-nots."

One hundred years out

Repairing such a bifurcation would be a colossal task. Among other factors, the rise of web application development represents a generational change, Kuhn said. How many of today's web developers have chased a bug from the top of the stack all the way down into the kernel? Many of them develop on Mac OS X, which is proprietary but is of very good quality (as opposed to Microsoft, he commented, which was never a long-term threat since its software was always terrible...).

Furthermore, if few of today's web developers have chased a bug all the way down the stack, as he suspects, tomorrow's developers may not ever need to. There are so many layers underneath a web application framework that most web developers do not need to know what happens in the lowest layers. Ironically, the success of free software has contributed to this situation as well. Today, the best operating system software in the world is free, and any teenager out there can go download it and run it. Web developers can get "cool, fun jobs" without giving much thought to the OS layer.

Perhaps this shift was inevitable, Kuhn said, and even if GPLv2.2 had rolled out the Affero clause in 2002 and he had done the best possible advocacy, it would not have altered the situation. But the real question is what the software freedom community should do now.

For starters, he said, the community needs to be aware that the AGPL can be—and often is—abused. This usually happens through "up-selling" and license enforcement pursued with a profit motive, he said. MySQL AB (now owned by Oracle) is the most prominent example; because it holds the copyright to the MySQL code and offers it under both GPL and commercial proprietary licenses, it can pressure businesses into purchasing commercial proprietary licenses by telling them that their usage of the software violates the GPL, even if it does not. This technique is one of the most frequent uses of the AGPL (targeting web services), Kuhn said, and "it makes me sick," because it goes directly against the intent of the license authors.

But although using the AGPL for web applications does not prevent such abuses, it is still the best option. Preserving software freedom on the web demands more, however, including building more federated services. There are a few examples, he said, MediaGoblin among them, but the problem that such services face is the "Great Marketing Machine." When everybody else (such as Twitter and Flickr) deploys proprietary web services, the resulting marketing push is not something that licensing alone can overtake.

The upshot, Kuhn said, is that "we’re back to catching up to proprietary software," just as GNU had to catch up to Unix in earlier decades. That game of catch-up took almost 20 years, he said, but then again an immediate solution is not critical. He is resigned to the fact that proprietary software will not disappear within his lifetime, he said, but he still wants to think about 50 or 100 years down the road.

Perhaps there were mistakes made in the creation and deployment of the Affero clause, but as Kuhn's talk illustrated, the job of protecting software freedom in web applications involves a number of discrete challenges. The AGPL is not a magic bullet, nor can it change today's web development culture, but the issues that it addresses are vital for the long term preservation of software freedom. The other wrinkle, of course, is that there are a wide range of opinions about what constitutes software freedom on the web. Some draw the line at whatever software runs on the user's local machine (i.e., the JavaScript components), others insist that public APIs and open access to data are what really matters. The position advocated by the FSF and by Kuhn is the most expansive, but because of it, developers now have another licensing option at their disposal.

Comments (43 posted)

Page editor: Jonathan Corbet


SCALE: The Hockeypuck key server

By Nathan Willis
March 13, 2013

At SCALE 11x in Los Angeles, Gazzang's Casey Marshall presented his work developing Hockeypuck, an alternative public PGP keyserver. Although the company developed Hockeypuck to support one of its own products, the AGPL-licensed server is capable of running a standalone key service, and is compatible with Synchronizing Key Server (SKS), the tool used by almost all public key services.

Keyservers are a critical component in the public key infrastructure (PKI), even though they rarely attract significant attention. They allow PGP-aware client applications to search for and retrieve users' public keys, which is what enables parties to encrypt messages to one another without prior agreement. In addition to the sender's and recipient's keys, PGP relies on a "web of trust" built up by verifiable signatures from other PGP keys. Aside from secure private email, PGP encryption is also used in an increasing number of other tasks, such as verifying software package signatures. Marshall observed that this system is in essence a globally distributed social network; the Internet's keyservers share identities in a distributed fashion across a truly global pool. Because keyservers distribute the load and can synchronize, it is very hard for an attacker to tamper with or otherwise undermine the keys.

SKS is by far the most commonly used keyserver, Marshall said, and it offers a powerful set of features. It uses an efficient "set reconciliation" algorithm to keep the global database of keys in sync between remote peers, and it uses Berkeley DB for data storage. Although there is an older, email-based protocol for querying and retrieving keys, SKS is built around the HTTP Keyserver Protocol (HKP), which Marshall described as being RESTful before REST was popular.
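
Concretely, an HKP query is just an HTTP GET against a well-known path on port 11371. A minimal client might look like the following sketch (for illustration only, and not code from either project; the server name is a placeholder):

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "net/url"
    )

    func main() {
        // "op=index" searches for matching keys; "op=get" would retrieve
        // an ASCII-armored key instead.
        q := url.Values{"op": {"index"}, "search": {"alice@example.org"}}
        resp, err := http.Get("http://keyserver.example.org:11371/pks/lookup?" + q.Encode())
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Println(string(body))
    }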

Enter Hockeypuck

Marshall got into keyserver development while working on a Gazzang product called zTrustee. The product is a closed-source storage service that uses OpenPGP keys to encrypt data. Because the service relies on generating and storing separate keys for separate objects, it quite naturally processes keys in heavy volume, which is not the typical workload for a keyserver. The company has been using SKS to distribute keys to clients, he said, but SKS is very write-heavy, and under sufficient load it was found to cause unacceptable delays.

Hoping to improve on the situation, Marshall started writing Hockeypuck. It is not yet ready to replace SKS in zTrustee, but interested developers can test it out. The project's code is publicly hosted and mirrored on GitHub. Binary packages are already available for Ubuntu 13.04, and there is a publicly accessible instance of the server; GPG users can query it by supplying the server's address as a command-line switch, for example:

     gpg --keyserver <server address> --search-keys Santa

The public server's web interface presents a minimalist "Google-style" search page (which, as he pointed out, includes an "I'm Feeling Lucky" button that is not really intended for serious usage). Hockeypuck does not participate in the global set reconciliation algorithm of the SKS keyservers, but the public server was initialized with a dump file provided by an SKS server six months ago, so it contains a significant subset of the total global key pool.

Hockeypuck is written in Go, which Marshall said he selected for several reasons. Its simplicity and modularity make it fun to write in, he said, but it also offers useful features and high-performance message passing. The wide assortment of libraries available included an OpenPGP implementation, which he used, although he noted that there are not many OpenPGP implementations to choose from—most PGP development takes the form of additional services built on top of a small set of OpenPGP stacks.

Lessons learned

At the moment, Hockeypuck uses MongoDB for storage; Marshall said he would be adding PostgreSQL support next, and perhaps other database connectors later. The server architecture is fairly straightforward, he said. A separate goroutine handles each HTTP request, opening a channel to a worker that queries the database. Scaling the system could be as simple as running one worker per CPU, or more sophisticated techniques could be employed depending on the database backend.
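
In outline, that pattern looks something like the following sketch (simplified, and not Hockeypuck's actual code): net/http already gives every request its own goroutine, and each handler passes work to a database worker over a channel.

    package main

    import (
        "fmt"
        "net/http"
    )

    // query carries one lookup from an HTTP handler to a database worker.
    type query struct {
        search string
        result chan string
    }

    func main() {
        requests := make(chan query)

        // One database worker; scaling could be as simple as starting
        // one of these per CPU.
        go func() {
            for q := range requests {
                // A real worker would query the database here.
                q.result <- "keys matching " + q.search
            }
        }()

        // net/http runs each request in its own goroutine; the handler
        // just hands the work to a worker over the channel.
        http.HandleFunc("/pks/lookup", func(w http.ResponseWriter, r *http.Request) {
            q := query{search: r.FormValue("search"), result: make(chan string)}
            requests <- q
            fmt.Fprintln(w, <-q.result)
        })
        http.ListenAndServe(":11371", nil)
    }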

Indeed, Marshall said, the choice of MongoDB has come with its share of problems. It was easy to develop with, he said; Go even has a library for it. "You give it a struct, and you get a struct back." But not being a real relational database imposes limitations, starting with the fact that you cannot index something just because you want to. The Hockeypuck database is indexed on the uid field (which contains names and email addresses), thus it cannot also run searches on other fields (like key-ID); a truly full-text search is not possible. He also found it necessary to reverse the order of key fingerprints, placing the shorter key-ID at the beginning of the record so that it can be read and searched faster. Maintaining performance has also been tricky, he said; loading data into MongoDB is very quick, but updates must be aggregated for write efficiency. Ultimately, he concluded, MongoDB makes it very easy to write database applications, but it shifts more work onto configuration and deployment.
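
The "struct in, struct out" style he described looks roughly like this with the mgo driver (a sketch only; the database, collection, and field names are invented, and error handling is mostly elided):

    package main

    import (
        "fmt"

        "labix.org/v2/mgo"
        "labix.org/v2/mgo/bson"
    )

    // Key is a toy document type; Hockeypuck's real schema is richer.
    type Key struct {
        Fingerprint string `bson:"fingerprint"`
        Uid         string `bson:"uid"`
    }

    func main() {
        session, err := mgo.Dial("localhost")
        if err != nil {
            panic(err)
        }
        defer session.Close()
        keys := session.DB("hkp").C("keys")

        // The uid field is indexed, as described above.
        keys.EnsureIndex(mgo.Index{Key: []string{"uid"}})

        // Give it a struct...
        keys.Insert(&Key{Fingerprint: "0123456789abcdef", Uid: "alice@example.org"})

        // ...and get a struct back.
        var k Key
        keys.Find(bson.M{"uid": "alice@example.org"}).One(&k)
        fmt.Println(k.Fingerprint)
    }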

Maintaining the public Hockeypuck server has also imparted its share of lessons, he said. For example, an unknown user downloaded Marshall's own key, added new email addresses as uid fields, then re-uploaded the key to the server. An OpenPGP client application would not have been fooled by the deception because the grafted-on fields were not signed by the primary key, but the incident pointed out to Marshall that Hockeypuck needed to do its part as well. He quickly added code that checked the signatures on uploads, and reloaded the SKS key database just to be on the safe side. Technically, he observed, keyservers themselves are not meant to be trusted entities—the keys are designed to be verified or rejected cryptographically—but maintaining a tidy and valid database is important too.
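
A check along those lines can be written with Go's openpgp package. The sketch below is illustrative only (it uses today's import path and is not Hockeypuck's actual code); it keeps only the user IDs that carry a valid self-signature from the primary key:

    package keycheck

    import "golang.org/x/crypto/openpgp"

    // validIdentities keeps only the user IDs that are properly
    // self-signed; uids grafted on without the primary key's signature,
    // as in the incident described above, are dropped.
    func validIdentities(e *openpgp.Entity) []string {
        var ids []string
        for name, ident := range e.Identities {
            if ident.SelfSignature == nil {
                continue // no self-signature at all
            }
            err := e.PrimaryKey.VerifyUserIdSignature(name, e.PrimaryKey,
                ident.SelfSignature)
            if err == nil {
                ids = append(ids, name)
            }
        }
        return ids
    }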

Keys to the future

Hockeypuck's key loading is quite fast already, Marshall said; it can load about two million keys in 24 hours. Queries, in turn, are "reasonably fast," and the database appears to be the bottleneck. But apart from increasing performance, he has several other important items on his to-do list. For example, one improvement is support for OpenPGP version 3 signatures. Version 4 signatures are the standard today, but version 3 signatures can still be found in the wild.

A far bigger task (which Marshall regards as the missing "killer feature") is implementing the SKS reconciliation algorithm. This will allow Hockeypuck to interoperate with the global pool of public keys. He has started work on an implementation of the algorithm, which he has named conflux; he hopes it will be general-purpose enough to serve as a synchronization library outside of the keyserver itself. Conflux is "getting there," he said; the mathematical portions are passing unit tests, but he still has work to do on the network protocol itself.

Further down the line, he speculated that Hockeypuck could serve as an SSH keyserver as well, and perhaps work with other authentication schemes like Password Authenticated Key Exchange by Juggling (J-PAKE).

Trust stuff

Despite the fact that Hockeypuck "competes" with SKS, Marshall said he has found the SKS community to be very friendly, and many were excited to hear about Hockeypuck and its implementation of the set reconciliation algorithm. An independent implementation of the feature is good news for any project, but especially for infrastructure projects like SKS, because the "web of trust" that it implements is so crucial.

Marshall concluded his talk by asking attendees to examine the web of trust and how it functions. We trust different identity credentials for very different reasons, he said: we trust PGP keys because of either the signatures of other PGP users or our participation in key-signing events; we trust SSH keys because they are the same key we encountered the first time we connected; we trust SSL/TLS certificates because they contain a signature from a certificate authority that our browser trusts. Our trust will have a stronger foundation if it includes multiple factors, he said; perhaps, for example, PGP keys need to incorporate notions of identity beyond email addresses alone.

Keyservers could also play a role in advancing the conversation about identity management, he suggested. As he noted at the beginning of the session, the SKS global key pool has functioned as a decentralized social network for years—perhaps there are ways to leverage it, such as linking PGP keys to OpenID or OAuth accounts, or to make SSH user authentication as widely accepted as SSH host authentication already is.

Of course, PGP is still in use by just a fraction of global email users; its critics have long argued that public key encryption and the PKI are too complicated for mass appeal. That is a difficult claim to prove, particularly since it is hard to disentangle the ideas of PKI from its client software implementations. But even for those who understand and use PGP on a regular basis, the accidental monoculture of SKS keyservers poses its own potential risks. Hockeypuck might never overtake SKS in popularity, but by offering an additional choice and by shining new light on HKP and other factors, it may strengthen critical pieces of PKI anyway.

Comments (none posted)

Brief items

Security quotes of the week

Electronic devices often retain sensitive and confidential information far beyond the perceived point of erasure, notably in the form of browsing histories and records of deleted files. This quality makes it impractical, if not impossible, for individuals to make meaningful decisions regarding what digital content to expose to the scrutiny that accompanies international travel. A person's digital life ought not be hijacked simply by crossing a border. When packing traditional luggage, one is accustomed to deciding what papers to take and what to leave behind. When carrying a laptop, tablet or other device, however, removing files unnecessary to an impending trip is an impractical solution given the volume and often intermingled nature of the files. It is also a time-consuming task that may not even effectively erase the files.
-- US 9th Circuit Appeals Court rules that border searches of electronic devices are subject to the Constitution (as reported by Techdirt)

[Abdelrahman] Desoky suggests that instead of using a humdrum text document and modifying it in a codified way to embed a secret message, correspondents could use a joke to hide their true meaning. As such, he has developed an Automatic Joke Generation Based Steganography Methodology (Jokestega) that takes advantage of recent software that can automatically write pun-type jokes using large dictionary databases. Among the automatic joke generators available are: The MIT Project, Chuck Norris Joke Generator, Jokes2000, The Joke Generator dot Com and the Online Joke Generator System (pickuplinegen).
-- Science Daily

It is best that the surveillance system be challenged and dismantled before it becomes comprehensive; once every person is tracked all the time it will be far harder to do so, especially as audio surveillance also expands. Once everyone is both tracked and listened to, it will be virtually impossible to organize resistance.

The comprehensive surveillance state, combined with measures to deal with the loyalty of the enforcer class, is the end game: it is where current trends lead. It will be justified to the public as a measure to decrease crime and protect innocents (especially children), but it will lead to a more advanced Stasi state.

-- Ian Welsh

Researchers successfully demonstrated new security vulnerabilities in all three browsers tested - Firefox, Chrome and IE. At the conclusion of the event we received technical details about the exploit so we could issue a fix.

We received the technical details on Wednesday evening and within less than 24 hours diagnosed the issue, built a patch, validated the fix and the resulting builds, and deployed the patch to users. Our fast turn around time on this security issue is a reflection of the priority and focus we place on security. Security is more than a side item for us, it's part of our core principles.

-- Michael Coates of Mozilla on the outcome of the Pwn2Own competition

Comments (none posted)

New vulnerabilities

389-ds-base: denial of service

Package(s):389-ds-base CVE #(s):CVE-2013-0312
Created:March 12, 2013 Updated:March 13, 2013
Description: From the Red Hat advisory:

A flaw was found in the way LDAPv3 control data was handled by 389 Directory Server. If a malicious user were able to bind to the directory (even anonymously) and send an LDAP request containing crafted LDAPv3 control data, they could cause the server to crash, denying service to the directory.

Scientific Linux SL-389--20130312 389-ds-base 2013-03-12
Oracle ELSA-2013-0628 389-ds-base 2013-03-11
CentOS CESA-2013:0628 389-ds-base 2013-03-12
Red Hat RHSA-2013:0628-01 389-ds-base 2013-03-11

Comments (none posted)

crypto-utils: symlink attack

Package(s):crypto-utils CVE #(s):CVE-2012-3504
Created:March 11, 2013 Updated:March 13, 2013
Description: From the CVE entry:

The nssconfigFound function in crypto-utils 2.4.1-34 allows local users to overwrite arbitrary files via a symlink attack on the "list" file in the current working directory.

Fedora FEDORA-2013-3259 crypto-utils 2013-03-11
Fedora FEDORA-2013-3253 crypto-utils 2013-03-11

Comments (none posted)

gksu-polkit: root privilege escalation

Package(s):gksu-polkit CVE #(s):CVE-2012-5617
Created:March 7, 2013 Updated:August 5, 2013

From the Red Hat Bugzilla entry:

Miroslav Trmac reported that gksu-polkit ships with an extremely permissive PolicyKit policy configuration file. Because gksu-polkit allows a user to execute a program with administrative privileges, and because the default allow_active setting is "auth_self" rather than "auth_admin", any local user can use gksu-polkit to execute arbitrary programs (like a bash shell) with root privileges.

Fedora FEDORA-2013-13616 gksu-polkit 2013-08-04
Fedora FEDORA-2013-13620 gksu-polkit 2013-08-04
Fedora FEDORA-2013-3032 gksu-polkit 2013-03-06

Comments (none posted)

kernel: multiple vulnerabilities

Package(s):kernel CVE #(s):CVE-2013-1828 CVE-2013-1792 CVE-2013-1825
Created:March 11, 2013 Updated:July 12, 2013
Description: From the Red Hat bugzilla [1], [2], [3]:

A local user could use the missing size check in sctp_getsockopt_assoc_stats() function to escalate their privileges. On x86 this might be mitigated by destination object size check as the destination size is known at compile time.

A race condition leading to a NULL pointer dereference is discovered in the Linux kernel. It occurs during parallel invocation of install_user_keyrings & lookup_user_key routines.

Linux kernels built with crypto user APIs are vulnerable to the information disclosure flaw. It occurs when user calls the `crypto_*_report' APIs via netlink based crypto API interface.

A privileged user/program (CAP_NET_ADMIN) could use this flaw to read kernel memory area.

openSUSE openSUSE-SU-2014:0204-1 kernel 2014-02-06
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:1187-1 kernel 2013-07-12
Mandriva MDVSA-2013:176 kernel 2013-06-24
Oracle ELSA-2013-2525 kernel 2013-06-13
Oracle ELSA-2013-2525 kernel 2013-06-13
Red Hat RHSA-2013:0829-01 kernel-rt 2013-05-20
Mageia MGASA-2013-0151 kernel-vserver 2013-05-17
Mageia MGASA-2013-0150 kernel-rt 2013-05-17
Mageia MGASA-2013-0149 kernel-tmb 2013-05-17
Mageia MGASA-2013-0148 kernel-linus 2013-05-17
Mageia MGASA-2013-0147 kernel 2013-05-17
Debian DSA-2668-1 linux-2.6 2013-05-14
SUSE SUSE-SU-2013:0786-1 Linux kernel 2013-05-14
Oracle ELSA-2013-2523 kernel 2013-05-10
Oracle ELSA-2013-2523 kernel 2013-05-10
SUSE SUSE-SU-2013:0759-2 Linux kernel 2013-05-08
SUSE SUSE-SU-2013:0759-1 Linux kernel 2013-05-07
Oracle ELSA-2013-2520 kernel-2.6.32 2013-04-25
Oracle ELSA-2013-2520 kernel-2.6.32 2013-04-25
Oracle ELSA-2013-2519 kernel-2.6.39 2013-04-25
Oracle ELSA-2013-2519 kernel-2.6.39 2013-04-25
Oracle ELSA-2013-0744 kernel 2013-04-24
Scientific Linux SL-kern-20130424 kernel 2013-04-24
CentOS CESA-2013:0744 kernel 2013-04-24
Red Hat RHSA-2013:0744-01 kernel 2013-04-23
Ubuntu USN-1798-1 linux-ec2 2013-04-08
Ubuntu USN-1795-1 linux-lts-quantal 2013-04-08
Ubuntu USN-1797-1 linux-ti-omap4 2013-04-08
Ubuntu USN-1794-1 linux-ti-omap4 2013-04-08
Ubuntu USN-1796-1 linux 2013-04-08
Ubuntu USN-1787-1 linux 2013-04-02
Fedora FEDORA-2013-3909 kernel 2013-03-22
Fedora FEDORA-2013-3630 kernel 2013-03-11
Ubuntu USN-1793-1 linux 2013-04-08
Ubuntu USN-1792-1 linux 2013-04-08
Ubuntu USN-1788-1 linux-lts-backport-oneiric 2013-04-03

Comments (none posted)

kernel: denial of service

Package(s):kernel CVE #(s):CVE-2013-1772
Created:March 7, 2013 Updated:July 12, 2013

From the Red Hat advisory:

A flaw was found in the way file permission checks for the "/dev/kmsg" file were performed in restricted root environments (for example, when using a capability-based security model). A local user able to write to this file could cause a denial of service. (CVE-2013-1772, Low)

openSUSE openSUSE-SU-2013:1187-1 kernel 2013-07-12
Oracle ELSA-2013-2546 enterprise kernel 2013-09-17
Oracle ELSA-2013-2546 enterprise kernel 2013-09-17
SUSE SUSE-SU-2013:0786-1 Linux kernel 2013-05-14
SUSE SUSE-SU-2013:0759-2 Linux kernel 2013-05-08
SUSE SUSE-SU-2013:0759-1 Linux kernel 2013-05-07
Red Hat RHSA-2013:0566-01 kernel-rt 2013-03-06

Comments (none posted)

krb5: denial of service

Package(s):krb5 CVE #(s):CVE-2013-1415
Created:March 11, 2013 Updated:March 25, 2013
Description: From the CVE entry:

The pkinit_check_kdc_pkid function in plugins/preauth/pkinit/pkinit_crypto_openssl.c in the PKINIT implementation in the Key Distribution Center (KDC) in MIT Kerberos 5 (aka krb5) before 1.10.4 and 1.11.x before 1.11.1 does not properly handle errors during extraction of fields from an X.509 certificate, which allows remote attackers to cause a denial of service (NULL pointer dereference and daemon crash) via a malformed KRB5_PADATA_PK_AS_REQ AS-REQ request.

Ubuntu USN-2310-1 krb5 2014-08-11
Mandriva MDVSA-2013:157 krb5 2013-04-30
Mandriva MDVSA-2013:042 krb5 2013-04-05
openSUSE openSUSE-SU-2013:0523-1 krb5 2013-03-22
Fedora FEDORA-2013-3147 krb5 2013-03-22
openSUSE openSUSE-SU-2013:0498-1 krb5 2013-03-20
Oracle ELSA-2013-0656 krb5 2013-03-18
CentOS CESA-2013:0656 krb5 2013-03-18
Scientific Linux SL-krb5-20130318 krb5 2013-03-18
Red Hat RHSA-2013:0656-01 krb5 2013-03-18
Fedora FEDORA-2013-3116 krb5 2013-03-16
Mageia MGASA-2013-0087 krb5 2013-03-09

Comments (none posted)

libproxy: format string flaw

Package(s):libproxy CVE #(s):CVE-2012-5580
Created:March 11, 2013 Updated:March 13, 2013
Description: From the Red Hat bugzilla:

A format string flaw was reported in libproxy's proxy commandline tool (bin/proxy). This was corrected upstream and is included in the 0.4.0 release.

Fedora FEDORA-2012-20092 libproxy 2013-03-10

Comments (none posted)

MRG Grid: denial of service

Package(s):MRG Grid CVE #(s):CVE-2012-4462
Created:March 7, 2013 Updated:March 13, 2013

From the Red Hat advisory:

It was found that attempting to remove a job via "/usr/share/condor/aviary/" with CPROC in square brackets caused condor_schedd to crash. If aviary_query_server was configured to listen to public interfaces, this could allow a remote attacker to cause a denial of service condition in condor_schedd. While condor_schedd was restarted by the condor_master process after each exit, condor_master would throttle back restarts after each crash. This would slowly increment to the defined MASTER_BACKOFF_CEILING value (3600 seconds/1 hour, by default). (CVE-2012-4462)

Red Hat RHSA-2013:0565-01 MRG Grid 2013-03-06
Red Hat RHSA-2013:0564-01 MRG Grid 2013-03-06

Comments (none posted)

MRG Messaging: multiple vulnerabilities

Package(s):MRG Messaging CVE #(s):CVE-2012-4446 CVE-2012-4458 CVE-2012-4459
Created:March 7, 2013 Updated:March 13, 2013

From the Red Hat advisory:

It was found that the Apache Qpid daemon (qpidd) treated AMQP connections with the federation_tag attribute set as a broker-to-broker connection, rather than a client-to-server connection. This resulted in the source user ID of messages not being checked. A client that can establish an AMQP connection with the broker could use this flaw to bypass intended authentication. For Condor users, if condor-aviary is installed, this flaw could be used to submit jobs that would run as any user (except root, as Condor does not run jobs as root). (CVE-2012-4446)

It was found that the AMQP type decoder in qpidd allowed arbitrary data types in certain messages. A remote attacker could use this flaw to send a message containing an excessively large amount of data, causing qpidd to allocate a large amount of memory. qpidd would then be killed by the Out of Memory killer (denial of service). (CVE-2012-4458)

An integer overflow flaw, leading to an out-of-bounds read, was found in the Qpid qpid::framing::Buffer::checkAvailable() function. An unauthenticated, remote attacker could send a specially-crafted message to Qpid, causing it to crash. (CVE-2012-4459)

Red Hat RHSA-2013:0562-01 MRG Messaging 2013-03-06
Red Hat RHSA-2013:0561-01 MRG Messaging 2013-03-06

Comments (none posted)

openshift: multiple vulnerabilities

Package(s):openshift CVE #(s):CVE-2013-0327 CVE-2013-0328 CVE-2013-0329 CVE-2013-0330 CVE-2013-0331
Created:March 13, 2013 Updated:March 13, 2013
Description: From the Red Hat advisory:

It was found that Jenkins did not protect against Cross-Site Request Forgery (CSRF) attacks. If a remote attacker could trick a user, who was logged into Jenkins, into visiting a specially-crafted URL, the attacker could perform operations on Jenkins. (CVE-2013-0327, CVE-2013-0329)

A cross-site scripting (XSS) flaw was found in Jenkins. A remote attacker could use this flaw to conduct an XSS attack against users of Jenkins. (CVE-2013-0328)

A flaw could allow a Jenkins user to build jobs they do not have access to. (CVE-2013-0330)

A flaw could allow a Jenkins user to cause a denial of service if they are able to supply a specially-crafted payload. (CVE-2013-0331)

Red Hat RHSA-2013:0638-01 openshift 2013-03-12

Comments (none posted)

openssh: information disclosure

Package(s):openssh CVE #(s):CVE-2012-0814
Created:March 13, 2013 Updated:March 13, 2013
Description: From the CVE entry:

The auth_parse_options function in auth-options.c in sshd in OpenSSH before 5.7 provides debug messages containing authorized_keys command options, which allows remote authenticated users to obtain potentially sensitive information by reading these messages, as demonstrated by the shared user account required by Gitolite. NOTE: this can cross privilege boundaries because a user account may intentionally have no shell or filesystem access, and therefore may have no supported way to read an authorized_keys file in its own home directory.

Gentoo 201405-06 openssh 2014-05-11
Mandriva MDVSA-2013:022 openssh 2013-03-13

Comments (none posted)

perl: denial of service

Package(s):perl CVE #(s):CVE-2013-1667
Created:March 11, 2013 Updated:April 3, 2013
Description: From the Debian advisory:

Yves Orton discovered a flaw in the rehashing code of Perl. This flaw could be exploited to carry out a denial of service attack against code that uses arbitrary user input as hash keys. Specifically an attacker could create a set of keys of a hash causing a denial of service via memory exhaustion.

Gentoo 201401-11 perl 2014-01-19
Mandriva MDVSA-2013:113 perl 2013-04-10
Fedora FEDORA-2013-3673 perl 2013-04-03
Scientific Linux SL-perl-20130327 perl 2013-03-27
Oracle ELSA-2013-0685 perl 2013-03-27
Oracle ELSA-2013-0685 perl 2013-03-26
CentOS CESA-2013:0685 perl 2013-03-26
CentOS CESA-2013:0685 perl 2013-03-26
Red Hat RHSA-2013:0685-01 perl 2013-03-26
Fedora FEDORA-2013-3436 perl 2013-03-22
Debian DSA-2641-2 libapache2-mod-perl2 2013-03-20
openSUSE openSUSE-SU-2013:0502-1 perl 2013-03-20
openSUSE openSUSE-SU-2013:0497-1 perl 2013-03-20
Ubuntu USN-1770-1 perl 2013-03-19
Mageia MGASA-2013-0094 perl 2013-03-16
Slackware SSA:2013-072-01 perl 2013-03-13
SUSE SUSE-SU-2013:0442-1 Perl 2013-03-13
SUSE SUSE-SU-2013:0441-1 Perl 2013-03-13
Debian DSA-2641-1 perl 2013-03-09

Comments (none posted)

puppet: multiple vulnerabilities

Package(s):puppet CVE #(s):CVE-2013-1640 CVE-2013-1652 CVE-2013-1653 CVE-2013-1654 CVE-2013-1655 CVE-2013-2274 CVE-2013-2275
Created:March 13, 2013 Updated:August 2, 2013
Description: From the Debian advisory:

CVE-2013-1640: An authenticated malicious client may request its catalog from the puppet master, and cause the puppet master to execute arbitrary code. The puppet master must be made to invoke the `template` or `inline_template` functions during catalog compilation.

CVE-2013-1652: An authenticated malicious client may retrieve catalogs from the puppet master that it is not authorized to access. Given a valid certificate and private key, it is possible to construct an HTTP GET request that will return a catalog for an arbitrary client.

CVE-2013-1653: An authenticated malicious client may execute arbitrary code on Puppet agents that accept kick connections. Puppet agents are not vulnerable in their default configuration. However, if the Puppet agent is configured to listen for incoming connections, e.g. listen = true, and the agent's auth.conf allows access to the `run` REST endpoint, then an authenticated client can construct an HTTP PUT request to execute arbitrary code on the agent. This issue is made worse by the fact that puppet agents typically run as root.

CVE-2013-1654: A bug in Puppet allows SSL connections to be downgraded to SSLv2, which is known to contain design flaw weaknesses. This affects SSL connections between puppet agents and master, as well as connections that puppet agents make to third party servers that accept SSLv2 connections. Note that SSLv2 is disabled since OpenSSL 1.0.

CVE-2013-1655: An unauthenticated malicious client may send requests to the puppet master, and have the master load code in an unsafe manner. It only affects users whose puppet masters are running ruby 1.9.3 and above.

CVE-2013-2274: An authenticated malicious client may execute arbitrary code on the puppet master in its default configuration. Given a valid certificate and private key, a client can construct an HTTP PUT request that is authorized to save the client's own report, but the request will actually cause the puppet master to execute arbitrary code.

CVE-2013-2275: The default auth.conf allows an authenticated node to submit a report for any other node, which is a problem for compliance. It has been made more restrictive by default so that a node is only allowed to save its own report.

Gentoo 201308-04 puppet 2013-08-23
Fedora FEDORA-2013-3935 puppet 2013-08-02
openSUSE openSUSE-SU-2013:0641-1 puppet 2013-04-08
Fedora FEDORA-2013-4187 puppet 2013-03-30
Ubuntu USN-1759-1 puppet 2013-03-12
Debian DSA-2643-1 puppet 2013-03-12
Red Hat RHSA-2013:0710-01 puppet 2013-04-04
SUSE SUSE-SU-2013:0618-1 puppet 2013-04-03

Comments (none posted)

ruby: denial of service

Package(s):ruby CVE #(s):CVE-2013-1821
Created:March 8, 2013 Updated:April 4, 2013

From the Red Hat advisory:

It was discovered that Ruby's REXML library did not properly restrict XML entity expansion. An attacker could use this flaw to cause a denial of service by tricking a Ruby application using REXML to read text nodes from specially-crafted XML content, which will result in REXML consuming large amounts of system memory.

Gentoo 201412-27 ruby 2014-12-13
Debian DSA-2809-1 ruby1.8 2013-12-04
Debian DSA-2738-1 ruby1.9.1 2013-08-18
Mandriva MDVSA-2013:124 ruby 2013-04-10
openSUSE openSUSE-SU-2013:0614-1 ruby 2013-04-03
openSUSE openSUSE-SU-2013:0603-1 ruby 2013-04-03
Ubuntu USN-1780-1 ruby1.8, ruby1.9.1 2013-03-25
Slackware SSA:2013-075-01 ruby 2013-03-16
Mageia MGASA-2013-0092 ruby 2013-03-16
CentOS CESA-2013:0612 ruby 2013-03-09
Oracle ELSA-2013-0611 ruby 2013-03-08
CentOS CESA-2013:0611 ruby 2013-03-08
Scientific Linux SL-ruby-20130307 ruby 2013-03-07
CentOS CESA-2013:0611 ruby 2013-03-08
Red Hat RHSA-2013:0612-01 ruby 2013-03-07
Red Hat RHSA-2013:0611-01 ruby 2013-03-07

Comments (none posted)

vdsm: insecure node image

Package(s):vdsm CVE #(s):CVE-2012-5518
Created:March 12, 2013 Updated:March 13, 2013
Description: From the Red Hat bugzilla:

When new node image is being created, vdsm.rpm is added to the node image and self-signed key (and certificate) is created. This key/cert allows vdsm to start and serve requests from anyone who has a matching key/cert which could be anybody holding the node image.

Fedora FEDORA-2013-0210 vdsm 2013-03-12

Comments (none posted)

xulrunner: code execution

Package(s):xulrunner CVE #(s):CVE-2013-0787
Created:March 8, 2013 Updated:June 3, 2013

From the Mozilla advisory:

VUPEN Security, via TippingPoint's Zero Day Initiative, reported a use-after-free within the HTML editor when content script is run by the document.execCommand() function while internal editor operations are occurring. This could allow for arbitrary code execution.

openSUSE openSUSE-SU-2014:1100-1 Firefox 2014-09-09
Gentoo 201309-23 firefox 2013-09-27
Debian DSA-2699-1 iceweasel 2013-06-02
Mageia MGASA-2013-0120 iceape 2013-04-18
Mageia MGASA-2013-0093 firefox, thunderbird 2013-03-16
SUSE SUSE-SU-2013:0471-1 Mozilla Firefox 2013-03-15
SUSE SUSE-SU-2013:0470-1 Mozilla Firefox 2013-03-15
openSUSE openSUSE-SU-2013:0466-1 xulrunner 2013-03-15
openSUSE openSUSE-SU-2013:0468-1 seamonkey 2013-03-15
openSUSE openSUSE-SU-2013:0465-1 MozillaThunderbird 2013-03-15
openSUSE openSUSE-SU-2013:0467-1 MozillaFirefox 2013-03-15
Fedora FEDORA-2013-3696 xulrunner 2013-03-15
Fedora FEDORA-2013-3696 thunderbird 2013-03-15
Fedora FEDORA-2013-3696 firefox 2013-03-15
Slackware SSA:2013-072-02 seamonkey 2013-03-13
Mandriva MDVSA-2013:024 firefox 2013-03-13
Fedora FEDORA-2013-3718 thunderbird 2013-03-14
Fedora FEDORA-2013-3718 xulrunner 2013-03-14
Fedora FEDORA-2013-3718 firefox 2013-03-14
Ubuntu USN-1758-2 thunderbird 2013-03-12
Scientific Linux SL-thun-20130312 thunderbird 2013-03-12
openSUSE openSUSE-SU-2013:0431-1 Mozilla 2013-03-12
Oracle ELSA-2013-0627 thunderbird 2013-03-11
CentOS CESA-2013:0627 thunderbird 2013-03-12
CentOS CESA-2013:0627 thunderbird 2013-03-12
Red Hat RHSA-2013:0627-01 thunderbird 2013-03-11
CentOS CESA-2013:0614 xulrunner 2013-03-09
Oracle ELSA-2013-0614 xulrunner 2013-03-08
Oracle ELSA-2013-0614 xulrunner 2013-03-08
Ubuntu USN-1758-1 firefox 2013-03-08
CentOS CESA-2013:0614 xulrunner 2013-03-08
Scientific Linux SL-xulr-20130308 xulrunner 2013-03-08
Red Hat RHSA-2013:0614-01 xulrunner 2013-03-08

Comments (none posted)

zfs-fuse: executable stack

Package(s):zfs-fuse CVE #(s):
Created:March 13, 2013 Updated:March 13, 2013
Description: From the Red Hat bugzilla:

Several programs in this package have an executable stack. This makes it susceptible to stack based exploits should another weakness be found in the affected programs:

  • /usr/bin/zdb
  • /usr/bin/zfs
  • /usr/bin/zfs-fuse
  • /usr/bin/zpool
  • /usr/bin/ztest
Fedora FEDORA-2013-3382 zfs-fuse 2013-03-12
Fedora FEDORA-2013-3425 zfs-fuse 2013-03-12

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.9-rc2, released on March 10. "Hey, things have been reasonably calm. Sure, Dave Jones has been messing with trinity and we've had some excitement from that, but Al is back, and is hopefully now busy virtually riding to the rescue on a white horse. But otherwise it's been good for this phase in the rc window."

Stable updates: no stable updates have been released in the last week. As of this writing, the 3.8.3, 3.4.36, and 3.0.69 updates are in the review process; they can be expected on or after March 14.

Comments (none posted)

Quotes of the week

More importantly, does a vintage kernel sound better than a more recent one? I've been doing some testing and the results are pretty clear, not that they should surprise anyone who knows anything about recording:

1) Older kernels sound much warmer than newer ones.

2) Kernels compiled by hand on the machine they run on sound less sterile than upstream distro provided ones which also tend to have flabby low end response and bad stereo imaging.

3) As if it needed saying, gcc4 is a disaster for sound quality. I mean, seriously if you want decent audio and you use gcc4 you may as well be recording with a tin can microphone.

Ben Bell (Thanks to Johan Herland)

But this is definitely another of those "This is our most desperate hour. Help me, Al-biwan Ke-Viro, you're my only hope" issues.

Al? Please don't make me wear that golden bikini.

Linus Torvalds

Every Linux kernel maintainer with meaningful contributions to the security of the Linux kernel will be fully sponsored by the Pax Team. The LKSC organization team has hired strategically placed bouncers with bats to improve Linux kernel security and future LKML discussions.
Pax Team

Comments (2 posted)

Overlayfs for 3.10

By Jonathan Corbet
March 13, 2013
The "overlayfs" filesystem is one implementation of the union filesystem concept, whereby two or more filesystems can be combined into a single, virtual tree. LWN first reported on overlayfs in 2010; since then it has seen continued development and has been shipped by a number of distributors. It has not, however, managed to find its way into the mainline kernel.

In a recent posting of the overlayfs patch set, developer Miklos Szeredi asked if it could be considered for inclusion in the 3.10 development cycle. He has made such requests before, but, this time, Linus answered:

Yes, I think we should just do it. It's in use, it's pretty small, and the other alternatives are worse. Let's just plan on getting this thing done with.

At Linus's request, Al Viro has agreed to review the patches again, though he noted that he has not been entirely happy with them in the past. Unless something serious and unfixable emerges from that review, it looks like overlayfs is finally on track for merging into the mainline kernel.

Comments (3 posted)

Kernel development news

The SO_REUSEPORT socket option

By Michael Kerrisk
March 13, 2013

One of the features merged in the 3.9 development cycle was TCP and UDP support for the SO_REUSEPORT socket option; that support was implemented in a series of patches by Tom Herbert. The new socket option allows multiple sockets on the same host to bind to the same port, and is intended to improve the performance of multithreaded network server applications running on top of multicore systems.

The basic concept of SO_REUSEPORT is simple enough. Multiple servers (processes or threads) can bind to the same port if they each set the option as follows:

    int sfd = socket(domain, socktype, 0);

    int optval = 1;
    setsockopt(sfd, SOL_SOCKET, SO_REUSEPORT, &optval, sizeof(optval));

    bind(sfd, (struct sockaddr *) &addr, addrlen);

So long as the first server sets this option before binding its socket, then any number of other servers can also bind to the same port if they also set the option beforehand. The requirement that the first server must specify this option prevents port hijacking—the possibility that a rogue application binds to a port already used by an existing server in order to capture (some of) its incoming connections or datagrams. To prevent unwanted processes from hijacking a port that has already been bound by a server using SO_REUSEPORT, all of the servers that later bind to that port must have an effective user ID that matches the effective user ID used to perform the first bind on the socket.

SO_REUSEPORT can be used with both TCP and UDP sockets. With TCP sockets, it allows multiple listening sockets—normally each in a different thread—to be bound to the same port. Each thread can then accept incoming connections on the port by calling accept(). This presents an alternative to the traditional approaches used by multithreaded servers that accept incoming connections on a single socket.

The first of the traditional approaches is to have a single listener thread that accepts all incoming connections and then passes these off to other threads for processing. The problem with this approach is that the listening thread can become a bottleneck in extreme cases. In early discussions on SO_REUSEPORT, Tom noted that he was dealing with applications that accepted 40,000 connections per second. Given that sort of number, it's unsurprising to learn that Tom works at Google.

The second of the traditional approaches used by multithreaded servers operating on a single port is to have all of the threads (or processes) perform an accept() call on a single listening socket in a simple event loop of the form:

    while (1) {
        new_fd = accept(...);
        /* ... service the connection on new_fd ... */
    }

The problem with this technique, as Tom pointed out, is that when multiple threads are waiting in the accept() call, wake-ups are not fair, so that, under high load, incoming connections may be distributed across threads in a very unbalanced fashion. At Google, they have seen a factor-of-three difference between the thread accepting the most connections and the thread accepting the fewest connections; that sort of imbalance can lead to underutilization of CPU cores. By contrast, the SO_REUSEPORT implementation distributes connections evenly across all of the threads (or processes) that are blocked in accept() on the same port.
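
For the curious, here is a minimal sketch of that per-thread pattern, assuming a 3.9 or later kernel and headers that define SO_REUSEPORT; the port number and thread count are arbitrary, and error handling is omitted:

    #include <netinet/in.h>
    #include <pthread.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        struct sockaddr_in addr;
        int optval = 1;
        int sfd = socket(AF_INET, SOCK_STREAM, 0);

        /* Every worker sets SO_REUSEPORT before binding */
        setsockopt(sfd, SOL_SOCKET, SO_REUSEPORT, &optval, sizeof(optval));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);            /* arbitrary port */
        bind(sfd, (struct sockaddr *) &addr, sizeof(addr));
        listen(sfd, SOMAXCONN);

        for (;;) {
            int cfd = accept(sfd, NULL, NULL);  /* each thread accepts on its own socket */
            /* ... service the connection ... */
            close(cfd);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[4];                       /* four workers, arbitrarily */
        int i;

        for (i = 0; i < 4; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < 4; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }

The kernel then hands each incoming connection to exactly one of the blocked accept() calls, with the even distribution described above.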

As with TCP, SO_REUSEPORT allows multiple UDP sockets to be bound to the same port. This facility could, for example, be useful in a DNS server operating over UDP. With SO_REUSEPORT, each thread could use recv() on its own socket to accept datagrams arriving on the port. The traditional approach is that all threads would compete to perform recv() calls on a single shared socket. As with the second of the traditional TCP scenarios described above, this can lead to unbalanced loads across the threads. By contrast, SO_REUSEPORT distributes datagrams evenly across all of the receiving threads.
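
The UDP variant of each receiving thread is even simpler; here is a fragment in the style of the earlier example, again reusing an addr/addrlen set up to name the shared port, with error handling omitted:

    int sfd = socket(AF_INET, SOCK_DGRAM, 0);
    int optval = 1;
    char buf[512];

    setsockopt(sfd, SOL_SOCKET, SO_REUSEPORT, &optval, sizeof(optval));
    bind(sfd, (struct sockaddr *) &addr, addrlen);  /* same address and port in every thread */

    for (;;) {
        ssize_t n = recv(sfd, buf, sizeof(buf), 0);
        /* ... process the n-byte datagram ... */
    }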

Tom noted that the traditional SO_REUSEADDR socket option already allows multiple UDP sockets to be bound to, and accept datagrams on, the same UDP port. However, by contrast with SO_REUSEPORT, SO_REUSEADDR does not prevent port hijacking and does not distribute datagrams evenly across the receiving threads.

There are two other noteworthy points about Tom's patches. The first of these is a useful aspect of the implementation. Incoming connections and datagrams are distributed to the server sockets using a hash based on the 4-tuple of the connection—that is, the peer IP address and port plus the local IP address and port. This means, for example, that if a client uses the same socket to send a series of datagrams to the server port, then those datagrams will all be directed to the same receiving server (as long as it continues to exist). This eases the task of conducting stateful conversations between the client and server.

The other noteworthy point is that there is a defect in the current implementation of TCP SO_REUSEPORT. If the number of listening sockets bound to a port changes because new servers are started or existing servers terminate, it is possible that incoming connections can be dropped during the three-way handshake. The problem is that connection requests are tied to a specific listening socket when the initial SYN packet is received during the handshake. If the number of servers bound to the port changes, then the SO_REUSEPORT logic might not route the final ACK of the handshake to the correct listening socket. In this case, the client connection will be reset, and the server is left with an orphaned request structure. A solution to the problem is still being worked on, and may consist of implementing a connection request table that can be shared among multiple listening sockets.

The SO_REUSEPORT option is non-standard, but available in a similar form on a number of other UNIX systems (notably, the BSDs, where the idea originated). It seems to offer a useful alternative for squeezing the maximum performance out of network applications running on multicore systems, and thus is likely to be a welcome addition for some application developers.

Full Story (comments: 18)

The trouble with CAP_SYS_RAWIO

By Michael Kerrisk
March 13, 2013

A February linux-kernel mailing list discussion of a patch that extends the use of the CAP_COMPROMISE_KERNEL capability soon evolved into a discussion of the specific uses (or abuses) of the CAP_SYS_RAWIO capability within the kernel. However, in reality, the discussion once again exposes some general difficulties in the Linux capabilities implementation—difficulties that seem to have no easy solution.

The discussion began when Kees Cook submitted a patch to guard writes to model-specific registers (MSRs) with a check to see if the caller has the CAP_COMPROMISE_KERNEL capability. MSRs are x86-specific control registers that are used for tasks such as debugging, tracing, and performance monitoring; those registers are accessible via the /dev/cpu/CPUNUM/msr interface. CAP_COMPROMISE_KERNEL (formerly known as CAP_SECURE_FIRMWARE) is a new capability designed for use in conjunction with UEFI secure boot, which is a mechanism to ensure that the kernel is booted from an on-disk representation that has not been modified.

If a process has the CAP_COMPROMISE_KERNEL capability, it can perform operations that are not allowed in a secure-boot environment; without that capability, such operations are denied. The idea is that if the kernel detects that it has been booted via the UEFI secure-boot mechanism, then this capability is disabled for all processes. In turn, the lack of that capability is intended to prevent operations that can modify the running kernel. CAP_COMPROMISE_KERNEL is not yet part of the mainline kernel, but already exists as a patch in the Fedora distribution and Matthew Garrett is working towards its inclusion in the mainline kernel.

H. Peter Anvin wondered whether CAP_SYS_RAWIO did not already suffice for Kees's purpose. In response, Kees argued that CAP_SYS_RAWIO is for governing reads: "writing needs a much stronger check". Kees went on to elaborate:

there's a reasonable distinction between systems that expect to strictly enforce user-space/kernel-space separation (CAP_COMPROMISE_KERNEL) and things that are fiddling with hardware (CAP_SYS_RAWIO).

This in turn led to a short discussion about whether a capability was the right way to achieve the goal of restricting certain operations in a secure-boot environment. Kees was inclined to think it probably was the right approach, but deferred to Matthew Garrett, implementer of much of the secure-boot work on Fedora. Matthew thought that a capability approach seemed the best fit, but noted:

I'm not wed to [a capability approach] in the slightest, and in fact it causes problems for some userspace (anything that drops all capabilities suddenly finds itself unable to do something that it expects to be able to do), so if anyone has any suggestions for a better approach…

In the current mainline kernel, the CAP_SYS_RAWIO capability is checked in the msr_open() function: if the caller has that capability, then it can open the MSR device and perform reads and writes on it. The purpose of Kees's patch is to add a CAP_COMPROMISE_KERNEL check on each write to the device, so that in a secure-boot environment the MSR devices are readable, but not writeable. The problem that Matthew alludes to is that this approach has the potential to break user space because, formerly, there was no capability check on MSR writes. An application that worked prior to the introduction of CAP_COMPROMISE_KERNEL can now fail in the following scenario (sketched in code after the list):

  • The application has a full set of privileges.
  • The application opens an MSR device (requires CAP_SYS_RAWIO).
  • The application drops all privileges, including CAP_SYS_RAWIO and CAP_COMPROMISE_KERNEL.
  • The application performs a write on the previously opened MSR device (requires CAP_COMPROMISE_KERNEL).
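
In code, the scenario might look like the following sketch; it is hypothetical, uses libcap (link with -lcap), and the MSR number written to is arbitrary:

    #include <fcntl.h>
    #include <sys/capability.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned long long val = 0;
        int fd = open("/dev/cpu/0/msr", O_RDWR);  /* requires CAP_SYS_RAWIO */
        cap_t empty = cap_init();                 /* an all-clear capability set */

        cap_set_proc(empty);                      /* drop all capabilities */
        cap_free(empty);

        /* This write formerly succeeded; with the new check it
           requires CAP_COMPROMISE_KERNEL, which was just dropped: */
        pwrite(fd, &val, sizeof(val), 0x10);      /* the file offset selects the MSR */
        close(fd);
        return 0;
    }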

The last of the above steps would formerly have succeeded, but, with the addition of the CAP_COMPROMISE_KERNEL check, it now fails. In a subsequent reply, Matthew noted that QEMU was one program that was broken by a scenario similar to the above. Josh Boyer noted that Fedora has had a few reports of applications breaking on non-secure-boot systems because of scenarios like this. He highlighted why such breakages are so surprising to users and why the problem is seemingly unavoidable:

… the general problem is people think dropping all caps blindly is making their apps safer. Then they find they can't do things they could do before the new cap was added…

Really though, the main issue is that you cannot introduce new caps to enforce finer grained access without breaking something.

Shortly afterward, Peter stepped back to ask a question about the bigger picture: why should CAP_SYS_RAWIO be allowed on a secure-boot system? In other words, rather than adding a new CAP_COMPROMISE_KERNEL capability that is disabled in secure-boot environments, why not just disable CAP_SYS_RAWIO in such environments, since it is the possession of that capability that permits compromising a booted kernel?

That led Matthew to point out a major problem with CAP_SYS_RAWIO:

CAP_SYS_RAWIO seems to have ended up being a catchall of "Maybe someone who isn't entirely root should be able to do this", and not everything it covers is equivalent to being able to compromise the running kernel. I wouldn't argue with the idea that maybe we should just reappraise most of the current uses of CAP_SYS_RAWIO, but removing capability checks from places that currently have them seems like an invitation for userspace breakage.

To see what Matthew is talking about, we need to look at a little history. Back in January 1999, when capabilities first appeared with the release of Linux 2.2, CAP_SYS_RAWIO was a single-purpose capability. It was used in just a single C file in the kernel source, where it governed access to two system calls: iopl() and ioperm(). Those system calls permit access to I/O ports, allowing uncontrolled access to devices (and providing various ways to modify the state of the running kernel); hence the requirement for a capability in order to employ the calls.
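
For reference, the originally gated operations look like this fragment (x86 only; the port numbers, which select the first parallel port's registers, are just an example):

    #include <sys/io.h>

    ioperm(0x378, 3, 1);     /* gain access to three I/O ports; requires CAP_SYS_RAWIO */
    outb(0xff, 0x378);       /* then poke the hardware directly */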

The problem was that CAP_SYS_RAWIO rapidly grew to cover a range of other uses. By the time of Linux 2.4.0, there were 37 uses across 24 of the kernel's C source files, and looking at the 3.9-rc2 kernel, there are 69 uses in 43 source files. By either measure, CAP_SYS_RAWIO is now the third most commonly used capability inside the kernel source (after CAP_SYS_ADMIN and CAP_NET_ADMIN).

CAP_SYS_RAWIO seems to have encountered a fate similar to CAP_SYS_ADMIN, albeit on a smaller scale. It has expanded well beyond its original narrow use. In particular, Matthew noted:

Not having CAP_SYS_RAWIO blocks various SCSI commands, for instance. These might result in the ability to write individual blocks or destroy the device firmware, but do any of them permit modifying the running kernel?

Peter had some choice words to describe the abuse of CAP_SYS_RAWIO to protect operations on SCSI devices. The problem, of course, is that in order to perform relatively harmless SCSI operations, an application requires the same capability that can trivially be used to damage the integrity of a secure-boot system. And that, as Matthew went on to point out, is the point of CAP_COMPROMISE_KERNEL: to disable the truly dangerous operations (such as MSR writes) that CAP_SYS_RAWIO permits, while still allowing the less dangerous operations (such as the SCSI device operations).

All of this leads to a conundrum that was nicely summarized by Matthew. On the one hand, CAP_COMPROMISE_KERNEL is needed to address the problem that CAP_SYS_RAWIO has become too diffuse in its meaning. On the other hand, the addition of CAP_COMPROMISE_KERNEL checks in places where there were previously no capability checks in the kernel means that applications that drop all capabilities will break. There is no easy way out of this difficulty. As Peter noted: "We thus have a bunch of unpalatable choices, **all of which are wrong**".

Some possible resolutions of the conundrum were mentioned by Josh Boyer earlier in the thread: CAP_COMPROMISE_KERNEL could be treated as a "hidden" capability whose state could be modified only internally by the kernel. Alternatively, CAP_COMPROMISE_KERNEL might be specially treated, so that it can be dropped only by a capset() call that operates on that capability alone; in other words, if a capset() call specified dropping multiple capabilities, including CAP_COMPROMISE_KERNEL, the state of the other capabilities would be changed, but not the state of CAP_COMPROMISE_KERNEL. The problem with these approaches is that they special-case the treatment of CAP_COMPROMISE_KERNEL in a surprising way (and surprises in security-related APIs have a way of coming back to bite in the future). Furthermore, it may well be the case that analogous problems are encountered in the future with other capabilities; handling each of these as a special case would further add to the complexity of the capabilities API.

The discussion in the thread touched on a number of other difficulties with capabilities. Part of the solution to the problem of the overly broad effect of CAP_SYS_RAWIO (and CAP_SYS_ADMIN) might be to split the capability into smaller pieces—replace one capability with several new capabilities that each govern a subset of the operations governed by the old capability. Each privileged operation in the kernel would then check to see whether the caller had either the old or the new privilege. This would allow old binaries to continue to work while allowing new binaries to employ the new, tighter capability. The risk with this approach is, as Casey Schaufler noted, the possibility of an explosion in the number of capabilities, which would further complicate administering capabilities for applications. Furthermore, splitting capabilities in this manner doesn't solve the particular problem that the CAP_COMPROMISE_KERNEL patches attempt to solve for CAP_SYS_RAWIO.
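
In kernel code, the backward-compatible check described above would presumably take a form like this sketch, where CAP_SCSI_IOCTL is an invented name for a hypothetical narrower capability:

    /* Allow the operation if the caller holds either the old,
       broad capability or the new, narrower one. */
    if (!capable(CAP_SYS_RAWIO) && !capable(CAP_SCSI_IOCTL))
            return -EPERM;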

Another general problem touched on by Casey is that capabilities still have not seen wide adoption as a replacement for set-user-ID and set-group-ID programs. But, as Peter noted, that may well be

in large part because a bunch of the capabilities are so close to equivalent to "superuser" that the distinction is meaningless... so why go through the hassle?

With 502 uses in the 3.9-rc2 kernel, CAP_SYS_ADMIN is the most egregious example of this problem. That problem itself would appear to spring from the Linux kernel development model: the decisions about which capabilities should govern new kernel features typically are made by individual developers in a largely decentralized and uncoordinated manner. Without a coordinated big picture, many developers have adopted the seemingly safe choice, CAP_SYS_ADMIN. A related problem is that a number of capabilities turn out to allow escalation to full root privileges in certain circumstances. To some degree, this is probably unavoidable, and it doesn't diminish the fact that a well-designed capabilities scheme can be used to reduce the attack surface of applications.

One approach that might help solve the problem of overly broad capabilities is hierarchical capabilities. The idea, mentioned by Peter, is to split some capabilities in a fashion similar to the way that the root privilege was split into capabilities. Thus, for instance, CAP_SYS_RAWIO could become a hierarchical capability with sub-capabilities called (say) CAP_DANGEROUS and CAP_MOSTLY_HARMLESS. A process that gained or lost CAP_SYS_RAWIO would implicitly gain or lose both CAP_DANGEROUS and CAP_MOSTLY_HARMLESS, in the same way that transitions to and from an effective user ID of 0 grant and drop all capabilities. In addition, sub-capabilities could be raised and dropped independently of their "siblings" at the same hierarchical level. However, sub-capabilities are not a concept that currently exists in the kernel, and it's not clear whether the existing capabilities API could be tweaked in such a way that they could be implemented sanely. Digging deeper into that topic remains an open challenge.

The CAP_SYS_RAWIO discussion touched on a long list of difficulties in the current Linux capabilities implementation: capabilities whose range is too broad, the difficulties of splitting capabilities while maintaining binary compatibility (and, conversely, the administrative difficulties associated with defining too large a set of capabilities), the as-yet poor adoption of binaries with file capabilities vis-a-vis traditional set-user-ID binaries, and the (possible) need for an API for hierarchical capabilities. It would seem that capabilities still have a way to go before they can deliver on the promise of providing a manageable mechanism for providing discrete, non-elevatable privileges to applications.

Comments (38 posted)

LC-Asia: An Android upstreaming update

By Jonathan Corbet
March 12, 2013
Many people have talked about the Android kernel code and its relation to the mainline. One of the people who has done the most to help bring Android and the mainline closer together is John Stultz; at the 2013 Linaro Connect Asia event, he talked about the status of the Android code. The picture that emerged shows that a lot of progress has been made, but there is still a lot of work yet to be done.

What's out there

John started by reviewing the existing Android kernel patches by category, starting with the core code: the binder interprocess communication mechanism, the ashmem shared memory mechanism, the Android logger, and monotonic event timestamps. The timestamp patch is needed to get timestamps from the monotonic clock for input events; otherwise it is hard to be sure of the timing between events, which complicates gesture recognition. The problem is that these timestamps cannot be added without breaking the kernel's ABI, so the patches could not simply be merged without further consideration.

There is a set of changes that John categorized as performance and power-consumption improvements. At the top of the list is the infamous "wakelock" mechanism, used by Android to know when the system as a whole can be suspended to save power. There is a special alarm device that can generate alarms that will wake the system from a suspended state. The Android low-memory killer gets rid of tasks when memory gets tight; it is designed to activate more quickly than the kernel's out-of-memory killer, which will not act until a memory shortage is seriously affecting system performance. Also in this category is the interactive CPU frequency governor, which immediately ramps the CPU up to its maximum speed in response to touch events; its purpose is to help the system provide the fastest response possible to user actions.

The "debugging features" category includes a USB gadget driver that supports communication with the adb debugging tools; it is also used to support file transfer using the media transfer protocol (MTP). The FIQ debugger is a low-level kernel debugger with some unique features — communication through the device's headphone jack being one of them. The RAM console will save kernel messages for later recovery in case of a crash. There is the "key-reset" driver, a kind of "control-alt-delete for phones." The patches to the ARM architecture's "embedded trace macrocell" and "embedded trace buffer" drivers offer improved logging of messages from peripheral processors. Then there is the "goldfish" emulator, derived from QEMU, which allows Android to be run in an emulated mode on a desktop system.

The list of networking features starts with the "paranoid networking framework," the mechanism that controls which applications have access to the network; it restricts that access to members of a specific group. There is a set of netfilter changes mostly aimed at providing better accounting for which applications are using data. There are some Bluetooth improvements and the Broadcom "bcmhd" WiFi driver.

In the graphics category is the ION memory allocator, which handles DMA buffer management. The "sync" driver provides a sort of mutex allowing applications to wait for a vertical refresh cycle. There is also a miscellaneous category that includes the battery meta-driver, which provides wakelock support and thermal management. That category contains various touch screen drivers, the "switch" class for dealing with physical switches, and the timed GPIO facility as well. Finally, the list of deprecated features includes the PMEM memory allocator, the early suspend mechanism, the "apanic" driver, and the yaffs2 filesystem, which has been replaced by ext4.

Upstreaming status

Having passed over the long list of Android patches, John moved on to discuss where each stands with regard to upstreaming. The good news is that some of these features are already upstream. Wakelocks are, arguably, the most important of those; Rafael Wysocki's opportunistic suspend work, combined with a user-space emulation library, has made it possible for Android to move over to a mainline-based solution. John's monotonic event timestamp patches are also in the mainline, controlled by a special ioctl() command to avoid breaking the ABI; Android is using this mechanism as of the 4.2 ("Jelly Bean") release. The RAM console functionality is available via the pstore mechanism. The switch class is now supported via the kernel's "extcon" driver, but Android is not yet using this functionality.
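
The ioctl() in question is, presumably, EVIOCSCLOCKID; a user-space fragment opting a device into monotonic timestamps would look something like this (device path arbitrary, error handling omitted):

    #include <fcntl.h>
    #include <linux/input.h>
    #include <sys/ioctl.h>
    #include <time.h>

    int fd = open("/dev/input/event0", O_RDONLY);
    int clk = CLOCK_MONOTONIC;

    ioctl(fd, EVIOCSCLOCKID, &clk);  /* subsequent events carry monotonic timestamps */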

A number of the Android patches are currently in the staging tree. These include the binder, ashmem, the logger, the low-memory killer, the alarm device, the gadget device, and the timed GPIO feature. The sync driver was also just pulled into the staging tree for merging in the 3.10 development cycle. With all of the staging code, John said, Android "just works" on a mainline kernel.

That does not mean that the job is done, though; quite a few Android patches are still in need of more work to get upstream. One such patch is the FIQ debugger; work is being done to integrate it with the kdb debugger, but, among other problems, the developers are having a hard time getting review attention for their patches. The key-reset driver was partially merged for the 3.9 kernel, but there are a number of details to be dealt with still. The plan for the low-memory killer is to integrate it with the mempressure control group patch and use the low-memory notification interface that is part of that mechanism; the developers hope to merge that code sometime soon. Ashmem is to be reimplemented via one of the volatile ranges patch sets, but there is still no agreement on the right direction for this feature. Much of the goldfish code has been merged for the 3.9 release.

The ION memory allocator has not yet been submitted for consideration at all. Much of this code duplicates what has been done with the CMA allocator and the DMA buffer sharing mechanism; integrating everything could be a challenge. There should be pieces that can be carved out and submitted, John said, even if the whole thing requires more work.

The interactive CPU frequency driver has been rejected by the scheduler developers in its current form. Supporting this feature properly could require some significant reworking of the scheduler code.

The netfilter changes have been submitted for inclusion, but there is some cleanup required before they can be merged. The paranoid networking code, by contrast, is not appropriate for upstream and will not be submitted. The right solution here would appear to be for Android to use the network namespaces feature, but that would require some big changes on the Android side, so it is not clear when it might happen.

The alarm device code needs to be integrated with the kernel's timerfd subsystem. Much of that integration has been done, but it requires an Android interface change, which is slowing things down. The embedded trace driver changes have been submitted, but the developer who did that work has moved on, so the code is now unmaintained. It is also undocumented and nobody else fully understands it at this point. There is a desire to replace the Android gadget driver with the CCG ("configurable composite gadget") code that is currently in the staging tree, but CCG does not yet do everything that Android needs, and it appears to be unmaintained as well. There was talk in the session of Linaro possibly taking over the development of that driver in the future.

Finally, it would be good to get the binder and logger patches out of the staging tree. That, however, is "complicated stuff" and may take a while. There is hope that the upcoming patches to support D-Bus-like communication mechanisms in the kernel will be useful to provide binder-like functionality as well.

There are a few issues needing longer-term thought. The integration of the sync driver and the DMA buffer sharing mechanism is being thought through now; there are a lot of details to be worked out. The upstreaming of ION could bring its own challenges. Much of that code has superficial similarities to the GEM and TTM memory managers that already exist in the kernel. Figuring out how to merge the interactive CPU frequency driver is going to be hard, even before one gets into details like how it plays with the ongoing big.LITTLE initiative. Some fundamental scheduler changes will be needed, but it's not clear who is going to do this work. The fact that Google continues to evolve its CPU frequency driver is not helping in this regard. There will, in other words, be plenty to keep developers busy for some time.

Concluding remarks

In total, John said, there are 361 Android patches for the kernel, with the gadget driver being the largest single chunk. Some of these patches are quite old; one of the patches actually predates Android itself. Google is not standing still; there is new code joining that which has been around for a while. Current areas of intensive development include ION, the sync driver, the CPU frequency driver, the battery driver, and the netfilter code. While some of the code is going into the mainline, the new code adds to the pile of out-of-tree patches shipped by the Android project.

Why should we worry about this, John asked, when it really is just another one of many forks of the kernel? Forking is how development gets done; see, for example, the development of the realtime patches or how many filesystems are written. But, he said, forks of entire communities, where code does not get merged back, are more problematic. In this case, we are seeing a lot of ARM systems-on-chip being developed with Android in mind from the beginning, leading to an increase in the use of out-of-tree drivers and kernels. Getting the Android base into the mainline makes it easier for developers to work with, and makes it easier to integrate Android-related code developed by others. John would like Android developers to see the mainline kernel, rather than the Android world, as their community.

Things are getting better; Zach Pfeffer pointed out that the work being done to bring Android functionality into the mainline kernel is, indeed, being used by the Android team. The relationship between that team and the kernel development community is getting better in general. It is a good time for people who are interested to join the effort and help get things done.

[Your editor would like to thank Linaro for travel assistance to attend this event.]

Comments (17 posted)

Patches and updates

Kernel trees


Core kernel code

Development tools

Device drivers


Filesystems and block I/O

Memory management


Virtualization and containers


Page editor: Jonathan Corbet


A look at openSUSE 12.3

By Jake Edge
March 13, 2013

The March 13 release of openSUSE 12.3 comes just six months after its predecessor, which is a bit quicker than the target of eight months between releases that the project has set. But the shorter development cycle does not mean that openSUSE 12.3 is lacking for new features. From the kernel up through the desktops and applications, 12.3 offers much that is new.

[openSUSE 12.3 KDE]

The project was nice enough to provide members of the press with early access to the 12.3 final release. The distribution comes in multiple flavors, of course; I tried the Live KDE version to get a feel for the new release. It has been many years since I ran openSUSE, and I never ran it in anger, so I tried to put it through its paces a bit over the five or six days since it was made available.

Since a Live distribution is somewhat cumbersome to use as a regular system, I opted to install it as a dual-boot system with Fedora on my trusty laptop. While I was ultimately successful in getting things installed that way, it took a rather extensive detour through GRUB 2, GUID partition table (GPT) partitions, resize2fs, and so on to get there. It's not clear that openSUSE or its installer were the only culprits, as I have dark suspicions about the BIOS on the laptop, but it is clear that allowing the installer to write to the master boot record (MBR) in a dual-Linux setup leads to an unbootable system—at least it did for me, and more than once. It should be noted, though, that the Live media was quite useful in helping to recover from that state as it had all of the GRUB 2 tools and GPT-aware utilities needed to fix things up.

One new thing that came in with 12.3 is a change to Live media. It is now nearly 1G in size, which means it won't fit on a CD—either DVD or USB sticks must be used instead. That extra space allowed for additional packages, including LibreOffice 3.6 and OpenJDK 7, though not GIMP 2.8 as promised in the RC2 news item.

Installation was straightforward, with the only "tricky" piece being the partition and filesystem layout. 12.3 gives the option of using Btrfs for all non-boot filesystems, which seemed worth trying. I haven't done anything particularly interesting with Btrfs (yet), but it seems to be working just fine for / and /home.

Other than some cosmetic differences (theme, background, and so on), openSUSE 12.3 didn't seem much different from Fedora 18 once I logged into the KDE desktop (or Plasma workspace if that's the new terminology). It comes with KDE 4.10, which is more recent than Fedora's 4.9, but that difference was not particularly obvious. It works well for the limited desktop use cases I need—terminal windows, a browser, email client, and so on. I was able to use the Dolphin file manager to mount and access the encrypted /home that I use on the Fedora side, for example, which was convenient, but I still haven't gotten the hang of KDE Activities.

KDE is not the only desktop available for openSUSE 12.3; there is, of course, a GNOME version of the distribution based on GNOME 3.6. Community manager Jos Poortvliet put together a lengthy preview of openSUSE 12.3 for desktop users that covers both desktops. KDE was chosen as the default for openSUSE back in 2009, but its GNOME support is said to be top-notch as well.

UEFI secure boot support is available in 12.3, and the systemd integration that started in earlier versions has been completed. The switch to MariaDB as the "default MySQL" is complete as well: MySQL is still available, but MariaDB has been chosen as a more community-oriented, drop-in replacement for MySQL.

The kernel is fairly recent, based on 3.7. It exhibited the same annoying blinking WiFi indicator behavior that I have seen on the laptop with other recent kernels, though it was easy to set a driver parameter for iwlegacy and get rid of it. In fact, the same file I used on the Fedora side (with a minor name change) just dropped into /etc/modprobe.d on openSUSE. Perhaps that's not surprising, but it is indicative of how it felt to use 12.3; it was often hard to remember that I wasn't still running Fedora. Some adjustments were needed (e.g. retraining fingers to type "zypper" rather than "yum"), but the two distributions are quite similar.
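
For the record, the workaround is a one-line modprobe option; the file name and the value shown are assumptions based on the iwlegacy module's documented led_mode parameter:

    # /etc/modprobe.d/50-iwlegacy.conf (hypothetical example)
    options iwlegacy led_mode=1    # 1 = steady on/off rather than blinking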

There are a few oddities. The default is for the primary user to be logged in automatically, which doesn't seem like the most secure of choices. Installing Emacs led to a complaint about a lack of Asian fonts for Java. The automatic screen lock appears not to work (any key will unlock the screen), which seems to be a known problem; it is supposed to start working after 60 seconds, but never did for me. But those are pretty minor.

A more substantive complaint could be made about one of the more advanced features being touted for the release: using the Open Build Service (OBS) to get the latest and greatest packages. There is even a video in that news item describing how to use OBS to update LibreOffice from the 3.6 version that comes with 12.3 to LibreOffice 4.0.

Perhaps LibreOffice was a poorly chosen example, but the video paints a picture that is very different from what a user will actually run into. In fact, it stops before things get interesting. The "one click install" offered does bring up the YaST software installer, but there are many more clicks ahead. If it were just extra clicks, it would be a pretty minor issue, but the new package conflicts with the old LibreOffice, so the user needs to make a decision about what to do—without a reasonable default (like "go ahead and break LibreOffice 3.6"). Beyond that, the upgrade caused YaST to choose an enormous number (over 100) of additional packages to install, many of which (telnet, screen, GIMP, ...) seemed to have nothing to do with LibreOffice. Licenses for Flash and Fluendo GStreamer plugins had to be clicked through as well. That said, once the process was complete, LibreOffice 4.0 was up and running on the system; it was just a lot more complicated than the video (which does feature some amusing Geeko animation) depicted.

But openSUSE is not specifically targeted at non-technical users, and anyone who has used Linux before has likely run into these kinds of issues once or twice. For technically savvy users, openSUSE provides a solid operating system with the ability to get bleeding-edge applications via OBS. For Fedora users, a switch will probably be uneventful, while other distribution users (non-systemd, .deb-based, or build-it-from-quarks-and-gluons, for example) may have some adjustments to make. It's not clear that there is a strong reason to do so, but if some "distro hopping" is in your plans, openSUSE should certainly be on the list. But for those who already use it, openSUSE 12.3 will be a welcome upgrade.

Comments (5 posted)

Brief items

Distribution quote of the week

If the general principle of 'specialized technical crap confuses people who don't understand it' is a mystical urban legend to you, you might want to try teaching a class to less-experienced computer users or watching usability test videos. Or maybe try volunteering at a community technical helpdesk. Your opinion will change pretty quickly.
-- Máirín Duffy

Comments (43 posted)

Release for CentOS-6.4

CentOS 6.4 is available. See the release notes for details.

Full Story (comments: 6)

openSUSE Project Releases openSUSE 12.3

openSUSE 12.3 has been released. "openSUSE 12.3 improves search, filesystem performance and networking, as well as makes great strides forward in ARM and cloud support. openSUSE 12.3 is the latest Linux distribution from the openSUSE Project, allowing users and developers to benefit from free and open source software in physical, virtual and cloud environments."

Full Story (comments: 1)

Window Maker Live

Window Maker Live (wmlive) is a new distribution which may be run from live media, or installed to a hard drive. The 0.95.4 release is based on Debian "wheezy". The distribution aims to showcase the Window Maker window manager.

Comments (3 posted)

Distribution News

Debian GNU/Linux

Debian Project Leader Elections 2013: Candidates

Gergely Nagy, Moray Allan, and Lucas Nussbaum have been nominated for Debian Project Leader. See this page for information about the vote and links to the candidates' platforms.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Duffy: Improving the Fedora boot experience

Máirín Duffy has put together a lengthy summary of the current discussion within the Fedora project on how to improve the boot experience. "While the mailing list thread on the topic at this point is high-volume and a bit chaotic, there is a lot of useful information and suggestions in there that I think could be pulled into a design process and sorted out. So I took 3 hours (yes, 3 hours) this morning to wade through the thread and attempt to do this."

Comments (85 posted)

Shuttleworth: Not convinced by rolling releases

On his blog, Mark Shuttleworth weighs in at some length on some of the issues that have been swirling in the Ubuntu community over the last few weeks. He thinks there has been some unwarranted melodrama surrounding Ubuntu, Canonical, decision making, and so on. In addition, he is not convinced that rolling releases are the right approach.
But cadence is good, releases are good discipline even if they are hard. In LEAN software engineering, we have an interesting maxim: when something is hard, DO IT MORE OFTEN. Because that way you concentrate your efforts on the hard problem, master it, automate and make it easy. That's the philosophy that underpins agile development, devops, juju and loads of other goodness.

In the web-led world, software is moving faster than ever before. Is six months fast enough?

So I think it IS worth asking the question: can we go even faster? Can we make even MORE releases in a year? And can we automate that process to make it bulletproof for end-users?

Comments (67 posted)

Kali Linux arrives as enterprise-ready version of BackTrack (The H)

Offensive Security, provider of BackTrack, has announced the release of Kali Linux. The H takes a look. "Kali's suite of tools includes Metasploit, Wireshark, John the Ripper, Nmap and Aircrack-ng. The applications have been evaluated and selected specifically for suitability and usefulness and do away with the historically accumulated selection that is available in BackTrack. The new desktop interface also includes a category labelled "Top 10 Security Tools", which collects the applications users are most likely to use on a regular basis. All in all, Kali includes approximately 300 different tools."

Comments (none posted)

Educational Linux distro provides tech-bundle for kids and educators ( has an interview with Jim Klein, founder of ubermix. "Ubermix is designed to bring the power and flexibility of free software and an open operating system to kids and the education community. While there are a number of general purpose Linux builds out there, few have made significant inroads into schools, due in large part to their complexity and general purpose design language. What Ubermix brings is an easy entry point and sensible design decisions, having been assembled with a real understanding of the challenges education technologists face when attempting to implement something new. Features such as a five minute install from a USB key, extraordinary hardware compatibility, and a quick, 20-second reset process make it possible for understaffed and underfunded school technology teams to scale up significant technology access without increasing the need for technical support."

Comments (none posted)

GNOME and Kylin become official Ubuntu flavours (The H)

The H covers the addition of two official Ubuntu spins. "Ubuntu GNOME 3 sets out to deliver the GNOME 3 experience on Ubuntu, while UbuntuKylin aims to offer a fully customised Chinese user experience on Ubuntu 13.04. The official blessing gives the developers of each flavour access to Ubuntu's build infrastructure and allows them to be managed as part of the Ubuntu project rather than as an unsupported fork."

Comments (none posted)

Page editor: Rebecca Sobol


GCC's move to C++

March 13, 2013

This article was contributed by Linda Jacobson

The GNU Compiler Collection (GCC) was, from its inception, written in C and compiled by a C compiler. Beginning in 2008, an effort was undertaken to change GCC so that it could be compiled by a C++ compiler and take advantage of a subset of C++ constructs. This effort was jump-started by a presentation by Ian Lance Taylor [PDF] at the June 2008 GCC summit. As with any major change, this one had its naysayers and its problems, as well as its proponents and successes.


Taylor's slides list the reasons to commit to writing GCC in C++:

  • C++ is well-known and popular.

  • It's nearly a superset of C90, which GCC was then written in.

  • The C subset of C++ is as efficient as C.

  • C++ "supports cleaner code in several significant cases." It never requires "uglier" code.

  • C++ makes it harder to break interface boundaries, which leads to cleaner interfaces.

The popularity of C++ and its superset relationship to C speak for themselves. In stating that the C subset of C++ is as efficient as C, Taylor meant that if developers are concerned about efficiency, limiting themselves to C constructs will generate code that is just as efficient. Having cleaner interfaces is one of the main advantages of C++, or any object-oriented language. Saying that C++ never requires "uglier" code is a value judgment. However, saying that it supports "cleaner code in several significant cases" has a deep history, best demonstrated by gengtype.

According to the GCC Wiki:

As C does not have any means of reflection [...] gengtype was introduced to support some GCC-specific type and variable annotations, which in turn support garbage collection inside the compiler and precompiled headers. As such, gengtype is one big kludge of a rudimentary C lexer and parser.

What had happened was that developers were emulating features such as garbage collection, a vector class, and a tree class in C. This was the "ugly" code to which Taylor referred.

In his slides, Taylor also tried to address many of the initial objections: that C++ was slow, that it was complicated, that there would be a bootstrap problem, and that the Free Software Foundation (FSF) wouldn't like it. He addressed the speed issue by pointing out that the C subset of C++ is as efficient as C. As far as FSF went, Taylor wrote, "The FSF is not writing the code."

The complexity of a language is in the eye of the beholder. Many GCC developers were primarily, or exclusively, C programmers, so of necessity there would be a time period in which they would be less productive, and/or might use C++ in ways that negated all its purported benefits. To combat that problem, Taylor hoped to develop coding standards that limited development to a subset of C++.

The bootstrap problem could be resolved by ensuring that GCC version N-1 could always build GCC version N, and that they could link statically against libstdc++. GCC version N-1 must be linked against libstdc++ N-1 while it is building GCC N and libstdc++ N; GCC N, in turn, will need libstdc++ N. Static linking ensures that each version of the compiler runs with the appropriate version of the library.

For many years prior to 2008, there had been general agreement to restrict GCC code to a common subset of C and C++, according to Taylor (via email). However, there was a great deal of resistance to replacing the C compiler with a C++ compiler. At the 2008 GCC summit, Taylor took a poll on how large that resistance was, and approximately 40% were opposed. The C++ boosters paid close attention to identifying and addressing the specific objections raised by C++ opponents (speed, memory usage, inexperience of developers, and so on), so that each year thereafter the size of the opposition shrank significantly. Most of these discussions took place at the GCC summits and via unlogged IRC chats. Therefore, the only available record is in the GCC mailing list archives.

First steps

The first step, a proper baby step, was merely to try to compile the existing C code base with a C++ compiler. While Taylor was still at the conference, he created a gcc-in-cxx branch for experimenting with building GCC with a C++ compiler. Developers were quick to announce their intention to work on the project. The initial build attempts encountered many errors and warnings, which were then cleaned up.

In June 2009, almost exactly a year from proposing this switch, Taylor reported that phase one was complete. He configured GCC with the switch enable-build-with-cxx to cause the core compiler to be built with C++. A bootstrap on a single target system was completed. Around this time, the separate cxx branch was merged into the main GCC trunk, and people continued their work, using the enable-build-with-cxx switch. (However, the separate branch was revived on at least one occasion for experimentation.)
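
In practice, that amounted to an ordinary bootstrap with one extra configure flag (source and build paths elided):

    $ .../gcc/configure --enable-build-with-cxx
    $ make bootstrap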

In May 2010, there was a GCC Release Manager Q&A on IRC. The conclusion from that meeting was to request permission from the GCC Steering Committee to use C++ language features in GCC itself, as opposed to just compiling with a C++ compiler. Permission was granted, with agreement also coming from the FSF. Mark Mitchell announced the decision in an email to the GCC mailing list on May 31, 2010.

In that thread, Jakub Jelinek and Vladimir Makarov expressed a lack of enthusiasm for the change. However, as Makarov put it, he had no desire to start a flame war over a decision that had already been made. That said, he recently shared via email that his primary concern was that the GCC community would rush into converting the GCC code base to C++ "instead of working on more important things for GCC users (like improving performance, new functionality and so on). Fortunately, it did not happen."

Richard Guenther was concerned about creating a tree class hierarchy:

It's a lot of work (tree extends in all three Frontends, middle-end and backends). And my fear is we'll only get a halfway transition - something worse than no transition at all.

The efforts of the proponents to allay concerns, and the "please be careful" messages from the opponents give some indication of the other concerns. In addition to the issues raised by Taylor at the 2008 presentation, Jelinek mentioned memory usage. Others, often as asides to other comments, worried that novice C++ programmers would use the language inappropriately, and create unmaintainable code.

There was much discussion about coding standards in the thread. Several argued for existing standards, but others pointed out that they needed to define a "safe" subset of C++ to use. There was, at first, little agreement about which features of C++ were safe for a novice C++ developer. Taylor proposed a set of coding standards. These were amended by Lawrence Crowl and others, and then were adopted. Every requirement has a thorough rationale and discussion attached. However, the guiding principle on maintainability is not the coding standard, but one that always existed for GCC: the maintainer of a component makes the final decision about any changes to that component.

Current status

Currently, those who supported the changes feel their efforts provided the benefits they expected. No one has publicly expressed any dissatisfaction with the effort. Makarov was relieved that his fear that the conversion effort would be a drain on resources did not come to pass. In addition, he cites the benefits of improved modularity as being a way to make GCC easier to learn, and thus more likely to attract new developers.

As far as speed goes, Makarov noted that a bootstrap on a multi-CPU platform is as fast as it was for C. However, on uniprocessor platforms, a C bootstrap was 30% faster. He did not speculate as to why that is. He also found positive impacts, like converting to C++ hash tables, which sped up compile time by 1-2%. This last work is an ongoing process that Lawrence Crowl last reported on in October 2012. In keeping with Makarov's concerns, this work is done slowly, as people's time and interests permit.

Of the initial desired conversions (gengtype, tree, and vector), vector support is provided using C++ constructs (i.e., a class) and gengtype has been rewritten for C++ compatibility. Trees are a different matter. Although the tree conversion has been much discussed, and volunteers have stepped forward several times, no change has been made to the code. This adds credence to the 2010 contention of Guenther (who has changed his surname to Biener) that it would be difficult to do correctly. Reached recently, Biener stated that he felt it was too early to assess the impact of the conversion because, compared to the size of GCC, there have been few changes to C++ constructs. On the negative side, he noted (as others have) that, because of the changes, long-time contributors must relearn things that they were familiar with in the past.

In 2008, 2009, and 2010, (i.e., at the beginning and after each milestone) Taylor provided formal plans for the next steps. There is no formal plan going forward from here. People will use C++ constructs in future patches as they deem necessary, but not just for the sake of doing so. Some will limit their changes to the times when they are patching the code anyway. Others approach the existing C code with an eye to converting code to C++ wherever it makes the code clearer or more efficient. Therefore, this is an ongoing effort on a meandering path for the foreseeable future.

As the C++ project has progressed, some fears have been allayed, while some developers are still in a holding pattern. For them it is too soon to evaluate things definitively, and too late to change course. However, the majority seems to be pleased with the changes. Only time will tell what new benefits or problems will arise.

Comments (14 posted)

Brief items

Quotes of the week

This is the fruit of an 8 hour debugging session. I hate writer.
— Caolán McNamara, commenting on a one-line, eleven-character commit. (Hat tip to Cesar Eduardo Barros)

There is a lot to lose when you centralize something that should really be left to the application authors. The app icon is the app's identity. Sure it's more difficult to convince the upstream to take your work despite it not being created by an algorithm, but taking away a project's identity in the name of policing the aesthetics of the overview is not the right approach.
Jakub Steiner

Comments (2 posted)

Jitsi 2.0 released

Version 2.0 of the cross-platform open source softphone application Jitsi has been released. An announcement on the XMPP Foundation blog includes some details, such as: "one of the most prominent new features in the 2.0 release is Multiparty Video Conferencing. Such conferences can work in an ad-hoc mode where one of the clients relays video to everyone else, or in cases that require scalability, Jitsi can use the Jitsi Videobridge: an RTP relaying server controlled over XMPP." Other changes include support for the royalty-free VP8 and Opus codecs, and support for integrating with Microsoft Outlook. Additional details are listed at the Jitsi site.

Comments (18 posted)

Ardour 3.0 released

Version 3.0 of the Ardour digital audio workstation system has been released. "This is the first release of Ardour that features support for MIDI recording, playback and editing. It also features a huge number of changes to audio workflow, which for many users may be more significant than the MIDI support." See the "What's new" page for details. (Thanks to Andreas Kågedal.)

Comments (2 posted)

Systemd 198 released

Version 198 of systemd has been released. The long list of changes in this release includes new ways to extend unit file configuration, dynamic runtime control of resource limits, a number of changes to nspawn, and a "substantially larger unit test suite, but this continues to be work in progress."

Full Story (comments: 2)

TopGit 0.9 released

New maintainer Robin Green announced the availability of TopGit 0.9, the first new release of the patch-queue management tool after a lengthy hiatus. The project has also moved to GitHub. Green notes: "Because it's been 3 years since the last release, there are quite a few patches since 0.8, but most of them are quite minor changes. If you are upgrading from the HEAD of the old TopGit repository, all of and only the patches by me, Andrey Borzenkov and Heiko Hund are new compared to that revision."

Full Story (comments: none)

Emacs 24.3 available

Version 24.3 of GNU Emacs has been released. Among the highlights are a new major mode for editing Python, an update to the Common Lisp emulation library, and the addition of generalized variables in core Emacs Lisp.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the past week

Comments (none posted)

Hughes: GNOME Software overall plan

At his blog, Richard Hughes outlines his designs for a plugin-capable software installer for GNOME. "Of course, packages are so 2012. It’s 2013, and people want to play with redistributable things like listaller and glick2 static blobs. People want to play with updating an OS image like ostree and that’s all awesome. Packages are pretty useful in some situations, but we don’t want to limit ourselves to being just another package installer." The gnome-software tool Hughes is prototyping is currently alpha-quality, but is available in the GNOME git repository.

Comments (23 posted)

Proposals now open for new features for GNOME 3.10

Andre Klapper announced that the door is now open to propose new platform-wide features to be added for GNOME 3.10, which is slated for a September 2013 release. New proposals should be added to the GNOME wiki, but Klapper notes that "Proposed features must have an assignee working on them. The proposal period is planned to end in about a month."

Full Story (comments: none)

Page editor: Nathan Willis


Announcements

Brief items

VP8 and MPEG LA (WebM blog)

Google and the MPEG Licensing Authority have announced an agreement that will stop MPEG LA from creating a patent pool around the VP8 video codec, the WebM blog reports. VP8 is part of the royalty-free WebM media file format; MPEG LA has been threatening to create a patent pool to change the "royalty-free" part. "The arrangement with MPEG LA and 11 patent owners grants a license to Google and allows Google to sublicense any techniques that may be essential to VP8 and are owned by the patent owners; we may sublicense those techniques to any VP8 user on a royalty-free basis. The techniques may be used in any VP8 product, whether developed by Google or a third party or based on Google's libvpx implementation or a third-party implementation of the VP8 data format specification. It further provides for sublicensing those VP8 techniques in one successor generation to the VP8 video codec."

Comments (49 posted)

Articles of interest

Open Source at CeBIT 2013 (The H)

The H reports from CeBIT 2013 at length. "Speaking to me following his presentation, [Klaus] Knopper also mentioned that he was contemplating creating a mobile version of Knoppix to run on smartphones. The developer noted that the hardware he would need is already available in many smartphones: a powerful processor, 1GB or more of RAM, and a large, high-resolution screen. He showed us his Samsung Galaxy Note II running the latest version of the CyanogenMod Project's custom Android firmware and suggested that Knoppix would run very well on the hardware and could be useful for applications such as GIMP (the Note II includes a pressure-sensitive stylus and Wacom technologies). Phones like the Note II could also be docked for use with an external display, keyboard and mouse, turning them into fully fledged desktop devices."

Comments (none posted)

R.I.P. LinuxDevices… Long live LinuxGizmos!

Rick Lehrbaum, founder of LinuxDevices, has a new site called LinuxGizmos. "Like its forerunner, LinuxGizmos is devoted to the use of Linux in embedded and mobile devices and applications. The site’s goal is to provide daily updates of news and information on embedded Linux distributions, application software, development tools, protocols, standards, and hardware of interest to technical, marketing, and management professionals in the embedded and mobile devices markets."

Comments (6 posted)

Upcoming Events

Events: March 14, 2013 to May 13, 2013

The following event listing is taken from the Calendar.

March 13-21     PyCon 2013 (Santa Clara, CA, US)
March 15-17     German Perl Workshop (Berlin, Germany)
March 15-16     Open Source Conference (Szczecin, Poland)
March 16-17     Chemnitzer Linux-Tage 2013 (Chemnitz, Germany)
March 19-21     FLOSS UK Large Installation Systems Administration (Newcastle-upon-Tyne, UK)
March 20-22     Open Source Think Tank (Calistoga, CA, USA)
March 23        Augsburger Linux-Infotag 2013 (Augsburg, Germany)
March 23-24     LibrePlanet 2013: Commit Change (Cambridge, MA, USA)
March 25        Ignite LocationTech Boston (Boston, MA, USA)
March 30        Emacsconf (London, UK)
March 30        NYC Open Tech Conference (Queens, NY, USA)
April 1-5       Scientific Software Engineering Conference (Boulder, CO, USA)
April 4-5       Distro Recipes (Paris, France)
April 4-7       OsmoDevCon 2013 (Berlin, Germany)
April 6-7       international Openmobility conference 2013 (Bratislava, Slovakia)
April 8         The CentOS Dojo 2013 (Antwerp, Belgium)
April 8-9       Write The Docs (Portland, OR, USA)
April 10-13     Libre Graphics Meeting (Madrid, Spain)
April 10-13     Evergreen ILS 2013 (Vancouver, Canada)
April 14        OpenShift Origin Community Day (Portland, OR, USA)
April 15-17     Open Networking Summit (Santa Clara, CA, USA)
April 15-17     LF Collaboration Summit (San Francisco, CA, USA)
April 15-18     OpenStack Summit (Portland, OR, USA)
April 16-18     Lustre User Group 13 (San Diego, CA, USA)
April 17-18     Open Source Data Center Conference (Nuremberg, Germany)
April 17-19     IPv6 Summit (Denver, CO, USA)
April 18-19     Linux Storage, Filesystem and MM Summit (San Francisco, CA, USA)
April 19        Puppet Camp (Nürnberg, Germany)
April 20        Grazer Linuxtage (Graz, Austria)
April 21-22     Free and Open Source Software COMmunities Meeting 2013 (Athens, Greece)
April 22-25     Percona Live MySQL Conference and Expo (Santa Clara, CA, USA)
April 26-27     Linuxwochen Eisenstadt (Eisenstadt, Austria)
April 26        MySQL® & Cloud Database Solutions Day (Santa Clara, CA, USA)
April 27-28     WordCamp Melbourne 2013 (Melbourne, Australia)
April 27-28     LinuxFest Northwest (Bellingham, WA, USA)
April 29-30     2013 European LLVM Conference (Paris, France)
April 29-30     Open Source Business Conference (San Francisco, CA, USA)
May 1-3         DConf 2013 (Menlo Park, CA, USA)
May 2-4         Linuxwochen Wien 2013 (Wien, Austria)
May 9-12        Linux Audio Conference 2013 (Graz, Austria)
May 10          CentOS Dojo, Phoenix (Phoenix, AZ, USA)
May 10          Open Source Community Summit (Washington, DC, USA)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds