Weekly Edition for August 11, 2011

Desktop Summit: Large companies and open source

By Jake Edge
August 10, 2011

As part of being the chief Linux and open source technologist at Intel, Dirk Hohndel flies all around the world to speak at conferences on various topics. This time around, he came to Berlin to talk to the Desktop Summit about the role of large companies in open source. Some recent events caused him to take a bit of a detour on the way to his topic, but, as it turned out, that detour fit in fairly well with the rest of his message.

[Dirk Hohndel]

By way of an introduction, Hohndel said he has been hacking since he was 11 years old, doing Linux for almost 20 years, and open source a bit longer than that—though it wasn't called open source back then. These days, he still writes code, though not as much as he might like, but he still considers himself a hacker.

The detour

Because he was coming to the summit, Hohndel said that he decided to install the latest versions of both GNOME and KDE. But then he made a mistake. He publicly complained (in this thread) about his experience with GNOME 3. He was, he said, trying to give some feedback, and the reaction was not quite what he expected.

But Hohndel is very happy about how far the free desktops have come. Both GNOME 3 and KDE 4 look "sleek and professional", he said. He is also glad to see that the free desktops have stopped following along with what Windows is doing, though that may have shifted to following Mac OS X a little too closely these days. The Mac interface is "awesome" as long as "you do exactly what Steve wants you to do", otherwise, it turns into: "you can't do that". He sees something of that attitude bleeding over into the Linux world, which is worrisome.

He noted that the most recent Mac OS X update changed the way scrolling worked, which made it very confusing and hard to work with for some people. But, he said, Apple also added an option to turn off the new scrolling behavior. That's what seems to be missing in the GNOME 3 situation. It is not helpful at all if the first reaction to a criticism is "you're an idiot", and that's some of what he's seeing. Radical change is difficult, he said, and he is very glad GNOME and KDE are taking it on, "but don't leave people stranded".

Large companies and open source

Shifting gears, Hohndel launched into his main topic and listed three "F"s that can be used to describe the relationship between large companies and open source—and "one is not the one you are thinking of". The three are funding, feedback, and freedom. Funding is more than just hiring developers to work on free software, he said; there is much more that companies do. Sponsoring conferences (like the Desktop Summit), helping developers travel to various conferences, getting hardware into the hands of the right developers, and so on, are all things that companies do.

But, "don't just think of us as moneybags", he said. Companies have other uses as well and one of the biggest is in providing feedback. Companies spend huge amounts of time and money trying to figure out what their customers want. While open source developers mostly talk among themselves, companies are out talking to customers.

Unfortunately, the results of those customer talks are "conveyed in unbearable marketing-speak", he said, but there is value in it. Gaining that value requires listening carefully and filtering what you hear. Intel always has the Intel view, and other companies are the same, so one needs to take that into account. It is important to keep an open mind even when the feedback is critical of a particular feature or project.

One area where open source could do a much better job is in documenting the response to a bug report or feature request. It is often the case that reports and requests either get a "vicious response" or none at all. Two releases later, the issue might be fixed, or not, with very little indication of which is going to happen—or why. It would be much better for all concerned if rejected requests came with a clear statement of "why the request cannot or should not be implemented".

Freedom (or control)

The third element is freedom, which Hohndel said he really wanted to call "control", but that it didn't start with an F. Large companies have agendas and an ingrained need to control things. As long as open source developers understand a company's intentions, a good working relationship can come about. Companies "are not per se malicious, we just appear that way", he said. By definition, companies are "looking out for our interests first, yours second".

Big company managers "manage to numbers", Hohndel said, not to freedom or the greater good, and that affects their decision-making. Things can change quickly; three months down the road a particular project "may not be interesting anymore", and free software developers and projects need to recognize that. If a project depends on a company to finish a particular library that the project uses, the developers should be prepared for the possibility that the company's priorities or interest level may change.

Most open source developers today are employed to develop free software, he said, and—like it or not—most of the developers themselves do like it. For the kernel, he cited statistics from LWN showing that 80% of kernel developers are paid to do that work. He couldn't find similar statistics for GNOME or KDE, but they are likely to be comparable.

"Companies are very important, but change the way things work with projects", he said. Most projects start small and are created by individuals for their peer group. That means that they are not designed for working with companies, particularly in the areas of licenses and governance. If the main developers are all suddenly snapped up by a particular company, it can change things. It is something that projects should consider because "freedom is often overlooked" when a project starts to look at working with companies.

In summary, Hohndel said, no big open source project is independent from large companies today because they have all been "engaged and infiltrated" by those companies and their employees. That means that projects need to understand how to work with companies to get the most out of them. Part of that understanding will come from listening. Companies can really help accelerate development on a project, which is one of the reasons it is worth figuring out how to make it work.

GNOME and KDE are "both beautiful looking desktops", he said, and "we are miles and miles ahead of where we were a few years ago". There is still some way to go before they are ready to go for everyone, but his hope is that the week-long summit will help solve some of those problems and get us that much closer.

[ I would like to thank KDE e.V. and the GNOME Foundation for assistance with travel funding to the Desktop Summit. ]


The story of a MeeGo application repository

By Jonathan Corbet
August 10, 2011
Despite some severe setbacks, the MeeGo project continues to develop its platform. The software is getting better, and we are starting to see a few places where it can be had on real hardware. Unfortunately, MeeGo has just run into another snag involving the hosting of a proposed third-party application repository. The project can survive this rough patch too - without even all that much bother - but only if the participants can gain a better understanding of what is going on. What may look like a serious community disconnect is really a simpler story which has been muddied by poor communication.

Some developers within the MeeGo project have been working for a while on a site intended to be a repository for open-source MeeGo applications written by individual developers. Plans include a vetting process to ensure the quality and benign nature of applications to be hosted there. Things were getting closer to being ready for deployment when the process ground to a halt. David Greaves described the problem this way:

The Linux Foundation have told us in private conversations that they will not permit [the site] to be served from the infrastructure hosted by them. They do not have the resource at this time to provide a statement giving their reasons. We can not assess what other services may be impacted in the future.

The Linux Foundation is in a position to make this decision because it is the nominal owner of the MeeGo project, the MeeGo trademark, and the domain name. One of the parent projects - Moblin - was handed over to the Foundation in 2009 as a way to establish it as a free and independent project; when Maemo was merged with Moblin to create MeeGo, the combination remained under the Foundation's management. The intent was to ensure that MeeGo was not driven by the needs of any one specific company, but, instead, to help it be a fully independent project.

In a separate "problem statement," David says that, in private communications, he has been told that patent worries are behind this decision. In the absence of an official statement from the Linux Foundation, David's statement has shaped the public discussion. LWN was able to get some more information by talking with Linux Foundation VP Amanda McPherson and from an IRC conversation held on August 9. As one might expect, the full story is more complicated - but also simpler.

Patents - or something else?

The legal concerns are a good place to start, though. The Linux Foundation does a lot of work that is not related to MeeGo at all. It hosts Linux-related conferences around the world (and funds developer travel to those events), runs a growing training operation, helps companies join the contributor community, promotes various standards activities, provides a vendor-neutral paycheck for Linus Torvalds, and more. One could reasonably argue that putting all of those activities at risk for a MeeGo-related application repository would be an irresponsible thing for the Foundation's management to do. It is also possible to argue that this risk is not zero; there are organizations which might see value in disrupting the Foundation's operations. The risk of simple patent trolls is also, clearly, something that must be kept in mind.

So it is not possible to immediately conclude that the Linux Foundation's refusal to support the repository is unreasonable or improper, even if legal worries were the only consideration. There may indeed be real concerns that could lead to collateral damage far beyond the MeeGo project itself. It would be a shame to see Linus on the street with a "will pull git trees for food" sign, so this is an outcome that is well worth avoiding. But, the Linux Foundation says, legal concerns are only a part of what is going on here.

The real issue, according to Amanda, is simply a matter of resources. Setting up this repository would require comprehensive legal review, certainly, but it also requires system administrator time, trademark and branding attention, and little things like user support. Even if the repository has big warnings to the effect that applications are unsupported and guaranteed to set devices on fire, there will still be a stream of users with battery life problems contacting the Foundation for support. The Linux Foundation is a small operation lacking much in the way of spare staff time; it simply felt that the resources required to establish and operate this repository were not available.

Beyond that, it seems, plans for MeeGo never called for a central "app store." Instead, MeeGo is meant to be the upstream for vendors to use in the creation of their products; it was assumed that those vendors (or their users) would set up multiple application repositories as they saw fit. A central application repository blessed and operated by the MeeGo project could be seen as competing with (and discouraging) those efforts.

Questions and the way forward

All told, it seems like a reasonable explanation for what has happened here. That said, there are some questions that arise from this incident.

First among those is: why was the Foundation so secretive about its decision? Somebody clearly took the time to think the problem through, but, somehow, they could not find the time to explain - in a public setting - the conclusion that they came to. This time of year, allowances need to be made for vacations and frantic preparations for LinuxCon, but it still should have been possible to avoid being completely silent on the issue. Silence is, at best, unfair to the developers who have been working to make the repository a reality. At worst, it feeds doubts about the Foundation's motivations and allows other mysteries to endure. For example, why is distribution of MeeGo applications via the project's open build server (something which has been going on for a year) less worrisome? We are told that the Foundation will have an official statement out in the near future; hopefully that will clarify things.

A more important question, though, might be: is the Linux Foundation the right home for the MeeGo project? If the Foundation's concerns and limitations get in the way of what the MeeGo project wants to do, it might be time to ask if MeeGo needs to find a home more suitable to the activities it is trying to pursue. Thus far, it must be said, the number of people asking that question in public is quite small.

Then, of course, there is the most relevant question of all: what should the MeeGo project do now? The above-linked problem statement offers three alternatives:

  • Keep the plan as-is under the assumption that somehow the Linux Foundation's resistance will go away. This seems like an unlikely outcome.

  • Move the application repository to a new domain outside the control of the Linux Foundation. Everything else would remain where it is now, with the possible exception of the build server.

  • Move everything away from the Linux Foundation and place it under the control of a separate non-profit organization. The Foundation would retain the MeeGo trademark and compliance process; everything else would happen elsewhere.

One does not have to read too deeply between the lines of the problem statement to get the sense that David favors the final option listed above. But David (whose job is "MeeGo devices build systems architect" at Nokia) is not in a position to make that decision. Some other participants in the MeeGo community have called for the project to become more independent, but, if there is truly a groundswell in favor of that idea among serious MeeGo developers and the companies with stakes in MeeGo, it has not yet become apparent in the public forums. There does not appear to be any widespread desire for a new home - or a fork - in this community.

So what is almost certainly going to happen is that the repository will be set up on its own domain, probably on servers installed in Europe, and third-party applications will find a home there. The use of that domain name has already been approved, so there need be no fear of a repeat of the Smeegol incident in this case. It also seems that linking to the new repository from the project's sites will not be a problem; linking to external repositories was always in the plans.

This should be a solution that works for everybody involved, even if some participants would rather have seen the repository live within the project's own domain. The biggest failure in this story is not, as it might have initially seemed, one of project governance; it's really just a problem of communications. That particular failure will, with luck, be remedied soon. The project will get its repository, and everybody can go back to the real problem: getting interesting MeeGo-based devices into the market where the rest of us can actually get our hands on them.

[Update: the Linux Foundation's Brian Warner posted an update on the decision shortly after this article was published.]


Desktop Summit: Copyright assignments

By Jake Edge
August 10, 2011

Copyright assignment (or licensing) agreements for projects are a rather contentious issue, one that reflects differing views of how free software will be best-positioned to grow over the coming years. Several perspectives were on display at the "Panel on Copyright Assignment" held on August 6 at the Desktop Summit in Berlin. The panel consisted of two opponents of such agreements, Michael Meeks and Bradley Kuhn, as well as perhaps their most outspoken proponent, Mark Shuttleworth, with GNOME Foundation executive director Karen Sandler handling the moderation duties. In the end, each position was well-represented, but, as might be guessed, neither side convinced the other; each will likely continue to pursue its path over the coming years.


Sandler asked the assembled room—packed with 400 or more attendees—how many knew about the issues surrounding copyright assignment and around half raised their hands. More or less the same half responded that they already had strong feelings about the subject, though in which direction wasn't necessarily clear from the query itself. Based on the general feeling in the free software world—perhaps reflected in the 2-1 ratio on the panel—it is probably reasonable to assume that most of the strong feelings were in the opposition camp.

The difference between copyright assignment agreements (CAAs) and copyright licensing agreements (CLAs)—between assigning copyright to an organization and giving the organization a broad license to do what it wishes with the contribution—was not really under discussion, as Sandler pointed out in the introduction. For the most part, the differences between the two are not germane to the dispute. She then asked each of the panelists to introduce themselves and to outline their position.

Setting the stage

LibreOffice and longtime GNOME hacker Michael Meeks went first, with objections that, he said, came under three separate headings: scalability, conflict, and ownership. Those make a nice acronym that also summarizes his feelings, he said. Meeks was at one time an advocate of copyright agreements, but changed his mind because he has "seen it go badly wrong".

The scalability problem is that giving the rights to your code to a company leads to it having a monopoly. The company typically uses a strong copyleft outbound license (e.g. GPL) to drive any proprietary licensing business to itself. This can lead to conflicts, he said, like "hackers vs. suits" or the community vs. the company. If contributors don't feel like they own part of the code, they "feel very differently about the project", he said. They don't necessarily feel any allegiance to the company, but the loss of ownership can make them feel like they aren't really part of the project either. That can lead to less vibrant and engaged communities.

Canonical and Ubuntu founder Mark Shuttleworth said that he thinks of himself as a gardener and "looks at how ecosystems grow and thrive". As a businessman, he wants to be part of a thriving ecosystem and believes that others in the room share that view. Today, we don't have a thriving ecosystem for the Linux desktop, he said. Even in the face of Microsoft domination, iOS and Android have built thriving ecosystems and he would like to see the Linux desktop do the same.

"Freedom is not on the table in these discussions", Shuttleworth said. While code that is contributed under one of these agreements could go proprietary, the code itself is not at risk as it will always be available under the free license that it was distributed under. The Linux ecosystem needs lots of smaller companies and startups to be involved, but that isn't happening, he said, as they are developing for Android, iOS, or the web—and are not at the Desktop Summit.

There are several ways to get companies to participate in a free software project, Shuttleworth said. One way is to have a "nexus project" like the Linux kernel, where companies have to participate in its development, though they "hate it" and wish that they weren't required to do so, he said. Another way is to have a "core shared platform" with a permissive license that allows companies to add "secret sauce extensions"; he pointed to the PostgreSQL community as an example. Aggregation is another path—used by Linux distributions—to take the work of multiple communities, package them up, and make quality or IP promises about the bundle to attract customers. Lastly, he mentioned the single-vendor model, which clearly states that there is an organization behind the project, like Mozilla. There are fears about that model, he said, but the way those fears are dealt with in mature markets is via competition.

Bradley Kuhn of the Software Freedom Conservancy disagreed with Shuttleworth: "software freedom is always on the table", he said, and it is always under threat. Kuhn was formerly the executive director of the Free Software Foundation (FSF) and currently serves on its board. He noted that the FSF put a lot of effort into putting together a legal framework where projects can work with companies on equal footing. The license that is used by a community is in some ways the constitution of that community, but a copyright agreement can change that constitution in unilateral ways. Copyleft is designed to make sure that derivatives of the code are always available under the terms which the original code was released under.

Kuhn noted that some might be a bit surprised at his opposition, given that the FSF requires copyright assignment for its projects. He has been advocating making that optional rather than mandatory, but has so far been unable to convince the board to make that change. But there is "a tremendous amount of value" in assigning copyrights to an organization that is "completely aligned with free software" such as the FSF. The FSF has made promises that the code and its derivatives will always be available under a free license, but unless a company makes those same kind of promises, there is no such guarantee. So far his requests to some companies to make promises of that sort have been met with a change in the subject, he said.


Meeks asked Shuttleworth if he agreed that signing a copyright agreement with a company gives that company a monopoly, and Shuttleworth said that he didn't. If the code is available under the GPL, there is no monopoly, he said, though the company with a majority of the copyright is in a "beneficial position". Kuhn argued that Shuttleworth was changing the subject, because the monopoly is on the ability to license the code under proprietary terms. That is a "trite and obvious" observation, Shuttleworth said, in agreeing that it does give that kind of monopoly power to the copyright holder.

Meeks said that the reason that there are two major Linux desktop projects stems from the proprietary licensing problem, referring to the non-free Qt licensing that existed at the time of the GNOME project's founding. He believes that having both of those is a "sad waste". Part of the problem for Linux is lots of "pointless duplication", he said. In response to a question from Shuttleworth, Meeks said that having both the Firefox and Chrome browsers was pointless duplication in his view. "I see nothing wrong with Firefox", he said.

Signing requirements and "friction"

Shuttleworth pointed out that copyright agreements "can cause problems and we should be careful to address them". One of those problems is the "friction" caused by having to sign an agreement at all, noting that one of the great strengths of the GPL is that you don't have to sign it. But, in cases where an agreement is needed, we can reduce the friction, which is what Project Harmony was set up to do, he said. By reducing the number of differing agreements, companies like Canonical would not have to look at up to 300 different ones every year, he said.

Kuhn said that his goal would be for Canonical and others to never have to sign such an agreement at all. If the license under which the code is contributed is the same as that under which the project is released (i.e. "inbound == outbound"), there would be no need for an agreement. The GPL is designed to handle that situation properly, Kuhn said. He also noted that he was concerned about the Harmony agreements because they could lead to the same kind of confusion that the Creative Commons (CC) licenses did. By having multiple different kinds of agreements under the same top-level name (e.g. Harmony or CC), there can be confusion as to what is meant, he said. It took time to separate the freedom-oriented CC licenses from the non-free choices, and he worries that a similar situation may arise for Harmony.

Or later

Sandler asked the panelists about using the "or later" clause (e.g. GPLv2 or later, aka "plus" licenses) when licensing code and what the implications were. Kuhn noted that the Linux kernel famously does not use "or later". He said that doing so is putting trust in another organization, and that if you don't trust that organization "deeply", don't sign a copyright agreement with them or add an "or later" clause to a license that is under their control.

But Shuttleworth is concerned that using "inbound == outbound" licensing is "believing that the world won't change". While licensing won't change overnight, it will eventually change to address shifts in the legal landscape. Just as there needed to be a GPLv3 to address shortcomings in v2, there will be a GPLv4 and a GPLv5 some day, he said. Richard Stallman will not be around forever, so you are placing your trust in the institution of the FSF, he said. It would be better to place that trust in the project itself and allow it to decide if any license changes are needed down the road.

Essentially disagreeing with both, Meeks thinks that "or later" is "vital". He says that he trusts the FSF and thinks that others should too, but beyond that, "the FSF is less of a risk than killing your project through bureaucracy". One reason that companies want to be able to get proprietary licenses to free software is so that "they can get patent protection that isn't available to us", he said.

Patent concerns

There is also the question of patent licenses, Meeks said. The Harmony agreements assign patent rights along with the other rights and if the code is released under a permissive license (e.g. BSD), the patent rights accumulated by the company don't necessarily flow back to those who receive the code. It would be nice to have the community be in the same boat with respect to patents as the other companies that license the code, but that may not be true if the Harmony agreements are used, he said. "Harmony makes it more complicated, not simpler", he said.

Patents were "debated vigorously" as part of the process of coming up with the Harmony agreements, Shuttleworth said. He was a "tangential observer" of the process, he said, but did see that the patent issue was discussed at length. The problem is that you have to be careful what you ask for inbound with respect to patents if you want to be able to use various kinds of outbound licenses, he said. Patents are "a very serious problem", but the Harmony agreements just give the ability to ship the code with a license to any patents held by the contributor that read on the contribution.

The GPLv3 was designed to ensure that everyone is getting the same patent rights, Kuhn said. Part of the reason for the update was because the GPLv2 was not as good in that regard, he said.

Dead developers and companies

The problem of the "dead developer" is one place where some kind of copyright agreement can help, Sandler said. If there is a need to relicense a project where one or more copyright holders is dead or otherwise unreachable, what can be done if there is no agreement, she asked. Meeks said that the "dead company argument is also interesting". There are more developers than companies, so maybe they die more often, but we have already seen problems coming from dead companies, he said. "Plus" licenses can help there, he said. Meeks also said that he was happy to hear that Canonical was using plus licenses, but Shuttleworth was quick to point out that was not the case. Canonical's preferred license is GPLv3, though it will contribute to projects with plus licenses, he said.

We have seen problems with dead companies that have resulted in other companies coming in to "pick at the carcass", Kuhn said. Sometimes part of that carcass is free software projects where the new company then changes all of the policies going forward, he said. The dead developer situation is very different as there are very personal decisions that developers may want to make regarding their code after they are gone. That could include appointing someone to make those decisions—as Kuhn has done—after they pass. Shuttleworth was skeptical about relying on people to get their affairs in order before they go.

The panel wrapped up with a short discussion of competition, with Shuttleworth saying that the free software world fears competition and tries to prevent anyone from getting a competitive advantage. Meeks believes that there is already enough competition from the proprietary software companies, so adding it into the free software community is not needed. Kuhn's position is that the "ecosystem that has worked so far is a copyleft ecosystem" without any kind of copyright agreement.

While interesting, the panel was given too short a slot, so it felt very compressed. In addition, there was no opportunity for the audience to ask questions, which is something that Kuhn noted as one of the most important parts of any kind of panel discussion. The balance on the panel also seemed a bit skewed, though, as noted above, that may roughly reflect the community's opinion on the matter. A neutral third member of the panel, replacing either Meeks or Kuhn, might have been better, though Sandler did a nice job of steering things as a neutral moderator. In some ways like the topic itself, the panel was quite interesting but vaguely unsatisfying. There are certainly no easy answers, and the community will likely struggle with the issue for many years to come.

[ I would like to thank the GNOME Foundation and KDE e.V. for their assistance in funding my trip to the Desktop Summit. ]


Page editor: Jonathan Corbet


Desktop Summit: Crypto consolidation

By Jake Edge
August 10, 2011

While it is "boring stuff" at some level, consolidating the various desktop cryptography solutions has the potential to "pleasantly surprise our users and impress them", according to Stef Walter in his presentation at this year's Desktop Summit in Berlin. His talk, "Gluing Together Desktop Crypto", outlined his plan to use PKCS#11 as the "plumbing" that will serve as the foundation for desktop-wide and cross-desktop sharing of keys and certificates. Once the boring parts are taken care of, "we can do more interesting things", he said.

[Stef Walter]

The basic problem that Walter is trying to solve is that many of our applications have their own ideas of where to store keys and certificates, which leads to duplication and, sometimes, user confusion. If two (or more) separate applications are accessing the same site and service (e.g. https), it would be much simpler if they were both using the same cryptographic information. Centralizing the storage of keys and certificates will make it easier for users to migrate that data to new systems as well.

The current crop of desktop encryption tools is good, and many of the tools are very stable, he said, but they need to be glued together to make them more usable. He is involved in both gnome-keyring and Seahorse development and noted that those applications already do some consolidation. For example, gnome-keyring stores both ssh and GPG keys, but it needs to be done "one level lower", he said. There needs to be a single store for keys and certificates, "so the user doesn't have to care about where they live". There is lots of diversity on the desktop, which "should be celebrated" but not pushed out to users, he said.

Fedora is currently porting applications to use Mozilla's Network Security Services (NSS), which has a certificate store, but he has an alternate proposal that will still work with Fedora's solution. He is proposing to use PKCS#11 (p11) as the standard for keys and certificates, because it has a number of different useful characteristics including integrating with hardware crypto devices and smart cards.

There are three steps that need to be taken to hide the complexity of crypto on the desktop from the user, he said. First, keys and certificates need to be stored in such a way that all applications and crypto libraries can access the same ones. Second, the framework needs to make consistent trust decisions. Today it is too often the case that one desktop application can connect to a particular service or system while others on the same box cannot, or each must be configured separately. Lastly, there needs to be a standard, consistent way for applications (and users) to refer to keys and certificates.

Key storage is special, Walter said, so a simple file or database cannot be used for that purpose. The idea is that some keys never leave the key store; instead, the store performs operations (such as signing) with those keys and returns the resulting blob for use elsewhere. According to Walter, p11 makes for a good key store that can be used to glue together the different libraries that are used in various applications.
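That "keys never leave the store" pattern can be sketched in a few lines of Python. This is purely an illustration, with a toy HMAC secret standing in for real PKCS#11 key objects; the KeyStore class and its method names are hypothetical, not part of any of the projects discussed:

```python
import hashlib
import hmac
import os

class KeyStore:
    """Toy key store: secrets are generated inside and never handed out."""
    def __init__(self):
        self._keys = {}                      # label -> secret bytes (private)

    def generate(self, label):
        self._keys[label] = os.urandom(32)   # key material stays in here

    def sign(self, label, data):
        # Callers receive only the signed blob, never the key itself.
        return hmac.new(self._keys[label], data, hashlib.sha256).digest()

store = KeyStore()
store.generate("mail-key")
sig = store.sign("mail-key", b"message")
assert len(sig) == 32                        # a blob, usable elsewhere
```

A real p11 key store works the same way at the API level: applications ask the module to perform an operation with a named key and get back the result, not the key material.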

The p11 standard has "modules" that are something like drivers for different kinds of devices or storage facilities. It has a C API that is "old and baroque", but it does what is needed, he said. It is "not perfectly awesome, but what it has going for it is that it is supported in everything". Gnome-keyring and NSS already support it, while support in GLib, OpenSSL, and others is a work in progress.

When using p11 on the desktop, there are some coordination issues that need to be resolved for a single application that is using multiple libraries which all use the same p11 module. That's where p11-kit comes into play. It will coordinate the access to p11 modules as well as providing a consistent way for applications to determine which modules are installed and enabled on the system. It also handles some configuration duties, for example by telling applications and crypto libraries what key store they should be using.

Any application that is using p11 can use p11-kit because it can act as a p11 proxy module. In that module mode, it can be used in various places without actually being integrated into the code; Walter specifically mentioned Java and Solaris as two possibilities there. It's BSD-licensed and has no dependencies other than libc.

Trust decisions are another area that Walter would like to see addressed and he thinks that can be done using p11 as well. A trust decision is a positive or negative assertion about a particular object (e.g. key or certificate). It can also associate a level of trust in the object. The p11-glue umbrella project, which is where this work is being done, has a proposed specification for storing these assertions.
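The shape of such an assertion store can be illustrated with a toy in-memory version. The data model below is a guess at the general idea (an object identified by a hash, a purpose, and a positive or negative decision), not the p11-glue specification itself:

```python
import hashlib

# Illustrative only: a trust assertion maps an object (here, a certificate
# fingerprint) plus a purpose to a positive or negative decision.
assertions = {}

def assert_trust(cert_der, purpose, trusted):
    fp = hashlib.sha256(cert_der).hexdigest()
    assertions[(fp, purpose)] = trusted

def is_trusted(cert_der, purpose):
    fp = hashlib.sha256(cert_der).hexdigest()
    return assertions.get((fp, purpose))   # True / False / None (no assertion)

assert_trust(b"fake-cert-1", "server-auth", True)
assert_trust(b"fake-cert-2", "server-auth", False)   # a negative assertion
assert is_trusted(b"fake-cert-1", "server-auth") is True
assert is_trusted(b"fake-cert-2", "server-auth") is False
assert is_trusted(b"fake-cert-3", "server-auth") is None
```

The three-valued result matters: "no assertion" is distinct from an explicit negative one, which is what makes revocations expressible in the same store.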

Since the keys and other objects don't leave the key store, there needs to be a way to consistently refer to them so that they can be reused. There is an IETF RFC draft for p11 universal resource identifiers (URIs) that could be used. Those objects could then be referred to using the pkcs11: URI scheme.
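The draft describes URIs that look roughly like pkcs11:token=My%20Token;object=mail-key;type=private, with semicolon-separated, percent-encoded attributes. A minimal parser for that general shape (a sketch of the path-attribute syntax only; query attributes such as pin-source are ignored here):

```python
from urllib.parse import unquote

def parse_pkcs11_uri(uri):
    """Parse the path attributes of a pkcs11: URI: ';'-separated,
    percent-encoded key=value pairs.  Query attributes are ignored."""
    scheme, _, rest = uri.partition(":")
    assert scheme == "pkcs11"
    path, _, _query = rest.partition("?")
    attrs = {}
    for part in path.split(";"):
        if part:
            key, _, value = part.partition("=")
            attrs[key] = unquote(value)
    return attrs

attrs = parse_pkcs11_uri("pkcs11:token=My%20Token;object=mail-key;type=private")
assert attrs == {"token": "My Token", "object": "mail-key", "type": "private"}
```

With identifiers like these, an application's configuration can name "the key in that store" without ever holding the key itself.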

None of the p11 support is "written in stone", Walter said. It is all still being designed and developed so he invited interested parties to get involved and help shape it. Once the code is written and gets into the distributions, users will have a much easier time dealing with crypto objects for multiple applications and desktops.

Several audience questions further explored the possibilities that this work would enable, including Mac OS X and Windows support and how to handle PIN queries for accessing smart cards. One area that still needs a lot of attention is certificate revocation lists (CRLs). Revocations are essentially negative trust assertions that could be stored. Another possibility is to make a p11 module that exposes a directory of CRLs that can be used by applications. There is "no actual progress there", he said, but there are "solid plans". It is a topic that is under active IETF discussion as well, he said.

Making desktop crypto work reliably, and largely invisibly, across all of the applications on the free desktop would be an enormous boon for users. Having Firefox and Chromium (as well as other browsers) share certificates (and the level of trust the user has in them) would be quite nice, but it reaches out even further than that. Lots of other applications are rendering web content these days, so those could share the same information too. Multiple email clients could hook into the same keys, regardless of whether they were GPG keys or X.509 keys from some other email encryption scheme. Switching to a different desktop environment would no longer necessitate a repopulation of keys and certificates for various services. And so on. As Walter said, it certainly has the possibility of surprising and even impressing users who have likely been bitten by some of these problems in the past.

[ I would like to thank KDE e.V. and the GNOME Foundation for assistance with my travel to the Desktop Summit. ]

Comments (10 posted)

Brief items

Quotes of the week

Through 20 years of effort, we've successfully trained everyone to use passwords that are hard for humans to remember, but easy for computers to guess.
-- xkcd

Hint to distributions and software developers: if you're going to use libavcodec (or libavformat, etc.) for your project, consider restricting the default build to include only *commonly* used codecs and demuxers. The code quality of many of the more obscure formats is questionable at best.
-- Dan Rosenberg

Comments (7 posted)

EFF: Encrypt the Web with HTTPS Everywhere

The Electronic Frontier Foundation (EFF), in collaboration with the Tor Project, has launched an official 1.0 version of HTTPS Everywhere. "HTTPS Everywhere was first released as a beta test version in June of 2010. Today's 1.0 version includes support for hundreds of additional websites, using carefully crafted rules to switch from HTTP to HTTPS." LWN covered HTTPS Everywhere in June 2010.

Full Story (comments: 10)

New vulnerabilities

cifs-utils: denial of service

Package(s):cifs-utils CVE #(s):CVE-2011-2724
Created:August 9, 2011 Updated:September 23, 2011
Description: From the Red Hat bugzilla:

Originally the CVE-2010-0547 identifier has been assigned by Common Vulnerabilities and Exposures to the following security issue:

client/mount.cifs.c in mount.cifs in smbfs in Samba 3.4.5 and earlier does not verify that the (1) device name and (2) mountpoint strings are composed of valid characters, which allows local users to cause a denial of service (mtab corruption) via a crafted string.

Later a bug was found in the upstream patch for this issue. More specifically:

check_mtab() calls check_newline() to check device and directory name. check_newline() returns EX_USAGE (1) when error is detected, while check_mtab() expects -1 to indicate an error.

This bug in original CVE-2010-0547 fix (not to propagate the error properly) caused mount.cifs command on specially-crafted mount point (containing newline character) still to succeed and potentially, to corrupt mtab table on the systems, where CVE-2010-0296 glibc fix was not applied yet.

Gentoo 201206-22 samba 2012-06-24
Oracle ELSA-2012-0313 samba 2012-03-07
Mandriva MDVSA-2011:148 samba 2011-10-11
Ubuntu USN-1226-1 samba 2011-10-04
Ubuntu USN-1226-2 cifs-utils 2011-10-04
CentOS CESA-2011:1220 samba3x 2011-09-22
Scientific Linux SL-samb-20110829 samba3x 2011-08-29
Scientific Linux SL-Samb-20110829 samba, cifs-utils 2011-08-29
Red Hat RHSA-2011:1221-01 samba, cifs-utils 2011-08-29
Red Hat RHSA-2011:1220-01 samba3x 2011-08-29
Fedora FEDORA-2011-9847 cifs-utils 2011-07-31
Fedora FEDORA-2011-9831 cifs-utils 2011-07-31

Comments (none posted)

drupal7: restriction bypass

Package(s):drupal7 CVE #(s):
Created:August 9, 2011 Updated:August 10, 2011
Description: From the Drupal advisory:

Drupal 7 contains two new features: the ability to attach File upload fields to any entity type in the system and the ability to point individual File upload fields to the private file directory.

If a Drupal site is using these features on comments, and the parent node is denied access (either by a node access module or by being unpublished), the file attached to the comment can still be downloaded by non-privileged users if they know or guess its direct URL.

Fedora FEDORA-2011-9893 drupal7 2011-07-31
Fedora FEDORA-2011-9845 drupal7 2011-07-31

Comments (none posted)

ecryptfs-utils: multiple vulnerabilities

Package(s):ecryptfs-utils CVE #(s):CVE-2011-1831 CVE-2011-1832 CVE-2011-1833 CVE-2011-1834 CVE-2011-1835 CVE-2011-1836 CVE-2011-1837
Created:August 10, 2011 Updated:January 9, 2012
Description: From the Ubuntu advisory:

Vasiliy Kulikov and Dan Rosenberg discovered that eCryptfs incorrectly validated permissions on the requested mountpoint. A local attacker could use this flaw to mount to arbitrary locations, leading to privilege escalation. (CVE-2011-1831)

Vasiliy Kulikov and Dan Rosenberg discovered that eCryptfs incorrectly validated permissions on the requested mountpoint. A local attacker could use this flaw to unmount to arbitrary locations, leading to a denial of service. (CVE-2011-1832)

Vasiliy Kulikov and Dan Rosenberg discovered that eCryptfs incorrectly validated permissions on the requested source directory. A local attacker could use this flaw to mount an arbitrary directory, possibly leading to information disclosure. A pending kernel update will provide the other half of the fix for this issue. (CVE-2011-1833)

Dan Rosenberg and Marc Deslauriers discovered that eCryptfs incorrectly handled modifications to the mtab file when an error occurs. A local attacker could use this flaw to corrupt the mtab file, and possibly unmount arbitrary locations, leading to a denial of service. (CVE-2011-1834)

Marc Deslauriers discovered that eCryptfs incorrectly handled keys when setting up an encrypted private directory. A local attacker could use this flaw to manipulate keys during creation of a new user. (CVE-2011-1835)

Marc Deslauriers discovered that eCryptfs incorrectly handled permissions during recovery. A local attacker could use this flaw to possibly access another user's data during the recovery process. This issue only applied to Ubuntu 11.04. (CVE-2011-1836)

Vasiliy Kulikov discovered that eCryptfs incorrectly handled lock counters. A local attacker could use this flaw to possibly overwrite arbitrary files. The default symlink restrictions in Ubuntu 10.10 and 11.04 should protect against this issue. (CVE-2011-1837)

Oracle ELSA-2013-1645 kernel 2013-11-26
Debian DSA-2443-1 linux-2.6 2012-03-26
SUSE SUSE-SU-2012:0364-1 Real Time Linux Kernel 2012-03-14
Oracle ELSA-2012-0150 kernel 2012-03-07
Debian DSA-2382-1 ecryptfs-utils 2012-01-07
SUSE SUSE-SU-2011:1319-2 Linux kernel 2011-12-14
SUSE SUSE-SU-2011:1319-1 Linux kernel 2011-12-13
SUSE SUSE-SA:2011:046 kernel 2011-12-13
Ubuntu USN-1256-1 linux-lts-backport-natty 2011-11-09
openSUSE openSUSE-SU-2011:1222-1 kernel 2011-11-08
openSUSE openSUSE-SU-2011:1221-1 kernel 2011-11-08
Ubuntu USN-1245-1 linux-mvl-dove 2011-10-25
Ubuntu USN-1240-1 linux-mvl-dove 2011-10-25
Ubuntu USN-1239-1 linux-ec2 2011-10-25
Scientific Linux SL-kern-20111020 kernel 2011-10-20
CentOS CESA-2011:1386 kernel 2011-10-21
Red Hat RHSA-2011:1386-01 kernel 2011-10-20
Ubuntu USN-1227-1 kernel 2011-10-11
Fedora FEDORA-2011-12874 kernel 2011-09-18
Scientific Linux SL-kern-20111005 kernel 2011-10-05
Red Hat RHSA-2011:1350-01 kernel 2011-10-05
Ubuntu USN-1219-1 linux-lts-backport-maverick 2011-09-29
CentOS CESA-2011:1241 ecryptfs-utils 2011-09-22
Ubuntu USN-1211-1 linux 2011-09-21
Ubuntu USN-1212-1 linux-ti-omap4 2011-09-21
Ubuntu USN-1204-1 linux-fsl-imx51 2011-09-13
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Ubuntu USN-1253-1 linux 2011-11-08
Fedora FEDORA-2011-10718 ecryptfs-utils 2011-08-12
Fedora FEDORA-2011-10733 ecryptfs-utils 2011-08-12
Scientific Linux SL-ecry-20110831 ecryptfs-utils 2011-08-31
Red Hat RHSA-2011:1241-01 ecryptfs-utils 2011-08-31
openSUSE openSUSE-SU-2011:0902-1 ecryptfs-utils 2011-08-15
SUSE SUSE-SU-2011:0898-1 ecryptfs-utils 2011-08-12
Ubuntu USN-1188-1 ecryptfs-utils 2011-08-09

Comments (none posted)

flash-plugin: doom and destruction

Package(s):flash-plugin CVE #(s):CVE-2011-2130 CVE-2011-2134 CVE-2011-2135 CVE-2011-2136 CVE-2011-2137 CVE-2011-2138 CVE-2011-2139 CVE-2011-2140 CVE-2011-2414 CVE-2011-2415 CVE-2011-2416 CVE-2011-2417 CVE-2011-2425
Created:August 10, 2011 Updated:November 8, 2011
Description: The proprietary flash plugin contains another long list of vulnerabilities exploitable remotely via a hostile flash file.
Gentoo 201201-19 acroread 2012-01-30
Red Hat RHSA-2011:1434-01 acroread 2011-11-08
Gentoo 201110-11 adobe-flash 2011-10-13
SUSE SUSE-SU-2011:0894-1 flash-player 2011-08-12
openSUSE openSUSE-SU-2011:0897-1 flash-player 2011-08-12
SUSE SUSE-SA:2011:033 flash-player 2011-08-10
Red Hat RHSA-2011:1144-01 flash-plugin 2011-08-10

Comments (1 posted)

glpi: information disclosure

Package(s):glpi CVE #(s):CVE-2011-2720
Created:August 4, 2011 Updated:February 7, 2012

Description: From the Red Hat Bugzilla entry:

It was found that GLPI, the Information Resource-Manager with an additional Administration-Interface, did not properly blacklist certain sensitive variables (like GLPI username and password). A remote attacker could use this flaw to obtain access to plaintext form of these values via specially-crafted HTTP POST request.

Mandriva MDVSA-2012:014 glpi 2012-02-06
Fedora FEDORA-2011-9690 glpi-mass-ocs-import 2011-07-26
Fedora FEDORA-2011-9690 glpi-pdf 2011-07-26
Fedora FEDORA-2011-9690 glpi-data-injection 2011-07-26
Fedora FEDORA-2011-9690 glpi 2011-07-26
Fedora FEDORA-2011-9639 glpi 2011-07-23

Comments (none posted)

libcap: insecure chroot

Package(s):libcap CVE #(s):
Created:August 8, 2011 Updated:August 10, 2011
Description: From the CWE entry:

Improper use of chroot() may allow attackers to escape from the chroot jail. The chroot() function call does not change the process's current working directory, so relative paths may still refer to file system resources outside of the chroot jail after chroot() has been called.

Fedora FEDORA-2011-9844 libcap 2011-07-31

Comments (none posted)

p7zip: multiple vulnerabilities

Package(s):p7zip CVE #(s):
Created:August 9, 2011 Updated:August 10, 2011
Description: p7zip is a port of 7za.exe for Unix. 7-Zip is a file archiver with a very high compression ratio. The original version can be found at

p7zip 9.20.1 fixes multiple bugs.

Fedora FEDORA-2011-9853 p7zip 2011-07-31

Comments (none posted)

phpMyAdmin: multiple vulnerabilities

Package(s):phpMyAdmin CVE #(s):CVE-2011-2643 CVE-2011-2718 CVE-2011-2719
Created:August 5, 2011 Updated:August 15, 2011
Description: From the CVE entries:

Directory traversal vulnerability in sql.php in phpMyAdmin 3.4.x before, when configuration storage is enabled, allows remote attackers to include and execute arbitrary local files via directory traversal sequences in a MIME-type transformation parameter. (CVE-2011-2643)

Multiple directory traversal vulnerabilities in the relational schema implementation in phpMyAdmin 3.4.x before allow remote authenticated users to include and execute arbitrary local files via directory traversal sequences in an export type field, related to (1) libraries/schema/User_Schema.class.php and (2) schema_export.php. (CVE-2011-2718)

libraries/auth/swekey/swekey.auth.lib.php in phpMyAdmin 3.x before and 3.4.x before does not properly manage sessions associated with Swekey authentication, which allows remote attackers to modify the SESSION superglobal array, other superglobal arrays, and certain swekey.auth.lib.php local variables via a crafted query string, a related issue to CVE-2011-2505. (CVE-2011-2719)

Gentoo 201201-01 phpmyadmin 2012-01-04
Mandriva MDVSA-2011:124 phpmyadmin 2011-08-14
Fedora FEDORA-2011-9734 phpMyAdmin 2011-07-26
Fedora FEDORA-2011-9725 phpMyAdmin 2011-07-26

Comments (none posted)

quake3: arbitrary command/code execution

Package(s):quake3 CVE #(s):CVE-2011-1412 CVE-2011-2764
Created:August 9, 2011 Updated:March 8, 2012
Description: From the CVE entries:

sys/sys_unix.c in the ioQuake3 engine on Unix and Linux, as used in World of Padman 1.5.x before and OpenArena 0.8.x-15 and 0.8.x-16, allows remote game servers to execute arbitrary commands via shell metacharacters in a long fs_game variable. (CVE-2011-1412)

The FS_CheckFilenameIsNotExecutable function in qcommon/files.c in the ioQuake3 engine 1.36 and earlier, as used in World of Padman, Smokin' Guns, OpenArena, Tremulous, and ioUrbanTerror, does not properly determine dangerous file extensions, which allows remote attackers to execute arbitrary code via a crafted third-party addon that creates a Trojan horse DLL file. (CVE-2011-2764)

Mageia MGASA-2012-0148 tremulous 2012-07-09
Fedora FEDORA-2011-9898 openarena 2011-07-31
Fedora FEDORA-2011-9774 openarena 2011-07-31
Fedora FEDORA-2011-9898 quake3 2011-07-31
Fedora FEDORA-2011-9774 quake3 2011-07-31

Comments (none posted)

rsync: denial of service

Package(s):rsync CVE #(s):
Created:August 5, 2011 Updated:August 10, 2011
Description: From the Scientific Linux advisory:

The previous rsync security errata update, which was applied with the rsync tool update to version 3.0.6-4, introduced a patch which fixed the issue with missing memory deallocation. Due to an error in that patch, the following new issue appeared: when specifying the source or destination argument of the rsync command without the optional user@ argument, rsync failed to provide the correct parameters to an external command, such as ssh, and thus rsync failed with an error.

Scientific Linux SL-rsyn-20110721 rsync 2011-07-21

Comments (none posted)

squirrelmail: multiple vulnerabilities

Package(s):squirrelmail CVE #(s):CVE-2011-2752 CVE-2011-2753
Created:August 8, 2011 Updated:August 15, 2011
Description: From the CVE entries:

CRLF injection vulnerability in SquirrelMail 1.4.21 and earlier allows remote attackers to modify or add preference values via a \n (newline) character, a different vulnerability than CVE-2010-4555. (CVE-2011-2752)

Multiple cross-site request forgery (CSRF) vulnerabilities in SquirrelMail 1.4.21 and earlier allow remote attackers to hijack the authentication of unspecified victims via vectors involving (1) the empty trash implementation and (2) the Index Order (aka options_order) page, a different issue than CVE-2010-4555. (CVE-2011-2753)

Scientific Linux SL-squi-20120208 squirrelmail 2012-02-08
Oracle ELSA-2012-0103 squirrelmail 2012-02-09
Oracle ELSA-2012-0103 squirrelmail 2012-02-09
CentOS CESA-2012:0103 squirrelmail 2012-02-08
CentOS CESA-2012:0103 squirrelmail 2012-02-08
Red Hat RHSA-2012:0103-01 squirrelmail 2012-02-08
Mandriva MDVSA-2011:123 squirrelmail 2011-08-13
Debian DSA-2291-1 squirrelmail 2011-08-08

Comments (none posted)

typo3-src: multiple vulnerabilities

Package(s):typo3-src CVE #(s):
Created:August 8, 2011 Updated:August 10, 2011
Description: From the Debian advisory:

Several remote vulnerabilities have been discovered in the TYPO3 web content management framework: cross-site scripting, information disclosure, authentication delay bypass, and arbitrary file deletion. More details can be found in the Typo3 security advisory:

Debian DSA-2289-1 typo3-src 2011-08-07

Comments (none posted)

virtualbox: privilege escalation

Package(s):virtualbox CVE #(s):CVE-2011-2300 CVE-2011-2305
Created:August 5, 2011 Updated:April 10, 2012
Description: From the CVE entries:

Unspecified vulnerability in Oracle VM VirtualBox 4.0 allows local users to affect confidentiality, integrity, and availability via unknown vectors related to Guest Additions for Windows. (CVE-2011-2300)

Unspecified vulnerability in Oracle VM VirtualBox 3.0, 3.1, 3.2, and 4.0 allows local users to affect confidentiality, integrity, and availability via unknown vectors. (CVE-2011-2305)

Gentoo 201204-01 virtualbox 2012-04-09
openSUSE openSUSE-SU-2011:0873-1 virtualbox 2011-08-05

Comments (none posted)

wireshark: denial of service

Package(s):wireshark CVE #(s):CVE-2011-2698
Created:August 8, 2011 Updated:January 14, 2013
Description: From the Pardus advisory:

An infinite loop was found in the way ANSI A Interface (IS-634/IOS) dissector of the Wireshark network traffic analyzer processed certain ANSI A MAP capture files. If Wireshark read a malformed packet off a network or opened a malicious packet capture file, it could lead to denial of service (Wireshark hang).

Oracle ELSA-2013-1569 wireshark 2013-11-26
Oracle ELSA-2013-0125 wireshark 2013-01-12
Scientific Linux SL-wire-20130116 wireshark 2013-01-16
CentOS CESA-2012:0509 wireshark 2012-04-24
Oracle ELSA-2012-0509 wireshark 2012-04-23
Scientific Linux SL-wire-20120423 wireshark 2012-04-23
Red Hat RHSA-2012:0509-01 wireshark 2012-04-23
openSUSE openSUSE-SU-2011:1142-1 wireshark 2011-10-18
Gentoo 201110-02 wireshark 2011-10-09
Fedora FEDORA-2011-9638 wireshark 2011-07-23
Fedora FEDORA-2011-9640 wireshark 2011-07-23
Pardus 2011-107 wireshark 2011-08-04

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.1-rc1, released on August 7. According to Linus:

Notable? It depends on what you look for. VM writeback work? You got it. And there was some controversy over the iscsi target code. There's networking changes, there's the rest of the generic ACL support moving into the VFS layer proper, simplifying the filesystem code that was often cut-and-paste duplicated boiler plate. And making us faster at doing it at the same time. And there are power management interface cleanups.

But there's nothing *huge* here. Looks like a fairly normal release, as I said. Unless I've forgotten something.

All the details can be found in the long-format changelog.

Stable updates: 3.0.1 was released on August 4 with a long list of fixes. and came out on August 8. Greg has let it be known that the maintenance period for the 2.6.33 kernel may be coming to an end before too long.

Comments (none posted)

Quotes of the week

We just have to understand, that preempt_disable, bh_disable, irq_disable are cpu local BKLs with very subtle semantics which are pretty close to the original BKL horror. All these mechanisms are per cpu locks in fact and we need to get some annotation in place which will help understandability and debugability in the first place. The side effect that it will help RT to deal with that is - of course desired from our side - but not the primary goal of that exercise.
-- Thomas Gleixner

In fact, I'm seriously considering a rather draconian measure for next merge window: I'll fetch the -next tree when I open the merge window, and if I get anything but trivial fixes that don't show up in that "next tree at the point of merge window open", I'll just ignore that pull request. Because clearly people are just not being careful enough.
-- Linus Torvalds

We shouldn't do voodoo stuff. Or rather, I'm perfectly ok if you guys all do your little wax figures of me in the privacy of your own homes - freedom of religion and all that - but please don't do it in the kernel.
-- Linus Torvalds

Every time I get frustrated with doing paperwork, I simply imagine having the job of estimating how much time it takes to do paperwork, and I feel better immediately.
-- Valerie Aurora

Comments (none posted)

Mel Gorman releases MMTests 0.01

Kernel hacker Mel Gorman has released a test suite for the Linux memory management subsystem. He has cleaned up some scripts that he uses and made them less specific to particular patch sets. While not "comprehensive in any way", they may be useful to others. He has also published some raw results on tests that he has run recently. "I know the report structure looks crude but I was not interested in making them pretty. Due to the fact that some of the scripts are extremely old, the quality and coding styles vary considerably. This may get cleaned up over time but in the meantime, try and keep the contents of your stomach down if you are reading the scripts."

Full Story (comments: 1)

Kernel development news

TCP connection hijacking and parasites - as a good thing

By Jonathan Corbet
August 9, 2011
The 3.1 kernel will include a number of enhancements to the ptrace() system call by Tejun Heo. These improvements are meant to make reliable debugging of programs easier, but Tejun, it seems, is not one to be satisfied with mundane objectives like that. So he has posted an example program showing how the new features can be used to solve a difficult problem faced by checkpoint/restart implementations: capturing and restoring the state of network connections. The code is in an early stage of development; it's audacious and scary, but it may show how interesting things can be done.

The traditional ptrace() API calls for a tracing program to attach to a target process with the PTRACE_ATTACH command; that command puts the target into a traced state and stops it in its tracks. PTRACE_ATTACH has never been perfect; it changes the target's signal handling and can never be entirely transparent to the target. So Tejun supplemented it with a new PTRACE_SEIZE command; PTRACE_SEIZE attaches to the target but does not stop it or change its signal handling in any way. Stopping a seized process is done with PTRACE_INTERRUPT which, again, does not send any signals or make any signal handling changes. The result is a mechanism which enables the manipulation of processes in a more transparent, less disruptive way.

All of this seems useful, but it does not necessarily seem like part of a checkpoint/restart implementation. But it can help in an important way. One of the problems associated with saving the state of a process is that not all of that state is visible from user space. Getting around this limitation has tended to involve doing checkpointing from within the kernel or the addition of new interfaces to expose the required information; neither approach is seen as ideal. But, in many cases, the required information can be had by running in the context of the targeted process; that is where an approach based on ptrace() can have a role to play.

Tejun took on the task of saving and restoring the state of an open TCP connection for his example implementation. The process starts by using ptrace() to seize and stop the target thread(s); then it's just a matter of running some code in that process's context to get the requisite information. To do so, Tejun's example program digs around in the target's address space for a nice bit of memory which has execute permission; the contents of that memory are saved and replaced by his "parasite" code. A bit of register manipulation allows the target process to be restarted in the injected code, which does the needed information gathering. Once that's done, the original code and registers are restored, and the target process is as it was before all this happened.

The "parasite" code starts by gathering the basic information about open connections: IP addresses, ports, etc. The state of the receive side of each connection is saved by (1) copying any buffered incoming data using the MSG_PEEK option to recvmsg(), and (2) getting the sequence number to be read next with a new SIOCGINSEQ ioctl() command. On the transmit side, the sequence number of each queued outgoing packet - along with the packet data itself - must be captured with another pair of new ioctl() commands. With that done, the checkpointing of the network connection is complete.
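The receive-side trick is easy to demonstrate from user space: MSG_PEEK copies queued data without consuming it, so a later normal read still sees the same bytes. A small Python illustration over a loopback TCP connection (a stand-in for the checkpointed socket, not Tejun's code):

```python
import socket

# Build a loopback TCP connection standing in for the target's socket.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

msg = b"buffered incoming data"
cli.sendall(msg)

# Checkpoint step: peek at queued incoming data without consuming it.
peeked = conn.recv(len(msg), socket.MSG_PEEK)

# The data is still there for a normal read afterwards.
data = b""
while len(data) < len(msg):
    data += conn.recv(len(msg) - len(data))
assert data.startswith(peeked)

for s in (cli, conn, srv):
    s.close()
```

The sequence-number side of the checkpoint has no such user-space equivalent, which is exactly why the new ioctl() commands are needed.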

Restarting the connection - possibly in a different process on a different machine entirely - is a bit tricky; the kernel's idea of the connection must be made to match the situation at checkpoint time without perturbing or confusing the other side. That requires the restart code to pretend to be the other side of the connection for as long as it takes to get things in sync. The kernel already provides most of the machinery needed for this task: outgoing packets can be intercepted with the "nf_queue" mechanism, and a raw socket can be used to inject new packets that appear to be coming from the remote side.

So, at restart time, things start by simply opening a new socket to the remote end. Another new ioctl() command (SIOCSOUTSEQ) is used to set the sequence number before connecting to make it match the number found at checkpoint time. Once the connection process starts, the outgoing SYN packet will be intercepted - the remote side will certainly not be prepared to deal with it - and a SYN/ACK reply will be injected locally. The outgoing ACK must also be intercepted and dropped on the floor, of course. Once that is done, the kernel thinks it has an open connection, with sequence numbers matching the pre-checkpoint connection, to the remote side.

After that, it's a matter of restoring the incoming data that had been found queued in the kernel at checkpoint time; that is done by injecting new packets containing that data and intercepting the resulting ACKs from the network stack. Outgoing data, instead, can be replayed with a series of simple send() calls, but there is one little twist. Packets in the outgoing queue may have already been transmitted and received by the remote side. Retransmitting those packets is not a problem, as long as the size of those packets remains the same. If, instead, the system uses different offsets as it divides the outgoing data into packets, it can create confusion at the remote end. To keep that from happening, Tejun added one more ioctl() (SIOCFORCEOUTBD) to force the packets to match those created before the checkpoint operation began.
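The re-segmentation concern boils down to splitting the replayed byte stream at the boundaries recorded at checkpoint time, so that each retransmitted segment is byte-for-byte identical to the original the remote side may already have ACKed. In outline (a hypothetical helper, not the actual kernel mechanism):

```python
def segment(data, boundaries):
    """Split outgoing data at the packet boundaries recorded at
    checkpoint time, so replayed segments match the originals."""
    packets, start = [], 0
    for end in boundaries:
        packets.append(data[start:end])
        start = end
    return packets

data = b"0123456789"
# Boundaries as recorded before the checkpoint (made-up sizes):
packets = segment(data, [4, 7, 10])
assert packets == [b"0123", b"456", b"789"]
assert b"".join(packets) == data
```

Without the forced boundaries, the kernel would be free to coalesce or split the replayed data differently, producing segments the remote TCP stack has no record of.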

Once the transmit queue is restored, the connection is back to its original state. At this point, the interception of outgoing packets can stop.

All of this seems somewhat complex and fragile, but Tejun states that it "actually works rather reliably." That said, there are a lot of details that have been ignored; it is, after all, a proof-of-concept implementation. It's not meant to be a complete solution to the problem of checkpointing and restarting network connections; the idea is to show that the problem can, indeed, be solved. If the user-space checkpoint/restart work proceeds, it may well adopt some variant of this approach at some point. In the meantime, though, what we have is a fun hack showing what can be done with the new ptrace() commands. Those wanting more details on how it works can find them in the README file found in the example code repository.

Comments (35 posted)

The Extensible Firmware Interface - an introduction

August 9, 2011

This article was contributed by Matthew Garrett

In the beginning was the BIOS.

Actually, that's not true. Depending on where you start from, there were either toggle switches used to enter enough code to start booting from something useful, a ROM that dumped you straight into a language interpreter, or a ROM that was just barely capable of reading a file from tape or disk and going on from there. CP/M was usually one of the latter, jumping to media that contained some hardware-specific code and a relatively hardware-agnostic OS. The hardware-specific code handled receiving and sending data, resulting in it being called the "Basic Input/Output System." BIOS was born.

When IBM designed the PC they made a decision that probably seemed inconsequential at the time but would end up shaping the entire PC industry. Rather than leaving the BIOS on the boot media, they tied it to the initial bootstrapping code and put it in ROM. Within a couple of years vendors were shipping machines with reverse engineered BIOS reimplementations and the PC clone market had come into existence.

There's very little beauty associated with the BIOS, but what it had in its favor was functional hardware abstraction. It was possible to write a fairly functional operating system using only the interfaces provided by the system and video BIOSes, which meant that vendors could modify system components and still ship unmodified install media. Prices nosedived and the PC became almost ubiquitous.

The BIOS grew along with all of this. Various arbitrary limits were gradually removed or at least papered over. We gained interfaces for telling us how much RAM the system had above 64MB. We gained support for increasingly large drives. Network booting became possible. But limits remained.

The one that eventually cemented the argument for moving away from the traditional BIOS turned out to be a very old problem. Hard drives still typically have 512-byte sectors, and the MBR partition table used by BIOSes stores sector numbers in 32-bit variables. Partitions above 2TB? Not really happening. And while in the past this would have been an excuse to standardize on another BIOS extension, the world had changed. The legacy BIOS had lasted for around 30 years without ever having a full specification. The modern world wanted standards, compliance tests and management capabilities. Something clearly had to be done.

And so for the want of a new partition table standard, EFI arrived in the PC world.

Expedient Firmware Innovation

[1] Intel's other stated objection to Open Firmware was that it had its own device tree which would have duplicated the ACPI device tree that was going to be present in IA64 systems. One of the outcomes of the OLPC project was an Open Firmware implementation that glued the ACPI device tree into the Open Firmware one without anyone dying in the process, while EFI ended up allowing you to specify devices in either the ACPI device tree or through a runtime enumerated hardware path. The jokes would write themselves if they weren't too busy crying.

[2] To be fair to Intel, choosing to have drivers be written in C rather than Forth probably did make EFI more attractive to third party developers than Open Firmware.

Intel had at least 99 problems in 1998, and IA64 was certainly one of them. IA64 was supposed to be a break from the PC compatible market, and so it made sense for it to have a new firmware implementation. The 90s had already seen several attempts at producing cross-platform legacy-free firmware designs, the most notable probably being the ARC standard that appeared on various MIPS and Alpha platforms, and Open Firmware, common on PowerPC and SPARC systems. ARC mandated the presence of certain hardware components and lacked any real process for extending the specification, so it was passed over. Open Firmware was more attractive but had a very limited third party developer community[1], so the choice was made to start from scratch in the hope that a third party developer community would be along eventually[2]. This was the Intel Boot Initiative, something that would eventually grow into EFI.

EFI is intended to fulfill the same role as the old PC BIOS. It's a pile of code that initializes the hardware and then provides a consistent and fairly abstracted view of the hardware to the operating system. It's enough to get your bootloader running and, then, for that bootloader to find the rest of your OS. It's a specification that's 2,210 pages long and still depends on the additional 727 pages of the ACPI spec and numerous ancillary EFI specs. It's a standard for the future that doesn't understand surrogate pairs and so can never implement full Unicode support. It has a scripting environment that looks more like DOS than you'd have believed possible. It's built on top of a platform-independent open source core that's already something like three times the size of a typical BIOS source tree. It's the future of getting anything to run on your PC. This is its story.

Eminently Forgettable Irritant

[3] The latest versions of EFI allow for a pre-PEI phase that verifies that the EFI code hasn't been modified. We heard you like layers.

[4] Those of you paying attention have probably noticed that the PEI sounds awfully like a BIOS, EFI sounds awfully like an OS and bootloaders sound awfully like applications. There's nothing standing between EFI and EMACS except a C library and a port of readline. This probably just goes to show something, but I'm sure I don't know what.

The theory behind EFI is simple. At the lowest level[3] is the Pre-EFI Initialization (PEI) code, whose job it is to handle setting up the low-level hardware such as the memory controller. As the entry point to the firmware, the PEI layer also handles the first stages of resume from S3 sleep. PEI then transfers control to the Driver Execution Environment (DXE) and plays no further part in the running system.

The DXE layer is what's mostly thought of as EFI. It's a hardware-agnostic core capable of loading drivers from the Firmware Volume (effectively a filesystem in flash), providing a standardized set of interfaces to everything that runs on top of it. From here it's a short step to a bootloader and UI, and then you're off out of EFI and you don't need to care any more[4].

The PEI is mostly uninteresting. It's the chipset-level secret sauce that knows how to turn a system without working RAM into a system with working RAM, which is a fine and worthy achievement but not typically something an OS needs to care about. It'll bring your memory out of self refresh and jump to the resume vector when you're coming out of S3. Beyond that? It's an implementation detail. Let's ignore it.

The DXE is where things get interesting. This is the layer that presents the interface embodied in the EFI specification. Devices with bound drivers are represented by handles, and each handle may implement any number of protocols. Protocols are uniquely identified with a GUID. There's a LocateHandle() call that gives you a reference to all handles that implement a given protocol, but how do you make the LocateHandle() call in the first place?

This turns out to be far easier than it could be. Each EFI protocol is represented by a table (ie, a structure) of data and function pointers. There are a couple of special tables which represent boot services (ie, calls that can be made while you're still in DXE) and runtime services (ie, calls that can be made once you've transitioned to the OS), and in turn these are contained within a global system table. The system table is passed to the main function of any EFI application, and walking it to find the boot services table then gives a pointer to the LocateHandle() function. Voilà.

So you're an EFI bootloader and you want to print something on the screen. This is made even easier by the presence of basic console I/O functions in the global EFI system table, avoiding the need to search for an appropriate protocol. A "Hello World" function would look something like this:

    #include <efi.h>
    #include <efilib.h>

    EFI_STATUS
    efi_main (EFI_HANDLE image, EFI_SYSTEM_TABLE *systab)
    {
        SIMPLE_TEXT_OUTPUT_INTERFACE *conout;

        conout = systab->ConOut;
        uefi_call_wrapper(conout->OutputString, 2, conout, L"Hello World!\n\r");

        return EFI_SUCCESS;
    }

In comparison, graphics require slightly more effort:

    #include <efi.h>
    #include <efilib.h>

    extern EFI_GUID GraphicsOutputProtocol;

    EFI_STATUS
    efi_main (EFI_HANDLE image, EFI_SYSTEM_TABLE *systab)
    {
        EFI_GRAPHICS_OUTPUT_PROTOCOL *gop;
        EFI_GRAPHICS_OUTPUT_MODE_INFORMATION *info;
        UINTN SizeOfInfo;

        InitializeLib(image, systab);

        uefi_call_wrapper(BS->LocateProtocol, 3, &GraphicsOutputProtocol,
                          NULL, &gop);
        uefi_call_wrapper(gop->QueryMode, 4, gop, 0, &SizeOfInfo, &info);

        Print(L"Mode 0 is running at %dx%d\n", info->HorizontalResolution,
              info->VerticalResolution);

        return EFI_SUCCESS;
    }

[5] Well, except that things are obviously more complicated. It's possible for multiple device handles to implement a single protocol, so you also need to work out whether you're speaking to the right one. That can end up being trickier than you'd like it to be.

Here we've asked the firmware for the first instance of a device implementing the Graphics Output Protocol. That gives us a table of pointers to graphics related functionality, and we're free to call them as we please.[5]

Extremely Frustrating Issues

So far it all sounds straightforward from the bootloader perspective. But EFI is full of surprising complexity and frustrating corner cases, and so (unsurprisingly) attempting to work on any of this rapidly leads to confusion, anger and a hangover. We'll explore more of the problems in the next part of this article.

Comments (52 posted)

Network transmit queue limits

By Jonathan Corbet
August 9, 2011
Network performance depends heavily on buffering at almost every point in a packet's path. If the system wants to get full performance out of an interface, it must ensure that the next packet is ready to go as soon as the device is ready for it. But, as the developers working on bufferbloat have confirmed, excessive buffering can lead to problems of its own. One of the most annoying of those problems is latency; if an outgoing packet is placed at the end of a very long queue, it will not be going anywhere for a while. A classic example can be reproduced on almost any home network: start a large outbound file copy operation and listen to the loud complaints from the World of Warcraft player in the next room; it should be noted that not all parents see this behavior as a bad thing. But, in general, latency caused by excessive buffering is indeed worth fixing.

One assumes that the number of Warcraft players on the Google campus is relatively small, but Google worries about latency anyway. Anything that slows down response makes Google's services slower and less attractive. So it is not surprising that we have seen various latency-reducing changes from Google, including the increase in the initial congestion window merged for 2.6.38. A more recent patch from Google's Tom Herbert attacks latency caused by excessive buffering, but its future in its current form is uncertain.

An outgoing packet may pass through several layers of buffering before it hits the wire for even the first hop. There may be queues within the originating application, in the network protocol code, in the traffic control policy layers, in the device driver, and in the device itself - and probably in several other places as well. A full solution to the buffering problem will likely require addressing all of these issues, but each layer will have its own concerns and will be a unique problem to solve. Tom's patch is aimed at the last step in the system - buffering within the device's internal transmit queue.

Any worthwhile network interface will support a ring of descriptors describing packets which are waiting to be transmitted. If the interface is busy, there should always be some packets buffered there; once the transmission of one packet is complete, the interface should be able to begin the next one without waiting for the kernel to respond. It makes little sense, though, to buffer more packets in the device than is necessary to keep the transmitter busy; anything more than that will just add latency. Thus far, little thought has gone into how big that buffer should be; the default is often too large. On your editor's system, ethtool says that the length of the transmit ring is 256 packets; on a 1G Ethernet, with 1500-byte packets, that ring would take about 3ms to transmit completely. 3ms is a fair amount of latency to add to a local transmission, and it's only one of several possible sources of latency. It may well make sense to make that buffer smaller.

The problem, of course, is that the ideal buffer size varies considerably from one system - and one workload - to the next. A lightly-loaded system sending large packets can get by with a small number of buffered packets. If the system is heavily loaded, more time may pass before the transmit queue can be refilled, so that queue should be larger. If the packets being transmitted are small, it will be necessary to buffer more of them. A few moments spent thinking about the problem will make it clear that (1) the number of packets is the wrong parameter to use for the size of the queue, and (2) the queue length must be a dynamic parameter that responds to the current load on the system. Expecting system administrators to tweak transmit queue lengths manually seems like a losing strategy.

Tom's patch adds a new "dynamic queue limits" (DQL) library that is meant to be a general-purpose queue length controller; on top of that he builds the "byte queue limits" mechanism used within the networking layer. One of the key observations is that the limit should be expressed in bytes rather than packets, since the number of queued bytes more accurately approximates the time required to empty the queue. To use this code, drivers must, when queueing packets to the interface, make a call to one of:

    void netdev_sent_queue(struct net_device *dev, unsigned int pkts, unsigned int bytes);
    void netdev_tx_sent_queue(struct netdev_queue *dev_queue, unsigned int pkts,
			      unsigned int bytes);

Either of these functions will note that the given number of bytes have been queued to the given device. If the underlying DQL code determines that the queue is long enough after adding these bytes, it will tell the upper layers to pass no more data to the device for now.

When a transmission completes, the driver should call one of:

    void netdev_completed_queue(struct net_device *dev, unsigned pkts, unsigned bytes);
    void netdev_tx_completed_queue(struct netdev_queue *dev_queue, unsigned pkts,
				   unsigned bytes);

The DQL library will respond by reenabling the flow of packets into the driver if the length of the queue has fallen far enough.

In the completion routine, the DQL code also occasionally tries to adjust the queue length for optimal performance. If the queue becomes empty while transmission has been turned off in the networking code, the queue is clearly too short - there was not time to get more packets into the stream before the transmitter came up dry. On the other hand, if the queue length never goes below a given number of bytes, the maximum length can probably be reduced by up to that many bytes. Over time, it is hoped that this algorithm will settle on a reasonable length and that it will be able to respond if the situation changes and a different length is called for.

The idea behind this patch makes sense, so nobody spoke out against it. Stephen Hemminger did express concerns about the need to add explicit calls to drivers to make it all work, though. The API for network drivers is already complex; he would like to avoid making it more so if possible. Stephen thinks that it should be possible to watch traffic flowing through the device at the higher levels and control the queue length without any knowledge or cooperation from the driver at all; Tom is not yet convinced that this will work. It will probably take some time to figure out what the best solution is, and the code could end up changing significantly before we see dynamic transmit queue length control get into the mainline.

Comments (20 posted)

Patches and updates

Kernel trees

  • Peter Zijlstra: 3.0-rt7 (August 6, 2011)


Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management



Page editor: Jonathan Corbet


Security testing tools for Fedora

August 10, 2011

This article was contributed by Nathan Willis

Red Hat's Steve Grubb recently unveiled a suite of lightweight Linux security-testing tools. Grubb's tools are not a post-installation system audit; rather, they help the distribution team find and catalog common problems with binary packages, scripts, and applications. Although he announced them on a Fedora project mailing list, they are not inherently distribution-specific, so other distributions may want to look into them and adapt them for their needs.

The security assessment tools

The announcement came in an August 3 email to the Fedora testers' list. Grubb described the collection as "for assessing security of the distribution. It is by no means a comprehensive auditing tool, but the scripts definitely find problems." The collection, which is hosted at Grubb's personal Red Hat page, is introduced with:

Sometimes you want to check different aspects of a distribution for security problems. This can be anything from file permissions to correctness of code. This page is a collection of those tools. Depending on what information the tool has to access, it may need to be run as root.

The page currently hosts thirteen utilities, all of them shell scripts. The first script's comment block describes it as being available under GPLv2. Most of the others simply mention "the GNU public license," although a few extremely short scripts (of one to three lines each) do not mention licensing at all. The longer scripts are commented with the purpose and usage instructions, although Grubb also includes a plain English description of each script on the main page.

The scripts are designed to iterate through the filesystem looking for specific security-setting, permissions, or behavioral problems that might not get caught by the normal QA process. Grubb said in his introductory email that he has found problems in Fedora 15 with all of the scripts.

The first two on the page tackle compatibility with particular policies. The first script is the longest; it tests for compatibility with the US Defense Information Systems Agency's (DISA's) Security Technical Implementation Guide (STIG) for Unix systems. The STIGs contain "technical guidance to 'lock down' information systems/software that might otherwise be vulnerable to a malicious computer attack," including basic guidelines on things like file and directory ownership and permissions. The script incorporates 35 individual tests for log, library, configuration, system command files, and miscellaneous directories. Each test includes a comment that references a section of the most recent Unix STIG.

The second script, rpm-chksec, tests for compatibility with Fedora's recommendations for ELF executables. It checks RPMs to see whether the binaries they contain were compiled with the Position Independent Executables (PIE) and Relocation Read-Only (RELRO) compiler flags. It can either be run against a single RPM or against all packages installed on the target system.

The next five scripts each deal with detecting an individual risk factor with executables. The find-nodrop-groups and rpm-nodrop-groups scripts look for programs (installed programs in the former script; RPM packages in the latter) that use setgid without using either initgroups or setgroups. Without such safeguards, the program inherits all of the groups of the calling user, and can thus unintentionally run with elevated privileges.

The find-chroot and find-chroot-py scripts look for programs and Python scripts that call chroot but do not call chdir, and thus may have the current working directory outside the chroot sandbox. The find-execstack script looks for programs that have marked the stack as executable (which opens the door to an attacker using a stack buffer overflow to execute arbitrary code).

The find-hidden-exec script looks for executables that are "hidden" — in the sense that either their name or the name of one of their parent directories begins with a ".". That is not intrinsically a security problem (particularly on a production machine, post-installation), but from the distribution's perspective it is highly suspect for a fresh install and warrants investigation. The find-sh4errors script simply attempts to parse all shell scripts with sh's -n flag, noting any that fail. As Grubb put it on the mailing list, it is most likely to catch accidentally-broken scripts, but is still important.

There are two SELinux scripts, one that checks for devices in /dev that are incorrectly labeled, and one that checks the running processes to find daemons that do not have an SELinux policy. In both cases the results depend on the hardware and the daemons that happen to be in use on the test system, rather than reporting all possible problems in the distribution.

Finally, there are two scripts to check the behavior of programs with regard to temporary file creation. One checks all shell scripts to look for any that are using /tmp as a temporary storage location but have not called the mktemp utility. The other checks binary executables to find programs that are using /tmp but do not appear to be using a good pseudo-random name generator to create file names. Both of these scripts are called works-in-progress that may generate false positives, but they are helpful nonetheless. A program that uses /tmp but employs predictable file names may be vulnerable to attacks that replace the real temporary file with a malicious payload. There are also problems with privileged programs that use predictable file names that could be symbolically linked to another file by an attacker.

Fedora QA

Adam Williamson of the Fedora QA team replied to Grubb's post asking whether or not the tests could be integrated into the distribution's AutoQA test framework, which watches for new package builds (among other events) and executes tests against them. Grubb cautioned that not all of the scripts were robust enough yet to be relied on in automated tests, but that most were good enough for the general QA process.

For example, the shell error test...why would anyone purposely write shell script that does not work? This can always be fixed before a release. Some tests are still under development like the ELF binary well known tmp file test. This can make some false positives, but there are enough good things in it to start asking real questions about is that in any program?

AutoQA developer Kamil Paral expressed interest in adding the tests to the system, with a few changes. The scripts would need AutoQA wrappers to be written, in order to be managed by AutoQA's test "harnesses." The scripts would also need to be added to the AutoQA Git repository, as the infrastructure does not handle outside tests, and would need to be modified to report results via email to package maintainers, which is AutoQA's only notification method. As they are now, the scripts report the security faults they discover to stdout, most with color-coding to highlight the severity of problems found.

"That said," Paral concluded, "we would love to execute more tests for Fedora. But until the proper support is ready, it takes quite some effort. The first approach is go through the tests, select some appropriate ones and do that now." An alternative suggested by Paral would be to wait until AutoQA could be adapted to handle third-party tests, but there was no time frame for that feature enhancement.

In the meantime, Grubb asked Fedora testers to run the scripts manually, particularly the rpm-chksec script that looks for the recommended compiler flags. Fedora 16 has adopted a new policy with regard to RELRO, recommending that all packages be compiled with at least "partial" RELRO support, and important programs use RELRO in full.

For other distributions

Most other distributions have started moving towards RELRO and PIE in recent years, and although SELinux is still not the default in several major distributions, the majority of Grubb's other scripts could prove to be useful QA tools on non-Fedora systems as well. The chief obstacle is that many of them rely on the rpm command-line tool to inspect packages (and not just those that look for RPM package problems).

For example, the find-chroot and find-chroot-py scripts both use rpm to detect and then report which package owns any problematic file. When you execute them on a Debian-based system, they still flag the problem files, but tack on the plausible-sounding (yet incorrect) message "file FOO is not owned by any package." The same is true of find-sh4errors, find-execstack, and the /tmp usage scripts. Dpkg offers the same file-to-package lookup (via dpkg -S), so adapting the rpm-dependent scripts to work on a Debian-based distribution would not be difficult.

Still, the other scripts are useful today even on Debian-based systems. I ran several of them on my normal Ubuntu desktop machine, and was surprised to find (for example) how many hidden executables there were tucked away. The majority were not actually ELF binaries or scripts of any kind — although there were a handful of oddly-named PHP files from some packages I have tested — instead they were backup data files, such as OS X resource files of raw photos from the last time I traveled with an Apple laptop and a camera. They were named .IMG_NNNN and were of no value — but they definitely did not need the execute bit set.

That example is still a good illustration of the value of Grubb's QA tools, however: the permissions were wrong, even if (as far as I could tell) the contents were not dangerous. From a security standpoint, finding seemingly-innocuous permissions or ownership problems is just as important for a distribution as verifying patches that fix known CVEs. Although no one would intentionally ship a broken shell script or choose a guessable temporary file name, their presence on the installer image still constitutes a risk to users.

Comments (2 posted)

Brief items

Distribution quotes of the week

I go to my local Porsche dealer, and he has some nice cars, but they really don't have the options I am looking for on the car I want on the lot. (though they have plenty of the brand and model (analogous to Distribution and version). Instead - the dealer offers to order from the factory the Brand and Model of car that I want, with the factory options that I want. They aren't going to give me a Yugo, Ford, or Ferrari engine, that's not one of the options the factory (analogous for the official repository of fedora packages) will deliver. And if the dealer offered to rip the engine out, and put a Yugo engine in it for me, I'd agree, it'd stop being a Porsche. It would be based on a Porsche, but not truly a Porsche anymore. I think this is even more apropos because there are factory approved and supplied options that the dealer can install and still sell and call the vehicle by the Brand.
-- David Nalley

Let's re-use the Porche thing again.

You can go to a Porche dealership and order what they have on the lot, or you can have them install genuine Porche accessories at the dealership, or at the factory. This is all well and good. Where this breaks down is if you buy a Porche, buy some genuine parts from the factory and install them yourself, and now try to sell the car as a genuine new Porche. It's a car, it has Porche parts on it, but was the install done right? Did all the guidelines and directions get followed correctly?

This is more in line with Fedora creating a virt image and a 3rd party creating a virt image. The creation involves taking what we ship and $DOING_STUFF with it. That stuff can make a difference to the end user experience, and I think that's the concern here. Is the STUFF being done in an acceptable way to Fedora the project? Are we comfortable letting random folks DO_STUFF to our offering and still call it Fedora?

-- Jesse Keating

During said DebConf8, I had a dream (it was almost a nightmare, actually): I woke up and just like that, I was the DPL. I spoke to some people about this dream and to my complete surprise many said that I should actually do it.
--Margarita Manterola

Comments (none posted)

No Btrfs by default in Fedora 16

The Fedora project has backed off of its goal to run on Btrfs by default in the F16 release. "Fesco outlined basic requirements that needed to be met by Alpha for the switch to be allowed to happen and we have not met those requirements so it won't be happening for F16." Among other things, those requirements include a working filesystem checker, and that is not yet on offer.

Full Story (comments: 37)

Gentoo Linux 11.2 LiveDVD

The Gentoo Linux 11.2 LiveDVD is available. "System packages include: Linux kernel 3.0 (with Gentoo patches), Accessibility Support with Speakup, Bash 4.2, GLIBC 2.13-r2, GCC 4.5.2, Binutils 2.21.1, Python 2.7.2 and 3.2, Perl 5.12.4, and more." There's a good selection of desktop environments and window managers along with plenty of new and updated packages throughout.

Comments (none posted)

Ubuntu Oneiric Ocelot Alpha 3 Released

The third alpha of Ubuntu Oneiric Ocelot (11.10) is available for testing. Edubuntu, Kubuntu, Mythbuntu, and Lubuntu are also available.

Full Story (comments: none)

Distribution News


openSUSE Strategy done!

The openSUSE project has been working on a strategy document for nearly two years. The process is now complete. "Almost 2 years ago, at the first openSUSE conference, a discussion started about Strategy. A few months ago a final document was ready and on July 14th 2011, the strategy voting ended. Over 200 of the openSUSE Members voted, with 90% in favor of the document."

Comments (none posted)

Ubuntu family

Vacant Ubuntu Developer Membership Board seat: Call for nominations

Nominations are open for a vacant seat on the Ubuntu Developer Membership Board. "The DMB is responsible for reviewing and approving new Ubuntu developers, meeting for about an hour once a fortnight. Candidates should be Ubuntu developers themselves, and should be well qualified to evaluate prospective Ubuntu developers and decide when to entrust them with developer privileges or to grant them Ubuntu membership status." The nomination period ends August 22.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Open Embedded: An alternative way to build embedded Linux distributions (EETimes)

EETimes takes a look at the Open Embedded (OE) build environment. "In this article we present an overview of the key elements of the OE build environment and illustrate how these elements can be applied to build and customize Linux distributions. The Texas Instruments Arago distribution, which is based on the Angstrom distribution, will be used as example of how to create a new distribution based on OE and the distributions that already use it."

Comments (4 posted)

The Six Best Linux Community Server Distributions (

Carla Schroder looks for the best community server distribution. "I have a soft spot for community-powered distros because they are labors of love, and provide a useful counterbalance to corporate follies. The two top Linux distros, Red Hat and Debian, represent opposite sides of the same Linux coin; Red Hat is a commercial success, while Debian will always be both libre software and free of cost. Both have been around since the early days of Linux, both have a commitment to free software, and they are the two fundamental distros that the majority of other distros are descended from. This shows that both models work, that both have their merits and are complementary."

Comments (none posted)

Millan: Debian GNU/kFreeBSD

Robert Millan looks at recent improvements in Debian GNU/kFreeBSD and tests it on his main workstation. From the second article: "During the last few weeks I had to work through some of the limitations that were holding me back, such as automated driver load and FUSE. I was lucky enough that other people filled the missing pieces I wanted, such as NFS client support and a GRUB bugfix that broke booting from Mirrored pools. I have to say that I'm very satisfied."

Comments (none posted)

The best Linux distro of 2011! (TuxRadar)

TuxRadar takes Fedora, Mint, Arch, Ubuntu, Debian and openSUSE for a spin and grades them on ease of installation, hardware support, desktop, customization, community, performance, package management, cutting edge, and security. "Whilst no scientific stone has been left unsubjected to a transformation matrix, bear in mind that their isn't any science known to man or penguin that can accurately quantify a lot of the qualities we look for in a version of Linux. A lot of it will be completely subjective, depending on the wants and needs of the individual user. For one thing, when we totted up the medal table to produce the result, we were assuming that all categories were equal. This is very unlikely to be the case, to be honest - if it was, everyone would be using the same distro."

Comments (none posted)

Page editor: Rebecca Sobol


Mozilla Tilt: Web debugging in a whole new dimension

August 9, 2011

This article was contributed by Nathan Willis

Mozilla has released a Firefox extension named Tilt that renders web pages as 3D stacks of box-like elements. The 3D structure, so the thinking goes, will offer web developers an important visualization tool when debugging pages. In addition to its practical value, however, Tilt is also a live demonstration of Firefox's WebGL stack, an emerging API for displaying 3D content within the browser.

Preview videos of Tilt were made available in early June, but the first publicly-installable version did not hit the web until the end of July. The code is hosted on developer Victor Porof's GitHub page; anyone interested in taking Tilt for a whirl can download the .xpi file from the bin/ directory there and manually install it from Firefox's Add-ons Manager. Firefox 4.0 or newer is required. The source is available there as well, of course.

[Tilt Google News]

Once the extension is installed, users can activate Tilt from Firefox's Tools menu (or with the key combination Control-Shift-M). This activates the Tilt visualization for the current tab only. The page is rendered as a 3D "mesh" in WebGL; elements (including all text and images) are rendered in full color, which makes a head-on view look virtually identical to the original page (albeit shrunken by about 25% to make it easier to manipulate). However, the depth of each page element's box is drawn in flat, opaque gray. You can see how many levels deep the stack is from a side or angled view, but you cannot tell which elements are which.

The visualization is rendered within the normal Firefox content area. Technically, it is drawn with WebGL on a canvas element that is overlaid on the normal frame, so that the page persists without interrupting any ongoing operations while it is hidden. As a result, navigation via the location bar, bookmarks and forward/back buttons is still possible. Any such navigation does switch the Tilt visualization off, however — as does switching into a different tab and back again.

Although the normal Gecko-rendered version of the page persists in the background, the "Tilt view" of a given page is a read-only structure generated like a snapshot. That is, you cannot interact with the page's contents at all. Instead, your mouse and keyboard function as spatial navigation controls within the 3D space. Mouse movement with the left button held down twists and rotates the visualization in three dimensions (in what Porof calls "virtual trackball" mode in the GitHub documentation). Holding down the right button enables left-right and up-down panning. The scroll wheel zooms in and out. The arrow and W-A-S-D keys provide keyboard access to the same controls.
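The "virtual trackball" technique Porof names is a well-known way to turn 2D mouse drags into 3D rotation: each mouse position is projected onto a virtual sphere centered on the viewport, and dragging between two projected points defines a rotation. Tilt's actual source is not shown here; this is an illustrative sketch of the standard mapping, with hypothetical names:

```javascript
// Map a 2D mouse position to a point on a virtual unit sphere (the
// classic "virtual trackball" / arcball mapping). Dragging between two
// such points defines a 3D rotation axis and angle.
function trackballVector(x, y, width, height) {
  // Normalize to [-1, 1], flipping y so "up" is positive.
  const nx = (2 * x - width) / width;
  const ny = (height - 2 * y) / height;
  const lenSq = nx * nx + ny * ny;
  // Inside the sphere's silhouette: lift the point onto the sphere.
  if (lenSq <= 1) {
    return [nx, ny, Math.sqrt(1 - lenSq)];
  }
  // Outside: clamp to the sphere's edge (z = 0).
  const len = Math.sqrt(lenSq);
  return [nx / len, ny / len, 0];
}
```

A drag from the viewport center (which maps to [0, 0, 1]) toward an edge rotates the mesh about an axis perpendicular to the drag, which is why the motion feels like spinning a physical ball.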


You can also double-click any page element in the visualization and bring up a sub-window containing the HTML code that corresponds to it. For elements on the "top" of the stack (which means the innermost-nested elements in the page), only the topmost contents are displayed. For elements lower in the stack, the pop-up window shows the HTML for the element you clicked on highlighted in blue, plus the child elements nested within it on an un-highlighted background. That can be helpful to trace through peculiar-looking stacks. The extension also renders a small "help" button (which displays the keyboard and mouse commands) and an "exit" button that returns you to normal browsing.

Tilting at elements

Obviously, finding the right 3D box to click on midway down the stack of a crowded page can involve a minute or two of zooming, panning, and manipulating the Tilt visualization, but that is precisely what the extension lets you do: separate out the layers of the document in a way that the normal 2D render does not offer. At the heart of Tilt's functionality is the tree-like structure of the document object model (DOM). Tilt takes the DOM elements in nested order, starting with body, and draws one layer for each nesting level. Every element within is rendered as its own layer, stacked on top of its parent: div, span, ul, img, and so on.

The elements' dimensions and X,Y positions are scraped from the already-rendered representation of the page (so the contents are not re-rendered to display or update the 3D visualization). Thus nested elements stack naturally on top of one another. Special treatment is given to off-screen elements (such as iframes or divs that were not displayed when Tilt was switched on); they float by themselves above the top edge of the page's main body stack.
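The nesting-to-layers pass described above can be sketched as follows. This is an illustrative reconstruction, not Tilt's actual code: plain objects with precomputed rectangles stand in for live DOM nodes, whose on-screen geometry a real extension would read from the rendered layout (for example, with element.getBoundingClientRect()).

```javascript
// Walk a DOM-like tree and emit one "box" per element: its stack level
// equals its nesting depth, and its 2D footprint comes from the already-
// rendered layout rather than being recomputed.
function buildMesh(node, depth = 0, out = []) {
  out.push({
    tag: node.tag,
    depth: depth,     // one gray layer per nesting level
    rect: node.rect,  // {x, y, width, height} scraped from the 2D render
  });
  for (const child of node.children || []) {
    buildMesh(child, depth + 1, out);
  }
  return out;
}

// A toy page: body > div > (span, img)
const page = {
  tag: "body", rect: { x: 0, y: 0, width: 800, height: 600 },
  children: [{
    tag: "div", rect: { x: 10, y: 10, width: 780, height: 200 },
    children: [
      { tag: "span", rect: { x: 20, y: 20, width: 100, height: 20 }, children: [] },
      { tag: "img",  rect: { x: 140, y: 20, width: 64, height: 64 }, children: [] },
    ],
  }],
};
```

Because each box inherits its position from the 2D layout and only its depth from the tree, nested elements stack naturally on top of their parents, exactly as the article describes.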

In practice, because Tilt grabs the internal representation of the page without being aware of the screen height, pages more than one screenful tall appear extremely long in the Y direction and take some panning to inspect. Also, although the Z-ordering of elements usually makes the relationship between them clear, there are some peculiar cases where elements seem to float above their parents with nothing in between, or are physically larger than the elements beneath them.

That is probably just the magic of HTML at work. After all, elements can be positioned absolutely rather than relatively, and that should logically interfere with the apparent "stacking" of boxes in Tilt. For now, the binary version of the extension offers a minimalist interface. Screenshots from Porof's blog entries show that a richer UI is in the works, which ought to make inspecting the DOM easier. A July 27 entry, for example, shows a "thumbnail" navigator that offers a whole-page overview, as well as a DOM tree navigator and control over the thickness and spacing of elements' boxes.

I ran Tilt in Firefox 5.0 on a quad-core Phenom machine with NVIDIA 600GT graphics; 3D performance was adequate for twisting and rotating the visualization stack — if not exactly snappy. In particular, zooming in and out produced some noticeable lag, as did generating the initial 3D mesh view for notoriously complex pages like those served up by your favorite social networking sites. Tilt does not re-fetch or re-render the page contents, so all of the lag is attributable to creating the 3D mesh itself.

I am certain that generating the mesh is a tricky proposition (and Porof has discussed its challenges in his Tilt blog posts); to me the only takeaway from the speed issues is a lingering doubt about the viability of WebGL on systems that do not support full hardware acceleration. Inspecting a web page in 3D is not a speed-sensitive task, but editing 3D content or playing live games would be. WebGL is a derivative of OpenGL ES 2.0, so it builds on a well-established standard, and Firefox has supported it since 4.0. However, on Linux currently only the NVIDIA binary OpenGL drivers get WebGL hardware acceleration with Firefox 4 and 5, which leaves out a significant number of users. Firefox 6 changes the way the browser detects the video card driver and thus "whitelists" more OpenGL drivers.

Inspection versus modification

At the moment Tilt is limited to displaying the DOM frozen at a single moment in time (more specifically, as it was before the extension was activated). That lets the user visualize the depth of page elements and the relationships between them, which can make for decent static analysis. But to make Tilt more useful for developers, the team is working on exposing an HTML and CSS editor component and on making Tilt cope with dynamic content.

That planned enhancement of the extension has two distinct parts: making the 3D mesh itself modifiable on the fly, and integrating an HTML editor. Making the mesh modifiable (as opposed to a static snapshot of the page) has other benefits as well; it would be able to show CSS transformations and animation, and it could potentially be used to make the 3D visualization interactive. Seeing how the DOM responds to interactivity would be valuable to developers (plus, the ability to navigate between pages in 3D view would just plain look cool).

An HTML editor inside the extension would also make Tilt more useful for debugging, as it would allow live updating of the DOM without the multi-step process currently required of reloading the page and then re-enabling the Tilt extension. Porof discusses this work in the July 27 blog post referenced above, saying that the current HTML display component (lifted from the Ace editor) will need to be replaced, and a less memory-intensive method for drawing the WebGL content developed.

There are other possibilities further out, such as visually distinguishing between elements with absolute and relative positioning, and the ability to "zoom in" to a specific DOM element and restrict display to that element and its children alone. Both of those features imply additional UI design — browsing the Tilt videos on YouTube, it is clear that the team iterated through several different looks before settling on the current one, and making large quantities of small HTML element blocks easy to scan visually is not simple.

Apparently users have also asked for the ability to export the 3D mesh from the extension to a file, which would open the door to all kinds of new cross-page analysis opportunities. Tilt has already begun to attract attention from web developers who have begun to propose their own ideas — such as "editing" page contents by moving and restacking the blocks in the 3D visualization itself.

That last suggestion would clearly demand significantly more work, so we should probably not expect to see it anytime soon. But Tilt's rapid progress is encouraging. At the moment, simple read-only inspection is all that the binary XPI provides, but that alone can be a useful debugging tool. It is similar to the structure-revealing functionality of the Web Developer extension with its outline tools, but the addition of a third dimension automatically brings some problems right to the forefront. Making that technique interactive for the user can only make it more valuable.

Comments (9 posted)

Brief items

Quotes of the week

It seems I am almost unique in my insistence that Python's dynamic features be used very sparingly.
-- Guido van Rossum

Okay, so I'm learning a lot and also applying that learning... a good thing except that I've forgotten all about it.
-- Avi Kivity

Comments (2 posted)

Cython 0.15 released

Cython is a language designed to make the addition of C extensions to Python easy. The 0.15 release has been announced; new features include full support for generators, a new nonlocal keyword, OpenMP support, exception chaining, relative imports, and more.

Full Story (comments: none)

KMyMoney 4.6.0 released

Version 4.6.0 of the KMyMoney personal finance manager has been released; significant changes include a new CSV import plugin, new translations, performance improvements, and more.

Full Story (comments: none)

QEMU 0.15.0

The 0.15.0 release of the QEMU hardware emulator is out. There are many new features, including support for Lattice Mico32 and UniCore32 targets, improved ARM support, Xen support (long maintained out of tree), progress in merging with qemu-kvm, a "pimped up threading model," support for a lot of new virtual hardware, and more.

Full Story (comments: none)

Samba 3.6.0 released

The Samba 3.6.0 release is out. Changes include a more secure set of defaults, SMB2 support (though it's still disabled by default), massively reworked printing support, reworked and simplified ID mapping, a new traffic analysis module, and more.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Is Glark a Better Grep?

Joe 'Zonker' Brockmeier introduces glark on "What is glark? Basically, it's a utility that's similar to grep, but it has a few features that grep does not. This includes complex expressions, Perl-compatible regular expressions, and excluding binary files. It also makes showing contextual lines a bit easier."

Comments (48 posted)

Jansen: Thoughts on RESTful API Design

Geert Jansen has posted a lengthy document on the design of REST-based APIs based on his experience working on Red Hat's virtualization products. "In my definition, a real-world RESTful API is an API that provides answers to questions that you won't find in introductory texts, but that inevitably surface in the real world, such as whether or not resources should be described formally, how to create useful and automatic command-line interfaces, how to do polling, asynchronous and other non-standard type of requests, and how to deal with operations that have no good RESTful mapping."

Comments (none posted)

Page editor: Jonathan Corbet


Articles of interest

FSFE Newsletter - August 2011

The August edition of the Free Software Foundation newsletter is available.

Full Story (comments: none)

Calls for Presentations

Call for papers - PGConf.EU 2011 (2nd call)

PostgreSQL Conference Europe 2011 will be held October 18-21 in Amsterdam, The Netherlands. The call for papers will be open until August 21. Talks may be in English, Dutch, German and French.

Full Story (comments: none)

3rd Call For Papers, 18th Annual Tcl/Tk Conference 2011

This is the third call for papers for the Tcl/Tk Conference, October 24-28, 2011 in Manassas, Virginia. Proposals are due by August 26.

Full Story (comments: none)

Upcoming Events

PyCon Ireland 2011

PyCon Ireland takes place October 8-9, 2011 in Dublin, Ireland. Registration is open, although the early bird rates are gone. There's still time to submit a talk or to become a sponsor. The first confirmed keynote speaker will be Damien Marshall.

Full Story (comments: none)

Events: August 18, 2011 to October 17, 2011

The following event listing is taken from the Calendar.

August 17-19: LinuxCon North America 2011, Vancouver, Canada
August 20-21: PyCon Australia, Sydney, Australia
August 20-21: Conference for Open Source Coders, Users and Promoters, Taipei, Taiwan
August 22-26: 8th Netfilter Workshop, Freiburg, Germany
August 23: Government Open Source Conference, Washington, DC, USA
August 25-28: EuroSciPy, Paris, France
August 25-28: GNU Hackers Meeting, Paris, France
August 26: Dynamic Language Conference 2011, Edinburgh, United Kingdom
August 27-28: Kiwi PyCon 2011, Wellington, New Zealand
August 27: PyCon Japan 2011, Tokyo, Japan
August 27: SC2011 - Software Developers Haven, Ottawa, ON, Canada
August 30-September 1: Military Open Source Software (MIL-OSS) WG3 Conference, Atlanta, GA, USA
September 6-8: Conference on Domain-Specific Languages, Bordeaux, France
September 7-9: Linux Plumbers' Conference, Santa Rosa, CA, USA
September 8: Linux Security Summit 2011, Santa Rosa, CA, USA
September 8-9: Italian Perl Workshop 2011, Turin, Italy
September 8-9: Lua Workshop 2011, Frick, Switzerland
September 9-11: State of the Map 2011, Denver, Colorado, USA
September 9-11: Ohio LinuxFest 2011, Columbus, OH, USA
September 10-11: PyTexas 2011, College Station, Texas, USA
September 10-11: SugarCamp Paris 2011 - "Fix Sugar Documentation!", Paris, France
September 11-14: openSUSE Conference, Nuremberg, Germany
September 12-14: X.Org Developers' Conference, Chicago, Illinois, USA
September 14-16: Postgres Open, Chicago, IL, USA
September 14-16: GNU Radio Conference 2011, Philadelphia, PA, USA
September 15: Open Hardware Summit, New York, NY, USA
September 16: LLVM European User Group Meeting, London, United Kingdom
September 16-18: Creative Commons Global Summit 2011, Warsaw, Poland
September 16-18: Pycon India 2011, Pune, India
September 18-20: Strange Loop, St. Louis, MO, USA
September 19-22: BruCON 2011, Brussels, Belgium
September 22-25: Pycon Poland 2011, Kielce, Poland
September 23-24: Open Source Developers Conference France 2011, Paris, France
September 23-24: PyCon Argentina 2011, Buenos Aires, Argentina
September 24-25: PyCon UK 2011, Coventry, UK
September 27-30: PostgreSQL Conference West, San Jose, CA, USA
September 27-29: Nagios World Conference North America 2011, Saint Paul, MN, USA
September 29-October 1: Python Brasil [7], São Paulo, Brazil
September 30-October 3: Fedora Users and Developers Conference: Milan 2011, Milan, Italy
October 1-2: WineConf 2011, Minneapolis, MN, USA
October 1-2: Big Android BBQ, Austin, TX, USA
October 3-5: OpenStack "Essex" Design Summit, Boston, MA, USA
October 4-9: PyCon DE, Leipzig, Germany
October 6-9: EuroBSDCon 2011, Netherlands
October 7-9: Linux Autumn 2011, Kielce, Poland
October 7-10: Open Source Week 2011, Malang, Indonesia
October 8-9: PyCon Ireland 2011, Dublin, Ireland
October 8-9: Pittsburgh Perl Workshop 2011, Pittsburgh, PA, USA
October 8: PHP North West Conference, Manchester, UK
October 8-10: GNOME "Boston" Fall Summit 2011, Montreal, QC, Canada
October 8: FLOSSUK / UKUUG's 2011 Unconference, Manchester, UK
October 9-11: Android Open, San Francisco, CA, USA
October 11: PLUG Talk: Rusty Russell, Perth, Australia
October 12-15: LibreOffice Conference, Paris, France
October 14-16: MediaWiki Hackathon New Orleans, New Orleans, Louisiana, USA
October 14: Workshop Packaging BlankOn, Jakarta, Indonesia
October 15: Packaging Debian Class BlankOn, Surabaya, Indonesia

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol

Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds