LWN.net Weekly Edition for April 14, 2011
LFCS: Linux penetration, Yocto, and hardware success stories
The first day of the Linux Foundation Collaboration Summit (LFCS) tends to have a wide variety of talks and panels that range all over the Linux ecosystem. This year's edition didn't disappoint, though the topics seemed a bit more focused toward the embedded space and Linux hardware support. There was also the always interesting kernel panel, which we cover elsewhere. LF executive director Jim Zemlin kicked things off with a look at Linux penetration along with some thoughts he has on where Linux (and the LF itself) is headed.
While he didn't reprise his "it's like kicking a puppy" comment regarding Microsoft bashing in his LFCS keynote, he did mention it earlier in the week at a Camp KDE talk. Kicking anything is not in Zemlin's immediate future, however, as he gave his talks from a wheelchair—"on heavy narcotics"—due to an ugly-looking break in his leg from a skiing mishap. He showed before and after x-rays that made it very clear why no one will want to "get behind me at airport security".
Zemlin went through some examples of eye-opening places where Linux is running, including air-traffic control systems, supercomputing, nuclear submarines, and so on. He noted that 72% of the world's equities are traded using Linux (and that may be low, as some exchanges are not releasing any information about their platforms). Linux first appeared on the list of the top 500 supercomputers in 1998; by 2004 it was running over half of the raw computing power on that list, and now that number is 95%.
Embedded and mobile devices are also an arena where Linux has made huge strides. The mobile space, in particular, is "massively hedged" because there are multiple big bets on Linux in that area (i.e. Android, MeeGo, WebOS, and others). He also said that there is "no declared winner" in mobile and that while he has no real predictions about what the mobile landscape will look like in three to five years—they would "inevitably be wrong"—he is sure of one thing: the leaders will be running Linux.
Other trends worth watching in the future according to Zemlin are things like pre-integrated, minimal configuration computing based on Linux. Those systems will be for both the enterprise and consumer markets. They will come with pre-configured applications that are ready to use out of the box. Another trend is toward specialized high-performance computing systems as typified by "Watson", which is IBM's Linux-based supercomputer targeted at "solving" the Jeopardy game show. He did note that he thought the humans could have won if one of the answers (in the form of a question, of course) contained the commands used to format Linux disks.
"Don't believe a lot of the FUD [fear, uncertainty, and doubt]" that is being spread by Linux competitors, Zemlin said. There is an ongoing effort to discredit Linux and various Linux-using companies on the basis of copyright and patent issues. That effort is not very important in terms of Linux adoption and, more than that, it is not open-source-specific, as any platform can be attacked this way; the attacks are based on how successful the platform is. It will lead to efforts to reform the patent system and increase the quality of the patents that are issued, he said, but it won't slow down the adoption of Linux.
Zemlin also passed along a number of changes at the LF: the merger with the CE Linux Forum (CELF), which was announced in October; the first annual Android Builders Summit, to be held the week following LFCS; and a new LinuxCon for Europe, which will be held in Prague in October. In addition, a new high-availability workgroup within the LF was announced as part of LFCS, along with the release of Carrier Grade Linux 5.0.
Yocto project and panel
The release of Yocto 1.0 was another announcement that was made. The Yocto project is a collaborative effort, under the LF umbrella, to simplify the process of creating an embedded Linux distribution. There are multiple companies participating, "many of them are fierce competitors", but they see the advantage of working together on these tools, Zemlin said. The next breakthrough device or gadget will very likely be built on Linux, because no other platform has the breadth of architecture support. The role of the LF is not to find that product or the person who is building it, but to help the community provide the best tools to create those products. "That's what's going on in the Yocto project", he said, and it is "all going on in the open".
The update on the project was a transition to a panel discussion that Zemlin led with three embedded industry veterans: William Mills of Texas Instruments, Mark Hatle of Wind River, and Steve Sakoman of Sakoman, Inc., which is an embedded consulting company. The overall theme of the discussion was collaboration—not surprising given the name of the conference—but collaboration is not something the embedded industry, or embedded Linux, is known for.
According to Mills, it is currently "far too difficult" to create embedded Linux distributions, but Yocto changes that. It is important that Yocto is based on OpenEmbedded, as there are a number of companies that already use those tools, and Yocto is not trying to start from scratch. Sakoman agreed that building atop OpenEmbedded is the right way to go, but said that Yocto adds some credibility that was lacking because customers didn't "see any big names behind OpenEmbedded". Hatle pointed out that having each company create and maintain its own build tools doesn't make sense and is a waste of money; "let's do it once and share the results", he said.
The companies involved are still going to compete, Hatle said, they will "just compete on a higher level, not on the version of grep" that one is shipping. Vendors can add better integration or better debugging, he added. Sakoman concurred, saying that there will always be opportunities available for companies that can assist in embedded development, particularly because the time to market for products is so short.
Embedded Linux is definitely important these days and only getting more so. Zemlin noted that a while back he was getting a lot of calls from companies looking to hire talented embedded Linux developers. More recently those calls have changed to: "Do you know any companies I can acquire to get Linux talent?"
When asked about their "dream participant" in the project, both Hatle and Mills pointed to the smaller players, who can't afford to hire companies like Wind River, as the ones they would like to see get involved. Mills said that there were more silicon vendors and operating system companies involved than he expected, but that he would like to see more community projects, like OpenWRT for example, switch to using Yocto. It is a highly usable system, Hatle said, that "works right out of the box", which is something that he has never seen for embedded Linux before.
Hardware success stories for Linux
Later in the day, Greg Kroah-Hartman moderated a panel targeted at hardware success stories for Linux. It featured Jason Kridner of Texas Instruments (TI), Mark Charlebois of the Qualcomm Innovation Center (QuIC), and Dirk Hohndel of Intel, who discussed their companies' reasons for supporting Linux, along with a number of other hardware-related topics.
Kroah-Hartman started things off by asking why it is that each company decided to contribute to Linux. Charlebois said that "Linux is a core part of the mobile ecosystem", which made it important to Qualcomm's customers. QuIC had worked out of tree for a long time and "felt the pain of doing that", so it needed to start working upstream. Hohndel noted that Intel has a "pretty long track record" of working with open source, and that is because "open source is the shortest path to innovation". Kridner said that TI wanted to make its "huge investment in mobile processors more easily available" to its customers. Linux enables that and "a whole lot more things than just mobile phones", he said.
How the panelists make the case for working upstream was the next topic tackled. It is "easy when customers are asking for it", Charlebois said. Building up a set of out-of-tree patches is "too painful" because integrating them with the upstream kernel takes too much time. Hohndel said that it is the financial argument that "really resonates with upper management". If you look at the costs of porting things forward, backward, and sidewards, he said, you will find that working upstream makes the most sense. "Fundamentally it is in your own best financial interests". Kroah-Hartman noted that he was happy to hear that, as he had always hoped it was true.
From the audience, Jonathan Corbet asked about Charlebois's statement that "the mobile space is about proprietary drivers", which he had made in a talk earlier in the day. Corbet noted that having proprietary drivers goes against all of the points described above. Charlebois said that it is "not always possible to open source" drivers due to third-party intellectual property (IP). Kroah-Hartman pointed out that, by shipping binary-only drivers, QuIC and others were "weighing two legal risks", as some kernel developers would claim that distributing such drivers violates their copyright.
While Charlebois didn't seem to have much more to say about that, Hohndel described the situation as a "rock and a hard place". When a vendor buys IP blocks, the third parties it purchases them from often have "strange ideas of GPL compliance". There are lots of constraints that go into building SoCs, he said, and vendors have to deal with the real world, not the one they wish they lived in. He noted that Intel had done a lot of work to get acceptable drivers for PowerVR-based graphics devices into the kernel.
In response to a question from Kroah-Hartman, Hohndel said he is pretty happy with how the staging tree is working out. He would rather have solid drivers developed, but that doesn't always happen, so the "halfway house" provided by the staging tree is valuable. It allows the community to help improve the drivers, which has been happening, "and it's awesome", he said. But the staging tree is "not a panacea"; it has failed in some places and doesn't solve all the problems.
Making more open documentation available to the community was the topic of another question from the audience. Kridner said that documentation is "fantastic" and that TI is always improving its own, but that personal contact with the engineers involved with a particular feature is even more important. Hohndel said that documentation is "incredibly expensive to produce" and is legally difficult if third parties are involved. Getting a driver is usually easier, because documentation will only be produced once there is a belief that it will bring in more customers.
Grant Likely asked whether hardware companies felt that there was a lack of input from Linux developers, which was something that came up (in the context of disks) at the recently concluded Linux Filesystem, Storage, and Memory Management Summit. Hohndel said that Intel gets a "ton of direct feedback from community members" and was "very happy that we get it". Kridner said that TI gets input on device architectures from a lot of different places, while Charlebois said that QuIC is focused on customer requirements. Kroah-Hartman noted that both Intel and IBM have gotten kernel developers together with their employees at various times, and the kernel hackers will give "unabashed input" on hardware architectures.
There were, of course, several other talks and panels on the first day, and one cannot help but be struck by the number of competing companies that shared the stage. Obviously, Linux and open source are providing not just a platform for those companies to build products with, but also a place for them to collaborate, which shouldn't be much of a surprise at a summit that is targeted at accelerating that process.
Project Harmony decloaks
The "Project Harmony" name has a long and not entirely encouraging history; it is usually applied to projects aimed at fixing obnoxious licensing situations (examples being Qt and Java), and those projects have, on the face of it, failed to achieve their goals. The most recent use of the name looks like a variation on that theme: this project, which seeks to create a set of standard agreements for contributors to open source projects, has been widely derided as a secretive attempt by a specific vendor to push copyright assignment policies on the community. During a session at the Linux Foundation's Collaboration Summit, this project came out and actually showed the world what it has been doing.

Harmony was represented by none other than Allison Randal, perhaps best known for her work on Parrot and her current role as the technical architect of the Ubuntu distribution. Allison has put her hands into the legal realm before; among other things, she played a major role in the creation of version 2 of the Artistic License. She joined Harmony as a community representative prior to taking the job at Canonical and, she said, she is still not representing her employer in this endeavor.
The idea behind Harmony is that contributor agreement proliferation is no better than license proliferation. A lot of the agreements out there are poorly thought out and poorly written; they are also causing developers to sign away a lot of their rights without always knowing what they are agreeing to. Corporate lawyers are also getting tired of having to review a new and different agreement every time an employee wants to participate in a new project. A smaller set of well-understood agreements, it is hoped, would make life easier for everybody involved.
Allison started off by stating that the project recognizes "none of the above" (no contributor agreement at all) as an entirely valid option. The Linux kernel was cited as an example of a successful project without an agreement in place, even though the kernel does, in fact, use the Developer's Certificate of Origin as its contributor agreement. The real point, perhaps, is that the Harmony project does not believe that its agreements are suited to every project out there, and that there will always be reasons for some to use something else.
Assuming that one of the project's agreements is used, contributors (and the projects they contribute to) will agree to a number of conditions, many of which can be thought of as standard. Both sides disclaim any warranty, for example, and the contributor has to certify that he or she actually has the right to contribute the code. There is a "generic" patent grant which applies only to the contributed code itself. After that, though, there are a few options which must be selected for each specific project, starting with how the code is to be contributed. There are two choices here:
- The contributor grants a broad license to the project, but retains ownership of the contribution. The project is able to use the contribution as it sees fit (but see below).
- The contributor transfers ownership of the contribution to the project and gets, in return, a broad license for further use of it. While the contributor no longer owns the work, the back-license allows almost anything to be done with it. There is a fallback clause under this option saying that if an actual transfer of copyright is not possible (not all countries allow that), an open-ended license is granted instead, and the contributor agrees not to sue over anything which cannot be conveyed in the license grant. It is worth noting that the license given back to the contributor, while broad, does not include such an agreement not to sue.
Missing from this list of options is one which says "the project may use this code under the terms of its open source license, and needs neither ownership nor a broader license to do so." Those are the terms under which many projects actually accept contributions. A similar effect can be had with a proper selection among the other set of options, but it's not quite the same.
The second choice controls what the project is allowed to do with the contributions it gets under the agreement. All of the options require the project to release the contribution under the project's license if the contribution is used at all; the sort of "we will ordinarily release your work under a free license" language found in Canonical's contributor agreement is not present here. Beyond that, though, the project can choose among the promises it will make to contributors:
- The project can restrict itself to releasing the work under the original license, with the common "any future version" language being the only exception. This option yields results similar to simply accepting the contribution under that license to begin with; it does not allow any sort of proprietary relicensing.
- The project can specify an explicit list of additional licenses that it may apply to the work.
- There are options which allow relicensing to any license approved by the Open Source Initiative, or any license recommended by the Free Software Foundation. These options would not directly allow proprietary relicensing, but the OSI option, at least, would allow relicensing to a permissive license, which would have very similar effects.
- The final option is the full "we can use it under any license we choose" language, which explicitly allows proprietary licensing.
The agreement also allows the specification of an additional license to apply to the "media" portion of any contribution. The intent here is to allow documentation, artwork, videos, and so on to be licensed under a Creative Commons license or under the GNU Free Documentation License.
After describing the agreements that the project has produced, Allison acknowledged that Harmony "failed" at marketing its work. In an attempt to do better, the project has set up a web site describing the agreements and allowing the public to comment on them. Comments will be accepted through May 6; there will be a meeting held on May 18 at the Open Source Business Conference to discuss the next steps.
The discussion in the room made it clear that there is some marketing work yet to be done. It's still not clear who was participating in the project, and it seems unlikely that the mailing list archives will be made available. There was a strange and tense interlude during which some members of the audience tried to bring out who actually drafted the agreements. It turns out that the Software Freedom Law Center had participated in the discussion, but had explicitly withdrawn (for unstated reasons) from the writing of the legalese.
It eventually came out that the author of the agreements was Mark Radcliffe, who is a bit of a controversial figure in this area; his previous works include Sun's CDDL license, the SugarCRM badgeware license, and the CPAL badgeware license. Bradley Kuhn also said that Mark has defended GPL violators in the past - a history which makes Bradley rather nervous. The unsolved mystery of who actually paid for Mark's time doesn't help either. Ted Ts'o suggested, though, that, now that the agreements are public, they should be read and judged on their own merits rather than by making attacks on the author.
Bradley had another complaint: the current agreements, with all their options, look a lot like the Creative Commons agreements. Some of them are more moral than others, but there is no moral guidance built in, and no way to guide projects away from the least moral alternatives. It will be, he said, confusing for developers. Just as "under a Creative Commons license" says little about what can be done with a work, "a Harmony agreement" does not really, on its own, describe the terms under which contributions are made. It took ten years, he said, to make some sense of the Creative Commons "quagmire"; the Harmony agreements will create the same sort of mess.
Allison responded that we already have that kind of mess, only worse. Ted said that, with the Harmony agreements, at least we'll have a set of reasonably well understood contracts which don't suck. We may disagree about some of the choices, he said, but they will at least be built on a decent base. He expressed hopes that Harmony would help end the problem of developers blindly signing away their rights.
What happens now depends, at least partly, on how the discussion goes; now that comments are in the open, the project will need to either respond to them or risk discrediting the entire process. Anybody who has an interest in this area should probably read the agreements and submit their comments; the alternative is to live with whatever they come up with in the absence of your input.
TXLF: Defining and predicting the mobile "ecosystem"
At Texas Linux Fest (TXLF) on April 2, MeeGo advocate and Maemo Community Council member Randall Arnold took Nokia CEO Stephen Elop to task over his February comments about open source handsets — not on the reversal of strategy signaled by Nokia's deal with Microsoft, but on Elop's assessment of mobile devices as an "ecosystem". The Nokia chief misses the big picture of open source's effect on the ecosystem, Arnold argued, but the open source community hasn't done much to prepare for it, either.
Elop was installed as Nokia CEO in September of 2010, after two years heading up Microsoft's Business Division, which oversees Office and various Microsoft ERP and CRM products. On February 11, 2011, he announced that Nokia was pulling MeeGo from its future product lines (at least in the medium term), and would instead ship devices running Microsoft's Windows Phone 7 (a move predicted by many LWN readers as early as day one of Elop's tenure). Although it received less attention at the time, the February 11 partnership deal also placed Microsoft's Bing as the default search engine on Nokia hardware and Microsoft's adCenter as the mobile advertising service, and tied Nokia's mapping service into Bing.
All the mobile horses
In his talk, Arnold launched the discussion from what he considered two curious statements by Elop in the CEO's justification for the controversial Nokia-Microsoft deal: "The game has changed from a war of devices to a war of ecosystems", and "It's now a three-horse race". Elop's three horses are Apple's iOS, Google's Android, and Microsoft's Windows Phone 7.
That assessment of the marketplace rang false to Arnold, and he displayed a chart of the major mobile operating systems, including iOS, Windows Phone 7, Blackberry, HP's webOS, Android, Symbian, and MeeGo. Each was represented by a bubble roughly corresponding to its relative market share, with the bubbles sorted on the horizontal axis by the openness of the platform. Windows Phone 7 rates solidly among the smallest.
In addition, Arnold said, although historically device makers have favored the more closed platforms because it simplifies the manufacturing and QA process, the chipmakers and carriers are increasingly interested in the open platforms because of the flexibility they can offer to customers. Chipmakers and carriers are driving much of Android's momentum, he said, and chipmakers already account for many of MeeGo's supporters. Considering those factors together, Elop's "three-horse race" comment seems way off base. The stock market did not care for the Nokia-Microsoft deal either, Arnold observed, with Nokia's share price dropping 14% on the news and continuing to slide, which shows that the news was not what the mobile industry wanted to hear.
Arnold then paused to invite feedback from the audience on the bubble chart, both on the size of the various OSes' market share, and their openness. Naturally, considering Google's recent decision to withhold the source code to Android's "Honeycomb" release until an unspecified future drop date, some thought that Android belonged further on the "closed" end of the spectrum. Some also commented that Symbian was no longer open source, because Nokia had re-absorbed the Symbian Foundation. Although at the time the released Symbian source was still under the Eclipse Public License, Nokia has since taken the source code down, and moved Symbian to a proprietary license.
What exactly is an ecosystem, anyhow?
Aside from whether or not there is room for a "fourth horse" in the mobile device race, Arnold continued, the open source community ought to consider what a mobile device "ecosystem" really is, and how openness can be disruptive. As Elop used the term, it seemed to mean, broadly, anyone making money from the platform: device makers, carriers, even independent developers.
Device makers are facing shrinking margins on handsets as hardware prices fall and carriers subsidize phones. That accounts for Elop's decision to abandon MeeGo-based products, Arnold said — Elop simply couldn't see how Nokia could monetize an open platform, and viewed Apple's profitable locked-down iOS with envy.
But closing down the platform doesn't guarantee money either, Arnold observed, not even for the application maker. Apple takes a cut of all non-free apps in its App Store, but an ever-increasing number of those apps are becoming free (or at least, zero-cost). The chief reason, he said, is that independent developers are finding it difficult to compete in a marketplace of 100,000 apps, so they have had to look for other business models.
The most notable is the subscription service model, where the app itself is a free download, but a small fee is required to connect to the remote service on an ongoing basis. Second to that is the fee-based add-on model, where (for example) a game is available as a free download, but additional levels or content are sold.
In neither model is the openness of the underlying platform a particularly large factor. Furthermore, as is the case with "desktop" web services, Amazon (the de-facto web service infrastructure provider) may actually be poised to become the biggest money-maker of all in the mobile ecosystem. Arnold pointed to Amazon's "Cloud Drive" music-storage service, and asked "what's to prevent them from doing the same thing with videos, or games, or anything else?"
The challenge — and the open question to the audience — is how the open source community fits into the picture, at the platform level (with MeeGo, Android, webOS, and other projects), as well as higher in the stack. If, in the long term, web services are where the money is to be made, do the source code and hackability of the system matter? As one audience member added, HTML5 is poised to make the platform irrelevant altogether, even for profitable application categories like games.
Arnold, at least, seemed to think that source code and hackability are important. He is a staunch believer in open platforms, with experience on the mobile side to back it up. He was a QA engineer for Nokia's first Maemo device, the 770 tablet, and part of the launch team for the N800. As he explained to the audience, he wants to see handhelds reach the "commodity" point just as desktop PCs did in the 1980s, and get there faster. "I want to install anything I want to on my hardware", he said, without having to jailbreak it first, and without fear that the "IP police" will come after him.
The "open repository" model used by Maemo — where any project, no matter how alpha-quality, is installable if the user adds the right repository to the package manager — might not work for a lot of casual handheld users. But the "Ubuntu Software Store" implemented in Ubuntu is an example of a user-friendly application store done right; Arnold referred to it as a good "best practices" example.
Feedback
About one third of the session time was reserved for questions and feedback from the audience. As one might expect from a "Linux crowd", the vast majority agreed on the importance of platform openness. One audience member likened the current handset market to the pre-1984 land-line phone market. Before the US government's anti-trust breakup of AT&T, he said, no one could even own their own phone; they had to rent it from the phone company. We are right back in the same position with cell phones today, he observed, and yet no one seems to care.
Where there was less consensus was on Arnold's predictions for the future of the third-party application market and how open source could influence it. Some did not seem convinced that service-based sales were really the future, citing the large volume of app purchases on iOS and Android. Others agreed with that premise, but could not see how the openness of the platform could have an effect. After all, if anyone can develop an HTML5-based application and offer it under proprietary terms, the browser is essentially a "write once, run anywhere" environment that makes the operating system irrelevant.
Arnold responded, in effect, that it was true that paid services would run equally well on closed and open platforms, but that predicting the economics did not make the underlying platform unimportant. Chipmakers and carriers may be pushing for more open platforms for their own reasons, but if the end result is a market filled with PC-like commodity handsets where the user can install any OS he or she wants, the user still wins.
Arnold closed out the session by inviting the audience to send him feedback to further refine the talk for subsequent conferences. As of this week, he said he has already heard from a number of attendees about Symbian and other recent events that may shake up the landscape.
Predicting the future is a risky business, but less so for the open source community than for Stephen Elop. No matter what ships on Nokia's phones in 2011 and 2012, the open source platforms will not disappear. Wherever there is money to be made in the mobile ecosystem, even if the bulk of it is web-delivered services, the cost pressures that come with proprietary platforms will certainly drive more of Nokia's competitors to examine the open alternatives. But now that Nokia's search, advertising, and map services are also intertwined with Microsoft, it may not have as much flexibility to adapt. In the meantime, no one else is standing still waiting to find out — Amazon has opened its own "app store" targeting Android devices, loosening Google's control over the ecosystem surrounding its own product.
Security
MeeGo rethinks privacy protection
Companies operating in the handset market have different approaches to almost everything, but they do agree on one thing: they have seen the security problems which plague desktop systems and they want no part of them. There is little consistency in how the goal of a higher level of security is reached, though. Some companies go for heavy-handed central control of all software which can be installed on the device. Android uses sandboxing and a set of capabilities enforced by the Dalvik virtual machine. MeeGo's approach has been based on traditional Linux access control paired with the Smack mandatory access control module. But much has changed in the MeeGo world, and it appears that security will be changing too.

In early March, the project sent out a notice regarding a number of architectural changes made after Nokia's change of heart; with regard to security, the announcement said that the security architecture would be reexamined.
It appears that at least some of this reexamination has been done; the results were discussed in a message from Ryan Ware, which focused mainly on the problem of untrusted third-party applications.
The MeeGo project, it seems, is reconsidering its decision to use the Smack access control module; a switch to SELinux may be in the works. SELinux would mainly be charged with keeping the trusted part of the system in line. All untrusted code would be sandboxed into its own container; each container gets a disposable, private filesystem in the form of a Btrfs snapshot. Through an unspecified mechanism (presumably the mandatory access control module), these untrusted containers could be given limited access to user data, device resources, etc.
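A rough sketch of the disposable per-application filesystem described above, using the standard btrfs-progs commands; the paths and subvolume names here are purely illustrative, not MeeGo's actual layout:

```shell
# Give an untrusted application a private, throwaway filesystem by
# snapshotting a pristine base subvolume (Btrfs snapshots are cheap,
# copy-on-write copies). Paths are hypothetical.
btrfs subvolume snapshot /srv/apps/base /srv/apps/untrusted-app-42

# ... run the application confined to /srv/apps/untrusted-app-42 ...

# On uninstall (or compromise), discard the snapshot; the base image
# and the rest of the system are untouched.
btrfs subvolume delete /srv/apps/untrusted-app-42
```

The appeal of the design is that any damage an untrusted application does to "its" filesystem disappears with the snapshot.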
It probably surprised nobody that Casey Schaufler, the author of Smack, was not sold on the value of a change to SELinux. This change would, he said, add a great deal of complexity to the system without adding any real security:
The people who built SELinux fell into a trap that has been claiming security developers since computers became programmable. The availability of granularity must not be assumed to imply that everything should be broken down into as fine a granularity as possible. The original Flask papers talk about a small number of well defined domains. Once the code was implemented however the granularity gremlins swarmed in and now the reference policy exceeds 900,000 lines. And it enforces nothing.
Ryan's response was that the existing SELinux reference policy is not relevant because MeeGo does not plan to use it:
What this means is that he is talking about creating a new SELinux policy from the beginning. The success of such an endeavor is, to put it gently, not entirely assured. The current reference policy has taken many years and a great deal of pain to reach its current state of utility; there are very few examples of viable alternative policies out there. Undoubtedly other policies are possible, and they need not necessarily be as complex as the reference policy, but anybody setting out on such a project should be under no illusions that it will be easily accomplished.
The motivation for the switch to SELinux is unclear; Ryan suggests that manufacturers have been asking for it. He also said that manufacturers would be able to adjust the policy for their specific needs, a statement that Casey was not entirely ready to accept:
Ryan acknowledged that little difficulty, but he seems determined to press on in this direction.
The end goal of all this work is said to be preventing the exposure of end-user data. That will not be an easy goal to achieve either, though. Once an application gets access to a user's data, even the firmest SELinux policy is going to have a hard time preventing the disclosure of that data if the application is coded to do so; Ryan has acknowledged this fact. Any Android user who pays attention knows that even trivial applications tend to ask for combinations of privileges (address book access and network access, for example) which amount to giving away the store. Preventing information leakage through a channel like that - while allowing the application to run as intended - is not straightforward.
So it may be that the "put untrusted applications in a sandbox and limit what they can see" model is as good as it's going to get. As Casey pointed out, applications are, for better or worse, part of the security structure on these devices. If an application has access to resources with security implications, the application must implement any associated security policy. That's a discouraging conclusion for anybody who wants to install arbitrary applications from untrusted sources.
Brief items
Security quotes of the week
New vulnerabilities
dhcp: man-in-the-middle attack
Package(s): dhcp
CVE #(s): CVE-2011-0997
Created: April 7, 2011
Updated: May 31, 2011
Description: From the Slackware advisory:

In dhclient, check the data for some string options for reasonableness before passing it along to the script that interfaces with the OS. This prevents some possible attacks by a hostile DHCP server.
ikiwiki: cross-site scripting
Package(s): ikiwiki
CVE #(s): CVE-2011-1401
Created: April 11, 2011
Updated: April 22, 2011
Description: From the Debian advisory:

Tango discovered that ikiwiki, a wiki compiler, is not validating if the htmlscrubber plugin is enabled or not on a page when adding alternative stylesheets to pages. This enables an attacker who is able to upload custom stylesheets to add malicious stylesheets as an alternate stylesheet, or replace the default stylesheet, and thus conduct cross-site scripting attacks.
kdelibs: HTML injection
Package(s): kdelibs
CVE #(s): CVE-2011-1168
Created: April 12, 2011
Updated: May 31, 2011
Description: From the KDE advisory:

When Konqueror cannot fetch a requested URL, it renders an error page with the given URL. If the URL contains JavaScript or HTML code, this code is also rendered, allowing for the user to be tricked into visiting a malicious site or providing credentials to an untrusted party.
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2011-0695 CVE-2011-0716 CVE-2011-1478
Created: April 8, 2011
Updated: September 13, 2011
Description: From the Red Hat advisory:

A race condition was found in the way the Linux kernel's InfiniBand implementation set up new connections. This could allow a remote user to cause a denial of service. (CVE-2011-0695, Important)

A flaw was found in the way the Linux Ethernet bridge implementation handled certain IGMP (Internet Group Management Protocol) packets. A local, unprivileged user on a system that has a network interface in an Ethernet bridge could use this flaw to crash that system. (CVE-2011-0716, Moderate)

A NULL pointer dereference flaw was found in the Generic Receive Offload (GRO) functionality in the Linux kernel's networking implementation. If both GRO and promiscuous mode were enabled on an interface in a virtual LAN (VLAN), it could result in a denial of service when a malformed VLAN frame is received on that interface. (CVE-2011-1478, Moderate)
Kernel: two denial of service vulnerabilities
Package(s): kernel
CVE #(s): CVE-2011-1010 CVE-2011-1090
Created: April 13, 2011
Updated: September 13, 2011
Description: A missing check in the kernel's Mac OS partition support allows an attacker to cause a kernel oops by mounting a maliciously-crafted filesystem (CVE-2011-1010).

A local user can force a kernel panic by way of access control lists on an NFSv4-mounted filesystem (CVE-2011-1090).
libvirt: denial of service
Package(s): libvirt
CVE #(s): CVE-2011-1486
Created: April 8, 2011
Updated: August 4, 2011
Description: From the openSUSE advisory:

libvirtd could mix errors from several threads, leading to a crash.
moonlight: multiple vulnerabilities
Package(s): moonlight
CVE #(s): CVE-2011-0989 CVE-2011-0990 CVE-2011-0991 CVE-2011-0992
Created: April 8, 2011
Updated: April 19, 2011
Description: From the openSUSE advisory:

CVE-2011-0989: modification of read-only values via RuntimeHelpers.InitializeArray
CVE-2011-0990: buffer overflow due to race condition in Array.FastCopy
CVE-2011-0991: use-after-free due to DynamicMethod resurrection
CVE-2011-0992: information leak due to improper thread finalization
php: symlink attack
Package(s): php
CVE #(s): CVE-2011-0441
Created: April 8, 2011
Updated: May 5, 2011
Description: From the Mandriva advisory:

It was discovered that the /etc/cron.d/php cron job for php-session allows local users to delete arbitrary files via a symlink attack on a directory under /var/lib/php.
php: denial of service
Package(s): php
CVE #(s): CVE-2011-1148
Created: April 8, 2011
Updated: February 13, 2012
Description: From the Pardus advisory:

CVE-2011-1148: Use-after-free vulnerability in the substr_replace function in PHP 5.3.6 and earlier allows context-dependent attackers to cause a denial of service (memory corruption) or possibly have unspecified other impact by using the same variable for multiple arguments.
python-feedparser: multiple vulnerabilities
Package(s): python-feedparser
CVE #(s): CVE-2009-5065 CVE-2011-1156 CVE-2011-1157 CVE-2011-1158
Created: April 8, 2011
Updated: August 20, 2012
Description: From the openSUSE advisory:

Various issues in python-feedparser have been fixed, including fixes for crashes due to missing input sanitization and an XSS vulnerability. CVE-2011-1156, CVE-2011-1157, CVE-2011-1158 and CVE-2009-5065 have been assigned to these issues.
rsyslog: multiple vulnerabilities
Package(s): rsyslog
CVE #(s): CVE-2011-1488 CVE-2011-1489 CVE-2011-1490
Created: April 12, 2011
Updated: April 19, 2011
Description: From the openSUSE advisory:

rsyslog was updated to version 5.6.5 to fix a number of memory leaks that could crash the syslog daemon (CVE-2011-1488, CVE-2011-1489, CVE-2011-1490).
shadow: denial of service
Package(s): shadow
CVE #(s): (none)
Created: April 11, 2011
Updated: April 13, 2011
Description: From the Slackware advisory:

Corrected a packaging error where incorrect permissions on /usr/sbin/lastlog and /usr/sbin/faillog allow any user to set login failure limits on any other user (including root), potentially leading to a denial of service. Thanks to pyllyukko for discovering and reporting this vulnerability.
spice-xpi: multiple vulnerabilities
Package(s): spice-xpi
CVE #(s): CVE-2011-0012 CVE-2011-1179
Created: April 8, 2011
Updated: April 15, 2011
Description: From the Red Hat advisory:

An uninitialized pointer use flaw was found in the SPICE Firefox plug-in. If a user were tricked into visiting a malicious web page with Firefox while the SPICE plug-in was enabled, it could cause Firefox to crash or, possibly, execute arbitrary code with the privileges of the user running Firefox. (CVE-2011-1179)

It was found that the SPICE Firefox plug-in used a predictable name for one of its log files. A local attacker could use this flaw to conduct a symbolic link attack, allowing them to overwrite arbitrary files accessible to the user running Firefox. (CVE-2011-0012)
tmux: privilege escalation
Package(s): tmux
CVE #(s): CVE-2011-1496
Created: April 8, 2011
Updated: April 19, 2011
Description: From the Debian advisory:

Daniel Danner discovered that tmux, a terminal multiplexer, is not properly dropping group privileges. Due to a patch introduced by Debian, when invoked with the -S option, tmux is not dropping permissions obtained through its setgid installation.
vlc: arbitrary code execution
Package(s): vlc
CVE #(s): CVE-2010-3275 CVE-2010-3276
Created: April 7, 2011
Updated: April 13, 2011
Description: From the CVE entries:

libdirectx_plugin.dll in VideoLAN VLC Media Player before 1.1.8 allows remote attackers to execute arbitrary code via a crafted width in an AMV file, related to a "dangling pointer vulnerability." (CVE-2010-3275)

libdirectx_plugin.dll in VideoLAN VLC Media Player before 1.1.8 allows remote attackers to execute arbitrary code via a crafted width in an NSV file. (CVE-2010-3276)
vlc: arbitrary code execution
Package(s): vlc
CVE #(s): (none)
Created: April 12, 2011
Updated: April 13, 2011
Description: From the Debian advisory:

Aliz Hammond discovered that the MP4 decoder plugin of vlc, a multimedia player and streamer, is vulnerable to a heap-based buffer overflow. This has been introduced by a wrong data type being used for a size calculation. An attacker could use this flaw to trick a victim into opening a specially crafted MP4 file and possibly execute arbitrary code or crash the media player.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 2.6.39-rc3, released on April 11. Linus said:
And it is possible that this really ended up being a very calm release cycle. We certainly didn't have any big revolutionary changes like the name lookup stuff we had last cycle. So I'm quietly optimistic that no shoe-drop will happen.
The short-form changelog is in the announcement, or see the full changelog for all the details.
Stable updates: no stable updates have been released in the last week. The 2.6.38.3 update is in the review process as of this writing; it can be expected sometime on or after April 14. The 2.6.33.10 and 2.6.32.37 updates are also in the review process, with an expected release on or after April 15.
Quotes of the week
core_internal_state__do_not_mess_with_it is clear enough, annoying to type and easy to grep for. Offenders will be tracked down and slapped with stinking trouts.
Collecting condolences for the family of David Brownell
As many LWN readers will have heard, long-time kernel developer David Brownell recently passed away. His contributions to the code are many, but it is clear that they were outweighed by his contributions to the community. He will be much missed.

A collection point has been set up for condolences to be passed on to David's family. People outside of our community are often not fully aware of the role a loved one plays in the community; this is a chance to let David's family know more about how many lives he touched and how valuable his work was. If you would like to share your memories of David, they may be sent to dbrownell-condolences@kernel.org; from there, they will be passed on to his family.
The native KVM tool
The KVM subsystem provides native virtualization support in the Linux kernel. To that end, it provides a virtualized CPU and access to memory, but not a whole lot more; some other software component is needed to provide virtual versions of all the hardware (console, disk drives, network adapters, etc.) that a kernel normally expects to find when it boots. With KVM, a version of the QEMU emulator is normally used to provide that hardware. While QEMU is stable and capable, it is not universally loved; a competitor has just come along that may not displace QEMU, but it may claim some of its limelight.

Just over one year ago, LWN covered an extended discussion about KVM, and, in particular, about the version of QEMU used by KVM. At that time, there were some suggestions that QEMU should be forked and brought into the kernel source tree; the idea was that faster and more responsive development would result. That fork never happened, and the idea seemed to fade away.
That idea is now back, in a rather different form, with Pekka Enberg's announcement of the "native KVM tool." In short, this tool provides a command (called kvm) which can substitute for QEMU - as long as nobody cares about most of the features provided by QEMU. The native tool is able to boot a kernel which can talk over a serial console. It lacks graphics support, networking, SMP support, and much more, but it can get to a login prompt when run inside a terminal emulator.
Why is such a tool interesting? There seem to be a few, not entirely compatible reasons. Replacing QEMU is a nice idea because, as Avi Kivity noted, "It's an ugly gooball". The kvm code - being new and with few features - is compact, clean, and easy to work with. Some developers have said that kvm makes debugging (especially for early-boot problems) easier, but others doubt that it can ever replace QEMU, with its extensive hardware emulation, in that role. There's also talk of moving kvm toward the paravirtualization model in the interest of getting top performance, but there is also resistance to doing anything which would make it unable to run native kernels.
Developers seem to like the idea of this project, and chances are that it will go somewhere even if it never threatens to push QEMU aside. There are a few complaints about the kvm name - QEMU already has a kvm command and the name is hard to search for anyway - but no alternative names seem to be in the running as of this writing. Regardless of its name, this project may be worth watching; it's clearly the sort of tool that people want to hack on.
Kernel development news
A JIT for packet filters
The Berkeley Packet Filter (BPF) is a mechanism for the fast filtering of network packets on their way to an application. It has its roots in BSD in the very early 1990s, a history that was not enough to prevent the SCO Group from claiming ownership of it. Happily, that claim proved to be as valid as the rest of SCO's assertions, so BPF remains a part of the Linux networking stack. A recent patch from Eric Dumazet may make BPF faster, at least on 64-bit x86 systems.

The purpose behind BPF is to let an application specify a filtering function to select only the network packets that it wants to see. An early BPF user was tcpdump, which used BPF to implement the filtering behind its complex command-line syntax. Other packet capture programs also make use of it. On Linux, there is another interesting application of BPF: the "socket filter" mechanism allows an application to filter incoming packets on any type of socket with BPF. In this mode, it can function as a sort of per-application firewall, eliminating packets before the application ever sees them.
The original BPF distribution came in the form of a user-space library, but the BPF interface quickly found its way into the kernel. When network traffic is high, there is a lot of value in filtering unwanted packets before they are copied into user space. Obviously, it is also important that BPF filters run quickly; every bit of per-packet overhead is going to hurt in a high-traffic situation. BPF was designed to allow a wide variety of filters while keeping speed in mind, but that does not mean that it cannot be made faster.
BPF defines a virtual machine which is almost Turing-machine-like in its simplicity. There are two registers: an accumulator and an index register. The machine also has a small scratch memory area, an implicit array containing the packet in question, and a small set of arithmetic, logical, and jump instructions. The accumulator is used for arithmetic operations, while the index register provides offsets into the packet or into the scratch memory areas. A very simple BPF program (taken from the 1993 USENIX paper [PDF]) might be:
ldh [12]
jeq #ETHERTYPE_IP, l1, l2
l1: ret #TRUE
l2: ret #0
The first instruction loads a 16-bit quantity from offset 12 in the packet to the accumulator; that value is the Ethernet protocol type field. It then compares the value to see if the packet is an IP packet or not; IP packets are accepted, while anything else is rejected. Naturally, filter programs get more complicated quickly. Header length can vary, so the program will have to calculate the offsets of (for example) TCP header values; that is where the index register comes into play. Scratch memory (which is the only place a BPF program can store to) is used when intermediate results must be kept.
The Linux BPF implementation can be found in net/core/filter.c; it provides "standard" BPF along with a number of Linux-specific ancillary instructions which can test whether a packet is marked, which CPU the filter is running on, which interface the packet arrived on, and more. It is, at its core, a long switch statement designed to run the BPF instructions quickly. This code has seen a number of enhancements and speed improvements over the years, but there has not been any fundamental change for a long time.
Eric Dumazet's patch is a fundamental change: it puts a just-in-time compiler into the kernel to translate BPF code directly into the host system's assembly code. The simplicity of the BPF machine makes the JIT translation relatively simple; every BPF instruction maps to a straightforward x86 instruction sequence. There are a few assembly language helpers which help to implement the virtual machine's semantics; the accumulator and index are just stored in the processor's registers. The resulting program is placed in a bit of vmalloc() space and run directly when a packet is to be tested. A simple benchmark shows a 50ns savings for each invocation of a simple filter - that may seem small, but, when multiplied by the number of packets going through a system, that difference can add up quickly.
The current implementation is limited to the x86-64 architecture; indeed, that architecture is wired deeply into the code, which is littered with hard-coded x86 instruction opcodes. Should anybody want to add a second architecture, they will be faced with the choice of simply replicating the whole thing (it is not huge) or trying to add a generalized opcode generator to the existing JIT code.
An obvious question is: can this same approach be applied to iptables, which is more heavily used than BPF? The answer may be "yes," but it might also make more sense to bring back the nftables idea, which is built on a BPF-like virtual machine of its own. Given that there has been some talk of using nftables in other contexts (internal packet classification for packet scheduling, for example), the value of a JIT-translated nftables could be even higher. Nftables is a job for another day, though; meanwhile, we have a proof of the concept for BPF that appears to get the job done nicely.
Explicit block device plugging
Since the dawn of time, or for at least as long as I have been involved, the Linux kernel has deployed a concept called "plugging" on block devices. When I/O is queued to an empty device, that device enters a plugged state. This means that I/O isn't immediately dispatched to the low level device driver; instead it is held back by this plug. When a process is going to wait on the I/O to finish, the device is unplugged and request dispatching to the device driver is started. The idea behind plugging is to allow a buildup of requests to better utilize the hardware and to allow merging of sequential requests into one single larger request. The latter is an especially big win on most hardware; writing or reading bigger chunks of data at a time usually yields good improvements in bandwidth. With the release of the 2.6.39-rc1 kernel, block device plugging was drastically changed. Before we go into that, let's take a historical look at how plugging has evolved.

Back in the early days, plugging a device involved global state. This was before SMP scalability was an issue, and having global state made it easier to handle the unplugging. If a process was about to block for I/O, any plugged device was simply unplugged. This scheme persisted in pretty much the same form until the early versions of the 2.6 kernel, where it began to severely impact SMP scalability on I/O-heavy workloads.
In response to this problem, the plug state was turned into a per-device entity in 2004. This scaled well, but now you suddenly had no way to unplug all devices when going to sleep waiting for page I/O. This meant that the virtual memory subsystem had to be able to unplug the specific device that would be servicing page I/O. A special hack was added for this: sync_page() in struct address_space_operations; this hook would unplug the device of interest.
If you have a more complicated I/O setup with device mapper or RAID components, those layers would in turn unplug any lower-level device. The unplug event would thus percolate down the stack. Some heuristics were also added to auto-unplug the device if a certain depth of requests had been added, or if some period of time had passed before the unplug event was seen. With the asymmetric nature of plugging where the device was automatically plugged but had to be explicitly unplugged, we've had our fair share of I/O stall bugs in the kernel. While crude, the auto-unplug would at least ensure that we would chug along if someone missed an unplug call after I/O submission.
With really fast devices hitting the market, once again plugging had become a scalability problem and hacks were again added to avoid this. Essentially we disabled plugging on solid-state devices that were able to do queueing. While plugging originally was a good win, it was time to reevaluate things. The asymmetric nature of the API was always ugly and a source of bugs, and the sync_page() hook was always hated by the memory management people. The time had come to rewrite the whole thing.
The primary use of plugging was to allow an I/O submitter to send down multiple pieces of I/O before handing it to the device. Instead of maintaining these I/O fragments as shared state in the device, a new on-stack structure was created to contain this I/O for a short period, allowing the submitter to build up a small queue of related requests. The state is now tracked in struct blk_plug, which is little more than a linked list and a should_sort flag informing blk_finish_plug() whether or not to sort this list before flushing the I/O. We'll come back to that later.
    struct blk_plug {
        unsigned long magic;
        struct list_head list;
        unsigned int should_sort;
    };
The magic member is a temporary addition to detect uninitialized use cases; it will eventually be removed. The new API to do this is straightforward and simple to use:
    struct blk_plug plug;

    blk_start_plug(&plug);
    submit_batch_of_io();
    blk_finish_plug(&plug);
blk_start_plug() takes care of initializing the structure and tracking it inside the task structure of the current process. The latter is important to be able to automatically flush the queued I/O should the task end up blocking between the call to blk_start_plug() and blk_finish_plug(). If that happens, we want to ensure that pending I/O is sent off to the devices immediately. This is important from a performance perspective, but also to ensure that we don't deadlock. If the task is blocking for a memory allocation, memory management reclaim could end up wanting to free a page belonging to a request that is currently residing on our private plug. Similarly, the caller may itself end up waiting for some of the plugged I/O to finish. By flushing this list when the process goes to sleep, we avoid these types of deadlocks.
If blk_start_plug() is called and the task already has a plug structure registered, it is simply ignored. This can happen in cases where the upper layers plug for submitting a series of I/O, and further down in the call chain someone else does the same. I/O submitted without the knowledge of the original plugger will thus end up on the originally assigned plug, and be flushed whenever the original caller ends the plug by calling blk_finish_plug(), or if some part of the call path goes to sleep or is scheduled out.
Since the plug state is now device agnostic, we may end up in a situation where multiple devices have pending I/O on this plug list. These may end up on the plug list in an interleaved fashion, potentially causing blk_finish_plug() to grab and release the related queue locks multiple times. To avoid this problem, a should_sort flag in the blk_plug structure is used to keep track of whether we have I/O belonging to more than one distinct queue pending. If we do, the list is sorted to group requests for the same queue together. This scales better than grabbing and releasing the same locks multiple times.
With this new scheme in place, the device need no longer be notified of unplug events. The queue unplug_fn() existed for this purpose alone; it has now been removed. For most drivers it is safe to just remove this hook and the related code. However, some drivers used plugging to delay I/O operations in response to resource shortages. One example of that was the SCSI midlayer; if we failed to map a new SCSI request due to a memory shortage, the queue was plugged to ensure that we would call back into the dispatch functions later on. Since this mechanism no longer exists, a similar API has been provided for such use cases. Drivers may now use blk_delay_queue() for this:
blk_delay_queue(queue, delay_in_msecs);
The block layer will re-invoke request queueing after the specified number of milliseconds have passed. It will be invoked from process context, just as it would have been with the unplug event. blk_delay_queue() honors the queue stopped state, so if blk_stop_queue() was called before blk_delay_queue(), or if it is called after the fact but before the delay has passed, the request handler will not be invoked. blk_delay_queue() must only be used for conditions where the caller doesn't necessarily know when that condition will change states. If resources internal to the driver cause it to need to halt operations for a while, it is more efficient to use blk_stop_queue() and blk_start_queue() to manage those directly.
These changes have been merged for the 2.6.39 kernel. While a few problems have been found (and fixed), it would appear that the plugging changes have been integrated without greatly disturbing Linus's calm development cycle.
LFCS: ARM, control groups, and the next 20 years
The recently held Linux Foundation Collaboration Summit (LFCS) had its traditional kernel panel on April 6 at which Andrew Morton, Arnd Bergmann, James Bottomley, and Thomas Gleixner sat down to discuss the kernel with moderator Jonathan Corbet. Several topics were covered, but the current struggles in the ARM community were clearly at the forefront of the minds of participants and audience members alike.
Each of the kernel hackers introduced themselves, some with tongue planted firmly in cheek, such as Bottomley with a declaration that he was on the panel "to meet famous kernel developers", and Morton, who said he spent most of his time trying to figure out what the other kernel hackers are doing to the memory management subsystem. Bergmann was a bit modest about his contributions, so Gleixner pointed out that Bergmann had done the last chunk of work required to remove the big kernel lock, which was greeted with a big round of applause. For his part, Gleixner was a bit surprised to find out that he manages bug reports for NANA flash (based on a typo on the giant slides on either side of the stage), but noted that he specialized in "impossible tasks" like getting the realtime preemption patches into the mainline piecewise.
There is a "high-level architectural issue" that Corbet wanted the panel to tackle first, and that was the current problems in the ARM world. It is "one of our more important architectures", he said, without which we wouldn't have all these different Android phones to play with. So it is "discouraging to see that there is a mess" in the ARM kernel community right now. What's the situation, he asked, and how can we improve things?
For a long time, the problem in the ARM community was convincing system-on-chip (SoC) and board vendors to get their code upstream, Bergmann said, but now there is a new problem in that they all "have their own subtrees that don't work very well together". Each of those trees is going its own way, which means that core and driver code gets copied "five times or twenty times" into different SoC trees.
Corbet asked how the kernel community can do better with respect to ARM. Gleixner noted that ARM maintainer Russell King tries to push back on bad code coming in, "but he simply doesn't scale". There are 70 different sub-architectures and 500 different SoCs in the ARM tree, he said. In addition, "people have been pushing sub-arch trees directly to Linus", Bergmann said, so King does not have any control over those. It is a consequence of the "tension between cleanliness and time-to-market", Bottomley said.
Gleixner thinks that the larger kernel community should be providing the ARM vendors with "proper abstractions" and that, because they lack a big-picture view, those vendors cannot be expected to come up with those themselves. By and large, the ARM vendor community has a different mindset that comes from other operating systems, where changes to the core code were impossible, so 500-line workarounds in drivers were the norm. Bergmann suggested that the vendors get their code reviewed and upstream before shipping products with that code. Morton said that, as the "price of admission", vendors need to be asked to maintain various pieces horizontally across the ARM trees. Actually motivating them to do that is difficult, he said.
From the audience, Wolfram Sang asked whether more code review for the ARM patches would help. All agreed that more code review is good, but Bottomley expressed some reservations: there are generally only a few reviewers that a subsystem maintainer can trust to spot important issues, so not all code review is created equal. Morton suggested a "review economy", where one patch submitter needs to review the code of another and vice versa. That would allow developers to justify the time spent reviewing code to their managers. But, Bottomley said, "collaborating with competitors" is a hard concept for organizations that are new to open source development.
If a driver looks like one that is already in the tree, it should not be merged; instead, someone needs to get the developers to work with the existing driver, Bergmann said. There is a lot of reuse of IP blocks in SoCs, but the developers aren't aware of it because different teams work on the different SoCs, Gleixner said. The kernel community needs people who can figure that out, he said. Bottomley observed that "the first question should be: did anyone do it before and can I 'steal' it?".
In response to an audience question about the panel's thoughts on Linaro, Bergmann, who works with Linaro, said "I think it's great" with a smile. He went on to say that Linaro is doing work that is closely related to the ARM problems that had been discussed. Getting different SoC vendors to work together is a big part of what Linaro is doing, and "everyone suffers" if that collaboration doesn't happen. "ARM is one of the places where it [collaboration] is needed most", he said.
Control groups
The discussion soon shifted to control groups, with Corbet noting that they are becoming more pervasive in the kernel, but that lots of kernel hackers hate them. It will soon be difficult to get a distribution to boot and run without control groups, he said, and he wondered if adding them to the kernel was the right move: "did we make a mistake?" Gleixner said that there is nothing wrong with control groups conceptually, "just that the code is a horror". Bottomley lamented the code that is getting "grafted onto the side of control groups", as each resource in the kernel that is getting controlled requires reaching into multiple subsystems in rather intrusive ways.
As with "everything that sucks" in the kernel, control groups need to be cleaned up by someone who looks at them from a global perspective; that person will have to "reimplement it and radically modify it", Gleixner said. That is difficult to do because it is both a technical and a political problem, Bottomley said. The technical part is to get the interaction right, while the political part is that it is difficult to make changes across subsystem boundaries in the kernel.
But Morton said that he hadn't seen much in the way of specific complaints about control groups cross his desk. Conceptually, the feature extends what an operating system should do in terms of limiting resources. "If it's messy, it's because of how it was developed" on top of a production kernel that gets updated every three months. Bottomley said that the problem with doing cross-subsystem work is often just a matter of communication, but it also requires someone to take ownership and talk to all of the affected subsystems rather than just picking the "weakest subsystem" and getting changes in through there.
Corbet wondered if the independence of subsystems in the kernel, something that was very helpful in allowing its development to scale, was changing. The panel seemed to think there wasn't much of an issue there; while control groups cross a lot of boundaries, it would be hard to name five other things like that in the kernel, as Bottomley pointed out.
Twenty years ahead
With the 20 year anniversary of Linux being celebrated this year, Jon Masters asked from the audience what things would be like 20 years from now. Bottomley promptly replied that four-fifths of the panel would be retired, but Gleixner expected that the 2038 bug would have brought them all back out of retirement. Morton said that unless some kind of quantum computer came along to make Linux obsolete, it would still be there in 20 years. He also expected that the first thing to be done with any new quantum computer would be to add an x86 emulation layer.
When Corbet posited that perhaps the realtime preempt code would be merged by then, Gleixner made one of his strongest predictions yet for merging that code: "I am planning to be done with it before I retire".
More seriously, he said that it is on a good track, he has talked to the
relevant subsystem maintainers, and is optimistic about getting it all
merged—eventually.
In 20 years, the kernel will still be supporting the existing user-space interfaces, Corbet said. He quoted Morton from a recent kernel mailing list post: "Our hammer is kernel patches and all problems look like nails", and wondered whether there was a problem with how the kernel hackers developed user-space interfaces. Morton noted that the quote was about doing more pretty printing inside the kernel, which he is generally opposed to. It has been done in the past because it was difficult for the kernel hackers to ship user-space code that would stay in sync with kernel changes. But perf has demonstrated that the kernel can ship user-space code, which could be a way forward.
Gleixner noted that there was quite a bit of resistance to shipping perf, but that it worked out pretty well as a way to "keep the strict connection between the kernel and user space". Perf is meant to be a simple tool to allow users to try out perf events gathering, he said, and people are building more full-blown tools on top of perf. Having tools shipped with the kernel allows more freedom to experiment with the ABI, Bottomley said. Morton said that there needs to be a middle ground, noting that Google had a patch that exported a procfs file that contained a shell script inside.
Ingo Molnar recently pointed out that FreeBSD is getting Linux-like quality with a much smaller development community and suggested that it was because the user space and kernel are developed together. Corbet asked whether Linux was holding itself back by not taking that route. Bottomley thought that Molnar was "both right and wrong", and that FreeBSD has an entire distribution in its kernel tree. "I hope Linux never gets to that", he said.
From perf to control groups, FreeBSD to ARM, as usual, the panel ranged over a number of topics in the hour allotted. The format and participants vary from year to year, but it is always interesting to hear what kernel developers are thinking about issues that Linux is facing.
Patches and updates
Kernel trees
Architecture-specific
Build system
Core kernel code
Device drivers
Filesystems and block I/O
Memory management
Networking
Page editor: Jonathan Corbet
Distributions
First Look at Elementary OS
What started out as a theme and set of icons for Ubuntu has snowballed into a full-fledged Linux distribution called Elementary OS. The project, kicked off by Daniel Fore, has pushed out the beta of its first release, code-named Jupiter. Based on Ubuntu 10.10, Jupiter is not perfect, but it does show promise and hints at better things to come.
Elementary is taking its release names from ancient mythologies, hence "Jupiter," for the first release. Jupiter, as many geeks will already remember, was the Roman analog to Zeus — the ruler of the gods. An ambitious codename for the first release of a new OS, no doubt.
Ubuntu-based distributions are not exactly rare. What makes Elementary different is the project's effort to create its own application stack in addition to the distribution. Whereas Canonical has largely been content to package applications developed by third parties, the Elementary team is creating its own mail client (Postler), contact manager (Dexter), and dictionary and thesaurus application (Lingo) in the first release, and is working on additional applications and a desktop environment for future releases.
However, Elementary OS is not terribly far removed from Ubuntu 10.10. It's using the same installer and draws on Ubuntu's package repos for the most part — though it also uses a few Elementary project Personal Package Archives (PPAs). The Elementary folks are using Launchpad for their development and coordination.
Using Elementary OS
Once Elementary OS is installed, there's a distinct resemblance to Mac OS X. Elementary uses Docky for the desktop dock and application launcher, though it also has the standard top GNOME Panel with the Applications, Places, and System menus. Unlike Unity for Ubuntu 11.04, there's no effort (currently) to emulate Mac OS X's menu placement — where the File, Edit, and assorted other menus are located in the top panel rather than the application windows themselves.
Though Elementary is using GNOME 2.32, it's a bit different than you'd find on Ubuntu 10.10 or Linux Mint. For example, the actual desktop — where one would usually find folders and the trash icon, etc., is totally bare. There is a Desktop directory, but nothing placed there will display on the desktop itself. You can't right-click on the desktop to change wallpaper or anything like that, either.
The GNOME Panel is locked down, so that right-clicking on the panel only provides the Help and About Panels options. You can't move the menus or system tray, or add any apps to the panel. Docky, likewise, is locked down so that the Docky icon does not display and its configuration options are not readily visible. The FAQ does provide a link to a customization guide (PDF) with instructions to use gconf-editor to unlock the panel and so on. You can also use gconf-editor to set Nautilus to allow managing the desktop.
Nautilus has been reworked and re-themed for Elementary OS. It's not radically different, but it does look like the Elementary gang is trying hard to replicate the Mac OS X look and feel. The menu for Nautilus is hidden by default, so all that's available are the navigation icons, the search icon, and the path indicator, which shows the present working directory with buttons for each parent directory.
The default application set is much different than you'll find with Ubuntu 10.10. In addition to the aforementioned custom applications, Elementary includes AbiWord and Gnumeric rather than shipping the OpenOffice.org or LibreOffice office suites. Currently there's no presentation application offered by default.
Elementary uses the Midori Web browser, which is based on WebKit and GTK+. Like Ubuntu, Elementary ships with Shotwell for photo management, and Empathy for instant messaging.
That leaves a few things missing, such as a music player and IRC client. Cassidy James, of the Elementary OS team, says that the effort is to aim for the "typical" computer user, which explains the absence of the IRC client. What about the music player? James says that the project is developing its own music player and didn't want to ship an existing player for the Jupiter release and a different player for the next one.
Elementary OS is a good first effort for the project. After using Jupiter for a few days, with the default applications, I ran into a few frustrations and areas where the default applications don't seem quite ready for prime time — but it's not bad.
Postler, for example, is a nice and simple mail client. It is probably very well suited for users who have a low volume of email — though it has some rough edges and it's certainly not going to make power users very happy. It's very limited in its feature set. It offers no filtering features, for example, and doesn't handle poorly formed reply-to addresses at all well. It does, however, have a full complement of keyboard shortcuts for standard actions (reply, forward, and so on).
I didn't make much use of Dexter while using Elementary OS — but I did note that it's currently not well-integrated with Postler. Again, it bears a striking resemblance to an Address Book application from a certain company in Cupertino.
Naturally, I spent much of my time using Midori. It seems quite fast and usable for about 90 to 95% of the sites that I use on a daily basis, though I found that it didn't quite work perfectly with some sites that make heavy use of JavaScript. Some functions just didn't seem to work, like posting a video or link in Facebook or exporting my address book from Gmail — though those functions work fine in Firefox and Chrome. Due to privacy concerns, the Midori project uses DuckDuckGo as the built-in search in its unified search bar / location bar. You can enable a separate search field, similar to Firefox's, and set Google or another engine as the default there — but it doesn't seem possible to change the default in Midori to Google. You can preface a search with "g" to indicate that Midori should use Google for a search.
It is important to remember, though, that this is a beta release. I expect a few of the rough edges that I encountered will be smoothed over before it's an official and final release. Also, what may stand out to one user (me) as annoyances or non-standard are not necessarily going to be deal-breakers or even noticeable to the stated target audience of the "average" user. A free OS that looks and feels much like the Mac might appeal quite a bit to a segment of the desktop market that isn't quite looking to invest in Mac hardware but would like to move away from Windows, for example.
And Jupiter is merely a stepping stone to what James suggested would be more ambitious releases in the future.
Why go to all this effort? James says that the team is trying to provide "the best, simplest, and most polished open computing experience possible. We've obviously made some decisions to simplify and streamline the interface through our apps and entire computing experience..." Indeed, the attempts to simplify even the GNOME interface are apparent. In a short interview after the introduction of Postler, the lead designer (Dan Rabbit) said that he was consciously trying to avoid "needless clutter and useless features that plague most of the current crop of desktop-based e-mail clients" to give users "what they really want: an e-mail client that does e-mail."
While it's quite likely that the average LWN reader would disagree that many of these features are "needless," most LWN readers don't qualify as "typical" users either.
Roadmap
The Elementary OS site is still under heavy development, and there's not a lot online to indicate where the project is going or when the next release will be coming down the pike.
James says that the Jupiter+1 release (J+1) will be based on Ubuntu 11.04, but after that it may not track Ubuntu releases: "It's important to note that elementary does not, however, intend to always be an 'Ubuntu spin.'" He says that a number of applications and a full desktop environment are in the works for future releases.
Most of the work is being done on Launchpad, and while there's little to see on the project pages, the work in progress includes elements of the new desktop environment, called Pantheon, a new control center called Switchboard, and a music player called Beatbox. Some of the apps in progress are little more than mockups or plans at this point, however, so one might be justified in being a little skeptical until these things show up in a release.
If you like what you see and want to get involved, James suggests checking into the #elementary IRC channel on Freenode, which does seem to be fairly active. At this time, there's little on the Elementary Web site for developers or contributors looking to become involved and no mailing lists to join.
The project is also taking donations and users can order CDs to support the project as well. Where's the money going? James says that "money primarily pays for the creation and distribution of CDs, along with the recurring costs of our web servers. Any additional money goes into a private fund set aside for elementary. Our council votes on all major issues, including both monetary and non-monetary decisions." Who's the council? Elementary founder Fore, and "some lead developers and team members." James says that the site could do with more information on the project's governance, and is passing that on to the Web team to "see what we can do".
Though information is sparse, Elementary OS looks so far like it has some momentum behind it. The first effort is not perfect, but it's not bad either. What's on the drawing board looks like it could be very interesting, so it might be worth keeping an eye on Elementary OS over the next few months. The lack of information about roadmaps, getting involved, and governance is worrisome, but if the project can solve those issues quickly it might do well in attracting contributors and new users.
Brief items
Distribution quotes of the week
Release for CentOS-5.6 i386 and x86_64
CentOS 5.6 has been released. See the announcement (click below) or the release notes for more details.
CyanogenMod 7.0 released
CyanogenMod 7.0 is out. "CM7 is based on the 2.3.3 (Gingerbread) release of Android from Google. We've added most of the great features from CM6 you know and love, and many new ones including support for several tablets. We are currently providing support for 30 devices! I continue to be amazed with this community and the dedication of everyone involved." CyanogenMod is an alternative Android distribution; LWN reviewed this release in March.
Qubes beta 1 released
The first beta of the Qubes distribution, which focuses on providing security by running everything in virtual machines, has been released. New features include a better template sharing mechanism, a standalone virtual machine feature, a "reasonably complete" user guide, and more. Worth noting is that plans for this distribution call for splitting into separate "commercial" and "open source" branches after the 1.0 release. (LWN reviewed Qubes last May.)
Slackware 13.37 RC 4.6692
The April 8 entry in the Slackware-current changelog announces the final release candidate for v13.37 (x86, x86_64). "One more. We'll call this 13.37 RC 4.6692. Thanks to Nicola for suggesting the first Feigenbaum constant could be useful since we used pi, and it's too late for e. This is pretty much it, but last call for any showstoppers."
Yocto 1.0 Released
The Yocto 1.0 release is out. "This release provides many improvements to the build system, developer interface, and individual board support packages (BSPs) with support for ARM, MIPS, PPC, x86, and x86_64 systems, including virtual machines." More information can be found in the release notes.
Distribution News
Debian GNU/Linux
Bits from the 4th Debian Groupware Meeting
The Debian Groupware Meeting was recently held in Essen, Germany. Click below for a short summary. "We pushed new versions of icedove and iceowl to unstable fixing several issues with gnome-shell in the later. Unstable now ships packages based on Thunderbird's current 3.1 branch."
Fedora
FUDCon EMEA 2011 will be held in Milan, Italy
The next FUDCon EMEA will be held in Milan, Italy. The date is yet undetermined.
Results of Fedora 16 Release Name Voting
The voting is over for the Fedora 16 release name and the winner is "Verne".
SUSE Linux
Novell Offers Enterprise Linux Support Program for SLES 10
Novell has announced the expansion of its Long Term Service Pack Support (LTSS) program for SUSE Linux Enterprise Server. Novell also released SLES 10 Service Pack 4, which bundles all previously released patches, fixes and updates for SUSE Linux Enterprise 10.
Ubuntu family
Ubuntu 8.04 reaches end-of-life on May 12 2011
Support for Ubuntu 8.04 LTS Desktop edition will end on May 12, 2011. "The supported upgrade path from Ubuntu 8.04 Desktop is via Ubuntu 10.04 Desktop."
11.10 Ubuntu Release - Call for Topics
Ubuntu release manager Kate Stewart has issued a call for topics for Oneiric Ocelot (11.10). "We will have a release feedback session again, early in UDS, to go over what worked well, and what can be improved for Oneiric. However there may be some topics that are wider in scope than that one feedback session."
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 400 (April 11)
- Fedora Weekly News Issue 270 (April 6)
- openSUSE Weekly News, Issue 170 (April 9)
7.5 Reasons to Look Forward to Fedora 15 (Linux.com)
Joe 'Zonker' Brockmeier takes a look at some of the features to be found in Fedora 15, including systemd, GNOME 3, KDE 4.6, BoxGrinder, Dynamic Firewall, new SELinux troubleshooting GUI, and RPM 4.9. "And the half point goes to support for 4kB sector disk boot support. Why does this matter, and why does it only get a measly half of a point? This is a feature that's really going to be irrelevant to the vast majority of Fedora users right now, but it likely to be important for Fedora users in the not too far future. Specifically, this feature will bring support for drives with 4kB sectors, which are possible to use with UEFI machines. There's not a lot of these on the market right now — but when they do hit, Linux will likely be first in line to support them. So the chances you'll need this feature in Fedora 15, or even Fedora 16, aren't too likely — but it's a very good thing to have for the users and developers who do have access to those machines. By the time they become mainstream, Linux should work without a glitch."
Page editor: Rebecca Sobol
Development
Drupal Government Days: Governments contributing back
The open source (GPLv2) content management system Drupal is used for governmental web sites in more than 120 countries, as well as many intergovernmental organizations and local governments. The Drupal Government Days in Brussels was the first conference where government representatives, Drupal users, and developers had the opportunity to meet and swap ideas. Your author was present at the first two days, which had sessions about solutions for governments.
Co-organizer Bart Van Herreweghe, internet communication expert at the Chancellery of the Prime Minister of the conference's guest country, Belgium, welcomed the attendees with the message that the goals of the event were not only to inform about the use of Drupal in governments, but also to encourage sharing source code of Drupal sites between authorities. In his keynote speech, Drupal founder Dries Buytaert mentioned some well-known examples of government sites running on Drupal: The White House, the official portal of the French government, the website of London, and so on.
The success of Drupal in governments
"Why did Drupal succeed in so many governments?" was the key question Buytaert asked in his presentation, and he sees a couple of reasons for this success. Obviously the fact that Drupal is open source is a big advantage, but so is the community: the Drupal community has no shortage of developers who can build and maintain web sites. Another factor, according to Buytaert, is that Drupal is much more than a CMS: it's a platform with a modular design and over 7,000 community-contributed add-ons to extend its functionality. "Also, the advantage of using one standardized platform for all your web sites over having to combine various platforms is obvious," Buytaert said.
But the success of Drupal in governments also depends on some accidental circumstances, such as the support of some influential people in governments. This started with Howard Dean, who used Drupal in his 2004 US presidential campaign. This was one of Drupal's major growth points: thanks to the internet support he raised with his Drupal site, the underdog Dean raised $3 million in a week. "This gave a lot of visibility and credibility to Drupal as a platform," Buytaert said, "especially for political and governmental use."
The Drupal-based DeanSpace CMS later became the CivicSpace distribution, and subsequently its features (focused on organizing political and other events) have been incorporated into Drupal 5.0. The CRM side has been offloaded to a separate set of modules, the AGPL3-licensed CiviCRM, which was SourceForge's project of the month for January 2011. As you can see here, Dean's developers not only used Drupal, but also contributed back to it.
Another important person behind the success of Drupal in governments is Vivek Kundra, the first Chief Information Officer of the United States of America. He was responsible for a couple of high-profile government web sites built atop Drupal. About the whitehouse.gov site mentioned above, Buytaert said: "This web site is really a testament to the scalability and security of Drupal, as you can imagine that it attracts a lot of welcome and unwelcome visitors. It also was a catalyst for Drupal use in the US government." So all in all, Drupal is having a profound impact in the US on how politicians and governments communicate and engage with their citizens.
High-profile Drupal sites open sourced
Jeff Walpole, CEO of Phase2 Technology, has been responsible for many Drupal-based solutions for the US government, including whitehouse.gov, and at Drupal Government Days he talked about what open source in general, and Drupal specifically, can mean for government agencies. Some important points he highlighted were freedom, innovation, speed, reusability, and cost.
Referring to the US government's Open Government Initiative, an effort by the Obama administration to create more openness in the government, Walpole said that Drupal is central in a modern OpenGov stack. The Open Government Directive lists some requirements that fit naturally into Drupal, such as open data sets and mechanisms for public feedback. Drupal use in the US government has trickled down from the top agencies and high-profile sites to smaller web sites. "We are lucky that Drupal has been embraced at the top first, which gave it the visibility and credibility it needed," Walpole said.
One of these high-profile Drupal sites that Phase2 Technology built for the government is the IT Dashboard, which shows details of federal information technology investments. At the end of March, the White House released the code of the IT Dashboard to the public. The White House blog post, written by Vivek Kundra, mentions two reasons to open source the code.
The code of the IT Dashboard can be found on SourceForge in a Subversion repository, and it is distributed under the GPLv2. The open source version differs from the production version in that it uses the FusionCharts free library instead of the enterprise version, and of course it comes pre-populated with only sample data. Some of the custom Drupal modules that this code has are a Data Controller module that supports aggregations and efficient query generation for multiple data sources, a PHP Dataset module that describes data sets for views using custom PHP code, and the FusionCharts display style plugin to render output as a FusionChart graph. More information about the IT Dashboard showcase can be found in the Drupal forum and on Civic Commons.
An open senate
Noel Hidalgo talked about the website of the New York State Senate, a complex Drupal site that hosts 62 Senator websites, 44 Committee sites, the Senate's Open Data portal, and the Senate-wide calendar with integrated live streams and archived video. The focus of the project was that it should be collaborative, transparent, and participatory. "We adopted Drupal because it's malleable and flexible," Hidalgo said; as one example, it was quite straightforward to add an OpenStreetMap map.
The web site has a whole developer page with information about the technologies and APIs used. Moreover, the web site also publishes its source code, using the GPLv3 and the BSD licenses. All source code can be found on the GitHub account of the NY Senate CIO Office. This includes the Drupal code for NYSenate.gov, ready to be used for other government web sites. Although many of these modules are specifically designed for the NY State Senate's content types and policy-driven needs, some of them could provide more general functionality.
Some of the more general modules that can be used as-is on other Drupal web sites are released separately. For instance, this holds for Whitelist, an input filter that disables HTML forms whose domains are not part of an approved whitelist. The developers have also released the source code of the mobile applications (for Android and the iPhone/iPod Touch and for the iPad) that belong to the site, as well as a lot of other accompanying projects. Overall, the NYsenate.gov web site looks like a model for the web sites of world-wide parliaments.
Drupal distributions for governments
In his keynote speech, Buytaert also said that he expects a lot from specialized Drupal distributions. A distribution in this context is made of the Drupal core, extended with specific modules, a specific configuration, a theme, and documentation. Distributions make it easier to install Drupal on a web server for a specific purpose. A Drupal distribution is much more a product than the platform Drupal is, and Buytaert calls distributions accelerators for Drupal adoption.
One of these specialized Drupal distributions for governments is OpenPublic, presented at Drupal Government Days by Jeff Walpole and Ivo Radulovski. The latter mentioned that the many good examples of Drupal use in the US make the decision to use Drupal, and open source in general, much easier for other governments. OpenPublic was initiated by Phase2 Technology, built upon the work it has done on some of the aforementioned web sites, to provide an open source product for building government web sites on Drupal. When installed, it already comes with many features built in to accommodate government needs, such as security, privacy, citizen engagement, and feedback.
The initial alpha release (dated February 2011) provides a basic site structure, a proof of concept for compliance considerations (such as the Section 508 Accessibility guidelines), and some basic functionality such as blogs, press releases, events, documents, photo and video galleries, along with a staff directory and contact profiles. Some of the proposed new features are enhancements to the administrative interface and map visualization. According to Walpole, one of the long-term goals is to have an OpenPublic web site up and running on an infrastructure/cloud provider within an hour. ProPeople, the company where Ivo Radulovski is CEO, also offers an additional OpenPublic theme for European governments, OpenPublic Europe.
Towards even better government web sites
The US government has clearly been a forerunner in the use of Drupal for government web sites, and its example has been followed by many European governments in recent years. What is more interesting is not just that all of these governments are using Drupal, but that many of these projects are contributing their custom modules back to the Drupal community, or are even building specialized Drupal distributions. All of this will surely make Drupal an even better platform for building government web sites in the future.
Brief items
Flock browser project shutting down
LWN reported on the Flock browser - a Firefox derivative aimed at social network integration - in late 2005. We noted that the project's business model was not entirely clear, but the project persisted (with diminishing visibility) for over five years. That has come to an end: visitors to the Flock web site now get a notice that support for Flock will end on April 26. Users are advised to move to Chrome or Firefox.
GottenGeography 1.0
GottenGeography is a graphical tool which performs the geotagging of photographs using either GPS data or manual positioning in a map window. The 1.0 release is now available; see the tutorial for an overview of the application and what it can do.
LLVM 2.9 released
Version 2.9 of the LLVM compiler suite is out. "Some of the major features include integrated assembler support for ELF targets (allowing direct object file writing), substantial improvements for Link Time Optimization (LTO) which make it build faster and able to compile large apps like Firefox 4, automatic recognition of memset and memcpy loops, debugging optimized code improvements, infrastructure for region based optimizations, better use of condition code registers, and progress on a major register allocator rewrite." See the release notes for more information.
nginx 1.0.0 released
Version 1.0.0 of the nginx HTTP and mail proxy server has been released. "nginx development was started about 9 years ago. The first public version 0.1.0 has been released on October 4, 2004. Now W3Techs reports that 6.8% of the top 1 million sites on the web (according to Alexa) use nginx. And 46.9% of top Russian sites use nginx." This server seems to have an active community of satisfied users who find it faster and easier to deal with than some of the alternatives. Some change information can be found in the changelog.
OpenOffice.org 3.4 Beta released
The OpenOffice.org 3.4 beta release is available; the "testing period" for this release lasts through May 2. Some moderately inaccessible information on what's in the 3.4 release can be found in the release notes and the feature page.
Newsletters and articles
Development newsletters from the last week
- Caml Weekly News (April 12)
- OpenOffice.org Newsletter (April)
- PostgreSQL Weekly News (April 10)
- Python-URL! (April 9)
A shiny new ornament for your Linux lawn (ars technica)
Ryan Paul reviews GNOME 3.0. "The effort to deliver GNOME 3.0 has a long history. It took the developers years to reach a consensus about how to proceed with the new version, and years more to implement it. The protracted development period has largely paid off in stability and coherence. It's fit for duty out of the starting gate, though there is still plenty of room for further improvement."
IBM Launches Maqetta HTML5 Tool (eWeek)
eWeek reports on IBM's announcement that it has released Maqetta, an HTML5 authoring tool, as free software. "The primary target users for Maqetta are user-experience designers, or UXD, within enterprise development teams, with the goal of improving overall team efficiency around HTML5 application development. To support enterprise team development, Maqetta's extensible architecture allows plug-in widget libraries and plug-in CSS styling themes, including company-specific libraries and themes."
systemd for Administrators, Part VI
Lennart Poettering presents part 6 of the series 'systemd for Administrators'. This one looks at chroot() environments. "One of the big advantages of systemd is that all daemons are guaranteed to be invoked in a completely clean and independent context which is in no way related to the context of the user asking for the service to be started. While in sysvinit-based systems a large part of the execution context (like resource limits, environment variables and suchlike) is inherited from the user shell invoking the init skript, in systemd the user just notifies the init daemon, and the init daemon will then fork off the daemon in a sane, well-defined and pristine execution context and no inheritance of the user context parameters takes place."
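The unit-file fragment below illustrates that point; it is a hypothetical example (the service name and paths are invented), but `RootDirectory=`, `Environment=`, and `LimitNOFILE=` are real systemd directives. Everything about the daemon's context is declared in the unit, so nothing is inherited from whichever shell happened to ask for the service:

```ini
# /etc/systemd/system/example-daemon.service (hypothetical)
[Unit]
Description=Example daemon started in a pristine, chroot()ed context

[Service]
# chroot() into this directory before ExecStart runs (the topic of Part VI)
RootDirectory=/srv/example-chroot
ExecStart=/usr/bin/example-daemon
# Environment and resource limits are set declaratively, not inherited
Environment=LANG=C
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target
```

Whether the service is started at boot or by an administrator at a shell, systemd forks it off from the same clean, well-defined context described in the unit file.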
Poettering: systemd for Administrators, Part VII
Lennart Poettering continues the series with a look at systemd-analyze blame and systemd-analyze plot. "Now, before you now take these tools and start filing bugs against the worst boot-up time offenders on your system: think twice. These tools give you raw data, don't misread it. As my optimization example above hopefully shows, the blame for the slow bootup was not actually with udev-settle.service, and not with the ModemManager prober run by it either. It is with the subsystem that pulled this service in in the first place. And that's where the problem needs to be fixed. So, file the bugs at the right places. Put the blame where the blame belongs."
Page editor: Jonathan Corbet
Announcements
Brief items
Groklaw shutting down in May
Pamela Jones has announced that Groklaw will stop publishing articles on May 16. "I know a lot of you will be unhappy to hear it, so let me briefly explain, because my decision is made and it's firm. In a simple sentence, the reason is this: the crisis SCO initiated over Linux is over, and Linux won. SCO as we knew it is no more. There will be other battles, and there already are, because the same people that propped SCO up are still going to try to destroy Linux, but the battlefield has shifted, and I don't feel Groklaw is needed in the new battlefield the way it was in the SCO v. Linux wars." Pamela, you did great work; we hope your next project is as fruitful and satisfying.
Open Standards law approved in Portugal
The Portuguese Parliament has approved a law for the adoption of Open Standards on public IT systems.
Articles of interest
Nokia transitions Symbian source to non-open license (ars technica)
Ryan Paul reports that Nokia is transitioning Symbian away from an open source software model. "It's possible that Nokia has given up on using the open EPL license because moving the development in-house has made the boundary between the company's own proprietary bits and the underlying platform rather blurry. It's extremely unfortunate that this model will effectively prevent Nokia's Symbian code base from going off into the sunset as an open project that can be repurposed by the remaining Symbian enthusiasts. It's also disappointing that Nokia doesn't seem to care anymore. After spending hundreds of millions of euros and many years of effort to be able to distribute the code under the EPL, it seems absurd to throw it all away and revert to a license that imposes bizarre restrictions on source code access."
Project Harmony opens website (The H)
Project Harmony is a group focused on creating contributor agreements for free and open source software. The opening of its website is covered at The H. "Project Harmony positions itself as an organisation that will create a standard suite of language for contribution agreements both between individuals and between companies. A number of "alpha" level agreements are already available to review and the project is inviting feedback. The current phase of the review process runs until 6 May. To assist reviewers, there is also a guide to the agreements and FAQ pages on Harmony and the agreements themselves."
Randal: A Brief History of Harmony
Here's a first-person account of the Harmony Project as told by Allison Randal. "So, I started asking tough questions, and what I found was both better and worse than I expected. I found that no one at Canonical had a bizarre agenda to force copyright assignment on the world. I also found that Canonical had an interest in replacing their current contributor agreement with a Harmony one, and that 'success' for them was measured in community-driven, community-approved, and community-adopted agreements. All good. I also found that Harmony was pretty much stalled, all meetings on hold, waiting on a draft with some changes requested by the Harmony group (substantial changes, but shouldn't have taken terribly long). Not good."
The New Commodore 64, Updated With Its Old Exterior (New York Times)
The New York Times covers the return of the Commodore 64. "The new Commodore 64, which will begin shipping at the end of the month, has been souped up for the modern age. It comes with a 1.8 gigahertz dual-core processor, an optional Blu-ray player and built-in ethernet and HDMI ports. It runs the Linux operating system but the company says you can install Windows if you like. The new Commodore is priced between $250 to $900." (Thanks to Pete Link)
New Books
Two New Titles on Asterisk from O'Reilly Media
O'Reilly Media has released "Asterisk Cookbook", by Leif Madsen and Russell Bryant, and "Asterisk: The Definitive Guide, Third Edition", by Leif Madsen, Jim Van Meggelen and Russell Bryant.
Contests and Awards
Celebrating 20 Years of Linux: Video contest
The Linux Foundation has announced its annual Linux Foundation Video Contest. "The theme for this year's contest is dedicated to the 20th Anniversary of Linux, and Linus Torvalds will choose the winner among community favorites."
Calls for Presentations
DebConf11 registration and call for contributions
DebConf11 will take place in Banja Luka, in Republika Srpska, Bosnia and Herzegovina, July 24-30, 2011. DebConf will be preceded by DebCamp, July 17-23, 2011. The call for participation is open until May 8, 2011, as is the sponsored registration deadline.
1st Call For Papers, 18th Annual Tcl/Tk Conference 2011
The annual Tcl/Tk Conference (Tcl'2011) will be held in Manassas, Virginia, October 24-28, 2011. Abstracts and proposals are due by August 26, 2011.
Upcoming Events
2011 Red Hat Summit and JBoss World keynotes
Red Hat has announced the keynote speakers for Red Hat Summit and JBoss World. The conference will be held May 3-6, 2011 in Boston, MA.
Events: April 21, 2011 to June 20, 2011
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| April 25-26 | WebKit Contributors Meeting | Cupertino, USA |
| April 26-29 | OpenStack Conference and Design Summit | Santa Clara, CA, USA |
| April 28-29 | Puppet Camp EU 2011: Amsterdam | Amsterdam, Netherlands |
| April 29 | Ottawa IPv6 Summit 2011 | Ottawa, Canada |
| April 29-30 | Professional IT Community Conference 2011 | New Brunswick, NJ, USA |
| April 30-May 1 | LinuxFest Northwest | Bellingham, Washington, USA |
| May 3-6 | Red Hat Summit and JBoss World 2011 | Boston, MA, USA |
| May 4-5 | ASoC and Embedded ALSA Conference | Edinburgh, United Kingdom |
| May 5-7 | Linuxwochen Österreich - Wien | Wien, Austria |
| May 6-8 | Linux Audio Conference 2011 | Maynooth, Ireland |
| May 9-11 | SambaXP | Göttingen, Germany |
| May 9-10 | OpenCms Days 2011 Conference and Expo | Cologne, Germany |
| May 9-13 | Linaro Development Summit | Budapest, Hungary |
| May 9-13 | Ubuntu Developer Summit | Budapest, Hungary |
| May 10-13 | Libre Graphics Meeting | Montreal, Canada |
| May 10-12 | Solutions Linux Open Source 2011 | Paris, France |
| May 11-14 | LinuxTag - International conference on Free Software and Open Source | Berlin, Germany |
| May 12 | NLUUG Spring Conference 2011 | ReeHorst, Ede, Netherlands |
| May 12-15 | Pingwinaria 2011 - Polish Linux User Group Conference | Spala, Poland |
| May 12-14 | Linuxwochen Österreich - Linz | Linz, Austria |
| May 16-19 | PGCon - PostgreSQL Conference for Users and Developers | Ottawa, Canada |
| May 16-19 | RailsConf 2011 | Baltimore, MD, USA |
| May 20-21 | Linuxwochen Österreich - Eisenstadt | Eisenstadt, Austria |
| May 21 | UKUUG OpenTech 2011 | London, United Kingdom |
| May 23-25 | MeeGo Conference San Francisco 2011 | San Francisco, USA |
| June 1-3 | Workshop Python for High Performance and Scientific Computing | Tsukuba, Japan |
| June 1 | Informal meeting at IRILL on weaknesses of scripting languages | Paris, France |
| June 1-3 | LinuxCon Japan 2011 | Yokohama, Japan |
| June 3-5 | Open Help Conference | Cincinnati, OH, USA |
| June 6-10 | DjangoCon Europe | Amsterdam, Netherlands |
| June 10-12 | Southeast LinuxFest | Spartanburg, SC, USA |
| June 13-15 | Linux Symposium'2011 | Ottawa, Canada |
| June 15-17 | 2011 USENIX Annual Technical Conference | Portland, OR, USA |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
