
LWN.net Weekly Edition for September 2, 2010

Messing around with CyanogenMod 6

By Jake Edge
September 1, 2010

Perhaps the novelty should be wearing off, but the ability to update your phone to code of your choosing is still rather amazing. For those who have older phones, which are generally neglected by carriers and manufacturers who quickly move on to the next "big thing", it can extend the life of a fairly large investment. For others, who just want to explore the capabilities of the phone hardware outside of the box created by the industry, changing the firmware provides the freedom many of us have come to expect from computers. The much-anticipated release of CyanogenMod 6.0 (CM6), bringing Android 2.2 ("Froyo") along with a bunch of additional features to many different Android phones, serves as a reminder of the possibilities available with some phones today.

[CM6 home]

Amazing though it may be, reflashing a phone is always something of a nerve-wracking experience. Since I had a "spare" ADP1 phone—fully supported by CM6—sitting around, I decided to try the release on that phone. While the process went fairly smoothly, ADP1 owners should be warned that actually using the phone with that code is a bit painful—or was on my phone. The interface is fairly slow and unresponsive at times. While poking around, I realized that the Dalvik just-in-time (JIT) compiler was not turned on by default, but turning it on and rebooting only made for minor speed improvements. It did work well enough to convince me to try it on the Nexus One (N1) I use as my regular phone though.

The instructions for both phones (N1, ADP1) require installing a custom recovery image that gives more control over updating various portions of the firmware. I put the Amon_Ra recovery image on both, but not without a bit of consternation on the N1. Perhaps naïvely, I didn't think I needed to unlock the bootloader on the N1. The phone was given to me by Google at the Collaboration Summit in April and I expected it to essentially be the equivalent of the ADP1. So trying to use the fastboot utility to flash the recovery image gave an error and choosing recovery from the bootloader menu brought up a picture of an unhappy android—pulling the battery seemed the only way to get rid of it.

Unlocking the bootloader is a simple process using fastboot but it does void the warranty—however much warranty there is on a free phone—and it also "wipes" the phone, losing any settings, call records, applications, and so on. I considered fiddling with various backup solutions before deciding that a clean slate wouldn't be such a bad thing.

For the ADP1, three pieces were needed after the Amon_Ra recovery image: DangerSPL, CyanogenMod itself, and the optional Google Apps package. DangerSPL is a "second program loader" (SPL) that repartitions the system flash to provide enough space for CM6. It was ported from the HTC Magic phone to the Dream (aka G1, ADP1) and can brick the phone if used incorrectly, thus the Danger moniker.

On an amusing side note, I still had the Vodafone pre-paid SIM card from my recent trip to the Netherlands in the ADP1. I didn't pay too much attention to the setup screens and suddenly found myself in a Dutch-language interface. Given the SIM, that's a reasonable assumption for the phone to make, of course, but it required another wipe to get it into English. If I had been able to puzzle out enough Dutch to work through the settings menus, I could presumably have switched it back, but that proved challenging so I resorted to the wipe.

[CM6 screens]

For the N1, just CM6 and a Google Apps package were required after Amon_Ra. The zip files for the various pieces need to be stored on the SD card of the phone and the Amon_Ra recovery image then allows choosing files from SD to update the device. While it all worked quite well, and there are detailed, though somewhat scattered, instructions, it always seems like these upgrades have a few unexpected, heart-stopping wrinkles. A second, unexplained appearance of the unhappy android on the N1 was one such wrinkle.
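
For the curious, the whole sequence boils down to a handful of commands plus some menu choices in the recovery image. Here is a hedged sketch of the N1 flow (the file names are placeholders and the project's instructions are the authoritative reference; each step can wipe data or void the warranty):

    import subprocess

    # Illustrative only: follow the real instructions, not this outline.
    steps = [
        ["fastboot", "oem", "unlock"],                         # N1 only; wipes the phone
        ["fastboot", "flash", "recovery", "recovery-RA.img"],  # Amon_Ra image
        ["adb", "push", "update-cm6.zip", "/sdcard/"],         # CM6 itself
        ["adb", "push", "gapps.zip", "/sdcard/"],              # Google Apps package
    ]
    for cmd in steps:
        subprocess.check_call(cmd)
    # Then boot into the recovery image and apply the zip files from SD.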

The Google Apps package contains the closed-source applications that normally come with the phone: Market, Gmail, Maps, and so on. CyanogenMod created a separate package for those ("for copyright reasons" according to the site) after receiving a cease-and-desist letter from Google for distributing them as part of CyanogenMod. The "gapps" package does come from a different site, but at least so far, there are no reports of nastygrams from the Googleplex. It doesn't seem completely unreasonable to allow those applications to be distributed; the applications won't run on anything other than Android phones and phone owners are already licensed to use them. Newer applications that aren't officially available for older handsets like the ADP1 are perhaps more of a gray area.

So, after roughly 30 minutes of futzing with installation on each phone, what is CM6 like? It seems to be a very solid release, and I ran it for half a day on the ADP1 and more than a day on the N1 with no problems (other than the slowness on the ADP1, which is presumably due to older, slower hardware). There are many features in CM6 that are different from the stock Froyo that was previously running on the N1—too many to fully discover in the short time since it was released.

[CM6 drawer]

But there are several obvious things that stand out, starting with differences in the application "drawer" which shows the icons for all of the applications installed. Instead of the stock Froyo four-icon-width screen with 3D effects when scrolling, the application drawer in CM6 has five icons across and leaves out the scrolling effects. That, like many things in CM6, is configurable so that the number of columns can be changed for both portrait and landscape orientations.

Configurability is definitely one of the strengths of CM6. There are options for changing the user interface in various ways such as customizing the status bar, changing the available screen rotations, choosing the trackball notifications, modifying how the buttons and trackball input work, and lots more. While the ability to make so many changes is great for sophisticated users, one can see why Google and the carriers might be interested in reducing the number of calls they get from customers with wildly different settings.

It's not just the user interface that can be tweaked in CM6; there are also options to change application storage behavior (allowing all applications to be moved to SD and setting a default storage location for applications) as well as performance tweaks. The performance settings screen comes with a "Dragons ahead" warning because changing them could cause the phone's performance to get worse. Furthermore, no bug reports will be accepted for problems found when mucking with things like the VM heap size or locking the home application into memory. Those options are purely for experimentation purposes.

There are a whole host of other changes for CM6 that are listed in the changelog, including things like support for FLAC playback for those with a distaste for lossy audio formats (and lots of storage space). OpenVPN support, music application enhancements, browser incognito mode, home button long-press options, themable UI elements, and many more are on the list. There is much to discover in CM6, and I look forward to (too) many hours playing with the new software.

Unsurprisingly, the N1 (and the ADP1, for that matter) still functions quite well as a regular old cell phone since CM6 uses most of the Android code. Also as expected, it still sends and receives text messages, browses the web, and plays the bouncing cows game. It is, in short, a major enhancement to the capabilities already present in Android.

Unlike other Froyo-based "mods", CM6 is built from the Android source, rather than extracting various binaries from stock firmware. That makes it easier to trust the CyanogenMod code—one could build their own version for verification or customization purposes, for example—and it is also what allows CM6 to support so many different handsets. There are nine separate phones listed as being supported by CM6.

As always, reflashing firmware, unlocking bootloaders, wiping settings, and voiding warranties should be done with some care and thought. $500+ bricks are not much fun. But the process has been successfully completed by many, including notoriously fumble-fingered LWN editors such as myself, and the additional capabilities that come with CM6 make it well worth the effort. I certainly can't see any good reason to return to the firmware distributed by T-Mobile.

Comments (23 posted)

Mozilla re-launches its developer network

September 1, 2010

This article was contributed by Nathan Willis

The Mozilla project officially re-launched its developer information and outreach program this week. Previously known as the Mozilla Developer Center (MDC), it has now been rebranded the Mozilla Developer Network (MDN), which is a new site with refreshed content, a reorganized and expanded mission, and new community features.

Mozilla has discussed the renovation project in the open for the better part of 2010 on Mozilla-hosted blogs as well as in its newsletters and public call-in conferences. The driving principle is to broaden the focus of the site from the inward-looking MDC — which effectively served only those developers looking to build Firefox/Thunderbird extensions and XUL applications — to a wider perspective. The MDN site is meant to serve developers working on Mozilla add-ons and applications, but also to be a resource for, as the site's tagline says, "everyone developing for the Open Web."

A "soft launch" of content for this wider target audience was rolled out with the 2009 debut of the Mozilla Hacks blog, which covered web development and open standards topics in addition to Mozilla-specific news, and placed an emphasis on providing demo code instead of text-only discussions. The switch-over from MDC to MDN took place on August 27th.

Former MDC participants will be pleased to learn that their existing account information has been preserved and will allow them to log in to MDN as well. Thus far, an MDN account only enables two features — the ability to post to the discussion forum and the ability to edit documentation.

Content

Now that MDN has been officially unveiled, visitors can see content divided into four main developer categories: Web, Mobile, Add-ons, and Applications. Mozilla feels that these represent distinct and (for the most part) non-overlapping segments of the development community. Each section presents targeted content drawn from official Mozilla documentation, curated news articles and blog entries from external sites, and links to specific software projects. There is some variation between the sections; Mobile, for example, includes both "Favorite" and "Recent" article categories, and the Add-ons section includes a "Latest Comments from Our Community" box not found elsewhere, which suggests that the MDN platform is still evolving.

The Add-ons and Applications sections encompass most of the documentation content that was previously featured at MDC. For add-on developers, there are references and tutorials for the major APIs and languages, case studies for extensions, plugins, and other add-ons, along with validation and packaging help. The Applications section provides an overview of the Mozilla platform, as well as guides to the tools related to building Mozilla-derived projects, including Bugzilla, Mercurial, Bonsai, Talkback, and other utilities.

The Mobile section focuses not only on Mozilla's Firefox Mobile browser, but also on developing location-aware web and mobile applications, and on using Mozilla technologies for other mobile software. A prime example of the latter is Mozilla's own Firefox Home application for the Apple iPhone, which is an iOS program that connects to Mozilla's Sync service.

The Web section represents the biggest change. Building on the content at Mozilla Hacks, it features documentation on HTML5, CSS, JavaScript, DOM, SVG, and other open web standards, as well as news stories that relate to Mozilla's support for open standards. Naturally, Mozilla cares about standards, and its browsers try to implement them as best they can, but reaching out directly to web application and site developers is a new approach. The Web section does spotlight development tools available as Firefox add-ons, such as Firebug and JSONView, but very little of the content is Firefox-specific.

Emphasizing the Open Web philosophy also ties in to Mozilla's Drumbeat initiative. Drumbeat is an umbrella project that encourages individuals to organize software projects and in-person events that advance Open Web adoption. It differs from MDN in its focus on non-developer community action, however. One of the goals of Drumbeat is to encourage online communities other than the "tech" circle to build their sites using open standards. On the other hand, both Drumbeat and MDN's Web section try to promote practical software projects (such as Universal Subtitles or Privacy Icons) that reinforce open standards.

Interactivity

Currently, all of the MDN sections place the primary emphasis on official, Mozilla-hosted documentation. News articles and blog entries appear lower in the page, and at the moment seem to be drawn entirely from external content sources (although many are from Mozilla blogs, and thus do originate from Mozilla authors). Blog coverage such as Mozilla intern Brian Louie's indicates that the content mix will expand and get better as more features are added to the site.

For example, community-created content is not yet included; even in the Twitter sidebar, only MDN accounts are shown, as opposed to Mozilla or MDN-related hash tags. Because most of the news headlines are links to external blogs, direct commenting on the stories is not possible without leaving the site. Louie mentions that, among other features, interactive tagging and rating of news stories are on the roadmap.

But potentially the biggest feature of the new site is the community discussion forum, which is already active. At the top of each page is a "Community" link to the phpBB-powered forum. The forum boards do not break down into quite the same categories as the MDN main sections, which is puzzling — there are separate boards for Open Web, Mozilla Platform, MDN Community, Mozilla Add-ons, and Mozilla Labs.

Nevertheless, hosting the discussion forums at MDN is a big step for the organization. Previously, official Mozilla community interaction took place entirely on mailing lists and newsgroups. The major discussion forum web site is Mozillazine, which is not affiliated with the Mozilla Foundation. MDN is following the lead taken earlier by the support.mozilla.com (a.k.a. SUMO) project, bringing discussion into a central location hosted by Mozilla itself.

Future Developments

There is a wealth of information already accessible at MDN, from the news articles to the documentation. Mozilla says that all of the content that was at MDC has been migrated to MDN; a direct link to the documentation landing page is available in the header of each page.

In addition to Louie's comments, Mozilla's Jay Patel has given a glimpse of where the organization intends to take MDN from here, via his blog. The first order of business is to replace the old, Mindtouch-based documentation backend with a new system built with Django. The effort is already underway for SUMO; MDN will simply "piggyback" on that tool. The plan is to migrate MDN to the new system over several months, with the goal of moving slowly enough to add new content that is entirely translated and localized.

Further out, user-given article ratings and comments are mentioned, which may indicate either that MDN-hosted original content is on the way, or else that comments on the Mozilla blogs will simply be integrated as RSS or Atom feed sources. In addition, Mozilla plans to hold topic-focused documentation sprints and hold developer focus groups over the last quarter of 2010 and into 2011.

Hopefully there are more changes coming still further out. It is particularly ironic that Mozilla's "Open Web" emphasis is launched on a site that dedicates its entire sidebar to the decidedly non-open Twitter service, and that its forums do not support OpenID logins. It is also a little bit troubling that the entire focus of MDN seems to be on Firefox and Firefox Mobile, to the exclusion of Thunderbird, Lightning, and other Mozilla applications. Perhaps that simply reflects the organizational divide between Mozilla Corporation and Mozilla Messaging, but it can hardly be healthy for non-Firefox projects in the long run.

Also in the long run, though, Mozilla is doing itself and the open web development community a great service by consolidating its documentation and developer resources into a single, unified whole, complete with the one thing that it has long lacked — a web-based open discussion forum.

Comments (2 posted)

LinuxCon Brazil: Q&A with Linus and Andrew

By Jonathan Corbet
August 31, 2010
Linus Torvalds rarely makes appearances at conferences, and it's even less common for him to get up in front of the crowd and speak. He made an exception for LinuxCon Brazil, though, where he and Andrew Morton appeared in a question and answer session led by Linux Foundation director Jim Zemlin. The resulting conversation covered many aspects of kernel development, its processes, and its history.

[Andrew Morton]

Jim started things off by asking: did either Linus or Andrew ever expect Linux to get so big? Linus did not; he originally wrote the kernel as a stopgap project which he expected to throw away when something better came along. Between the GNU Project and various efforts in the BSD camp, he thought that somebody would surely make a more capable and professional kernel. Meanwhile, Linux was a small thing for his own use. But, in the end, nothing better ever did come along. Andrew added that, as a kernel newbie (he has "only" been hacking on it for ten years), he has less of a long-term perspective on things. But, to him, the growth of Linux has truly been surprising.

How, Jim asked, do they handle the growth of the kernel? Andrew responded that, as the kernel has grown, the number of developers has expanded as well. Responsibility has been distributed over time, so that he and Linus are handling a smaller proportion of the total work. Distributors have helped a lot with the quality assurance side of things. At this point, Andrew says, responsibilities have shifted to where the kernel community provides the technology, but others take it from there and turn it into an actual product.

Linus noted that he has often been surprised at how others have used Linux for things which do not interest him personally at all. For example, he always found the server market to be a boring place, but others went for it and made Linux successful in that area. That, he says, is one of the key strengths of Linux: no one company is interested in all of the possible uses of the system. That means that nobody bears the sole responsibility of maintaining the kernel for all uses. And Linus, in particular, really only needs to concern himself with making sure that all of the pieces come together well. The application of a single kernel to a wide range of use cases is something which has never worked well in more controlled environments.

From there, Jim asked about the threat of fragmentation and whether it continues to make sense to have a single kernel which is applicable to such a wide range of tasks. Might there come a point where different versions of the kernel need to go their separate ways?

According to Linus, we are doing very well with a single kernel; he would hate to see it fragment. There are just too many problems which are applicable in all domains. So, for example, people putting Linux into phones care a lot about power management, but it turns out that server users care a lot too. In general, people in different areas of use tend to care about the same things, they just don't always care at the same time. Symmetric multiprocessing was once only of interest to high-end server applications; now it is difficult to buy a desktop which does not need SMP support, and multicore processors are moving into phones as well. Therein lies the beauty of the single kernel approach: when phone users need SMP support, Linux is there waiting for them.

Andrew claimed that its wide range of use is the most extraordinary technical attribute of the kernel. And it has been really easy to make it all work. It is true, though, that this process has been helped by the way that "small" devices have gotten bigger over time. Unfortunately, people who care about small systems are still not well represented in the kernel community. But the community as a whole cares about such systems, so we have managed to serve the embedded community well anyway.

[Andrew Morton and Linus Torvalds]

Next question: where do kernel developers come from, and how can Brazilian developers in particular get more involved? Linus responded that it's still true that most kernel developers come from North America, Europe, and Australia. Cultural and language issues have a lot to do with that imbalance. When you run a global project, you need to settle on a common language, and, much to Linus's chagrin, that language wasn't Finnish. It can be hard to find people in many parts of the world who are simultaneously good developers and comfortable interacting in English. What often works is to set up local communities with a small number of people who are willing to act as gateways between the group and the wider development community.

Andrew pointed out that participation from Japan has grown significantly in recent years; he credited the work done by the Linux Foundation with helping to make that happen. He also noted that working via email can be helpful for non-native speakers; they can take as much time as needed at each step in a conversation. As for where to start, his advice was to Just Start: pick an interesting challenge and work on it.

Open source software, Linus said, is a great way to learn real-world programming. Unlike classroom projects, working with an active project requires interacting with people and addressing big problems. Companies frequently look at who is active in open source projects when they want to find good technical people, so working on such projects is a great way to get introduced to the world. In the end, good programmers are hard to find; they will get paid well, often for working on open source software. Andrew agreed that having committed changes makes a developer widely visible. At Google, he is often passed resumes by internal recruiters; his first action is always to run git log to see what the person has done.

Linus advised that the kernel might not be the best place for an aspiring developer to start, though. The kernel has lots of developers, and it can be kind of scary to approach sometimes. Smaller projects, instead, tend to be desperate for new developers and may well be a more welcoming environment for people who are just getting started.

At this point, a member of the audience asked about microkernel architectures. Linus responded that this question has long since been answered by reality: microkernels don't work. That architecture was seen as an easy way to compartmentalize problems; Linus, too, originally thought that it was a better way to go. But a monolithic kernel was easier to implement back at the beginning, so that's what he did. Since then, the flaw in microkernel architectures has become clear: the various pieces have to communicate, and getting the communication right is a very hard problem. A better way, he says, is to put everything you really need into a single kernel, but to push everything possible into user space.

[Linus Torvalds]

What about tivoization - the process of locking down Linux-based systems so that the owner cannot run custom kernels? Linus admitted to having strong opinions on this subject. He likes the fundamental bargain behind version 2 of the GPL, which he characterizes as requiring an exchange of source code but otherwise allowing people to do whatever they want with the software. He does not like locked-down hardware at all, but, he says, it's not his hardware. He did not develop it, and, he says, he does not feel that he has any moral right to require that it be able to run any kernel. The GPLv2 model, he feels, is the right one - at least, for him. Licensing is a personal decision, and he has no problem with other projects making different choices.

Another member of the audience questioned the single-kernel idea, noting that Android systems are shipping kernels which differ significantly from the mainline. Jim responded that people who created forked versions of the kernel always come back - it's just too expensive to maintain a separate kernel. Andrew said that the Android developers are "motivated and anxious" to get their work upstream, both because it's the right thing to do and because the kernel changes too quickly. Nobody, he says, has the resources to maintain a fork indefinitely.

Linus cautioned that, while forks are generally seen as a bad thing, the ability to fork is one of the most important parts of the open source development model. They can be a way to demonstrate the validity of an idea when the mainline project is not ready to try it out. At times, forks have demonstrated that an approach is right, to the point that the kernel developers have put in significant work to merge the forked code back into the mainline. In the end, he says, the best code wins, and a fork can be a good way to show that specific code is the best. Rather than being scary, forks are a good way to let time show who is right.

Another audience member asked Linus if he would continue to work on the kernel forever. Are there any other projects calling to him? Linus said that "forever is a long time." That said, he'd originally thought that the kernel was a two-month project; he is still doing it because it stays interesting. There are always new problems to solve and new hardware to support; it has been an exciting project for 19 years and he is planning to continue doing it for a long time. He may have an occasional dalliance elsewhere, like he did when writing Git, but he always comes back to the kernel because that's where the interesting problems are.

[Closing the session]

Jim described Linus and Andrew as a couple of the most influential people in technology. They are, he said, at the same level as people like Bill Gates, Steve Jobs, and Larry Ellison. Those people are some of the richest in the world. His questions to Linus and Andrew were: "are you crazy?" and "what motivates you?"

Andrew replied that his work is helping people get what they want done. It is cool that this work affects millions of people; that is enough for him. In typical fashion, Linus answered that he is just the opposite: "I don't care about all you people." He is, he says, in this for selfish reasons. He was worried about finding a way to have fun in front of a computer; the choice of the GPL for Linux made life more fun by getting people involved. He has been gratified that people appreciate the result - that, he says, gives meaning to life in a way that money does not. It is, he says, "taking by giving."

The session ended with a short speech from Jim on the good things that Linus and Andrew have done, followed by a standing ovation from the audience.

Comments (43 posted)

Page editor: Jonathan Corbet

Security

Thwarting internet censors with Collage

By Jake Edge
September 1, 2010

Steganography is an ancient method of hiding a message in plain sight. In the digital age, steganography is often associated with hiding data inside of a binary file, typically using the low bits of an image or audio file in such a way that the message makes very little difference in the output. The Collage project looks to use steganography in conjunction with sites that host lots of user-generated content to provide a communication channel that resists censorship.
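
To make the low-bit technique concrete, here is a generic sketch (not Collage's actual encoder; it assumes the Python Imaging Library and a lossless format like PNG, since JPEG recompression would destroy these bits) that hides a payload in the least-significant bit of each red channel value:

    from PIL import Image

    def embed(png_in, png_out, payload):
        # Store each payload bit in the low bit of one pixel's red value.
        img = Image.open(png_in).convert("RGB")
        pix = img.load()
        w, h = img.size
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        assert len(bits) <= w * h, "payload too large for this image"
        for n, bit in enumerate(bits):
            x, y = n % w, n // w
            r, g, b = pix[x, y]
            pix[x, y] = ((r & ~1) | bit, g, b)
        img.save(png_out)

    def extract(png_in, nbytes):
        # Read the low bits back out in the same order.
        img = Image.open(png_in).convert("RGB")
        pix = img.load()
        w = img.size[0]
        data = bytearray(nbytes)
        for n in range(nbytes * 8):
            data[n // 8] |= (pix[n % w, n // w][0] & 1) << (n % 8)
        return bytes(data)

A one-bit change per pixel is invisible to the eye, which is what makes this sort of channel hard to spot without statistical analysis.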

As the slides [PDF] and paper [PDF] from a recent presentation on Collage describe, there are increasing attempts to censor internet communications. It is not just repressive regimes that are guilty of such censorship either, as various democratic governments are trying—sometimes succeeding—to get into the game. Existing methods to route around things like the "great firewall of China" rely on using proxies (e.g. Tor) outside of the censorship wall. But, proxies are relatively easily identified and blocked. Worse yet, anyone attempting to use one of the proxies can be identified and punished.

By using sites that "law abiding" citizens visit on a regular basis, Collage seeks to appear completely innocuous to the censoring devices. The specific example used is a photo-sharing site like Flickr. Many people legitimately browse the photos there, so it will be difficult to determine that a particular user may be browsing for photos that contain a steganographic message. In addition, the sheer number of photos stored on the site makes it difficult for the censors to catalog those that may contain a hidden message.

It is, essentially, a form of "security through obscurity", but one that can offer a level of deniability if used properly. If a censored user frequently visited Flickr for photo uploading and browsing, and only infrequently used it to pass messages, it would be difficult to detect by anything other than a targeted monitoring of that user's traffic. Unlike proxies, there is no need for anyone to maintain an infrastructure of hosts to handle the traffic; Flickr, YouTube, and others are already doing so.

The basic idea is that a simple message is encrypted (using some key agreed upon separately), then broken into pieces, with erasure coding added so that the entire message can be re-assembled from just a subset of the pieces. Those chunks then get steganographically inserted into multiple photos, which are uploaded to a photo-sharing site.
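
A toy version of the split-with-redundancy step might look like the following (a single XOR parity chunk, so any one lost piece can be recovered; the erasure codes used in practice tolerate losing many more pieces):

    def split_with_parity(data, k):
        # Pad to a multiple of k, split into k chunks, and add an XOR
        # parity chunk computed across all of them.
        chunk_len = -(-len(data) // k)              # ceiling division
        padded = data.ljust(chunk_len * k, b"\0")
        chunks = [padded[i * chunk_len:(i + 1) * chunk_len]
                  for i in range(k)]
        parity = chunks[0]
        for c in chunks[1:]:
            parity = bytes(a ^ b for a, b in zip(parity, c))
        return chunks + [parity], len(data)

    def reassemble(pieces, orig_len):
        # pieces: the k data chunks plus the parity chunk, in order,
        # with at most one entry set to None (a piece never recovered).
        # XORing all surviving pieces reproduces the missing one.
        missing = [i for i, p in enumerate(pieces) if p is None]
        if missing:
            present = [p for p in pieces if p is not None]
            rebuilt = present[0]
            for p in present[1:]:
                rebuilt = bytes(a ^ b for a, b in zip(rebuilt, p))
            pieces[missing[0]] = rebuilt
        return b"".join(pieces[:-1])[:orig_len]

    pieces, length = split_with_parity(b"meet at dawn", 4)
    pieces[2] = None                  # one photo was never downloaded
    assert reassemble(pieces, length) == b"meet at dawn"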

The project also used a text steganography technique to hide messages in the text of comments on blogs, YouTube, Twitter, and so on. In either case, the presence of steganography is likely to be detectable if the censoring agency tries. But with proper encryption, the actual message text will not be recoverable. The paper also discusses the use of watermarking to hide information that may be more easily detected but is hard to remove without disrupting the containing photo or file.

In order for a message to reach its recipient, though, there needs to be some way for the recipient to know which of the billions of photos at Flickr actually contain bits of interest. In addition, the downloads made by the user must appear to be "normal" tasks that a Flickr user might perform. The paper outlines a rather elaborate protocol that could be used to map messages to "deniable tasks" that the recipient must perform. It's a tricky problem as is acknowledged in the paper:

The challenge, of course, is finding sets of tasks that are deniable, yet focused enough to allow a user to retrieve content in a reasonable amount of time.

It is a clever technique, but there are, of course, some pitfalls. The complexity will make it challenging to use, and automated retrievals may be difficult to do in a non-suspicious manner. It could also end up pointing a finger at "innocent" users of a site like Flickr, who unwittingly just happen to perform the task associated with a Collage message. The paper notes that risk, but also points out that "organizations can already implicate users with little evidence".

Essentially, Collage is a proof-of-concept that uses off-the-shelf free software to handle the encryption, encoding, and steganography pieces. So far, the code for a demonstration client, which downloads a message that the project stored in Flickr, is available. The web site does not specifically mention further code releases, but one hopes the code for the sending side will also become available. There are also some performance measurements in the paper that show "acceptable" overhead for sending small, textual messages.

The complexity is daunting, but for those who really need to communicate in a largely deniable fashion, the Collage technique certainly has some appeal. It doesn't suffer from some of the obvious "red flags" that arise when using Tor or normal encrypted traffic (e.g. SSL/TLS, ssh, GPG), which may make it disappear into the noise of normal network traffic. Collage, or something like it, may find a place in the toolkit of those trying to evade internet censorship.

Comments (2 posted)

Brief items

Security quotes of the week

But of course, an RFID chip allows for far more than that minimal record-keeping. Instead, it provides the potential for nearly constant monitoring of a child's physical location. If readings are taken often enough, you could create an extraordinarily detailed portrait of a child's school day - one that's easy to imagine being misused, particularly as the chips substitute for direct adult monitoring and judgment. If RFID records show a child moving around a lot, could she be tagged as hyper-active? If he doesn't move around a lot, could he get a reputation for laziness? How long will this data and the conclusions rightly or wrongly drawn from it be stored in these children's school records? Can parents opt-out of this invasive tracking? How many other federal grants are underwriting programs like these?
-- Rebecca Jeschke of the EFF

We show that we can observe private activities in the home such as cooking, showering, toileting, and sleeping by eavesdropping on the wireless transmissions of sensors in a home, even when all of the transmissions are encrypted. We call this the Fingerprint and Timing-based Snooping (FATS) attack. This attack can already be carried out on millions of homes today, and may become more important as ubiquitous computing environments such as smart homes and assisted living facilities become more prevalent. In this paper, we demonstrate and evaluate the FATS attack on eight different homes containing wireless sensors.
-- Vijay Srinivasan, John Stankovic, and Kamin Whitehouse (unfortunately, only the abstract of the paper is freely available at the site)

Comments (4 posted)

New vulnerabilities

bugzilla: multiple vulnerabilities

Package(s): bugzilla
CVE #(s): CVE-2010-2756 CVE-2010-2757 CVE-2010-2758 CVE-2010-2759
Created: August 27, 2010
Updated: September 1, 2010
Description: From the Red Hat bugzilla:

An unprivileged user is normally not allowed to view other users' group membership. But boolean charts let the user use group-based pronouns, indirectly disclosing group membership. CVE-2010-2756

Normally, when a user is impersonated, he receives an email informing him that he is being impersonated, containing the identity of the impersonator. However, it was possible to impersonate a user without this notification being sent. CVE-2010-2757

An error message thrown by the "Reports" and "Duplicates" page confirmed the non-existence of products, thus allowing users to guess confidential product names. CVE-2010-2758

If a comment contained the phrases "bug X" or "attachment X", where X was an integer larger than the maximum 32-bit signed integer size, PostgreSQL would throw an error, and any page containing that comment would not be viewable. On most Bugzillas, any user can enter a comment on any bug, so any user could have used this to deny access to one or all bugs. Bugzillas running on databases other than PostgreSQL are not affected. CVE-2010-2759

Alerts:
Fedora FEDORA-2010-13072 bugzilla 2010-08-20
Fedora FEDORA-2010-13086 bugzilla 2010-08-20

Comments (none posted)

firefox: denial of service

Package(s): Firefox
CVE #(s): CVE-2010-1990
Created: August 30, 2010
Updated: September 1, 2010
Description: From the MeeGo advisory:

Mozilla Firefox 3.6.x, 3.5.x, 3.0.19, and earlier, and SeaMonkey, executes a mail application in situations where an IFRAME element has a mailto: URL in its SRC attribute, which allows remote attackers to cause a denial of service (excessive application launches) via an HTML document with many IFRAME elements. CVSS v2 Base: 5.0 (MEDIUM) Access Vector: Network exploitable

Alerts:
MeeGo MeeGo-SA-10:12 Firefox 2010-08-03

Comments (none posted)

gdm: access restriction bypass

Package(s): gdm
CVE #(s): CVE-2007-5079
Created: August 27, 2010
Updated: September 1, 2010
Description: From the Red Hat advisory:

A flaw was found in the way the gdm package was built. The gdm package was missing TCP wrappers support on 64-bit platforms, which could result in an administrator believing they had access restrictions enabled when they did not.

Alerts:
CentOS CESA-2010:0657 gdm 2010-08-27
Red Hat RHSA-2010:0657-02 gdm 2010-08-26

Comments (none posted)

httpd: information disclosure

Package(s): httpd
CVE #(s): CVE-2010-2791
Created: August 30, 2010
Updated: October 18, 2010
Description: From the Red Hat advisory:

A flaw was discovered in the way the mod_proxy module of the Apache HTTP Server handled the timeouts of requests forwarded by a reverse proxy to the back-end server. If the proxy was configured to reuse existing back-end connections, it could return a response intended for another user under certain timeout conditions, possibly leading to information disclosure.

Alerts:
Gentoo 201206-25 apache 2012-06-24
rPath rPSA-2010-0060-1 httpd 2010-10-17
CentOS CESA-2010:0659 httpd 2010-08-31
Red Hat RHSA-2010:0659-01 httpd 2010-08-30

Comments (none posted)

kdegraphics: memory corruption

Package(s): kdegraphics
CVE #(s): CVE-2010-2575
Created: August 27, 2010
Updated: December 1, 2013
Description: From the Ubuntu advisory:

Stefan Cornelius of Secunia Research discovered a boundary error during RLE decompression in the "TranscribePalmImageToJPEG()" function in generators/plucker/inplug/image.cpp of okular when processing images embedded in PDB files, which can be exploited to cause a heap-based buffer overflow.

Alerts:
Gentoo 201311-20 okular 2013-11-28
Slackware SSA:2010-240-03 kdegraphics 2010-08-30
Fedora FEDORA-2010-13661 kdegraphics 2010-08-27
Fedora FEDORA-2010-13629 kdegraphics 2010-08-27
Mandriva MDVSA-2010:162 kdegraphics4 2010-08-26
Ubuntu USN-979-1 kdegraphics 2010-08-27
SUSE SUSE-SR:2010:018 samba libgdiplus0 libwebkit bzip2 php5 ocular 2010-10-06
openSUSE openSUSE-SU-2010:0691-1 okular 2010-10-04

Comments (none posted)

libgdiplus: arbitrary code execution

Package(s): libgdiplus
CVE #(s): CVE-2010-1526
Created: September 1, 2010
Updated: January 6, 2014
Description: From the Mandriva advisory:

Multiple integer overflows in libgdiplus 2.6.7, as used in Mono, allow attackers to execute arbitrary code via (1) a crafted TIFF file, related to the gdip_load_tiff_image function in tiffcodec.c; (2) a crafted JPEG file, related to the gdip_load_jpeg_image_internal function in jpegcodec.c; or (3) a crafted BMP file, related to the gdip_read_bmp_image function in bmpcodec.c, leading to heap-based buffer overflows

Alerts:
Gentoo 201401-01 libgdiplus 2014-01-04
openSUSE openSUSE-SU-2010:0665-1 mono 2010-09-23
Fedora FEDORA-2010-13698 libgdiplus 2010-08-30
Fedora FEDORA-2010-13695 libgdiplus 2010-08-30
Mandriva MDVSA-2010:166 libgdiplus 2010-08-31
Ubuntu USN-993-1 libgdiplus 2010-09-29
SUSE SUSE-SR:2010:018 samba libgdiplus0 libwebkit bzip2 php5 ocular 2010-10-06

Comments (none posted)

libhx: arbitrary code execution

Package(s): libHX
CVE #(s): CVE-2010-2947
Created: August 31, 2010
Updated: October 25, 2010
Description: From the Mandriva advisory:

Heap-based buffer overflow in the HX_split function in string.c in libHX before 3.6 allows remote attackers to execute arbitrary code or cause a denial of service (application crash) via a string that is inconsistent with the expected number of fields.

Alerts:
Ubuntu USN-994-1 libhx 2010-09-29
Fedora FEDORA-2010-13155 libHX 2010-08-20
Fedora FEDORA-2010-13127 libHX 2010-08-20
Fedora FEDORA-2010-13155 pam_mount 2010-08-20
Fedora FEDORA-2010-13127 pam_mount 2010-08-20
Mandriva MDVSA-2010:165 libHX 2010-08-30
openSUSE openSUSE-SU-2010:0723-1 libHX 2010-10-14
SUSE SUSE-SR:2010:019 OpenOffice_org, acroread/acroread_ja, cifs-mount/samba, dbus-1-glib, festival, freetype2, java-1_6_0-sun, krb5, libHX13/libHX18/libHX22, mipv6d, mysql, postgresql, squid3 2010-10-25

Comments (none posted)

libtiff: denial of service

Package(s): libtiff
CVE #(s): CVE-2010-2443
Created: August 30, 2010
Updated: January 19, 2011
Description: From the MeeGo advisory:

The OJPEGReadBufferFill function in tif_ojpeg.c in LibTIFF before 3.9.3 allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) via an OJPEG image with undefined strip offsets, related to the TIFFVGetField function. CVSS v2 Base: 5.0 (MEDIUM) Access Vector: Network exploitable

Alerts:
Gentoo 201209-02 tiff 2012-09-23
MeeGo MeeGo-SA-10:27 libtiff 2010-09-03
MeeGo MeeGo-SA-10:20 libtiff 2010-08-03

Comments (none posted)

mutter-moblin: denial of service

Package(s): mutter-moblin
CVE #(s):
Created: August 30, 2010
Updated: September 1, 2010
Description: From the MeeGo advisory:

The DBus message handling in mutter-moblin was not safe. Crash could be induced by a simple:

python -c "import dbus; dbus.Interface (dbus.SessionBus ().get_object \
('org.freedesktop.Notifications', '/org/freedesktop/Notifications'), \
'org.freedesktop.Notifications').Notify ('', 0, '', '', '', [''], {}, \
0)"
Alerts:
MeeGo MeeGo-SA-10:18 mutter-moblin 2010-08-03

Comments (none posted)

openssl: denial of service

Package(s): openssl
CVE #(s): CVE-2010-2939
Created: August 31, 2010
Updated: January 19, 2011
Description: From the Debian advisory:

George Guninski discovered a double free in the ECDH code of the OpenSSL crypto library, which may lead to denial of service and potentially the execution of arbitrary code.

Alerts:
Gentoo 201110-01 openssl 2011-10-09
MeeGo MeeGo-SA-10:28 openssl 2010-09-03
Slackware SSA:2010-326-01 openssl 2010-11-22
SUSE SUSE-SR:2010:021 mysql, dhcp, monotone, moodle, openssl 2010-11-16
openSUSE openSUSE-SU-2010:0952-1 openssl 2010-11-16
openSUSE openSUSE-SU-2010:0951-1 openssl 2010-11-16
Ubuntu USN-1003-1 openssl 2010-10-07
Pardus 2010-119 openssl 2010-09-03
Mandriva MDVSA-2010:168 openssl 2010-09-01
Debian DSA-2100-1 openssl 2010-08-30

Comments (none posted)

opera: multiple vulnerabilities

Package(s): opera
CVE #(s): CVE-2010-2576 CVE-2010-3019 CVE-2010-3020 CVE-2010-3021
Created: August 26, 2010
Updated: September 1, 2010
Description:

From the SUSE advisory:

- CVE-2010-2576: CVSS v2 Base Score: 6.8 (CWE-94): unexpected changes in tab focus could be used to run programs from the Internet, as reported by Jakob Balle and Sven Krewitt of Secunia

- CVE-2010-3019: CVSS v2 Base Score: 9.3 (CWE-119): heap buffer overflow in HTML5 canvas could be used to execute arbitrary code, as reported by Kuzzcc

- CVE-2010-3020: CVSS v2 Base Score: 5.0 (CWE-264): news feed preview could subscribe to feeds without interaction, as reported by Alexios Fakos

- CVE-2010-3021: CVSS v2 Base Score: 4.3 (CWE-399): remote attackers could trigger a remote denial of service (CPU consumption and application hang) via an animated PNG image

Alerts:
Gentoo 201206-03 opera 2012-06-15
SUSE SUSE-SR:2010:016 yast2-webclient-patch_updates, perl, openldap2, opera, freetype2/libfreetype6, java-1_6_0-openjdk 2010-08-26

Comments (none posted)

phpmyadmin: php code execution

Package(s): phpmyadmin
CVE #(s): CVE-2010-3055
Created: August 30, 2010
Updated: September 13, 2010
Description: From the Debian advisory:

The configuration setup script does not properly sanitise its output file, which allows remote attackers to execute arbitrary PHP code via a crafted POST request. In Debian, the setup tool is protected through Apache HTTP basic authentication by default.

Alerts:
Gentoo 201201-01 phpmyadmin 2012-01-04
Debian DSA-2097-2 phpmyadmin 2010-09-11
Mandriva MDVSA-2010:163 phpmyadmin 2010-08-30
Debian DSA-2097-1 phpmyadmin 2010-08-29

Comments (none posted)

polkit: information disclosure

Package(s): polkit
CVE #(s):
Created: August 30, 2010
Updated: September 1, 2010
Description: From bugs.freedesktop.org:

pkexec is vulnerable to a minor information disclosure vulnerability that allows an attacker to verify whether or not arbitrary files exist, violating directory permissions.

Alerts:
MeeGo MeeGo-SA-10:14 polkit 2010-08-03

Comments (none posted)

wireshark: arbitrary code execution

Package(s): wireshark
CVE #(s): CVE-2010-2994
Created: September 1, 2010
Updated: April 19, 2011
Description: From the Debian advisory:

Several implementation errors in the dissector of the Wireshark network traffic analyzer for the ASN.1 BER protocol and in the SigComp Universal Decompressor Virtual Machine may lead to the execution of arbitrary code.

Alerts:
Gentoo 201110-02 wireshark 2011-10-09
SUSE SUSE-SR:2011:007 NetworkManager, OpenOffice_org, apache2-slms, dbus-1-glib, dhcp/dhcpcd/dhcp6, freetype2, kbd, krb5, libcgroup, libmodplug, libvirt, mailman, moonlight-plugin, nbd, openldap2, pure-ftpd, python-feedparser, rsyslog, telepathy-gabble, wireshark 2011-04-19
openSUSE openSUSE-SU-2011:0010-2 wireshark 2011-01-12
SUSE SUSE-SR:2011:001 finch/pidgin, libmoon-devel/moonlight-plugin, libsmi, openssl, perl-CGI-Simple, supportutils, wireshark 2011-01-11
SUSE SUSE-SR:2011:002 ed, evince, hplip, libopensc2/opensc, libsmi, libwebkit, perl, python, sssd, sudo, wireshark 2011-01-25
openSUSE openSUSE-SU-2011:0010-1 wireshark 2011-01-04
Debian DSA-2101-1 wireshark 2010-08-31

Comments (none posted)

yast2-webclient-patch_updates: installation specific secret key

Package(s): yast2-webclient-patch_updates
CVE #(s): CVE-2010-1507
Created: August 26, 2010
Updated: September 1, 2010
Description:

From the SUSE advisory:

WebYaST generates installation specific secret key during RPM installation (CVE-2010-1507)

Alerts:
SUSE SUSE-SR:2010:016 yast2-webclient-patch_updates, perl, openldap2, opera, freetype2/libfreetype6, java-1_6_0-openjdk 2010-08-26

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.36-rc3, which was released on August 29. "Nothing in particular stands out that I can recall. As usual, it's mostly driver updates (65%), of which a large piece (by line count) is just the removal of a staging driver that isn't really ready nor making any progress. But on the 'somewhat more likely to cause excitement' front, there's some radeon/nouveau drm updates too." See the full changelog for all the details.

Stable updates: The 2.6.27.53, 2.6.32.21, 2.6.34.6, and 2.6.35.4 stable kernels were released on August 27. As usual, they contain fixes throughout the kernel tree. Note that this is the last stable update for the 2.6.34 kernel series.

Comments (none posted)

Quotes of the week

SIGEV_THREAD is the best proof that the whole posix timer interface was comitte[e]d under the influence of not to be revealed mind-altering substances.
-- Thomas Gleixner

Reaching into the first bag, quite worn out with a faded "27" on the outside of it, he grabbed one of the remaining kernels in there and tossed it into the group. A few small people at the back of the crowd caught the kernel, and slowly walked off toward the village. Og wondered about these people, constantly relying on the old kernels to save them for another day, while resisting the change to move on, harboring some kind of strange reverence for this specific brand that Og just could not understand.
-- Greg Kroah-Hartman

Comments (9 posted)

Kernel development news

Some numbers and thoughts on the stable kernels

By Jonathan Corbet
August 27, 2010
Much attention goes toward mainline kernel releases, but relatively few users are actually running those kernels. Instead, they run kernels provided by their distributors, and those kernels, in turn, are based on the stable kernel series. The practice of releasing stable kernels has been going on for well over five years now, so perhaps it's time to look back at how it has been going.

Back in March 2005, the community was discussing ways of getting important fixes out to users of mainline releases. There was talk of maintaining a separate tree containing nothing but fixes; Linus, at the time, thought that any such attempt was doomed to failure:

I'll tell you what the problem is: I don't think you'll find anybody to do the parallel "only trivial patches" tree. They'll go crazy in a couple of weeks. Why? Because it's a _damn_ hard problem. Where do you draw the line? What's an acceptable patch? And if you get it wrong, people will complain _very_ loudly, since by now you've "promised" them a kernel that is better than the mainline. In other words: there's almost zero glory, there are no interesting problems, and there will absolutely be people who claim that you're a dick-head and worse, probably on a weekly basis.

With such strong words of encouragement, somebody clearly had to step up to the job; that somebody turned out to be Greg Kroah-Hartman and Chris Wright. They released 2.6.11.1 on March 4, 2005 with all of three fixes. More than five years later, Greg (a.k.a. "Og") is still at it (Chris has not been active with stable updates for a while). During that time, the stable release history has looked like this:

Kernel      Updates   Changes (total)   Changes (per release)
2.6.11           12                79                        7
2.6.12            6                53                        9
2.6.13            5                44                        9
2.6.14            7                96                       14
2.6.15            7               110                       16

2.6.16           62              1053                       17
2.6.17           14               191                       14
2.6.18            8               240                       30
2.6.19            7               189                       27
2.6.20           21               447                       21

2.6.21            7               162                       23
2.6.22           19               379                       20
2.6.23           16               302                       19
2.6.24            7               246                       35
2.6.25           20               492                       25

2.6.26            8               321                       40
2.6.27 *         53              1553                       29
2.6.28           10               613                       61
2.6.29            6               383                       64
2.6.30           10               419                       42

2.6.31           14               826                       59
2.6.32 *         21              1793                       85
2.6.33            7               883                      126
2.6.34            5               599                      120
2.6.35 *          4               228                       57

In the table above, the kernels marked with an asterisk are those which are still receiving stable updates as of this writing (though 2.6.27 is clearly reaching the end of the line).

A couple of conclusions immediately jump out of the table above. The first is that the number of fixes going into stable updates has clearly increased over time. From this one might conclude that our kernel releases have steadily been getting buggier. That is hard to measure, but one should bear in mind that there is another important factor at work here: the kernel developers are simply directing more fixes toward the stable tree. Far more developers are looking at patches with stable updates in mind, and suggestions that a patch should be sent in that direction are quite common. So far fewer patches fall through the cracks than they did in the early days.

There is another factor at work here as well. The initial definition of what constituted an appropriate stable tree patch was severely restrictive; if a bug did not cause a demonstrable oops or vulnerability, the fix was not considered for the stable tree. By the time we get to the 2.6.35.4 update, though, we see a wider variety of "fixes," including Acer rv620 laptop support, typo fixes, tracepoint improvements to make powertop work better, the optimistic spinning mutex scalability work, a new emu10k1 sound driver module parameter, and oprofile support for a new Intel processor. These enhancements are, arguably, all things that stable kernel users would like to have. But they definitely go beyond the original charter for this tree.

Your editor has also recently seen occasional complaints about regressions finding their way into stable updates; given the volume of patches going into stable updates now, a regression every now and then should not be surprising. Regressions in the stable tree are a worrisome prospect; one can only hope that the problem does not get worse.

Another noteworthy fact is that the number of stable updates for most kernels appears to be falling slowly; the five updates over the entire 2.6.34 update history are the fewest ever, matched only by the 2.6.13 series. Even then, 2.6.34 got one more update than had been originally planned as the result of a security issue. It should be obvious that handling this kind of patch flow for as many as four kernels simultaneously will be a lot of work; Greg, who has a few other things on his plate as well, may be running a little short on time.

Who is actually contributing patches to stable kernels? Your editor decided to do a bit of git data mining. Two kernels were chosen: 2.6.32, which is being maintained for an extended period as the result of its use in "enterprise" distributions, and 2.6.34, being the most recent kernel which has seen its final stable update. Here are the top contributors for both:

Most active stable contributors

2.6.32
Greg Kroah-Hartman              36   2.0%
Daniel T Chen                   32   1.8%
Linus Torvalds                  23   1.3%
Trond Myklebust                 23   1.3%
Borislav Petkov                 23   1.3%
Ben Hutchings                   21   1.2%
David S. Miller                 20   1.1%
Theodore Ts'o                   20   1.1%
Tejun Heo                       20   1.1%
Dmitry Monakhov                 20   1.1%
Takashi Iwai                    18   1.0%
Ian Campbell                    18   1.0%
Jean Delvare                    17   0.9%
Henrique de Moraes Holschuh     17   0.9%
Yan, Zheng                      17   0.9%
Zhao Yakui                      17   0.9%
Alan Stern                      17   0.9%
Al Viro                         16   0.9%
Alex Deucher                    15   0.8%
Dan Carpenter                   15   0.8%

2.6.34
Alex Deucher                    14   2.8%
Joerg Roedel                    14   2.8%
Tejun Heo                       10   2.0%
Daniel T Chen                    9   1.8%
Neil Brown                       8   1.6%
Rafael J. Wysocki                8   1.6%
Linus Torvalds                   7   1.4%
Greg Kroah-Hartman               7   1.4%
Alan Stern                       7   1.4%
Jesse Barnes                     7   1.4%
Trond Myklebust                  7   1.4%
Ben Hutchings                    7   1.4%
Tilman Schmidt                   7   1.4%
Avi Kivity                       7   1.4%
Sarah Sharp                      7   1.4%
Ian Campbell                     6   1.2%
Johannes Berg                    6   1.2%
Jean Delvare                     6   1.2%
Johan Hovold                     6   1.2%
Will Deacon                      5   1.0%

Some names on this list will be familiar. Linus never shows up on the list of top mainline contributors anymore, but he does generate a fair number of stable fixes. Other names are seen less often in the kernel context: Daniel Chen, for example, is an Ubuntu community contributor; his contributions are mostly in the welcome area of making audio devices actually work. Some of the people are in the list above because they introduced the bugs that their patches fix - appearing in that role is not necessarily an honor. But - admittedly without having done any sort of rigorous study - your editor suspects that most of the people listed above are fixing bugs introduced by others. They are performing an important and underappreciated service, turning mainline releases into kernels that the rest of the world actually wants to run.
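
Counts like those above can be approximated with a few lines of scripting around git. Here is a rough sketch (it assumes a linux-stable repository with the relevant tags present; the real tables also map authors to employers, which this does not attempt):

    import collections
    import subprocess

    def author_counts(repo, rev_range):
        # One author name per commit in the given revision range.
        out = subprocess.check_output(
            ["git", "log", "--pretty=format:%an", rev_range],
            cwd=repo).decode("utf-8")
        return collections.Counter(out.splitlines())

    counts = author_counts("linux-stable", "v2.6.34..v2.6.34.5")
    total = sum(counts.values())
    for name, n in counts.most_common(20):
        print("%-30s %4d %5.1f%%" % (name, n, 100.0 * n / total))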

We can also look at who is supporting this work:

Most active stable contributors by employer

2.6.32
(None)                         275  15.3%
Red Hat                        267  14.9%
Intel                          194  10.8%
(Unknown)                      175   9.8%
Novell                         166   9.3%
IBM                             95   5.3%
AMD                             60   3.3%
Oracle                          40   2.2%
Fujitsu                         33   1.8%
Atheros                         30   1.7%
Parallels                       29   1.6%
Citrix                          27   1.5%
(Academia)                      26   1.5%
Linux Foundation                24   1.3%
NetApp                          23   1.3%
Google                          23   1.3%
(Consultant)                    20   1.1%
NEC                             18   1.0%
Canonical                       15   0.8%
Nokia                           14   0.8%

2.6.34
(None)                          95  18.7%
Red Hat                         61  12.0%
(Unknown)                       58  11.4%
Novell                          45   8.9%
Intel                           43   8.5%
AMD                             35   6.9%
IBM                             17   3.3%
(Academia)                      16   3.1%
MontaVista                       9   1.8%
Fujitsu                          9   1.8%
ARM                              8   1.6%
Citrix                           8   1.6%
NetApp                           7   1.4%
Oracle                           7   1.4%
(Consultant)                     7   1.4%
Linux Foundation                 7   1.4%
Google                           6   1.2%
Nokia                            6   1.2%
NTT                              5   1.0%
VMWare                           5   1.0%

These numbers quite closely match those for mainline kernel contributions, especially at the upper end. Fixing bugs is said to be boring and unglamorous work, but volunteers are still our leading source of fixes.

We did without a stable tree for the first ten 2.6.x releases, though at this point it's hard to imagine just how. In an ideal world, a mainline kernel release would not happen until there were no bugs left; the history of (among others) the 2.3 and 2.5 kernel development cycles shows that this approach does not work in the real world. There comes a point where the community has to create a stable release and go on to the next cycle; the stable tree allows that fork to happen without ending the flow of fixes into the released kernel.

The tables above suggest that the stable kernel process is working well, with large numbers of fixes being directed into stable updates and with participation from across the community. There may come a point, though, where that community needs to revisit the standards for patches going into stable updates. At some point, it may also become clear that the job of maintaining these kernels is too big for one person to manage. For now, though, the stable tree is clearly doing what it is intended to do; Greg deserves a lot of credit for making it work so well for so long.

Comments (41 posted)

Another union filesystem approach

By Jake Edge
September 1, 2010

Creating a union of two (or more) filesystems is a commonly requested feature for Linux that has never made it into the mainline. Various implementations have been tried (part 1 and part 2 of Valerie Aurora's look from early 2009), but none has crossed the threshold for inclusion. Of late, union mounts have been making some progress, but there is still work to do there. A hybrid approach—incorporating both filesystem- and VFS-based techniques—has recently been posted in an RFC patchset by Miklos Szeredi.

The idea behind unioning filesystems is quite simple, but the devil is in the details. In a union, one filesystem is mounted "atop" another, with the contents of both filesystems appearing to be in a single filesystem encompassing both. Changes made to the filesystem are reflected in the "upper" filesystem, and the "lower" filesystem is treated as read-only. One common use case is to have a filesystem on read-only media (e.g. CD) but allow users to make changes by writing to the upper filesystem stored on read-write media (e.g. flash or disk).
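
To make the idea concrete, here is a minimal sketch of how such a union might be assembled with the mount() system call. The "union" filesystem type and the upperdir=/lowerdir= option names are hypothetical stand-ins; the actual interface for this patchset was still under discussion.

    /* Hypothetical sketch: mount a writable upper tree over a
     * read-only lower tree.  The filesystem type and the option
     * names are illustrative, not the patchset's confirmed
     * interface. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("none", "/mnt/union", "union", 0,
                  "upperdir=/media/flash,lowerdir=/media/cdrom") < 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }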

There are a number of details that bedevil developers of unions, however, including various problems with namespace handling, dealing with deleted files and directories, the POSIX definition of readdir(), and so on. None of them are insurmountable, but they are difficult, and it is even harder to implement them in a way that doesn't run afoul of the technical complaints of the VFS maintainers.

Szeredi's approach blends the filesystem-based implementations, like unionfs and aufs, with the VFS-based implementation of union mounts. For file objects, an open() is forwarded to whichever of the two underlying filesystems contains it, while directories are handled by the union filesystem layer. Neil Brown's very helpful first cut at documentation for the patches lumped directory handling in with files, but Szeredi called that a bug. Directory access is never forwarded to the other filesystems and directories need to "come from the union itself for various reasons", he said.

As outlined in Brown's document, most of the action for unions takes place in directories. For one thing, it is more accurate to look at the feature as unioning directory trees, rather than filesystems, as there is no requirement that the two trees reside in separate filesystems. In theory, the lower tree could even be another union, but the current implementation precludes that.

The filesystem used by the upper tree needs to support the "trusted" extended attributes (xattrs) and it must also provide valid d_type (file type) values in readdir() responses, which precludes NFS. Whiteouts—that is, files that exist in the lower tree but have been removed in the upper—are handled using the "trusted.union.whiteout" xattr. Similarly, opaque directories, which do not allow entries in the lower tree to "show through", are handled with the "trusted.union.opaque" xattr.
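
For the curious, these markers can be inspected from userspace; reading trusted.* attributes requires privilege, and the exact on-disk representation of a whiteout is a patchset detail, so treat this as an illustrative sketch only:

    /* Illustrative check for the whiteout marker on an upper-tree
     * entry.  lgetxattr() is used so that a whiteout represented as
     * a symbolic link would not be followed. */
    #include <sys/types.h>
    #include <sys/xattr.h>

    int is_whiteout(const char *upper_path)
    {
        char buf[16];
        ssize_t len = lgetxattr(upper_path, "trusted.union.whiteout",
                                buf, sizeof(buf));
        return len >= 0;   /* presence of the xattr marks a whiteout */
    }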

Directory entries are merged with fairly straightforward rules: if there are entries in both the upper and lower layers with the same name, the upper always takes precedence unless both are directories. In that case, a directory in the union is created that merges the entries from each. The initial mount creates a merged directory of the roots of the upper and lower directory trees and subsequent lookups follow the rules, creating merged directories that get cached in the union dentry as needed.
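
In rough pseudocode, the precedence rule might look like the following; every name here is invented for illustration and none comes from the actual patchset:

    /* Illustrative pseudocode for the merge rule described above. */
    struct dir;
    struct entry;

    extern struct entry *lookup(struct dir *d, const char *name);
    extern int is_dir(struct entry *e);
    extern int is_whiteout(struct entry *e);
    extern struct entry *merge_dirs(struct entry *u, struct entry *l);

    struct entry *union_lookup(struct dir *upper, struct dir *lower,
                               const char *name)
    {
        struct entry *u = lookup(upper, name);
        struct entry *l = lookup(lower, name);

        if (u && is_whiteout(u))
            return 0;                /* deleted: lower must not show */
        if (u && l && is_dir(u) && is_dir(l))
            return merge_dirs(u, l); /* merged, cached in the dentry */
        return u ? u : l;            /* otherwise the upper entry wins */
    }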

Write access to lower-layer files is handled by the traditional "copy up" approach. So, opening a lower file for write, or changing its metadata, will cause the file to be copied to the upper tree. That may require creating any intervening directories if the file is several levels down in a directory tree on the lower layer. Once that's done, though, the hybrid union filesystem has little further interaction with the file, at least directly, because operations are handed off to the upper filesystem.
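
A sketch of that sequence, with invented helper names, might look like this:

    /* Illustrative copy-up sequence; all helper names are invented. */
    struct lower_file;

    extern int create_upper_path(const char *relpath); /* like mkdir -p */
    extern int copy_file_data(struct lower_file *src, const char *relpath);
    extern int copy_file_attrs(struct lower_file *src, const char *relpath);

    int copy_up(struct lower_file *src, const char *relpath)
    {
        /* Create any missing parent directories in the upper tree. */
        int err = create_upper_path(relpath);

        if (!err)
            err = copy_file_data(src, relpath);
        if (!err)
            err = copy_file_attrs(src, relpath);
        /* From here on, operations go straight to the upper file. */
        return err;
    }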

The patchset is relatively small, and makes only a few small changes to the VFS—except for a change to struct inode_operations that ripples through the filesystem tree. The permission() member of that structure currently takes a struct inode *, but the hybrid union filesystem needs to be able to access the filesystem-specific data (d_fsdata) that is stored in the dentry, so it was changed to take a struct dentry * instead. David P. Quigley questioned the need for the change, noting that unionfs and aufs did not require it. Aurora pointed out that union mounts would require something similar and that, along with Brown's documentation, seemed to put the matter to rest.
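
In sketch form, the change looks something like this; the "before" prototype is from memory of kernels of that era, so check the actual tree for the precise form:

    /* Sketch of the inode_operations change described above. */
    struct inode;
    struct dentry;

    struct inode_operations {
        /* ... */
        /* before: int (*permission)(struct inode *, int mask); */
        int (*permission)(struct dentry *dentry, int mask);
        /* ... */
    };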

The rest of the patches make minor changes. The first adds a new struct file_operations member called open_other() that is used to forward open() calls to the upper or lower layers as appropriate. Another allows filesystems to set an FS_RENAME_SELF_ALLOW flag so that rename() will still process renames on the identical dummy inodes that the filesystem uses for non-directories. The bulk of the code (modulo the permission() change) is the new fs/union filesystem itself.

While "union" tends to be used for these kinds of filesystems (or mounts), Brown noted that it is confusing and not really accurate, suggesting that "overlay" be used in its place. Szeredi is not opposed to that, saying that "overlayfs" might make more sense. Aurora more or less concurred, saying that union mounts were called "writable overlays" for one release. The confusion stemming from multiple uses of "union" in existing patches (unionfs, union mounts) may provide additional reason to rename the hybrid union filesystem to overlayfs.

The readdir() semantics are a bit different for the hybrid union as well. Changes to merged directories while they are being read will not appear in the entries returned by readdir(), and offsets returned from telldir() may not return to the same location in a merged directory on subsequent directory opens. The lists of directory entries in merged directories are created and cached on the first readdir() call, with offsets assigned sequentially as they are read. For the most part, these changes are "unlikely to be noticed by many programs", as Brown's documentation says.
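
The caveat is easy to demonstrate in application terms: saving a telldir() offset, reopening the directory, and seeking back to the saved offset may not land on the same entry in a merged directory. A small sketch (the path is, of course, made up):

    /* Demonstrates the caveat: a telldir() offset may not be stable
     * across close-and-reopen on a merged union directory. */
    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        DIR *d = opendir("/mnt/union/somedir");   /* made-up path */
        struct dirent *de;
        long off;

        if (!d)
            return 1;
        readdir(d);              /* consume one entry */
        off = telldir(d);        /* remember our position */
        closedir(d);

        d = opendir("/mnt/union/somedir");
        if (!d)
            return 1;
        seekdir(d, off);         /* may not resume at the same entry */
        de = readdir(d);
        if (de)
            printf("resumed at: %s\n", de->d_name);
        closedir(d);
        return 0;
    }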

A bigger issue is one that all union implementations struggle with: how to handle changes to either layer that are done outside of the context of the union. If users or administrators directly change the underlying filesystems, there are a number of ugly corner cases. Making the lower filesystem be read-only is an attractive solution, but it is non-trivial to enforce, especially for filesystems like NFS.

Szeredi would like to define the problem away or find some way to enforce the requirements that unioning imposes:

The easiest way out of this mess might simply be to enforce exclusive modification to the underlying filesystems on a local level, same as the union mount strategy. For NFS and other remote filesystems we either

a) add some way to enforce it,
b) live with the consequences if not enforced on the system level, or
c) disallow them to be part of the union.

There was some discussion of the problem, without much in the way of conclusions other than a requirement that changing the trees out from under the union filesystem not cause deadlocks or panics.

In some ways, the hybrid union seems a simpler approach than union mounts. Whether it can pass muster with Al Viro and the other filesystem maintainers remains to be seen, however. One way or another, though, some kind of solution to the lack of an overlay/union filesystem in the mainline seems to be getting closer.

Comments (6 posted)

A look inside the OCFS2 filesystem

September 1, 2010

This article was contributed by Goldwyn Rodrigues

The Oracle Cluster Filesystem (ocfs2) is a filesystem for clustered systems accessing a shared device, such as a Storage Area Network (SAN). It enables all nodes on the cluster to see the same files; changes to any files are visible immediately on other nodes. It is the filesystem's responsibility to ensure that nodes do not corrupt data by writing into each other's files. To guarantee integrity, ocfs2 uses the Linux Distributed Lock Manager (DLM) to serialize events. However, a major goal of a clustered filesystem is to reduce cluster locking overhead in order to improve overall performance. This article will provide an overview of ocfs2 and how it is structured internally.

A brief history

Version 1 of the ocfs filesystem was an early effort by Oracle to create a filesystem for the clustered environment. It was a basic filesystem targeted at supporting Oracle database storage; it lacked most POSIX features due to its limited disk format. Ocfs2 was a development effort to convert this basic filesystem into a general-purpose filesystem. The ocfs2 source code was merged into the Linux kernel with 2.6.16; since this merger (in early 2006), a lot of features have been added to the filesystem which improve data storage efficiency and access times.

Clustering

Ocfs2 needs a cluster management system to handle cluster operations such as node membership and fencing. All nodes must have the same configuration, so each node knows about the other nodes in the cluster. There are currently two ways to handle cluster management for ocfs2:

  • Ocfs2 Cluster Base (O2CB) - This is the in-kernel implementation of cluster configuration management; it provides only the basic services needed to get a clustered filesystem running. Each node writes to a heartbeat file to let the others know it is alive. More information on running the ocfs2 filesystem using o2cb can be found in the ocfs2 user guide [PDF]. This mode is not capable of removing nodes from a live cluster and cannot be used for cluster-wide POSIX locks.

  • Linux High Availability - uses user-space tools, such as heartbeat and pacemaker, to perform cluster management. These packages are complete cluster management suites which can be used for advanced cluster activities such as fail-over, STONITH (Shoot The Other Node In The Head - yes, computing terms can be barbaric), migration of dependent services, and so on. This mode is capable of removing nodes from a live cluster, and it supports cluster-wide POSIX locks, as opposed to node-local locks. More information about cluster management tools can be found at clusterlabs.org and linux-ha.org.

Disk format

ocfs2 separates the way data and metadata are stored on disk. To facilitate this, it has two types of blocks:
  • Metadata or simply "blocks" - the smallest addressable unit. These blocks contain the metadata of the filesystem, such as inodes, extent blocks, group descriptors, etc. The valid sizes are 512 bytes to 4KB (in powers of two). Each metadata block contains a signature saying what the block contains; this signature is cross-checked when a block of that type is read.

  • Data Clusters - data storage for regular files. Valid cluster sizes range from 4KB to 1MB (in powers of two). A larger data cluster reduces the size of filesystem metadata such as allocation bitmaps, making filesystem activities such as data allocation or filesystem checks faster. On the other hand, a large cluster size increases internal fragmentation. A large cluster size is recommended for filesystems storing large files such as virtual machine images, while a small data cluster size is recommended for a filesystem which holds lots of small files, such as a mail directory.

Inodes

An inode occupies an entire block. The block number (with respect to the filesystem block size) doubles as the inode number. This organization may result in high disk space usage on a filesystem with a lot of small files. To minimize that problem, ocfs2 packs a file's data into the inode itself if it is small enough to fit; this feature is known as "inline data." Inode numbers are 64 bits wide, which gives enough room to address inodes on very large storage devices.
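
Since the inode number is the block number, translating an inode number into a disk location requires no lookup at all; a trivial sketch:

    /* With inode number == block number, locating an inode on disk
     * is a single multiplication (illustrative only). */
    #include <stdint.h>

    uint64_t inode_disk_offset(uint64_t ino, uint32_t blocksize)
    {
        return ino * blocksize;   /* byte offset of the inode's block */
    }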

[Ocfs2 inode layout]

Data in a regular file is maintained in a B-tree of extents; the root of this B-tree is the inode. The inode holds a list of extent records which may either point to data extents, or point to extent blocks (which are the intermediate nodes in the tree). A special field called l_tree_depth contains the depth of the tree; a value of zero indicates that the blocks pointed to by the extent records are data extents. Each extent record contains the offset (in clusters) from the start of the file, which determines the path to take when traversing the B-tree to find the block to be accessed.
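
A lookup therefore walks down the tree until it reaches depth zero. The following sketch captures the shape of that walk; the structures and helpers are simplified stand-ins, not the real definitions from fs/ocfs2/ocfs2_fs.h:

    /* Simplified forms of the structures described above. */
    #include <stdint.h>

    struct extent_rec {
        uint32_t cpos;      /* offset within the file, in clusters */
        uint32_t clusters;  /* length of the extent, in clusters */
        uint64_t blkno;     /* data extent, or next-level extent block */
    };

    struct extent_list {
        int l_tree_depth;   /* 0 => records point at data extents */
        /* extent records follow */
    };

    /* Helpers assumed for illustration only. */
    extern struct extent_rec *pick_record(struct extent_list *el,
                                          uint32_t cpos);
    extern struct extent_list *read_extent_block(uint64_t blkno);

    uint64_t find_data_extent(struct extent_list *el, uint32_t cpos)
    {
        /* Descend until the records describe data extents. */
        while (el->l_tree_depth > 0)
            el = read_extent_block(pick_record(el, cpos)->blkno);
        return pick_record(el, cpos)->blkno;
    }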

The basic unit of locking is the inode. Locks are granted on a special DLM data structure known as a lock resource. For any access to a file, the process must request a DLM lock on the lock resource across the cluster. The DLM offers six lock modes to distinguish between types of operation (see the sketch after this list); of these, ocfs2 uses only three: exclusive, protected read, and null locks. The inode maintains three types of lock resources for different operations:

  • read-write lock resource: used to serialize writes when multiple nodes perform I/O on a file at the same time.

  • inode lock resource: used for inode metadata operations.

  • open lock resource: used to detect deletions of a file. When a file is open, this lock resource is held in protected-read mode. If a node intends to delete the file, it requests an exclusive lock; success means that no other node is using the file, so it can safely be deleted. If the request fails, the inode is treated as an orphan file (discussed later).
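
The six modes are the classic VMS-style DLM modes. The sketch below lists them with the roles described above; the names follow the traditional convention and are illustrative rather than copied from ocfs2's headers:

    /* The six classic DLM lock modes; per the article, ocfs2 uses
     * only EX, PR and NL. */
    enum dlm_lock_mode {
        LKM_NL,   /* null: holds a reference, grants no access */
        LKM_CR,   /* concurrent read   (unused by ocfs2) */
        LKM_CW,   /* concurrent write  (unused by ocfs2) */
        LKM_PR,   /* protected read: shared among readers */
        LKM_PW,   /* protected write   (unused by ocfs2) */
        LKM_EX,   /* exclusive: sole access, e.g. for deletes */
    };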

Directory

Directory entries are stored as name-inode pairs in blocks known as directory blocks. Directory blocks are stored and accessed in the same way as regular file data, but they are allocated as cluster blocks. Since a directory block is considered to be a metadata block, the first allocation uses only part of the cluster block. As the directory expands, the remaining unused blocks of the data cluster are filled until the cluster block is fully used.

[Ocfs2 directory layout on disk]

A relatively new feature is the indexing of directory entries for faster lookups. Ocfs2 maintains a separate index tree based on the hash of the directory entry names; the hash index points to the directory block where the entry can be found. Once the directory block is read, the entry is searched for linearly.

A special directory trailer is placed at the end of a directory block; it contains additional data about that block. It keeps track of the free space in the directory block, speeding up free-space lookups when inserting directory entries. The trailer also contains the checksum for the directory block, which is used by the metaecc feature (discussed later).

Filesystem Metadata

A special system directory (//) contains all the metadata files for the filesystem; this directory is not accessible from a regular mount. Note that the // notation is used only by the debugfs.ocfs2 tool. Files in the system directory, known as system files, differ from regular files both in how they store information and in what they store.

An example of a system file is the slot map, which records the mapping between nodes and slots in the cluster. A node joins a cluster by providing its unique DLM name. The slot map provides it with a slot number, and the node inherits all system files associated with that slot number. The slot number assignment is not persistent across boots, so a node may inherit the system files of another node. All node-associated files are suffixed with the slot number to differentiate the files of different slots.

A global bitmap file in the system directory keeps a record of the allocated blocks on the device. Each node also maintains a "local allocations" system file, which manages chunks of blocks obtained from the global bitmap. Maintaining local allocations decreases contention over global allocations.

The allocation units are divided into the following:

  • inode_alloc: allocates inodes for the local node.

  • extent_alloc: allocates extent blocks for the local node. Extent blocks are the intermediate (non-leaf) nodes in the B-tree storage of files.

  • local_alloc: allocates space, in data-cluster-sized units, for regular file data.

[Allocator layout]

Each allocator is associated with an inode; it maintains allocations in units known as "block groups." Each block group is preceded by a group descriptor which contains details about the group, such as the number of free units, allocation bitmaps, etc. The allocator inode contains a chain of group descriptor block pointers. If this chain is exhausted, group descriptors are linked onto the existing ones, forming what amounts to an array of linked lists. A new group descriptor is added to the smallest chain, so that the number of hops required to reach an allocation unit stays small.
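
A simplified sketch of that layout; the real on-disk structures (in fs/ocfs2/ocfs2_fs.h) differ in detail:

    /* Simplified sketch of the "array of linked lists" layout. */
    #include <stdint.h>

    struct group_desc {
        uint64_t gd_next_group;  /* next descriptor in this chain, 0 = end */
        uint16_t gd_free;        /* free allocation units in this group */
        /* allocation bitmap follows */
    };

    struct chain_list {
        uint16_t cl_count;       /* number of chains in use */
        uint64_t cl_chain[8];    /* block number of each chain's head */
    };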

Things get complicated when allocated data blocks are freed because those blocks could belong to the allocation map of another node. To resolve this problem, a "truncate log" maintains the blocks which have been freed locally, but not yet returned to the global bitmap. Once the node gets a lock on the global bitmap, the blocks in the local truncate log are freed.

A file is not physically deleted until all processes accessing the file close it. Filesystems such as ext3 maintain an orphan list containing files which have been unlinked but are still in use by the system. Ocfs2 also maintains such a list to handle orphan inodes. Things are a bit more complex, however, because a node must check that a file to be deleted is not in use anywhere in the cluster. This check is coordinated using the inode lock resource: whenever the last link to a file is removed, the node checks whether another node is using the file by requesting an exclusive lock on the inode lock resource. If the file is in use, it is moved to the orphan directory and marked with an OCFS2_ORPHANED_FL flag. The orphan directory is later scanned for files no longer being accessed by any node, so that they can be physically removed from the storage device.

Ocfs2 maintains a journal to deal with unexpected crashes. It uses the Linux JBD2 layer for journaling. The journal files are maintained, per node, for all I/O performed locally. If a node dies, it is the responsibility of the other nodes in the cluster to replay the dead node's journal before proceeding with any operations.

Additional Features

Ocfs2 has a couple of other distinctive features that it can boast about. They include:

  • Reflinks: a feature to support snapshotting of files using copy-on-write (COW). A system call interface, to be called reflink() or copyfile(), is currently being discussed upstream. Until the system call is finalized, users can access this feature via the reflink system tool, which uses an ioctl() call to perform the snapshotting.

  • Metaecc: an error-detection and correction feature for metadata using a cyclic redundancy check (CRC32). The code warns if the calculated checksum differs from the one stored, and re-mounts the filesystem read-only to avoid further corruption. It is also capable of correcting single-bit errors on the fly. A special data structure, ocfs2_block_check, is embedded in most metadata structures to hold the check values associated with the structure.
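
From memory of fs/ocfs2/ocfs2_fs.h, that structure looks roughly like this (the __le* types are the kernel's little-endian on-disk types; consult the source for the authoritative definition):

    /* Roughly the check structure embedded in ocfs2 metadata blocks. */
    struct ocfs2_block_check {
        __le32 bc_crc32e;   /* CRC32 of the block, for error detection */
        __le16 bc_ecc;      /* parity bits allowing single-bit repair */
        __le16 bc_reserved1;
    };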

Ocfs2 developers continue to add features to keep it up to par with other new filesystems. Some features to expect in the near future are delayed allocation, online filesystem checks, and defragmentation. Since one of the main goals of ocfs2 is to support a database, file I/O performance is considered a priority, making it one of the best filesystems for the clustered environment.

[Thanks to Mark Fasheh for reviewing this article.]

Comments (7 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Networking

Benchmarks and bugs

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Can Fedora Ship on Time?

August 31, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

If a Fedora release has ever been on schedule throughout the entire development process, it hasn't happened in recent memory. Fedora Project Leader Jared Smith's announcement that Fedora 14 would suffer a one-week slip came as no surprise, but it sparked a long discussion on the fedora-devel list about the project's long history of schedule slippage.

Mike McGrath compiled an extensive list of Fedora slips, but it's by no means complete. A quick trip to Google shows that every release since Fedora Core 4 has suffered a slip of at least a week. Reasons for the slippage vary by release, but the blocker for Fedora 14 Alpha was the inability to choose the basic vesa driver when booting and after installation.

The specific cause may not be that important, however. John Poelstra points out that every release has different blockers:

Anecdotally, what happened in this release is not unusual. Each release we have these "this shouldn't ever happen again" items and while they have different names the reasons are almost always the same -- landing changes to packages or infrastructure right on the deadline or not budgeting enough time to recover before the deadline if something goes wrong.

One example may be large changes that derail other system components. Bruno Wolff III cites Python and systemd as major causes for delays, and suggests that "big changes" land a month earlier for easier testing.

Some distributions have different schedules for different components. openSUSE, for example, has different freeze dates for the toolchain, base system, and remainder of the system. In theory this means that the pieces with the greatest impact on the rest of the system are stabilized first and allow for testing of other components without breakage going on "underneath" them. In practice, Adam Williamson says that it's rare for the blocker bugs to reside in the core components in Fedora.

But Fedora does have critical path packages that could be frozen earlier in the schedule. This would include components like Anaconda, GNU Bash, GCC, Grub, the kernel, X.org, RPM, Yum, and many others.

Smith offered three suggestions, starting with documenting the reasons for failure, as Poelstra has already been doing. Poelstra has a detailed list of things that have worked well with the Fedora 14 release so far, and of those that have not. Some of the problems Poelstra has identified include the Koji outage on July 10 and 11, and a need for redundancy on the Release Engineering team. Smith called for more work to document failures (and presumably successes) to try to address them in future releases.

Secondly, Smith called on the Fedora Engineering Steering Committee (FESCo) to "take a more active role in tracking the major changes that land in the distribution and judging the impact that they might have before freezes." Smith noted that Python and systemd weren't the blockers for F14 Alpha, but "had an impact on our ability to build test composes, and also our ability to thoroughly test the RCs" before evaluating the release.

Lastly, Smith asked for contributors to help the Quality Assurance (QA) team with automated testing, as too much is being done manually — and Fedora does not have the resources for extensive manual testing. Fedora has been boosting its testing efforts lately with the "proven testers," who are specifically focusing on the critical path packages. Fedora 13 was the first cycle that had the proven testers at the helm, and F13 slipped as well. However, one of the goals for the team is to recruit more people for testing, which may help in the long run.

Other posters suggested extending the release cycle, some by just a month, others suggesting a nine to 12 month cycle, but this was not met with a great deal of enthusiasm. Will Woods argued against it, saying that no matter how long the cycle was, it wouldn't "stop people trying to cram fixes/changes in at the last minute." As with Smith and others, Woods suggested "better tools and processes" rather than longer development cycles.

How long are the freezes? Users outside the development process may be surprised at how short they are. Peter Jones noted that it depends a great deal on where one starts counting. From development start to feature freeze, F12 had 43 days, F13 had 53 days, and F14 had 63 days, including holidays that fell within the development cycle. From development start to alpha freeze, F13 had 84 days, F12 only 76 days, and F14 just 70 days. In the end, Jones says that "from computing these numbers I think the best lesson is that our schedules have been so completely volatile that it's very difficult to claim they support any reasonable conclusions." Given the frequent changes to the development process for Fedora, there's something to be said for that conclusion.

In the end, are the slips really a major problem for Fedora? Williamson points out that the releases are still "reasonably fixed", with less than two weeks of slippage on each of the last three Fedora releases. Others have chimed in to say the process isn't broken, and that slipping to fix issues "is a feature not a bug."

While that might sound glib, there's some validity to it as well. Fedora long suffered from a bad reputation for pushing out releases with serious, sometimes show-stopping bugs. The latest releases have been, if not perfect, of much higher quality. What of the arguments that Fedora is less dependable for "a project manager looking at using a Linux OS in my project" or that it's losing users due to slips? Fedora's relatively short lifecycle after being released makes it an unlikely candidate for many projects even if it were on time. At any rate, Fedora's release cycle is fairly dependable — it can be expected that the release will slip by a few weeks past the scheduled date, but not much beyond that. The suggestion that Fedora is "losing users" over a few weeks of slippage is also unsupported.

Nothing final has been put into place yet, but it does look like the Fedora developers are taking the slips seriously. Over the years, the Fedora community has been aggressive about improving its development processes and addressing perceived issues with the Fedora distribution. If the community remains concerned and focused on avoiding slips, it just might be that Fedora 15 arrives as scheduled. But if it's a choice of quality vs. meeting the schedule, the slips aren't so bad.

Comments (8 posted)

New Releases

CyanogenMod 6.0 released

The long-awaited CyanogenMod 6.0 release is out, bringing a bunch of new features to several different Android platforms. "This is our first stable release based on Android 2.2, and we’ve hit our target list of devices. I’m completely amazed at what this project has become and the community that has developed around it, and it’s only just getting started."

Comments (8 posted)

Distribution News

Distribution quote of the week

Why should update policies for the kernel, dbus, firefox, inkscape, xorg-x11-server, and cowsay be the same? Does that make sense? If an update breaks my graphics, I can't use anything. If an update breaks cowsay, well... my clever MOTD is a little less clever but it shouldn't break other apps. So why not bundle critical stuff that'll really hurt our users in a huge way — the basics like networking, graphical display, hardware support, i18n input methods, sound — and put much more stringent guidelines on them than apps like figlet, xbiff, or xbill? If an application is relatively self-contained and can really only break itself — is it so necessary to be as strict about updates to it within a stable release?
-- Máirín Duffy

Comments (none posted)

Debian GNU/Linux

Debian Project mourns the loss of Frans Pop

The Debian Project has put up a brief notice on the passing of longtime contributor Frans Pop. "Frans was involved in Debian as a maintainer of several packages, a supporter of the S/390 port, and one of the most involved members of the Debian Installer team. He was a Debian Listmaster, editor and release manager of the Installation Guide and the release notes, as well as a Dutch translator."

Comments (6 posted)

DebConf10 DPL report: comm., CUT, RC, init, derivatives, ...

Debian Project Leader Stefano Zacchiroli has a report on his activities during DebConf. Topics include Communication, CUT (Constantly Usable Testing) BoF, RC squashing activities, init system(s), Derivatives, Collaboration with SFLC, and DebConf & Debian.

Full Story (comments: none)

Bits from ARM porters

Hector Oron looks at the status of the Debian ARM port. Topics include Official buildds, Hardfloat ARM port, Linaro bits, and Hardware donations.

Full Story (comments: none)

Fedora

Ksplice Removes Need for Rebooting on Fedora

Ksplice Uptrack is a Linux update service that does not require reboots, and it is now available free of charge for Fedora.

Full Story (comments: 2)

Fedora Board Meeting 27 August 2010

Máirín Duffy covers the August 27 public meeting of the Fedora Board. The board addressed questions about decentralized technical direction, funding and organization of Fedora Activity Days, FESCo and enforcement, and Fedora's target user.

Comments (none posted)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Duffy: A story about updates and people

On her blog, Máirín Duffy describes four archetypes of Fedora users (Caroline Casual-User, Pamela Packager, Connie Community, and Nancy Ninja) and how they relate to updates of the distribution. Fedora has been discussing its update policy for a bit and Duffy uses the user stories to present her thoughts on how to proceed. "Pamela wants updates to be constant throughout a release, no holds barred — she wants the latest Gimp and she wants it yesterday. Caroline just wants her computer to work — "please don't change a thing — it worked yesterday — if it breaks before my presentation I'm screwed!" Can both their needs be met? I think so! But it’s easy to completely miss where interests and needs can both be met when the language is so easily interpreted to mean the problem is untenable."

Comments (8 posted)

Page editor: Rebecca Sobol

Development

A licensing change for syslog-ng

August 31, 2010

This article was contributed by Robert Fekete

Many have criticized syslog-ng, a replacement for the syslog logging daemon with many additional features, for not being open enough. Syslog-ng has a closed-source commercial version and keeps the entire code base under a single copyright by requiring copyright transfer for contributions, which has been a sore spot for many people. This may be part of the reason that syslog-ng has failed to become the default system-logging daemon of modern Linux distributions. Now the project seeks to relieve these concerns and attract a wider contributor base with a new licensing model.

Syslog-ng started out over ten years ago as a purely GPL project, but it required copyright transfer for contributed patches. As in other projects, MySQL for example, this aimed to keep the codebase under a single copyright, leaving open the possibility of future relicensing and proprietary versions.

For about eight years, syslog-ng was virtually a one-man show, maintained by its core developer Balázs Scheidler. This changed in 2006 when BalaBit — the company founded by Balázs, originally to develop application-layer firewalls — decided to back the project and release a commercial version, syslog-ng Premium Edition. The model was to develop both versions in parallel and release additional features in the Premium Edition first; over time, those features are released in the Open Source Edition as well.

Some of the original features of the commercial version, like logging to SQL databases or TLS-encrypted log transport, have already been shifted to the open-source version. For others — like storing messages in encrypted and timestamped files, buffering messages on the client's hard disk during network outages, or the eventlog-to-syslog forwarding agent for Microsoft Windows platforms — that day has yet to come. Certain features, like support for the new syslog protocol (RFCs 5424-5428), appeared in both versions at the same time, while others were released in the Open Source Edition first.

With version 3.2 of syslog-ng Open Source Edition the project introduces significant changes in its licensing and its core framework. The monolithic codebase has been restructured into a core and a set of plugins, moving most of the functionality into plugins, with the core being the glue that keeps everything working. The core is licensed under the GNU Lesser General Public License (LGPL), while the plugins remain under the GNU General Public License (GPL) version 2.
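
Syslog-ng's actual module API is its own, but the general pattern of a core that pulls in separately licensed plugins at runtime usually comes down to the classic dlopen() dance. A generic sketch, with an invented entry-point name, not syslog-ng's real interface:

    /* Generic plugin-loading pattern; this is not syslog-ng's actual
     * module API, just an illustration of how a core can load
     * independently licensed plugins at runtime. */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef int (*plugin_init_fn)(void);

    int load_plugin(const char *path)
    {
        void *handle = dlopen(path, RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return -1;
        }
        /* "plugin_init" is an invented entry-point name. */
        plugin_init_fn init =
            (plugin_init_fn) dlsym(handle, "plugin_init");
        if (!init) {
            dlclose(handle);
            return -1;
        }
        return init();
    }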

Reasons and motives for the changes

The reasons for changing to a plugin-based architecture are both technical and administrative. For one, plugins make it much easier to extend the functionality of syslog-ng than hacking the monolithic codebase of earlier versions did, thus simplifying the life of contributors. And since copyright transfer is no longer required, the project hopes to become more accessible to new developers.

In addition, distributions have differing requirements and policies on how system daemons access external libraries. For example, in openSUSE it is a much-debated problem that the system logging daemon — which is started early in the boot process — should not need access to the SSL libraries which only become available much later when the /usr partition is mounted. In syslog-ng 3.2 it is possible to start syslog-ng with limited functionality, and then reload it with additional plugins (like encrypted network destinations) at a later point.

Because of the dual licensing (and, at the beginning, to keep that possibility open), the syslog-ng project required contributors to transfer the copyright of their patches to BalaBit. That allowed the patches to be incorporated in both the open source and the commercial versions, keeping the codebases of the two branches as close as possible and simplifying the development of both. Many would-be contributors did not like this idea, or simply found it overly bureaucratic. It even raised issues when a contributor created a patch as part of their paid work, as many companies have no policy for handling such situations, or simply refuse to sign contribution agreements. All of that discouraged many potential contributors from sending their patches upstream.

Also, the dual licensing of syslog-ng caused much controversy in open-source communities, and was one of the main reasons why Fedora — and subsequently most major Linux distributions — chose rsyslog when replacing syslogd. On the other hand, a significant portion of BalaBit's revenues comes from the commercial syslog-ng Premium Edition, landing BalaBit on Deloitte's regional lists of fastest-growing technology companies. Offering commercial support for the GPL version for many years did not produce the income that the commercial version generated in a year.

What all this means

The modularity of the new, plugin-based architecture and the license change boil down to the following consequences:

  • The contributors retain the copyright of their patches, decreasing overall bureaucracy and making contribution easier.

  • Anyone can develop and distribute open or proprietary plugins. Formerly BalaBit had exclusivity to develop a commercial version. Obviously proprietary plugins can link only to the LGPL syslog-ng core, and not to plugins licensed under the GPL.

Switching the license of the syslog-ng core to the LGPL allows the project to use exactly the same core for the open source and commercial versions, and to move the features of the Premium Edition into proprietary plugins, making the development and maintenance of both versions simpler and more reliable. Also, compared to many other mixed licensing models with open-source and proprietary versions, BalaBit's approach is more open in the sense that it drops the exclusivity for proprietary plugins; now anyone is free to create such plugins.

Naturally, the license change does not fully address the concerns about whether the upstream syslog-ng project would accept patches for the open-source version that replicate the functionality of an existing proprietary plugin. However, the switch to the plugin-based architecture mitigates this problem as it makes it easy to develop and distribute plugins independently of the upstream version. For example, a Linux distribution could create plugins specifically tailored to its needs, and include them in its stock syslog-ng package — or a separate package if needed.

Other new features

Apart from the license change, syslog-ng Open Source Edition 3.2 introduces some other new features:

  • To make configuring syslog-ng easier and more customizable, configuration snippets describing how to process the log messages of specific applications can be created and easily reused. For example, the syslog-ng configuration needed to read the logs of an application (e.g. Apache, Tomcat) can be stored in a central configuration library, and reused as needed.

  • The syslog-ng application can automatically recognize the platform it is running on, and configure itself to collect platform-specific log messages. For example, it will read the default door file on Solaris, /dev/log on Linux and BSD, and so on. Potentially, this makes configuration management of syslog-ng easier, especially in environments that use many different operating systems.

  • The new version of syslog-ng adds support for reading BSD-style process-accounting (pacct) log messages. Process accounting logs are stored in a binary file; this feature is an initial step to extend syslog-ng to read various non-syslog messages from binary files.
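
For readers unfamiliar with pacct: each record is a fixed-size binary structure appended to the accounting file by the kernel. A minimal sketch of reading one follows; the file path is distribution-specific and the record layout varies across accounting versions, so treat this as illustrative:

    /* Illustrative pacct reader: print the command name from each
     * accounting record. */
    #include <stdio.h>
    #include <sys/acct.h>

    int main(void)
    {
        FILE *f = fopen("/var/account/pacct", "rb");
        struct acct rec;

        if (!f)
            return 1;
        while (fread(&rec, sizeof(rec), 1, f) == 1)
            printf("%.*s\n", (int) sizeof(rec.ac_comm), rec.ac_comm);
        fclose(f);
        return 0;
    }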

For a more detailed list, refer to this developer blog posting.

Conclusion

The syslog-ng project continues to grow as new features appear on a regular basis, making log collection and processing easier and more flexible. The new licensing model and the plugin-based architecture provide a range of opportunities for developers and contributors alike, and also increase the competition between system logging solutions, whether open-source or proprietary. As we have seen countless times, competition is good for users because it increases the range of possibilities to choose from.

Comments (17 posted)

Brief items

Chromium Graphics Overhaul (The Chromium Blog)

The Chromium blog reports on some developments in graphics handling in the free browser on which Google Chrome is based. The intent is to speed up graphics rendering by taking advantage of the GPU. "At its core, this graphics work relies on a new process (yes, another one) called the GPU process. The GPU process accepts graphics commands from the renderer process and pushes them to OpenGL or Direct3D (via ANGLE). Normally, renderer processes wouldn’t be able to access these APIs, so the GPU process runs in a modified sandbox. Creating a specialized process like this allows Chromium’s sandbox to continue to contain as much as possible: the renderer process is still unable to access the system’s graphics APIs, and the GPU process contains less logic."

Comments (10 posted)

An overdue update from the Diaspora team

The Diaspora team has put out an update on their progress in creating a "privacy aware, personally controlled, do-it-all, open source social network". The project raised a fairly large sum of money to fund its efforts, and now it is reporting back. "We are spending a good chunk of time concentrating on building clear, contextual sharing. That means an intuitive way for users to decide, and not notice deciding, what content goes to their coworkers and what goes to their drinking buddies. We know that’s a hard UI problem and we take it seriously. The publicity and money that you have given us has let us work with great designers like Janice Frasier, through her new program LUXr, whose constant reminders that we are not the user have kept us honest and focused. Pivotal Labs has also helped us prioritize, and we have pushed back more technical features like plugins and APIs in favor of simple and high value features."

Comments (41 posted)

KDE SC 4.5.1 Released

KDE has updated the Applications, Platform and Plasma Workspaces to 4.5.1. "This release will make 4.5 users' life more pleasant by adding a number of important bugfixes, bringing more stability and better functionality to the Plasma Desktop, and many applications and utilities."

Full Story (comments: none)

Nanny 2.30.0 released

The first stable version (2.30.0) of the Nanny filter for parents who wish to restrict their children's access to the computer has been released. "Nanny is an easy way to control what your kids are doing in the computer. You can limit how much time a day each one of them is browsing the web, chatting or doing email. You can also decide at which times of the day [they] can do [these] things."

Full Story (comments: none)

NumPy 1.5.0 release

Version 1.5.0 of NumPy, a package for scientific computing in Python, has been released. This is the first release that is Python 3 compatible, though the testing framework relies on nose, which does not yet have a Python 3 compatible release (though there is a development branch).

Full Story (comments: none)

PostgreSQL 9.0 Release Candidate 1

The first release candidate for PostgreSQL 9.0 is available for testing. "No changes in commands, interfaces or APIs are expected between this release candidate and the final version. Applications which will deploy on 9.0 can and should test against 9.0rc1. Depending on bug reports, there may or may not be more release candidates before the final release."

Full Story (comments: 15)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

GNOME Journal Issue 21 released

Issue 21 of the GNOME Journal is out; topics covered include simple real-time games, Grilo, and an interview with Bradley Kuhn.

Full Story (comments: 1)

Callaway: The long, sordid tale of Sun RPC, abbreviated somewhat, to protect the [guilty] and the irresponsible

Fedora engineering manager Tom "spot" Callaway closes the book on a multi-year effort to relicense Sun's RPC code. The license, which was originally released with the code in 1984—before there were free software licenses—had distribution restrictions that made it clearly non-free. Debian noticed it first in 2002 and Fedora started working on the problem in 2005. "So, we restarted the effort with Oracle, and on August 18, 2010, Wim Coekaerts, on behalf of Oracle America, gave permission for the remaining files that we knew about under the Sun RPC license (netkit-rusers, krb5, and glibc) to be relicensed under the 3 clause BSD license."

Comments (24 posted)

Kügler: Demystifying Akonadi

KDE hacker Sebastian Kügler writes about Akonadi on his blog. In the article, he describes Akonadi, looks at its current status, and its future. "Akonadi is a groupware cache that runs on the local machine, a shared data store for all your personal [information]. Akonadi offers a unified API to receive and synchronise data with groupware, email servers or other online services. Agents called "resources" are responsible for communicating with the remote service, say your email server. These resources run out-of-process and communicate via separate control and data channels with the mothership (your local Akonadi). Resources are easy to implement and can interface any data source with Akonadi, be it your local calendar file, your companies groupware, email servers or address directories, or anything else you can come up with. More on that specifically later."

Comments (2 posted)

Page editor: Jonathan Corbet

Announcements

Legal Announcements

Fedora trademark defense -- you can help!

Red Hat legal is looking for help in defending the Fedora trademark. "Red Hat Legal is currently working on just such a defense. They've asked me to pass on a request for assistance in gathering physical evidence of our use of the Fedora logo worldwide prior to *January 30, 2007*."

Full Story (comments: none)

Welte: More GPL enforcement work again.. and a very surreal but important case

On his blog, Harald Welte writes about work he is doing as part of the gpl-violations.org project. "Right now I'm facing what I'd consider the most outrageous case that I've been involved so far: A manufacturer of Linux-based embedded devices (no, I will not name the company) really has the guts to go in front of court and sue another company for modifying the firmware on those devices. More specifically, the only modifications to program code are on the GPL licensed parts of the software. None of the proprietary userspace programs are touched! None of the proprietary programs are ever distributed either." If the manufacturer were to succeed with its claims, it could jeopardize many different projects that provide alternate code for devices, he says.

Comments (22 posted)

Articles of interest

Should Open Source Communities Avoid Contributor Agreements? (ComputerWorld)

Simon Phipps ponders contributor agreements, and copyright assignment policies in particular in ComputerWorld. "Even with these benefits available, there are many communities that choose not to aggregate their copyrights - notably the Linux kernel, GNOME, Apache and Mozilla communities. The recent policy and guidelines on copyright assignment by the GNOME Foundations are especially worth reading. Having diverse copyright ownership leads to a deeper mutual trust and an assurance that the playing-field remains level. Insisting on copyright aggregation is one of the more certain ways a company can ensure the open source community it is seeding remains small and lacking co-developers."

Comments (10 posted)

Hold The Celebrations; H.264 Is Not The Sort Of Free That Matters (ComputerWorld UK)

Over at ComputerWorld UK, Simon Phipps says there is nothing to celebrate in the recent announcement [PDF] that MPEG-LA will not charge royalties on "web uses" of the H.264 codec for the remaining life of the patents it administers. "First, the H.264-format video needs to be created - but that isn't free under this move. Then it needs to be served up for streaming - but that isn't free under this move. There then needs to be support for decoding it in your browser - but adding that isn't free under this move. Finally it needs to be displayed on your screen. [...] The only part of this sequence being left untaxed is the final one. Importantly, they are not offering to leave the addition of support for H.264 decoding in your browser untaxed. In particular, this means the Mozilla Foundation would have to pay to include the technology in Firefox." He also posits that MPEG-LA may try to join forces with Oracle and Paul Allen's Interval Research to create a three-way patent attack on Google—this time against WebM.

Comments (75 posted)

Google bails out of JavaOne

It should probably surprise nobody that Google has announced that its employees will not be attending JavaOne this year. "So we’re sad to announce that we won't be able to present at JavaOne this year. We wish that we could, but Oracle’s recent lawsuit against Google and open source has made it impossible for us to freely share our thoughts about the future of Java and open source generally."

Comments (114 posted)

Novell Disappoints as Ownership Concerns Continue (Datamation)

Datamation looks at Novell's third quarter financial results, which have fallen short of the company's projections. "The decline in revenues in the third quarter extended across Novell's multiple product lines, including its security-management and operating platforms, as well as its Linux business. Novell's reported revenue of $108 million for its security-management and operating platforms, down 2 percent year-over-year. Earlier this week, Novell announced a new cloud security service to manage access, identity and compliance. Novell's SUSE Linux platform products revenue in the third quarter netted $36 million, a decline of 7 percent from the third quarter of 2009. " (Thanks to Don Marti)

Comments (5 posted)

GCC - 'We make free software affordable' (The H)

Richard Hillesley delves into the history of GCC over at the H. "GCC began life as the GNU C Compiler and achieved its first release on March 22, 1987. Michael Tiemann, who contributed as much as anyone to the later development of GCC, and who had dreamed of writing the perfect compiler, said that the day of GCC's release was "the most thrilling and most terrifying day of my life (up to that point).""

Comments (13 posted)

New Books

Google Analytics: Understanding Visitor Behavior--New from O'Reilly

O'Reilly has released "Google Analytics: Understanding Visitor Behavior" by Justin Cutroni.

Full Story (comments: none)

web2py book 3rd ed

The third edition of the web2py book is available from lulu.com. It's also available in PDF or HTML form.

Full Story (comments: none)

Resources

CE Linux Forum Newsletter: August 2010

The CE Linux Forum newsletter for August covers Embedded Linux Conference Europe Program Announced, LinuxCon Japan, SquashFS Support for LZO Accepted into Mainline, 34th Japan Technical Jamboree, and eLinux wiki Moving to a New Host.

Full Story (comments: none)

Upcoming Events

Program announced for Embedded Linux Conference Europe

The program for the Embedded Linux Conference Europe has been announced. The conference will be held October 26-28 in Cambridge, UK, and will be co-located with the GStreamer conference on October 26. "The conference will host almost 50 sessions, including presentations, Birds-of-a-Feather sessions, keynotes and tutorials." Some of the speakers include Ari Rauch, Ralf Baechle, Wolfram Sang, Armijn Hemel, Kevin Hillman, Wookey, Grant Likely, and Frank Rowand. Click below for the full announcement.

Full Story (comments: none)

Linux Plumbers Conference announces opening keynote and presentations

The Linux Plumbers Conference (LPC) has announced that Jonathan Corbet—a name that may be familiar to LWN readers—will be the opening keynote speaker at the conference. LPC will be held in Cambridge, MA November 3-5. "We're also pleased to announce that speakers have been selected for the presentation track. Click here for the full list of accepted presentations. We're very excited about this year's speaker line up, and we're equally excited about the broad range of topics proposed for the micro-conferences. It is shaping up to be an interesting and productive conference." Early bird registration ($275) ends on August 31.

Comments (1 posted)

Events: September 9, 2010 to November 8, 2010

The following event listing is taken from the LWN.net Calendar.

September 6-9: Free and Open Source Software for Geospatial Conference (Barcelona, Spain)
September 7-9: DjangoCon US 2010 (Portland, OR, USA)
September 8-10: CouchCamp: CouchDB summer camp (Petaluma, CA, United States)
September 10-12: Ohio Linux Fest (Columbus, Ohio, USA)
September 11: Open Tech 2010 (London, UK)
September 13-15: Open Source Singapore Pacific-Asia Conference (Sydney, Australia)
September 16-18: X Developers' Summit (Toulouse, France)
September 16-17: Magnolia-CMS (Basel, Switzerland)
September 16-17: 3rd International Conference FOSS Sea 2010 (Odessa, Ukraine)
September 17-18: FrOSCamp (Zürich, Switzerland)
September 17-19: Italian Debian/Ubuntu Community Conference 2010 (Perugia, Italy)
September 18-19: WordCamp Portland (Portland, OR, USA)
September 18: Software Freedom Day 2010 (Everywhere, Everywhere)
September 21-24: Linux-Kongress (Nürnberg, Germany)
September 23: Open Hardware Summit (New York, NY, USA)
September 24-25: BruCON Security Conference 2010 (Brussels, Belgium)
September 25-26: PyCon India 2010 (Bangalore, India)
September 27-29: Japan Linux Symposium (Tokyo, Japan)
September 27-28: Workshop on Self-sustaining Systems (Tokyo, Japan)
September 29: 3rd Firebird Conference - Moscow (Moscow, Russia)
September 30-October 1: Open World Forum (Paris, France)
October 1-2: Open Video Conference (New York, NY, USA)
October 1: Firebird Day Paris - La Cinémathèque Française (Paris, France)
October 3-4: Foundations of Open Media Software 2010 (New York, NY, USA)
October 4-5: IRILL days - where FOSS developers, researchers, and communities meet (Paris, France)
October 7-9: Utah Open Source Conference (Salt Lake City, UT, USA)
October 8-9: Free Culture Research Conference (Berlin, Germany)
October 11-15: 17th Annual Tcl/Tk Conference (Chicago/Oakbrook Terrace, IL, USA)
October 12-13: Linux Foundation End User Summit (Jersey City, NJ, USA)
October 12: Eclipse Government Day (Reston, VA, USA)
October 16: FLOSS UK Unconference Autumn 2010 (Birmingham, UK)
October 16: Central PA Open Source Conference (Harrisburg, PA, USA)
October 18-21: 7th Netfilter Workshop (Seville, Spain)
October 18-20: Pacific Northwest Software Quality Conference (Portland, OR, USA)
October 19-20: Open Source in Mobile World (London, United Kingdom)
October 20-23: openSUSE Conference 2010 (Nuremberg, Germany)
October 22-24: OLPC Community Summit (San Francisco, CA, USA)
October 25-27: GitTogether '10 (Mountain View, CA, USA)
October 25-27: Real Time Linux Workshop (Nairobi, Kenya)
October 25-27: GCC & GNU Toolchain Developers' Summit (Ottawa, Ontario, Canada)
October 25-29: Ubuntu Developer Summit (Orlando, Florida, USA)
October 26: GStreamer Conference 2010 (Cambridge, UK)
October 27: Open Source Health Informatics Conference (London, UK)
October 27-29: Hack.lu 2010 (Parc Hotel Alvisse, Luxembourg)
October 27-28: Embedded Linux Conference Europe 2010 (Cambridge, UK)
October 27-28: Government Open Source Conference 2010 (Portland, OR, USA)
October 28-29: European Conference on Computer Network Defense (Berlin, Germany)
October 28-29: Free Software Open Source Symposium (Toronto, Canada)
October 30-31: Debian MiniConf Paris 2010 (Paris, France)
November 1-2: Linux Kernel Summit (Cambridge, MA, USA)
November 1-5: ApacheCon North America 2010 (Atlanta, GA, USA)
November 3-5: Linux Plumbers Conference (Cambridge, MA, USA)
November 4: 2010 LLVM Developers' Meeting (San Jose, CA, USA)
November 5-7: Free Society Conference and Nordic Summit (Gothenburg, Sweden)
November 6-7: Technical Dutch Open Source Event (Eindhoven, Netherlands)
November 6-7: OpenOffice.org HackFest 2010 (Hamburg, Germany)

If your event does not appear here, please tell us about it.

Audio and Video programs

View Keynote and Conference Session Videos from LinuxCon 2010

Videos of all of the keynote sessions and a number of conference sessions from LinuxCon 2010 are now available for viewing. Registration is required.

Full Story (comments: none)

Page editor: Rebecca Sobol


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds