
LWN.net Weekly Edition for March 1, 2012

The unstoppable Perl release train?

By Jonathan Corbet
February 29, 2012
There are a number of release management styles to be observed in the free software community, but many of them fall within two broad categories: "time-based" and "feature-based". A time-based project fixes a date for its next release, then adjusts the set of features included in that release to make the intended date at least somewhat plausible. Feature-based projects, instead, pick a set of desired features and hold the actual release "until it's ready." Arguably, the trend over the last decade has been in the direction of time-based releases. Perl 5 is a relatively recent convert to the time-based model; the project is now faced with a choice between making a release with known problems - including possible security issues - and delaying its 5.16 release.

Contemporary Perl's release schedule is approximately one major release per year. The 5.14 release came out on May 14, 2011, so it is not too soon to be thinking about 5.16. That release is indeed stabilizing with a number of new features. A key aspect of this release is a familiar story: as with most other languages, Perl developers are trying to improve their support for Unicode in all situations. Many developers have participated in this work, but Tom Christiansen has arguably been the most visible. His recent work includes the preparation of an extensive Perl Unicode cookbook demonstrating Perl's Unicode-related features which, he has suggested, are now second to none.

As this cookbook was being discussed, Karl Williamson pointed out that some of the examples where Perl is put into full-time UTF8 mode (a very useful mode for contemporary programs) are unsafe because the language does not properly handle malformed strings. At best, such problems could lead to the corrupted strings seen in so many settings where character encodings are not properly handled. But, as Christian Hansen suggested, things can be worse than that:

I would love for this to happen, I have advocated this on #p5p several times, but there is always the battle of "backwards compatibility disease". About 10 months ago I reported a security issue reading the relaxed UTF-8 implementation (still undisclosed and still exploitable) on the perl security mailing list.

What followed was a classic missive from Tom trying to get to the bottom of the problem. He listed nine different ways to tell Perl to operate with UTF8 and asked how many of them were truly vulnerable to undetected encoding problems. If Perl's UTF8 handling is unsafe by default, he said, it needs to be fixed:

If there's something so important that it must be done everytime to ensure correct behavior, then that is too important to be left up to the programmer to forget to do. It needs to be done for him.

And, he said, this situation really needs to be treated as a blocker for the 5.16 release; to do otherwise would be to delay the fix excessively and cause the world to be filled with bad workaround code.

In many projects, the prospect of open security-related problems would at least cause people to think about delaying a release. On the Perl list, though, Tom found little support. Aristotle Pagaltzis responded:

5.16 *must* be released whatever the state of this issue. To not do so is to fall into the thinking and behavioural pattern that stifled the release of 5.10 by several years. Perl 5 switched to a timeboxed release cycle because “this one more thing has to be polished before we can ship it” meant it never shipped at all.

Ricardo Signes added:

It's definitely true, though, that 5.even.0 releases *are no longer milestones.* Or, rather, they are milestones in a much more literal sense than is often meant. 5.16.0 means that we've come about one year since 5.15.0. It does *not* mean that we have met a series of goals named at 5.15.0, for example.

And, with those words, the public email discussion faded away. At this point, there is little clarity on which Unicode features are safe to use, what might be required to fix the rest, how many of those fixes might be ready in time for the 5.16 release, or whether programs using UTF8 in Perl 5.16 (and earlier releases) suffer from known-exploitable security problems. Tom, who probably understands these issues better than just about anybody else, said:

Right now I'm very hazy on the real status of all this stuff, and I am very uncomfortable with the idea of relentlessly charging ahead toward a release like a freight train with no brakes.

The response he got was, in essence, that another train would be coming along in a year and that the fixes could catch a ride on that one.

Releasing software with known bugs is a common practice; even a project like Debian cannot make the claim that there are no known problems with its releases. To do otherwise would make software releases into rare events indeed. It is also true that time-based releases have value; users know that they will get useful new code in a bounded period of time. Anybody who has watched release dates slip indefinitely knows how frustrating that can be for both users and developers; the 2.4.0 or 2.6.0 kernel releases are a classic example - as is Perl 5.10. So the Perl developers who say that the show must go on are doing so in accordance with many years of free software development experience.

That said, releasing software with known, security-related issues in something as fundamental as UTF8 support risks tarnishing the image of the project for a long time. There is not enough information publicly available to say whether Perl's UTF8 problems are severe enough to incur this risk. But it is probably safe to assume that, as a result of this conversation, crackers are looking at Perl's UTF8 handling rather more closely than they were before. Perl may have had some of these problems for years without massive ill effect, but they may not remain undiscovered and undisclosed for much longer. If the problem is real and exploitable, people are going to figure it out and take advantage of it.

One would assume that, behind the public positioning, the relevant developers understand what's at stake and are taking the time to understand the scope of the problem. They may not want to stop the release train, but seeing it derailed by a known security problem after release would not be much fun either. A release delay now may prove less painful than a security-update fire drill later.

Comments (24 posted)

Mozilla announces HTML5-based phone

February 29, 2012

This article was contributed by Nathan Willis

Mozilla announced a deal with Latin American mobile carrier Telefónica Digital on February 27 to start shipping HTML5-driven smartphones by the end of 2012. Although the press release dubs the new platform "Open Web Device," many Mozilla watchers know it better as Boot To Gecko (B2G), a lightweight Linux-based system that uses Gecko as its primary (if not sole) application framework. B2G has been in development since July 2011, but it has advanced significantly since we last examined it in mid-August.

B2G architecture

Initially, the project debated several options for the underlying operating system on which to perch the Gecko runtime — including Android (which served as the basis for the earliest experimental builds), MeeGo, and webOS. The team has since settled on an architecture of its own, which takes some components from Android, but constructs its own stack, called Gonk.

[Home screen]

Gonk is a stripped-down Linux distribution, using the kernel and some standard userspace libraries for hardware access (e.g., libusb and BlueZ). It also borrows hardware abstraction layer code from Android — the architecture document on the Mozilla wiki lists camera and GPS devices in particular. The document also refers to both the Android kernel and modified kernels supplied by hardware vendors; evidently B2G is designed to tolerate minor variations.

Gonk also includes an Android-derived init program, D-Bus, wpa_supplicant, and Network Manager. The media playback functionality also comes from Android: there is a mediaserver player for the audio and video formats that Gecko can decode natively, and libstagefright to access binary-only codecs and hardware decoding chips.

Mozilla's B2G team could not be reached for comment, but several of the Android components in the Gonk stack appear to have been chosen for the practical task of bootstrapping the OS on available smartphone hardware (such as the Samsung Galaxy S II, which is the testbed device at Mozilla) rather than on technical grounds. For instance, the setup currently uses Android's Bionic library, but there has been some discussion of replacing it with a more general libc implementation.

Apart from those components and the cellular modem layer, however, Gecko is the only runtime process started. In B2G, there is a master "Gecko server" process named b2g that spawns lower-privileged child processes for each web application run. Only the b2g process can directly access hardware devices; it mediates access requested by the child processes. b2g and the application processes can pass messages to each other using what Mozilla calls "IPC (Interprocess communication) Protocol Definition Language" or IPDL, which originates from the Electrolysis project. Electrolysis is Mozilla's effort to refactor Firefox as a multi-process application; it is still in development, but it provides both synchronous and asynchronous communication, and isolates child processes from each other.

Ultimately, Gonk is different enough from both Android and typical desktop Linux systems to warrant its own build target for Gecko. Among other distinctions, it implements a lightweight input event system and draws directly to OpenGL ES rather than to the X Window System. But in order to provide the full smartphone application framework, the Gecko engine must also have direct access to low-level functionality that is normally handled by other processes — such as the telephony stack.

New APIs

B2G's telephony stack is called the Radio Interface Layer (RIL), and is derived from the corresponding code in Android. RIL caters to device vendors by providing a rilproxy mechanism to manage connections between the b2g process and the cellular modem driver. The driver, called rild, can be proprietary (in fact, the RIL design assumes it is proprietary) and thus link to binary-only firmware blobs without triggering the license inheritance conditions from other parts of the stack.

RIL implements voice and 3G data calls, network status, text and MMS messaging, and SIM card commands (such as storing and retrieving contacts, or entering and validating unlock codes). These commands are exposed to HTML5 applications through the WebTelephony and WebSMS JavaScript APIs, which are still in development.

In fact, much of the B2G-specific functionality requires the definition of new APIs that "traditional" Gecko applications do not expect. Mozilla has already invested in the drafting of OS-independent web APIs for other reasons, including its Web App Store project. Mozilla is also participating in the standardization effort being managed by the W3C, primarily the Device APIs Working Group. At the moment, not every API created by Mozilla has a corresponding W3C specification, although Mozilla says standardizing all of them is the ultimate goal. Mozilla's list includes a mixture of high-level interfaces (such as the Contacts or Camera APIs) and low-level interfaces (such as the Bluetooth and near field communication (NFC) stack APIs).

The most far-reaching APIs in the library are Open Web Apps, which specifies a manifest file format and packaging guidelines for "installable" web applications, and WebRTC, which provides a real-time communication framework used for streaming media as well as network-centric features like peer-to-peer networking. Some of the less familiar specifications under development are APIs for detecting screen orientation, processing notifications when the device is idle, and locking hardware resources (such as preventing screen-dimming).

Applications and interfaces

[Dialer history]

The plan is for every application in a B2G device to be written in HTML5, CSS, and JavaScript, and naturally the list begins with the generic phone applications: a "home" screen, phone dialer, camera, contact list, calendar, and so on. While phone vendors may prefer to write their own code in-house, Mozilla is at least in the planning and mock-up stage of creating some demo B2G applications.

The demo application suite is code-named Gaia. The wiki page has placeholders for a dozen or so basic applications, including more detailed mock-ups for the web browser, image gallery, and camera application. There are also mock-up images on the Demo page for a home screen, lock screen, and dialer.

More interesting than static mock-ups are the in-development demos found at B2G developer Andreas Gal's GitHub site (screen shots of some are shown here). A recent version of Firefox will open the HTML files from the repository; the homescreen.html file shows the home screen and lock screen, and several other applications are accessible through home screen launchers (the Mozilla wiki's Demo page also links to a "live" demo, but that appears to be out-of-order at the moment). The UI responds to mouse clicks and "slide" gestures, although the applications vary widely in completion, of course — some are just static images. Still, the home screen and generic applications are in place along with basic navigation and UI elements.

The road ahead

The B2G roadmap is ambitious; it predicts a product-ready milestone by the end of the second quarter of 2012. Considering that not all of the fourth quarter 2011 goals have been met yet, that may not be attainable without an infusion of developer time from a hardware partner, but the project did bring a B2G demonstration to Mobile World Congress alongside the February 27 announcement.

[Marketplace]

Perhaps notably, the Bluetooth, NFC, and USB tracking bugs appear to be behind schedule at the moment, while most of the higher-level work for sensor support and user applications appears to be on track. Arguably the biggest piece of the puzzle yet to get underway is WebRTC support. WebRTC itself is still undergoing rapid development, of course, but without it, the goal of supporting real-time audio and video on a web-runtime smartphone is difficult to imagine.

Resources for potential Open Web Device application developers are also in short supply at the moment. Although Mozilla curates a collection of HTML5 resources at its Mozilla Developer Network (MDN) site, so far only the Open WebApps APIs are covered. Mozilla formally opened its Open Web App "Marketplace" on February 28, in an announcement that referenced the in-progress Web API effort in addition to the traditional desktop platform. That is probably a sign that the Web APIs will make their way to MDN in short order as well.

In its own press release, Telefónica Digital says only that its first Open Web Device-powered phone (which has not been detailed in hardware specification or price) will ship in 2012. That leaves considerable time to complete the project on schedule, although from Mozilla's perspective getting the new JavaScript APIs through the not-known-for-its-rapidity W3C standardization process may be a tight fit.

Comments (82 posted)

Various notes on /usr unification

By Jonathan Corbet
February 28, 2012
"/usr unification" (or simply "usrmove") is the idea of moving the contents of /bin, /lib, and related root-level directories into their equivalents under /usr. That this scheme is controversial can be seen from the fact that, when LWN pointed to a document describing the motivations behind the move, a thread of well over 250 comments resulted. One would think, from the volume of comments on LWN and elsewhere, that this change is a threat to the very nature of Linux. Either that, or it has inspired one of the biggest bikeshedding exercises in some time.

One of the "advantages" of running a Rawhide system is that one gets to try out the new toys ahead of the crowd. Said toys also have a certain tendency to arrive at random and inopportune times. In the case of the /usr move, most Rawhide users got little or no warning before updates simply failed to work. The Rawhide repository, it seems, had suddenly been populated with packages requiring a system where the /usr move had already been done. Actually causing this move to happen was left as an exercise for the system administrator.

As it happens, this is one of the more controversial aspects of the /usr move in the Fedora community. Fedora policy has always said that using the yum tool to update an installation to a newer release of the distribution is not supported; one is supposed to use a CD or the preupgrade tool for that purpose. But, in reality, just using yum tends to work; the prospect of it not working to get to Fedora 17 has caused some grumpiness in the Fedora user community. If this limitation remains in the Fedora 17 release, expect to hear some complaints; users don't like to have their ponies taken away, even if nobody had ever said they could have a pony in the first place.

For Rawhide users, it is possible to move past the usrmove flag day by following this set of instructions. The process involves editing the grub configuration files, installing a special initramfs image, then rebooting with a special option that causes the actual thrashing of the system to be done from the initramfs before the real root is mounted. Your editor, setting out along this path with a well backed-up system, was reminded of doing the manual transition from the a.out to the ELF executable format on a Slackware system many years ago. In both cases, one had the sense of toying with fundamental forces with no guarantee of a non-catastrophic outcome.

In both cases, as it happens, things worked out just fine. Directories like /bin, /lib, /lib64, and /sbin are now symbolic links into /usr, and the system works just like it always did. No programs or scripts needed to be changed, custom kernels work as they always did, and so on. That will almost certainly be most users' experience of this transition - they will not even notice that it has happened. This is not a particularly surprising result; the practice of moving files and leaving a symbolic link in their place has a long history. Of course, somebody has probably patented it anyway.

Of course, an equally successful result for all systems depends on the Fedora developers creating a bombproof automatic mechanism for carrying out this transition prior to the Fedora 17 release. Hopefully something will have been learned from the GRUB 2 mess, where the Fedora 16 upgrade left a lot of bricked systems in its wake. If Fedora 17 creates similar messes on systems that, perhaps, are not set up quite as the Fedora developers expect, a lot of unhappiness will ensue.

As an aside, it is interesting to compare how Fedora is managing this transition with Debian's multiarch transition. Fedora has forced a flag day requiring the entire thing to happen at once; Debian, instead, has carefully avoided flag days in favor of a carefully-planned step-by-step change. One could argue that the Debian approach is better: it lets the transition "just happen" through normal package updates without the need for any special actions on the part of administrators or users. On the other hand, the multiarch transition has been a multi-year process and is still not complete; Fedora, instead, has pushed this change through in a small number of months.

Now that Fedora has made this jump and seems to be past the turning-back point, will other distributions follow suit? Some investigation suggests that Fedora will not be alone in this change:

  • There have been no official statements from Red Hat, naturally, but it seems likely that, given how much effort has been put into this change by Red Hat employees, RHEL7 will feature the unified directory organization. The RHEL clones, including Oracle's distribution and CentOS, will have little choice but to incorporate this change.

  • OpenSUSE has been discussing the idea for a while and seems to be receptive to it. There does not appear to be an official statement that openSUSE will make the /usr move, but there is a web page set up to track progress in that direction.

  • Debian has had a number of long discussion threads about usrmove; it will not surprise longtime Debian watchers that there is a variety of opinions on the issue. There is some concern about possibly breaking systems with strange setups, though the most often cited case - systems with /usr on a separate partition - has been thought about and is relatively easy to support. In the end, it may come down to making the change to keep life easier; as Marco d'Itri put it: "I am not really looking forward to keep reverting these changes in my package, and since Red Hat controls most Linux infrastructure now other packages will face the same problem."

    Of course, Debian has some time to think about the problem. Nobody has suggested that usrmove could be added to the list of goals for the upcoming "wheezy" release, so any transition, even in the unstable tree, is likely to be at least a year away.

  • Ubuntu has been relatively quiet about the idea; what has been posted by Ubuntu developers suggests a lack of enthusiasm for the idea. But if Debian makes the change, it could be hard for Ubuntu to retain the current filesystem layout.

In short, it looks like, over time, the move toward keeping all binaries and libraries in /usr will spread through most Linux distributions.

Given that most users will likely not even notice the change, it is natural to wonder why the change is so controversial. Undoubtedly, some people dislike significant changes to how traditional Linux systems have been laid out. Possibly, some creatively-organized systems will not handle the transition well, leading to problems that must be fixed. Conceivably, the change strikes some people as useless churn with no real benefits - at least for them. And, almost certainly, it is a bikeshed-type issue that is easy to have an opinion on and complain about.

The truth of the matter is that it appears to be a reasonably well-thought-out change driven by some plausible rationales. Filesystem hierarchies are not set in stone; there are times when it makes sense to clean things up. The usrmove operation yields a filesystem that is easier to understand, easier to manage, and more consistent in its organization. Having been through the transition, your editor thinks it is probably worth the trouble.

Comments (131 posted)

Page editor: Jonathan Corbet

Security

Fedora introduces Network Zones

February 29, 2012

This article was contributed by Nathan Willis

Fedora is exploring a new security feature in the upcoming Fedora 17 release that is intended to make firewall configuration simpler for the average user. Based on features found in the new, D-Bus-enabled firewall tool, the Network Zones feature will automatically activate different sets of firewall rules based on the network location, and provide sensible defaults for different classes of network — from trusted connections at home to potentially dangerous open WiFi networks in public.

Background

At the heart of Network Zones is the notion that each network has an associated trust level — fully trusted, mostly trusted, mostly untrusted, or fully untrusted. To serve the needs of users that regularly move between network locations, the firewall should ratchet up the netfilter rules to a stricter level when in an untrusted location such as a restaurant or public park, but ratchet them back down when reconnecting to the home or office network. Exactly which rules are applied at each trust level is up to the user or administrator.

Existing firewall tools, however, fall short in two ways. First, they know only about network interfaces, not about the networks themselves (trust information in particular). Second, they require stopping and restarting the firewall in order to change any settings. Red Hat's Thomas Woerner developed a solution that tackles both issues, built on GNOME's NetworkManager and a new firewall application named Firewalld.

NetworkManager, which was designed to keep track of WiFi networks and opportunistically switch between them when roaming, already implemented some of the required functionality. It maintains a history of networks visited, and allows users to associate preferences with each saved network connection. Adding a trust level to each network connection would allow NetworkManager to instruct the firewall to change to a different rule set.

Firewalld solves the restarting problem by running as a D-Bus enabled daemon, listening for state change commands, and emitting information on the current status of the firewall. NetworkManager also uses D-Bus, so it can simply send a message to Firewalld telling it to raise or lower specific firewall rules as the situation dictates. To keep the firewall state consistent, though, Firewalld must be the only application modifying the settings (thus requiring that other firewall management packages be disabled), and for security reasons, all rule change requests accepted by Firewalld must be authenticated with PolicyKit.

In order for Firewalld to expose a straightforward set of configuration options to users (and other applications) over D-Bus, the firewall rule set had to be refactored into a clean set of distinct chains that correspond to individual features and services. Thus, adding or removing a rule in one chain will not interfere with the others. Firewalld implements chains for feature sets like virtualization, masquerading (NAT), port forwarding, and open ports, plus predefined chains for individual services like Samba or FTP.

The Zones zone

The official Network Zones proposal on the Fedora wiki defines nine initial network trust levels: trusted (meaning fully trusted, with all incoming traffic permitted), home, work, and internal (all three of which are for mostly trusted networks), dmz, public, and external (which are mostly untrusted), and block and drop (which are both fully untrusted).

Firewalld provides an initial configuration for each zone in an XML file. By default, the mostly-trusted zones open only a select set of ports (SSH, mDNS, Samba, and Internet Printing Protocol (IPP) for the home and internal zones, and SSH and IPP for work). The mostly-untrusted zones are more restricted, with SSH being the only allowed protocol for public and dmz, but SSH allowed and IP Masquerading enabled for external — though it's not exactly clear what the benefit of the latter is. The block zone rejects all incoming connections, while the drop zone drops them silently.

Of course, the intent is for users or system administrators to customize each zone's configuration as desired, allowing some differentiation between the otherwise-identical offerings. There are GUI and command-line utilities (named firewall-config and firewall-cmd, respectively) for examining and altering the configuration options; however, as of today, not all of the Firewalld rules are supported in either tool.

There does not appear to be a DTD or XML Schema for the zone configuration or firewall rules yet, but the syntax is straightforward. Individual services are enabled with a <service name="servicename"/> element; the other available firewall options are enabled by adding an element specific to the feature, such as <masquerade enabled="True"/>.
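
Putting those pieces together, a complete zone definition might look roughly like the following sketch. This is a hypothetical example along the lines of the "external" zone described above; the <zone> wrapper element and the "ssh" service name are assumptions for illustration, not details taken from Firewalld itself.

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Hypothetical zone file, loosely modeled on the "external" zone. -->
    <zone>
        <!-- Allow only incoming SSH connections. -->
        <service name="ssh"/>
        <!-- Turn on IP masquerading (NAT) for this zone. -->
        <masquerade enabled="True"/>
    </zone>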

Network Zone integration is available in the NetworkManager 0.9.4 release, which will be part of Fedora 17, allowing users to assign a trust level to each of their saved networks, as well as a default zone to apply to unknown network connections. A system tray applet will display the current firewall state in GNOME Shell. The project has discussed adding the same functionality to KDE's network manager as well.

Firewalld was first made available in Fedora 15, but with the completion of the Network Zones support, it is slated to become the default firewall configuration tool in Fedora 17 (scheduled for release in early May 2012). Network zone support is not the only benefit of the daemon-like firewall approach — D-Bus controls open the door for other dynamic features in the future, like triggering temporary firewall rules without manual intervention, and desktop notifications triggered by firewall events.

The Firewalld project is not resting on its laurels, however. The future plans include support for granting or limiting access to the configuration tools on a per-user basis, and more abstract firewall rules based on metadata — such as "allow external access to music sharing applications." Network Zones is of clear benefit to laptop users, who both expose their systems to the greatest risk while roaming and have had the hardest time finding a balanced firewall policy. But the possibilities enabled by a dynamically controlled firewall extend further; only time will tell what roles it can fill that a static configuration could not.

Comments (12 posted)

Brief items

Security quotes of the week

7. We are right now looking at you through your webcam. Do you always move your lips like that when you read? We also recorded what you were doing last week and are sending the video to (you know who). If the prior statements are not true, it's because in addition to everything else, we reserve the right to lie to you, and you agree to believe us and hold us harmless for any and all such lies. Furthermore, if we are not recording everything you're doing through your webcam, it's either because we haven't figured out how, you're just not that interesting, or both.

8. We are serious about all of the above. So don't go trying to sue us later with some nonsense like "I thought that was all satire." All your privacy are belong to us. We mean it.

-- Parts of the Skipity privacy policy

The aim of our sponsorship is simple: we have a big learning opportunity when we receive full end-to-end exploits. Not only can we fix the bugs, but by studying the vulnerability and exploit techniques we can enhance our mitigations, automated testing, and sandboxing. This enables us to better protect our users.

While we’re proud of Chrome’s leading track record in past competitions, the fact is that not receiving exploits means that it’s harder to learn and improve. To maximize our chances of receiving exploits this year, we’ve upped the ante. We will directly sponsor up to $1 million worth of rewards [...]

-- Google ponies up for Chrome browser exploits

It doesn't take more than a few minutes of thought to see the utterly disastrous ramifications of the "right to be forgotten" approach, and the cascading damage to free speech that could easily spread malignantly across the global Internet as a result.

The crux of the matter is simple enough. Even if search engine results are selectively expunged on demand, the "upsetting" material in question will still likely exist on the Internet itself, still subject to being located by other means, including via sites that merely discuss related topics, situations, companies, or individuals.

-- Lauren Weinstein

Comments (5 posted)

"Unethical" HTML video copy protection proposal draws criticism from W3C reps (ars technica)

A proposal to add a DRM layer for web audio and video has been rather controversial on the W3C HTML mailing list, as reported by ars technica. The Encrypted Media Extensions proposal authored by Google, Microsoft, and Netflix would add an optional layer for protected media content, but Mozilla and others, including Google's Ian Hickson who is the WHATWG HTML specification editor, have spoken up against the proposal. "'I believe this proposal is unethical and that we should not pursue it,' he [Hickson] wrote in response to a message that Microsoft's Adrian Bateman posted on the mailing list about the draft. 'The proposal above does not provide robust content protection, so it would not address this use case even if it wasn't unethical.'"

Comments (90 posted)

Mozilla: announcing "Collusion"

The Mozilla Foundation has announced the availability of the Collusion add-on for Firefox. "Collusion is an experimental add-on for Firefox and allows you to see all the third parties that are tracking your movements across the Web. It will show, in real time, how that data creates a spider-web of interaction between companies and other trackers."

Comments (15 posted)

New vulnerabilities

asterisk: denial of service

Package(s): asterisk   CVE #(s): CVE-2012-0885
Created: February 23, 2012   Updated: February 29, 2012
Description:

From the Gentoo advisory:

A vulnerability has been found in Asterisk's handling of certain encrypted streams where the res_srtp module has been loaded but video support has not been enabled.

A remote attacker could send a specially crafted SDP message to the Asterisk daemon, possibly resulting in a Denial of Service condition.

Alerts:
Gentoo 201202-06 asterisk 2012-02-22

Comments (none posted)

csound: code execution

Package(s): csound   CVE #(s): CVE-2012-0270
Created: February 28, 2012   Updated: March 14, 2012
Description: From the Secunia advisory:

Secunia Research has discovered two vulnerabilities in Csound, which can be exploited by malicious people to compromise a user's system.

1) A boundary error within the "getnum()" function (util/heti_main.c) can be exploited to cause a stack-based buffer overflow via a specially crafted hetro file.

2) A boundary error within the "getnum()" function (util/pv_import.c) can be exploited to cause a stack-based buffer overflow via a specially crafted PVOC file.

Successful exploitation allows execution of arbitrary code, but requires tricking a user into converting a malicious file.

Alerts:
openSUSE openSUSE-SU-2012:0370-1 csound 2012-03-14
openSUSE openSUSE-SU-2012:0315-1 csound 2012-02-28

Comments (none posted)

drupal6: multiple vulnerabilities

Package(s): drupal6   CVE #(s):
Created: February 27, 2012   Updated: February 29, 2012
Description: Multiple vulnerabilities were fixed in Drupal 6.23. Drupal 6.24 contains additional bug fixes.
Alerts:
Fedora FEDORA-2012-1283 drupal6 2012-02-25
Fedora FEDORA-2012-1306 drupal6 2012-02-25

Comments (none posted)

drupal7: multiple vulnerabilities

Package(s): drupal7   CVE #(s):
Created: February 27, 2012   Updated: February 29, 2012
Description: Drupal 7.11 fixes multiple vulnerabilities. Drupal 7.12 contains additional bug fixes.
Alerts:
Fedora FEDORA-2012-1250 drupal7 2012-02-25
Fedora FEDORA-2012-1268 drupal7 2012-02-25

Comments (none posted)

fex: fixes a regression in a previous update

Package(s): fex   CVE #(s): CVE-2012-0869
Created: February 27, 2012   Updated: February 29, 2012
Description: From the Debian advisory:

It was discovered that the last security update for F*X, DSA-2414-1, introduced a regression. Updated packages are now available to address this problem.

Alerts:
Debian DSA-2414-2 fex 2012-02-25

Comments (none posted)

glibc: format string protection mechanism bypass

Package(s): glibc   CVE #(s): CVE-2012-0864
Created: February 27, 2012   Updated: March 22, 2012
Description: From the Red Hat bugzilla:

In the Phrack article "A Eulogy for Format Strings", a researcher using nickname "Captain Planet" reported an integer overflow flaw in the format string protection mechanism offered by FORTIFY_SOURCE. A remote attacker could provide a specially crafted executable, leading to FORTIFY_SOURCE format string protection mechanism bypass, when executed.

Alerts:
Gentoo 201312-01 glibc 2013-12-02
Mandriva MDVSA-2013:162 glibc 2013-05-07
Scientific Linux SL-glib-20120321 glibc 2012-03-21
Scientific Linux SL-glib-20120321 glibc 2012-03-21
Oracle ELSA-2012-0397 glibc 2012-03-20
CentOS CESA-2012:0397 glibc 2012-03-20
Red Hat RHSA-2012:0397-01 glibc 2012-03-19
Oracle ELSA-2012-0393 glibc 2012-03-15
CentOS CESA-2012:0393 glibc 2012-03-15
Red Hat RHSA-2012:0393-01 glibc 2012-03-15
Ubuntu USN-1396-1 eglibc, glibc 2012-03-09
Fedora FEDORA-2012-2144 glibc 2012-03-08
Fedora FEDORA-2012-2162 glibc 2012-02-25

Comments (none posted)

kernel: denial of service

Package(s): kernel   CVE #(s): CVE-2011-2518
Created: February 29, 2012   Updated: February 29, 2012
Description: The TOMOYO Linux security module does not properly handle mount() calls, allowing an unprivileged local process to oops the kernel.
Alerts:
Ubuntu USN-1386-1 linux-lts-backport-natty 2012-03-06
Ubuntu USN-1383-1 linux-ti-omap4 2012-03-06
Ubuntu USN-1380-1 linux 2012-02-28

Comments (none posted)

kernel: denial of service

Package(s): kernel   CVE #(s): CVE-2011-4097
Created: February 29, 2012   Updated: March 6, 2012
Description: The kernel's calculation of out-of-memory scores could result in the untimely demise of the wrong process. A local user could use this error to kill an unrelated process.
Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Ubuntu USN-1386-1 linux-lts-backport-natty 2012-03-06
Ubuntu USN-1384-1 linux-lts-backport-oneiric 2012-03-06
Ubuntu USN-1380-1 linux 2012-02-28

Comments (none posted)

libvpx: denial of service

Package(s): libvpx   CVE #(s): CVE-2012-0823
Created: February 27, 2012   Updated: February 29, 2012
Description: From the CVE entry:

VP8 Codec SDK (libvpx) before 1.0.0 "Duclair" allows remote attackers to cause a denial of service (application crash) via (1) unspecified "corrupt input" or (2) by "starting decoding from a P-frame," which triggers an out-of-bounds read, related to "the clamping of motion vectors in SPLITMV blocks".

Alerts:
Mandriva MDVSA-2012:023-1 libvpx 2012-02-28
Mandriva MDVSA-2012:023 libvpx 2012-02-27

Comments (none posted)

maradns: denial of service

Package(s): maradns   CVE #(s): CVE-2012-0024
Created: February 23, 2012   Updated: February 29, 2012
Description:

From the Gentoo advisory:

MaraDNS does not properly randomize hash functions to protect against hash collision attacks.

A remote attacker could send many specially crafted DNS recursive queries, possibly resulting in a Denial of Service condition.

Alerts:
Gentoo 201202-03 maradns 2012-02-22

Comments (none posted)

notmuch: information disclosure

Package(s): notmuch   CVE #(s): CVE-2011-1103
Created: February 23, 2012   Updated: March 19, 2012
Description:

From the Debian advisory:

It was discovered that Notmuch, an email indexer, did not sufficiently escape Emacs MML tags. When using the Emacs interface, a user could be tricked into replying to a maliciously formatted message which could lead to files from the local machine being attached to the outgoing message.

Alerts:
Fedora FEDORA-2012-3315 notmuch 2012-03-17
Fedora FEDORA-2012-3312 notmuch 2012-03-17
Debian DSA-2416-1 notmuch 2012-02-23

Comments (none posted)

openjdk: sandbox bypass

Package(s): openjdk   CVE #(s): CVE-2012-0507
Created: February 29, 2012   Updated: May 10, 2012
Description: The openjdk AtomicReferenceArray class does not check the type of the incoming array, leading to a JVM crash or sandbox bypass.
Alerts:
Gentoo 201401-30 oracle-jdk-bin 2014-01-26
SUSE SUSE-SU-2012:0603-1 IBM Java 1.6.0 2012-05-09
SUSE SUSE-SU-2012:0602-1 IBM Java 1.5.0 2012-05-09
Red Hat RHSA-2012:0514-01 java-1.6.0-ibm 2012-04-24
Red Hat RHSA-2012:0508-01 java-1.5.0-ibm 2012-04-23
Ubuntu USN-1373-2 openjdk-6b18 2012-03-01
Debian DSA-2420-1 openjdk-6 2012-02-28

Comments (none posted)

postgresql: multiple vulnerabilities

Package(s): postgresql   CVE #(s): CVE-2012-0866 CVE-2012-0867 CVE-2012-0868
Created: February 27, 2012   Updated: September 28, 2012
Description: From the Debian advisory:

CVE-2012-0866: It was discovered that the permissions of a function called by a trigger are not checked. This could result in privilege escalation.

CVE-2012-0867: It was discovered that only the first 32 characters of a host name are checked when validating host names through SSL certificates. This could result in spoofing the connection in limited circumstances.

CVE-2012-0868: It was discovered that pg_dump did not sanitise object names. This could result in arbitrary SQL command execution if a malformed dump file is opened.

See the PostgreSQL 9.1.3, 9.0.7, 8.4.11 and 8.3.18 update announcement for more information.

Alerts:
Gentoo 201209-24 postgresql-server 2012-09-28
Oracle ELSA-2012-1263 postgresql, postgresql84 2012-09-14
Oracle ELSA-2012-1037 postgresql, postgresql84 2012-06-30
Oracle ELSA-2012-1037 postgresql, postgresql84 2012-06-26
Scientific Linux SL-post-20120522 postgresql 2012-05-22
Oracle ELSA-2012-0678 postgresql84, postgresql 2012-05-22
Oracle ELSA-2012-0678 postgresql84, postgresql 2012-05-22
CentOS CESA-2012:0678 postgresql 2012-05-21
openSUSE openSUSE-SU-2012:0480-1 postgresql 2012-04-11
Fedora FEDORA-2012-2589 postgresql 2012-03-08
Fedora FEDORA-2012-2591 postgresql 2012-03-08
Mandriva MDVSA-2012:026 postgresql 2012-02-29
Mandriva MDVSA-2012:027 postgresql8.3 2012-02-29
Ubuntu USN-1378-1 postgresql-8.3, postgresql-8.4, postgresql-9.1 2012-02-28
Debian DSA-2418-1 postgresql-8.4 2012-02-27
CentOS CESA-2012:0678 postgresql84 2012-05-21
CentOS CESA-2012:0677 postgresql 2012-05-21
Red Hat RHSA-2012:0678-01 postgresql84, postgresql 2012-05-21
Oracle ELSA-2012-0677 postgresql 2012-05-22
Scientific Linux SL-post-20120522 postgresql, postgresql 84 2012-05-22
Red Hat RHSA-2012:0677-01 postgresql 2012-05-21

Comments (none posted)

puppet: two privilege escalations

Package(s): puppet   CVE #(s): CVE-2012-1053 CVE-2012-1054
Created: February 23, 2012   Updated: July 4, 2012
Description:

From the Ubuntu advisory:

It was discovered that Puppet did not drop privileges when executing commands as different users. If an attacker had control of the execution manifests or the executed command, this could be used to execute code with elevated group permissions (typically root). (CVE-2012-1053)

It was discovered that Puppet unsafely opened files when the k5login type is used to manage files. A local attacker could exploit this to overwrite arbitrary files and escalate privileges. (CVE-2012-1054)

Alerts:
openSUSE openSUSE-SU-2012:0835-1 puppet 2012-07-04
Fedora FEDORA-2012-2367 puppet 2012-03-10
Fedora FEDORA-2012-2415 puppet 2012-03-10
SUSE SUSE-SU-2012:0325-1 puppet 2012-03-06
Gentoo 201203-03 puppet 2012-03-05
Debian DSA-2419-1 puppet 2012-02-27
Ubuntu USN-1372-1 puppet 2012-02-23

Comments (none posted)

python-httplib2: information disclosure

Package(s): python-httplib2   CVE #(s):
Created: February 27, 2012   Updated: February 29, 2012
Description: From the Ubuntu advisory:

The httplib2 Python library earlier than version 0.7.0 did not perform any server certificate validation when using HTTPS connections. If a remote attacker were able to perform a man-in-the-middle attack, this flaw could be exploited to alter or compromise confidential information in applications that used the httplib2 library.

Alerts:
Ubuntu USN-1375-1 python-httplib2 2012-02-27

Comments (none posted)

samba: remote code execution

Package(s): samba   CVE #(s): CVE-2012-0870
Created: February 24, 2012   Updated: March 12, 2012
Description: From the Red Hat advisory:

An input validation flaw was found in the way Samba handled Any Batched (AndX) requests. A remote, unauthenticated attacker could send a specially-crafted SMB packet to the Samba server, possibly resulting in arbitrary code execution with the privileges of the Samba server (root).

Alerts:
Gentoo 201206-22 samba 2012-06-24
SUSE SUSE-SU-2012:0515-1 Samba 2012-04-17
openSUSE openSUSE-SU-2012:0507-1 samba 2012-04-16
SUSE SUSE-SU-2012:0502-1 Samba 2012-04-14
Oracle ELSA-2012-0332 samba 2012-03-09
SUSE SUSE-SU-2012:0348-1 Samba 2012-03-09
SUSE SUSE-SU-2012:0338-1 Samba 2012-03-08
SUSE SUSE-SU-2012:0337-1 Samba 2012-03-08
Oracle ELSA-2012-0332 samba 2012-02-29
Mandriva MDVSA-2012:025 samba 2012-02-28
Scientific Linux SL-samb-20120228 samba 2012-02-28
Ubuntu USN-1374-1 samba 2012-02-24
Scientific Linux SL-samb-20120224 samba 2012-02-24
CentOS CESA-2012:0332 samba 2012-02-24
Red Hat RHSA-2012:0332-01 samba 2012-02-23

Comments (none posted)

systemd: arbitrary file creation

Package(s): systemd   CVE #(s): CVE-2012-0871
Created: February 29, 2012   Updated: March 12, 2012
Description: The systemd-logind process creates files under /run/user in an insecure manner, allowing a local attacker to create symbolic links in arbitrary locations.
Alerts:
Fedora FEDORA-2012-2557 systemd 2012-03-11
SUSE SUSE-SA:2012:001 systemd 2012-02-29

Comments (none posted)

systemtap: denial of service

Package(s): systemtap   CVE #(s): CVE-2012-0875
Created: February 27, 2012   Updated: June 5, 2014
Description: From the Red Hat bugzilla:

A flaw was discovered in how systemtap handled DWARF expressions when unwinding the stack. This could result in an invalid pointer read, leading to reading kernel memory, or a kernel panic (and if the kernel reboot on panic flag was set (panic_on_oops), it would cause the system to reboot).

In order to trigger this flaw, an admin would have to enable unprivileged mode (giving users membership in the 'stapusr' group and configuring the local machine with 'signer,all-users' stap-server trust). If an admin has enabled unprivileged mode, a user with such access could use this to crash the local machine.

Alerts:
Gentoo 201406-04 systemtap 2014-06-04
openSUSE openSUSE-SU-2013:0475-1 systemtap 2013-03-18
Scientific Linux SL-syst-20120321 systemtap 2012-03-21
Oracle ELSA-2012-0376 systemtap 2012-03-09
Oracle ELSA-2012-0376 systemtap 2012-03-09
CentOS CESA-2012:0376 systemtap 2012-03-09
CentOS CESA-2012:0376 systemtap 2012-03-08
Red Hat RHSA-2012:0376-01 systemtap 2012-03-08
Fedora FEDORA-2012-2218 systemtap 2012-02-25
Fedora FEDORA-2012-2213 systemtap 2012-02-25

Comments (none posted)

webcalendar: cross-site scripting

Package(s): WebCalendar   CVE #(s): CVE-2012-0846
Created: February 28, 2012   Updated: February 29, 2012
Description: From the Red Hat bugzilla:

It was reported that WebCalendar suffers from a stored XSS flaw in the location variable.

Alerts:
Fedora FEDORA-2012-1934 WebCalendar 2012-02-28

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.3-rc5, released on February 25. "Hey, no delays this week. And nothing really odd going on either. Maybe things are finally calming down." 164 non-merge changesets went into this release; the short-form changelog can be found in the announcement.

Stable updates: 3.2.8 was released on February 27. "This is an "interesting" update, in that "only" one bug is being resolved here, handling a floating point issue with new x86 processors running in 32bit mode. But, this also fixes x86-64 issues as well in this area, so all users are encouraged to upgrade. If anyone has any problems, please let us know as soon as possible."

The 3.0.23 and 3.2.9 updates were released on February 29; they each contain just over 70 important fixes.

Comments (none posted)

Quotes of the week

In the Olden Days, we could easily spot code which needed auditing simply by looking at the (lack of) coding style. But then someone decided that the most important attribute of good code was the nature of the whitespace within it. And they wrote a tool.

For a while, this made it really hard for us maintainers to tell which incoming patches we should actually read! Fortunately, the Anti-whitespace crusaders have come full circle: we can now tell newbie coders by the fact that they obey checkpatch.pl.

-- Rusty Russell

The Linux kernel project has some serious and ongoing structural problems and I think it's very important that as a project we [Debian] keep our options open.
-- Ian Jackson

UIO provides a framework that actually works (unlike all of the previous research papers were trying to do), and is used in real systems (laser welding robots!) every day, manufacturing things that you use and rely on.

You remove UIO at the risk of pissing off those robots, the choice is yours, I know I'm not going to do it...

-- Greg Kroah-Hartman

80 column wraps are almost always not a sign of lack of screen real estate, but a symptom of lack of thinking.
-- Ingo Molnar

Comments (none posted)

very_unlikely() very unliked

By Jonathan Corbet
February 29, 2012
The facility formerly known as jump labels is designed to minimize the cost of rarely-used features in the kernel. The classic example is tracepoints, which are turned off almost all of the time. Ideally, the overhead of a disabled tracepoint would be zero, or something very close to it. Jump labels (now "static keys") use runtime code modification to patch unused features out of the code path entirely until somebody turns them on.

With Ingo Molnar's encouragement, Jason Baron recently posted a patch set changing the jump label mechanism to look like this:

    if (very_unlikely(&jump_label_key)) {
	/* Rarely-used code appears here */
    }

A number of reviewers complained, saying that very_unlikely() looks like just a stronger form of the unlikely() macro, which works in a different way. Ingo responded that the resemblance was not coincidental and made sense, but he made few converts. After arguing the point for a while, Ingo gave in to the inevitable and stopped pushing for the very_unlikely() name.

Instead, we have yet another format for jump labels, as described in this document. A jump label static key is defined with one of:

    struct static_key key = STATIC_KEY_INIT_FALSE;
    struct static_key key = STATIC_KEY_INIT_TRUE;

The initial value should match the normal operational setting - the one that should be fast. Testing of static keys is done with:

    if (static_key_false(&key)) {
	/* Unlikely code here */
    }

Note that static_key_false() can only be used with a key initialized with STATIC_KEY_INIT_FALSE; for default-true keys, static_key_true() must be used instead. To make a key true, pass it to static_key_slow_inc(); removing a reference to a key (and possibly making it false) is done with static_key_slow_dec().
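
To see how the pieces fit together, here is a hypothetical sketch (not taken from the patch set itself) of a subsystem using a static key to guard a rarely-enabled debugging path; the key, function, and sysfs-knob names are made up for illustration:

    #include <linux/jump_label.h>
    #include <linux/types.h>

    /* The key starts out false, so the debug branch is patched out. */
    static struct static_key debug_key = STATIC_KEY_INIT_FALSE;

    static void expensive_debug_work(void)
    {
        /* Costly tracing or statistics gathering would go here. */
    }

    void hot_path(void)
    {
        /* Compiles to straight-line code until the key is enabled. */
        if (static_key_false(&debug_key))
            expensive_debug_work();
    }

    /* Called from, say, a sysfs knob when the feature is toggled. */
    void set_debug_enabled(bool enable)
    {
        if (enable)
            static_key_slow_inc(&debug_key);
        else
            static_key_slow_dec(&debug_key);
    }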

This version of the change was rather less controversial and, presumably, will find its way in through the 3.4 merge window. After that, one assumes, this mechanism will not be reworked again for at least a development cycle or two.

Comments (none posted)

Kernel development news

Fixing control groups

By Jonathan Corbet
February 28, 2012
Control groups are one of those features that kernel developers love to hate. It is not that hard to find developers complaining about control groups and even threatening to rip them out of the kernel some dark night when nobody is paying attention. But it is much less common to see explicit discussion around the aspects of control groups that these developers find offensive or what might be done to improve the situation. A recent linux-kernel discussion may have made some progress in that direction, though.

The control group mechanism is really just a way for a system administrator to partition processes into one or more hierarchies of groups. There can be multiple hierarchies, and a given process can be placed into more than one of them at any given time. Associated with control groups is the concept of "controllers," which apply some sort of policy to a specific control group hierarchy. The group scheduling controller allows control groups to contend against each other for CPU time, limiting the extent to which one group can deprive another of time in the processor. The memory controller places limits on how much memory and swap can be consumed by any given group. The network priority controller, merged for 3.3, allows an administrator to give different groups better or worse access to network interfaces. And so on.
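
For readers unfamiliar with the mechanism, the administrator-visible interface is just a virtual filesystem. The following user-space sketch is a hypothetical illustration (not something from the discussion covered here); it assumes the CPU controller is mounted at /sys/fs/cgroup/cpu, creates a group there, reduces the group's share of processor time, and moves the current process into it:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        FILE *f;

        /* Creating a directory creates a new control group. */
        mkdir("/sys/fs/cgroup/cpu/demo", 0755);

        /* Halve the group's CPU weight (the default value is 1024). */
        if ((f = fopen("/sys/fs/cgroup/cpu/demo/cpu.shares", "w"))) {
            fprintf(f, "512\n");
            fclose(f);
        }

        /* Writing a PID to the "tasks" file moves that process in. */
        if ((f = fopen("/sys/fs/cgroup/cpu/demo/tasks", "w"))) {
            fprintf(f, "%d\n", getpid());
            fclose(f);
        }
        return 0;
    }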

Tejun Heo started the discussion with a lengthy message describing his complaints with the control group mechanism and some thoughts on how things could be fixed. According to Tejun, allowing the existence of multiple process hierarchies was a mistake. The idea behind multiple hierarchies is that they allow different policies to be applied using different criteria. The documentation added with control groups at the beginning gives an example with two distinct hierarchies that could be implemented on a university system:

  • One hierarchy would categorize processes based on the role of their owner; there could be a group for students, one for faculty, and one for system staff. Available CPU time could then be divided between the different types of users depending on the system policy; professors would be isolated somewhat from student activity, but, naturally, the system staff would get the lion's share.

  • A second hierarchy would be based on the program being executed by any given process. Web browsers would go into one group while, say, bittorrent clients could be put into another. The available network bandwidth could then be split according to the administrator's view of each class of application.

On their face, multiple hierarchies provide a useful level of flexibility for administrators to define all kinds of policies. In practice, though, they complicate the code and create some interesting issues. As Tejun points out, controllers can only be assigned to one hierarchy. For controllers implementing resource allocation policies, this restriction makes some sense; otherwise, processes would likely be subjected to conflicting policies when placed in multiple hierarchies. But there are controllers that exist for other purposes; the "freezer" controller simply freezes the processes found in a control group, allowing them to be checkpointed or otherwise operated upon. There is no reason why this kind of feature could not be available in any hierarchy, but making that possible would complicate the control group implementation significantly.

The real complaint with multiple hierarchies, though, is that few developers seem to see the feature as being useful in actual, deployed systems. It is not clear that it is being used. Tejun suggests that this feature could be phased out, perhaps with a painfully long deprecation period. In the end, Tejun said, the control group hierarchy could disappear as an independent structure, and control groups could just be overlaid onto the existing process hierarchy. Some others disagree, though; Peter Zijlstra said "I rather like being able to assign tasks to cgroups of my making without having to mirror that in the process hierarchy." So the ability to have a control group hierarchy that differs from the process hierarchy may not go away, even if the multiple-hierarchy feature does eventually become deprecated.

A related problem that Tejun raised is that different controllers treat the control group hierarchy differently. In particular, a number of controllers seem to have gone through an evolutionary path where the initial implementation does not recognize nested control groups but, instead, simply flattens the hierarchy. Later updates may add full hierarchical support. The block I/O controller, for example, only finished the job with hierarchical support last year; others still have not done it. Making the system work properly, Tejun said, requires getting all of the controllers to treat the hierarchy in the same way.

In general, the controllers have been the source of a lot of grumbling over the years. They tend to be implemented in a way that minimizes their intrusiveness on systems where they are not used - for good reason - but that leads to poor integration overall. The memory controller, for example, created its own shadow page tracking system, leading to a resource-intensive mess that was only cleaned up for the 3.3 release. The hugetlb controller is not integrated with the memory controller, and, as of 3.3, we have two independent network controllers. As the number of small controllers continues to grow (there is, for example, a proposed timer slack controller out there), things can only get more chaotic.

Fixing the controllers requires, probably more than anything else, a person to take on the role as the overall control group maintainer. Tejun and Li Zefan are credited with that role in the MAINTAINERS file, but it is still true that, for the most part, control groups have nobody watching over the whole system, so changes tend to be made by way of a number of different subsystems. It is an administrative problem in the end; it should be amenable to solution.

Fixing control groups overall could be harder, especially if the elimination of the multiple-hierarchy feature is to be contemplated. That, of course, is a user-space ABI change; making it happen would take years, if it is possible at all. Tejun suggests "herding people to use a unified hierarchy," along with a determined effort to make all of the controllers handle nesting properly. At some point, the kernel could start to emit a warning when multiple hierarchies are used. Eventually, if nobody complains, the feature could go away.

Of course, if nobody is actually using multiple hierarchies, things could happen a lot more quickly. Nobody entered the discussion to say that they needed multiple hierarchies, but, then, it was a discussion among kernel developers and not end users. If there are people out there using the multiple hierarchy feature, it might not be a bad idea to make their use case known. Any such use cases could shape the future evolution of the control group mechanism; a perceived lack of such use cases could have a very different effect.

Comments (4 posted)

The Android mainlining interest group meeting: a report

February 28, 2012

This article was contributed by John Stultz

Overview and Introduction

A face-to-face meeting of the Android mainlining interest group, which is a community interested in upstreaming the Android patch set into the Linux kernel, was organized by Tim Bird (Sony/CEWG) and hosted by Linaro Connect on February 10th. Also in attendance were representatives from Linaro, TI, ST-Ericsson, NVIDIA, and Google's Android and ChromeOS teams, among others.

The meeting was held to give better visibility into the various upstreaming efforts going on, and to make sure that any potential collaboration between interested parties could happen. It covered a lot of ground, mostly discussing Android features that were recently re-added to the staging directory and how those features might be reworked so that they can be moved into mainline officially. But a number of features that are still out of tree were also covered.

To kick off the meeting, participants introduced themselves and described their interests in the group; some basic goals were also covered to make sure everyone was on the same page. Tim summarized his goal for the project as letting Android run on a vanilla mainline kernel, and everyone agreed that this was a desired outcome.

Tim also articulated an important concern in achieving this goal: the need to avoid regressions. As Android is actively shipping on a huge number of devices, we need to take care; getting things upstream at the cost of short-term stability for users isn't an acceptable trade-off. To this point, Brian Swetland (Google) clarified that the Google Android developers are quite open to changes in interfaces. Since Android releases both userland and kernel bundled together, there is less of a strict need for backward API compatibility. Thus, what they rely on is not the specific interfaces, but the expected behavior of the functionality provided.

This is a key point, as there is little value in upstreaming the Android patches if the resulting code isn't actually usable by Android. A major source of difficulty here is that the expected behavior of features added in the Android patch set isn't always obvious, and there is little in the way of design documentation. Zach Pfeffer (Linaro) suggested that one way both to improve community understanding of the expected behavior and to help avoid regressions would be to provide unit tests. Currently Android is, for the most part, tested at a system level, and there are very few unit tests for the kernel features added. Thus creating a collection of unit tests would be beneficial, and Zach volunteered to start that effort.

Run-time power management and wakelocks

At this point, the discussion was handed over to Magnus Damm to talk about power domain support as well as the wakelock implementation that the power management maintainer Rafael Wysocki had recently posted to the Linux kernel mailing list. The Android team said they were reviewing Rafael's patches to establish if they were sufficient and would be commenting on them soon.

Magnus asked if full suspend was still necessary now that run-time power management has improved; Brian clarified that this isn't an either/or sort of situation. Android has long used dynamic power management techniques at run time, and its developers are very excited about using the run-time power-management framework to save even more power, but they also want to use suspend to reduce power usage further. They really would like to see deep idle periods lasting minutes to hours but, even with CONFIG_NOHZ, there are still a number of periodic timers that keep them from reaching that goal. Using the wakelocks infrastructure to suspend allows them to achieve those goals.

Magnus asked if Android developers could provide some data on how effective run-time power-management was compared to suspend using wakelocks, and to let him know if there are any particular issues that can be addressed. The Android developers said they would look into it, and the discussion continued on to Tim and his work with the logger.

Logger

Back in December, Tim posted the logger patches to the linux-kernel list and got feedback for changes that he wanted to run by the Android team to make sure there were no objections. The first of these, which the Android developers supported, is to convert static buffer allocation to dynamic allocation at run-time.

Tim also shared some thoughts on changing the log dispatching mechanism. Currently logger provides a separation between application logs and system logs, preventing a noisy application from causing system messages to be lost. In addition, one potential security issue the Android team has been thinking about is a poorly-written (or deliberately malicious) application dumping private information into the log; that information could then be read by any other application on the system. A proposal to work around this problem is per-application log buffers; however, with users able to add hundreds of applications, per-application or per-group partitioning would consume a large amount of memory. The alternative - pushing the logs out to the filesystem - concerns the Android team as it complicates post-crash recovery and also creates more I/O contention, which can cause very poor performance on flash devices.

Additionally, the benefits of doing the logging in the kernel rather than in user space are low overhead, accurate tagging, and no scheduler thrashing. The Android developers are also very hesitant to add more long-lived daemons to the system because, while phones have gained quite a lot of memory and CPU capacity, there is still pressure to run Android on less-capable devices.

Another concern with some of the existing community logging efforts is that the Android developers don't really want strongly structured or binary logging, because it further complicates accessing the data when the system has crashed. With plain-text logging, there is no need for special tools, and no versioning issues should the structure change between newer and older devices. In difficult situations, a raw memory dump of the device can still provide very useful information if the logs are plain text. Tim discussed a proposed filesystem interface to the logger; the Android team reiterated that it is agnostic about the interface, as long as access control is present and writing to the log is cheap. Tim promised to benchmark his prototypes to ensure performance doesn't degrade.

ION

From here, the discussion moved to ION, led by Hiroshi Doyu (NVIDIA). There had been an earlier meeting at Linaro Connect focusing on graphics that some of the ION developers attended, so Jesse Barker (Linaro) summarized that discussion. The basic goal there is to make sure the Contiguous Memory Allocator (CMA), dma-buf, and other buffer-sharing approaches are compatible with both general Linux and Android Linux.

Dima Zavin (Google) clarified that ION is focused on standardizing the user-space interface, allowing the different approaches required by different hardware to work while avoiding the need to re-implement the same logic over and over for each device. The really hard part of buffer sharing is sorting out how to allocate buffers that satisfy all the constraints of the different hardware that will need to access them. For Android, the constraints are constant per device, so they can be decided statically, but more generic Linux systems may need some run-time detection of which devices a buffer might be shared with, possibly converting buffer types when an incompatible buffer interacts with a constrained piece of hardware for the first time. There are some conflicting needs here: Android is very focused on consistent performance, which per-device optimizations allow, whereas the more general solution might have occasional performance costs, since buffers would sometimes need to be copied.

The ION work is still under development and hasn't yet been submitted to the kernel mailing list, but Hiroshi pointed out that a number of folks are actively working on integrating the dma-buf and CMA work with ION, so there is a need for a central point for discussion and collaboration. As it seems the ION work won't move upstream or into staging immediately, it's likely that the linaro-mm-sig mailing list will be that collaboration point.

Ashmem & the Alarm driver

The next topic was ashmem, which John Stultz (Linaro/IBM) has been working on. He helped get the ashmem patches reworked and merged into staging, and is now working on pulling the pinning/unpinning feature out of the driver and pushing it a little more deeply into the VM subsystem via the fadvise volatile interface. The effort is still under development, but there have been a number of positive comments on the general idea of the feature.

The current discussion on linux-kernel is about how best to store the volatile ranges efficiently, and about some of the behavior around coalescing neighboring volatile ranges. In addition, there have been requests from Andrew Morton to find a way to reuse the range-storage structure with the POSIX file locking code, which has similar requirements. This effort wouldn't totally replace the ashmem driver, as ashmem also provides a way to get unlinked tmpfs file descriptors without having tmpfs mounted, but it would greatly shrink the ashmem code, making it more of a thin "Android compatibility driver", which would hopefully be more palatable upstream.

John then covered the Android alarm driver, which he had worked on partially upstreaming with the virtualized RTC and POSIX alarm_timers work last year. John has refactored the Android alarm driver and has submitted it for staging in 3.4; he also has a patch queue ready that carves out chunks of the driver and converts them to use the upstreamed alarm_timers. To be sure regressions are not introduced, this conversion will likely be done slowly, so that any minor behavioral differences between the Android alarm infrastructure and the upstreamed infrastructure can be detected and corrected as needed. The upstream POSIX alarm interfaces are not completely sufficient to replace the Android alarm driver, due to how it interacts with wakelocks, but this patch set will greatly reduce the driver's size; from there it can be decided whether to preserve the smaller Android-specific interface or to integrate the functionality into the timerfd code.
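
For reference, the upstreamed alarm_timers are reached through the ordinary POSIX timer calls using the CLOCK_REALTIME_ALARM and CLOCK_BOOTTIME_ALARM clock IDs added in 3.0. The sketch below, which assumes CAP_WAKE_ALARM and defines the clock ID by hand in case the C library headers lack it, shows how a wakeup-capable timer can be armed from user space through that interface; it is an illustration of the upstream API only, not of the Android driver itself.

    /*
     * Sketch: arm a timer that can wake the system from suspend using
     * the upstream alarmtimer support via the POSIX timer API.
     * Requires CAP_WAKE_ALARM; link with -lrt on older glibc.
     */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #ifndef CLOCK_BOOTTIME_ALARM
    #define CLOCK_BOOTTIME_ALARM 9		/* from the 3.x kernel headers */
    #endif

    static void expired(int sig)
    {
        (void)sig;
        write(1, "alarm fired\n", 12);
    }

    int main(void)
    {
        struct sigaction sa;
        struct sigevent sev;
        struct itimerspec its;
        timer_t timer;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = expired;
        sigaction(SIGRTMIN, &sa, NULL);

        memset(&sev, 0, sizeof(sev));
        sev.sigev_notify = SIGEV_SIGNAL;
        sev.sigev_signo = SIGRTMIN;

        if (timer_create(CLOCK_BOOTTIME_ALARM, &sev, &timer)) {
            perror("timer_create");		/* EPERM without CAP_WAKE_ALARM */
            return 1;
        }

        memset(&its, 0, sizeof(its));
        its.it_value.tv_sec = 60;		/* fire (and wake) in one minute */
        timer_settime(timer, 0, &its, NULL);

        pause();
        return 0;
    }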

Low memory killer & interactive cpufreq governor

As Anton Vorontsov (Linaro) could not attend, John covered Anton's work on improving the low memory killer in the staging tree, as well as his plans to re-implement the low memory killer in user space using a low-memory notifier interface to signal the Android Activity Manager. There is wider interest in the community in a low-memory notifier interface, and a number of different approaches have been proposed, using either control groups (with the memory controller) or simpler character device drivers.

The Android developers expressed concern that, if the killing logic is pushed out to the Activity Manager, it would have to be locked into memory and the killing path would have to be very careful not to cause any memory allocations, which is difficult with Dalvik applications. Without this care, if memory is very tight, the killing logic itself could be blocked waiting for memory, making the situation even worse. Pushing the killing logic into its own memory-locked task, carefully written not to trigger memory allocations, would be needed, and the Android developers weren't excited about adding another long-lived daemon to the system. John acknowledged those concerns but, as Anton is fairly passionate about this approach, he will likely continue his prototyping in order to get hard numbers showing any extra costs or benefits of the user-space approach.

Anton had also recently pushed out the Android "interactive cpufreq governor" to linux-kernel for discussion. Due to a general desire in the community not to add any more cpufreq code, it seems the patches won't be going in as-is. Instead, it was suggested that the P-state policy should live in the scheduler itself, using the soon-to-land average load tracking code, so Anton will be doing further research there. A question was also raised as to whether the ondemand cpufreq governor might now be sufficient, as a number of bugs relating to the sampling period have been resolved in the last year. The Android developers did not have any recent data on ondemand, but suggested that Anton get in touch with Todd Poynor at Google, who is the current owner of the code on their side.

Items of difficulty

Having covered most of the active work, folks discussed which items might be too hard to ever get upstream.

The paranoid network interface, which ties network access permissions to specific, hard-coded group IDs, is clearly not generic enough to go upstream. Arnd Bergmann (Linaro/IBM) commented, however, that network namespaces could be used to provide very similar functionality. The Android developers were very interested in this, but Arnd warned that the existing documentation is narrowly scoped and likely won't be enough to help them do exactly what they want. The Android developers said they would look into it.
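
As a rough illustration of the building block Arnd was pointing at (and not of anything Android currently does), the core operation is a single unshare() call: a process dropped into a fresh network namespace sees only an isolated, initially-down loopback device, so it cannot reach the outside world unless an interface is explicitly moved or bridged in. CAP_SYS_ADMIN is required.

    /*
     * Rough illustration: run a command in an empty network namespace.
     * The new namespace contains only a down loopback device, so the
     * command has no access to the system's real interfaces.
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        char *def[] = { "/bin/sh", NULL };
        char **cmd = argc > 1 ? &argv[1] : def;

        if (unshare(CLONE_NEWNET)) {
            perror("unshare(CLONE_NEWNET)");
            return 1;
        }
        execvp(cmd[0], cmd);
        perror("execvp");
        return 1;
    }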

The binder driver was also brought up as a difficult bit of code to get into mainline officially. The Android developers expressed their sympathy, as it is a large chunk of code with a long history, and they themselves have tried unsuccessfully to implement alternatives. However, the specific behavior and performance characteristics of binder are very useful for their systems. Arnd made the good point that there is something of a parallel with SysV IPC: the Linux community would never design something like SysV IPC, and generally has a poor opinion of it, but the kernel supports it so that applications that need it can function. A similar rationale might be persuasive for getting binder merged officially, once the code has been further cleaned up.

At this point, after four hours of discussion, the focus started to fray. John showed a demo of the benefit of the Android drivers landing in staging: an AOSP build of Android 4.0 running on a vanilla 3.3-rc3 kernel with only a few lines of change. Dima then showed off a very cool debugging tool for their latest devices: with a headphone plug on one end and USB on the other, he demonstrated how the headphone jack on the phone can do double duty as a serial port, allowing for debug access. With this, excited side conversations erupted. Tim finally thanked everyone for attending and suggested doing it again sometime soon, personal and development schedules allowing.

Thanks

Many thanks to the meeting participants for an interesting discussion, Tim Bird for organizing, and to Paul McKenney, Deepak Saxena, and Dave Hansen for their edits and thoughts on this summary.

Comments (4 posted)

A long-term support initiative update

February 29, 2012

This article was contributed by Greg Kroah-Hartman.

Back in October 2011, the Long-term Support Initiative (LTSI) was announced at the LinuxCon Europe event by Linux Foundation board member (and NEC manager) Tsugikazu Shibata. Since that initial announcement, a lot of planning has gone into putting the framework for this type of project together, and at the Embedded Linux Conference a few weeks ago, the project was publicly announced as active.

Shibata-san again presented about LTSI at ELC 2012, giving more details about how the project would be run, and how to get involved. The full slides for the presentation [PDF] are available online.

The LTSI project has grown out of the needs of a large number of consumer electronics companies that use Linux in their products. They need a new kernel every year to support newer devices, but they end up cobbling together a wide range of patches and backports to meet their needs. At LinuxCon Europe, the results of a survey of a large number of kernel releases in different products showed that there is a set of common features that different companies need, yet those features were implemented in different ways: pulled from different upstream kernel or project versions, combined with different kernel versions, and carrying different bug fixes, some done in opposite ways. The results of this survey are now available for the curious.

This duplication usually happens because of the short development cycle that consumer electronics companies are under in order to get a product released. Once a product is released, the development cycle starts up again, and old work is usually forgotten, except to the extent that it is copied into the new project. This cycle is hard to break out of, but that needs to happen in order to successfully take advantage of the Linux kernel community's releases.

The LTSI project's goals were summarized as:

  • Create a longterm community-based Linux kernel to cover the embedded life cycle.
  • Create an industry-managed kernel as a common ground for the embedded industry.
  • Set up a mechanism to support upstream activities for embedded engineers.

Let's look at these goals individually, to find out how they are going to be addressed.

Longterm community kernel

A few months ago, after discussing this project and the needs of the embedded industry, I announced that I would be maintaining a longterm Linux kernel, with bugfixes and security updates, for two years after it was originally released by Linus. A kernel version will be picked for this treatment every year, so two different longterm kernels will be supported at the same time. The rules for these kernels are the same as for the normal stable kernel releases, and can be found in the in-kernel file Documentation/stable_kernel_rules.txt.

The 3.0 kernel was selected last August as the first of these longterm kernels to be supported in this manner. The previous longterm kernel, 2.6.32, which was selected with the different enterprise Linux distributions' needs in mind, will still be maintained for a while longer as they rely on this kernel version for their users.

As the 3.0 kernel was selected last August, it looks like the next longterm kernel will be selected sometime around August 2012. I've been discussing with a number of different companies which version would work well for them; if anyone has input into this decision, please let me know.

Longterm industry kernel

As shown at LinuxCon Europe, basing a product kernel on just a community-released kernel does not usually work. Almost all consumer electronics releases depend on a range of out-of-tree patches for some of their features. These patches include the Android kernel patch set, LTTng, some real-time patches, different architecture and system-on-a-chip support, and various small bugfixes. Such patches, even if they are eventually accepted into the kernel.org releases, usually do not meet the requirements for inclusion in the community stable kernel releases. Because of that, the LTSI kernel has been created.

This kernel release will be based on the current longterm community-based kernel, with a number of these common patchsets applied. Applying them in a single place enables developers from different companies to be assured that the patches are correct, they have the latest versions, and, if any fixes are needed, they can be done in a common way for all users of the code. This LTSI kernel tree is now public, and can be browsed at http://git.linuxfoundation.org/?p=ltsi-kernel.git. [Editor's note: this site, like the LTSI page linked below, fails badly if HTTPS is used; "HTTPS Everywhere" users will have difficulty accessing these pages.] It currently contains a backport of the Android kernel patches, as well as the latest LTTng codebase. Other patches are pending review to be accepted into this tree in the next few weeks.

This tree is set up much like most distribution kernel trees are: a set of quilt-based patches that apply on top of an existing kernel.org release. This organization allows the patches to be easily reworked, and even removed if needed; patches can also be forward-ported to new kernel releases without the problems of rebasing a whole expanded kernel git tree. There are scripts in the tree to generate the patches in quilt, git, or just tarball formats, meeting the needs of all device manufacturers.

During the hallway track at ELC 2012, I discussed the LTSI kernel with a number of different embedded companies, embedded distributions, and community distributions. All of these groups were eager to start using the LTSI kernel as a base for their releases, as no one likes to do the same work all the time. By working off of a common base, they can then focus on the real value they offer their different communities.

Upstream support for embedded engineers

Despite the continued growth in the number of developers and companies contributing to the Linux kernel, some companies still have a difficult time figuring out how to get involved. Because of this, a specific effort to get embedded engineers working upstream is part of the LTSI project.

The LTSI kernel will accept patches from companies even if they contain code that, for technical reasons, does not meet the normal acceptance criteria set by the Linux kernel developers. These patches will be cleaned up in the LTSI kernel tree and helped toward the mainline, either by submission to the various kernel subsystem groups or through the staging tree if the code needs more work than a simple cleanup. This feedback loop into the community is meant to help embedded engineers learn how to work with the community and, in the end, become part of it directly, so that, for the next product they create, they will not need to use the LTSI kernel as a "launching pad".

Help make the project obsolete

The end goal of LTSI is to put itself out of business. As companies learn how to work directly with the mainline community, their patches will not need the help of the LTSI developers to be accepted. And once those patches are accepted, their authors will not have to maintain them on their own for their product's lifetime; instead, they can just rely on the community longterm kernel support schedule for the needed bugfixes and security updates. Until those far-reaching goals are met, however, there will be a need for the LTSI project and its developers for some time.

If you work for a company interested in working with the LTSI kernel as either a base for your devices, or to help get your code upstream, please see the LTSI web site for information on how to get involved.

Comments (2 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Networking

Security-related

Virtualization and containers

Benchmarks and bugs

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Chakra: An Arch Linux fork that is rough but promising

February 29, 2012

This article was contributed by Bruce Byfield

In Hinduism and Buddhism, a chakra is one of several centers of energy in the body. In free software, The Chakra Project is a fork of Arch Linux. The name is appropriate, because, like its mystical namesake, Chakra concentrates on several centers of development within the general body of Linux, including a design that is simple yet aimed at intermediate users, the creation of a system that consists of only KDE/Qt applications, and an original selection of default software. Some of these concentrations are well-advanced, while others are still in early development.

[Chakra desktop]

According to project leader Phil Miller, Chakra began as a sub-project in Arch Linux to develop a version of KDE called KDEMod. It differed from other KDE builds in that it split and reordered packages and added a few minor usability enhancements, such as extra options in the Dolphin file manager's context menu. The same sub-project also started developing a Live CD and a graphical installer. However, he said:

As time passed by, we [had] troubles keeping up with the fast updates Arch Linux had. KDEMod packages were more broken than working because of updates in libraries or kernels (we needed a special patched kernel for our Live CD).

Developer Jan Mette wrote scripts to create packages from the Arch Linux repositories, and eventually created a new repository called core, which was working toward a fork. Then, abruptly, he disappeared from IRC and forums. When news came that Mette had died, Miller said, those in the sub-project "decided to go on with his dream and started the fork. We honor Jan with the Chakra distribution."

Today, Chakra maintains its own repositories, with a schedule that Anke Boersma, one of the founding developers, describes as a "half-rolling release" as opposed to Arch's continual updates. In other words, while many applications are continually upgraded, they are tested and released in sets to minimize potential problems. In addition, core packages are upgraded on a semi-regular schedule. In this way, Chakra hopes to avoid the problems its founders found in maintaining KDE support for Arch.

Technically, Chakra differs from Arch Linux in several ways. Boersma explained that, for one thing, Chakra is developing a package manager that, instead of using text files like Arch Linux, uses a true database, and supports a graphical interface rather than a text-based installer. In addition, Chakra developers are dedicated to creating a distribution consisting entirely of KDE/Qt applications, without any GTK2 build dependencies or libraries. This policy affects many aspects of Chakra, including its development priorities and application selection.

However, Boersma said, the major difference from Arch is philosophical:

Arch prides itself in being completely bleeding edge, always being the place where any and all new gets tested and tried first. Chakra developers felt that approach was not always in the best interests of its KDEMod users. Having a fully rolling KDE, [for] apps and games, but being much more conservative with the base packages, would be more appealing to many users and developers alike.

In other ways, Chakra continues to resemble Arch Linux, sharing an emphasis on speed, minimalism, and hands-on administration. The main differences lie in opinions about how to attain those goals, and in user-friendliness. If Arch Linux can be characterized as intended for experienced users, then Chakra could be described as aimed at intermediate users. Or, as the online help for the Live CD suggests, Chakra is aimed at "KISS-minded users who aren't afraid to get their hands dirty."

Boersma described the relationship today between Arch Linux and Chakra as "mostly mutual respect, where a few Arch developers communicate regularly with Chakra developers, giving advice, and exchanging ideas about found bugs, and how to correct those issues found."

Today, Chakra is developed by about ten regular contributors and enjoys a moderate popularity. Its first release series was downloaded nearly 115,000 times, while the 2012.02 release (codenamed "Archimedes") was downloaded 40,000 times in its first week, according to Boersma. At Distrowatch, Chakra is currently in thirteenth position for page views, which, if nothing else, indicates a strong curiosity in the free software community about the young distribution.

Installing Chakra

[Tribe installer]

True to its basic policy, Chakra is developing its own Qt4-based installer, called Tribe. Tribe is currently in Alpha release, and its interface is sometimes illogical — for instance, at the partitioning stage, users must click the "Advanced" button rather than the "Format" button. Functionally, it is about equivalent to the Ubuntu installer, offering much less user control than Arch's installer, although more control of the installation process is planned for later releases.

[Welcome widget]

Tribe is different enough from other installers that users should read Chakra's comprehensive Beginner's Guide before plunging into an installation, as well as consult the Welcome widget, a small Plasma application that opens when you boot the Live CD's desktop to display release notes.

For instance, since you are never asked to set a root password, you might assume that Chakra uses sudo without a root password, the way that Ubuntu does. In fact, sudo is set up, but Tribe also automatically assigns the root user the same password as the user account you create during installation — a practice that saves a step, but which security-minded users might want to change immediately after their first login.

Even more importantly, just before you install, you confront a button inviting you to "Download Popular Bundles". Contrary to what you might first think, "bundles" are not Chakra's jargon for packages. Rather, they are Chakra's way of permitting the use of GTK2 applications without actually adding their libraries and dependencies to the system. While Qt-based applications are installed via normal packages, GTK2-based ones are made available as loop-mounted squashfs/lzma filesystem images that also contain their dependencies.
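
Under the hood, "loop-mounted" amounts to roughly the following sketch (the image path and mount point are made up, LOOP_CTL_GET_FREE needs Linux 3.1 or later, and error handling is omitted; real bundle handling is done by Chakra's own tools): the image file is attached to a free loop device and then mounted read-only as a squashfs filesystem, so its contents appear in the directory tree without ever touching the package database.

    /*
     * Sketch: attach a bundle image to a free loop device and mount it
     * read-only.  File names and mount point are invented for the example.
     */
    #include <fcntl.h>
    #include <linux/loop.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mount.h>
    #include <unistd.h>

    int main(void)
    {
        char dev[32];
        int ctl, num, img, lo;

        ctl = open("/dev/loop-control", O_RDWR);
        num = ioctl(ctl, LOOP_CTL_GET_FREE);	/* first unused /dev/loopN */
        snprintf(dev, sizeof(dev), "/dev/loop%d", num);

        img = open("/opt/bundles/firefox-bundle.squashfs", O_RDONLY);
        lo = open(dev, O_RDWR);
        ioctl(lo, LOOP_SET_FD, img);		/* back the loop device with the image */

        /* The bundle and its private GTK2 libraries show up here */
        if (mount(dev, "/media/bundles/firefox", "squashfs", MS_RDONLY, NULL))
            perror("mount");
        return 0;
    }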

This approach may seem like an overly complicated way to ensure a pure KDE/Qt system. However, what it means in practice is that if you want applications like Audacity, Chrome, Firefox, Inkscape, GIMP, or Thunderbird, you should select the button during installation, even though nothing in Tribe itself indicates why you might want to. Otherwise, you face installing GTK2 apps afterward with a bundle manager that is currently little more than a list placed in a window.

But even without such idiosyncrasies, Tribe leaves something to be desired. For one thing, it has no provision for selecting individual packages and bundles within the installer, which is unlikely to please Chakra's target audience.

For another, Tribe does not work consistently on some hardware. In my experience, it seems especially prone to display corruption at the partitioning stage. A workable but annoying solution is to repeat the installation attempt three or four times until it succeeds. Overall, Tribe is an interesting preview, but unfinished enough that perhaps it should not have been included in the current release.

Desktop and software selection

With GTK2 applications banished to Bundles, Chakra has a KDE orientation whose thoroughness is unusual among current distributions. Boersma told me that Chakra's founders decided that having too many desktop environments would only create a "splitting of resources," so that the distribution would end up "not doing any desktop environment the way it could be done, if it had the full and only attention." The founders chose KDE, Miller said, because "we want to showcase KDE as it should be."

For this reason, Chakra offers what appears to be a largely unmodified version of KDE 4.8, using the standard Oxygen widgets and icons with the main menu in its usual position on the bottom left along with the basic selection of Activities. Branding is confined chiefly to the Chakra Project wallpaper.

The main difference in KDE on Chakra is that — subjectively, at least — it is much faster than most installations. Boersma attributed the apparent speed to the removal of all traces of GTK2 from the core packages. "Because Chakra has stripped all of its base packages of any GTK libs, it does start showing a difference in lightness," Boersma insisted. "Strip it from one or two applications you run, you won't notice so much, but keep multiplying it by so many base packages that carry everything to run on both GTK2 and QT/KDE based systems, and the end results start adding up."

Where possible, Chakra's software is free-licensed, although it does include the common proprietary drivers and proprietary-dependent free tools like Q4wine, a Qt interface for WINE. Much of the software, too, will be familiar to KDE users, such as Marble, KDE's answer to Google Earth and Maps, or KNetAttach, the wizard for integrating network resources for the desktop.

However, much of Chakra's software will be strange even to many KDE users. The default browser is Rekonq, and the default music player is the lightweight Bangarang. Chakra also borrows SUSE Studio Imagewriter for creating bootable USB drives, and Ubuntu's System Cleaner for removing unwanted files. Other menu items, such as miniBackup, are merely shell scripts to automate a process. Taken as a whole, they make an intriguing change from the standard applications in most distributions, and are enough by themselves to make an afternoon's exploration of Chakra worthwhile.

In the current release, Chakra uses Arch's Pacman for package installation and, for bundles, CInstall, a rough-and-ready interface that the project site calls a "temporary GUI". This division is inconvenient — especially if you cannot recall or have no idea whether a particular application uses Qt or GTK2. However, Pacman is scheduled to be replaced in the near future with a new package manager called Akabei. Eventually, too, package and bundle installation will be combined into a single utility, possibly based on Arch Linux's Shaman, a front end for Pacman.

One to watch

Chakra's latest release may inspire mixed reactions. On the one hand, the incompleteness of the installer and the mixed system of packages and bundles are irritating. Granted, the development team makes no secret that Chakra is incomplete, yet could the release not have waited until these things were more polished? Or does the project not consider initial setup and application installation part of the core that needs to be reliable?

On the other hand, these shortcomings offer few difficulties that Chakra's target users should not be able to overcome. Moreover, Chakra's traditional virtues — speed, efficiency, and do-it-yourself maintenance — are likely to have a deep appeal to its target users. At a time when many major distributions hide complexity for the sake of helping newcomers, Chakra aims for simplicity with a hands-on approach. Add an original selection of applications, and many users might be able to forgive the distribution's uneven edges.

Others might prefer to wait until Chakra is out of rapid development before using it as a main desktop. However, no matter what the immediate verdict, Chakra is among the most innovative of newer distributions. What it accomplishes in the next release or two should be worth watching.

Comments (3 posted)

Brief items

Distribution quote of the week

My intuition (based on anecdote from non-Ubuntu-y developer friends and not real data) is that most people just want to dump patches on us. Furthermore, if they did want to do it all themselves, they're the sort of person likely to be a repeat contributor anyway, so at worst they'd only get "cheated" the first time and would have additional opportunities to do it all themselves later.

IME, people are pleasantly surprised when they drop a patch on [LaunchPad] and that results in a package getting fixed; I've never heard of someone being disappointed they didn't have to go through another round of review.

-- Evan Broder

Comments (none posted)

Fedora 17 Alpha released

The alpha version of the Fedora 17 ("beefy miracle") release is out. "When we said Beefy, we weren't kidding: an a-bun-dance of condiments, err, features, are available to help you feed your hunger for the best in free and open source software. We take pride in our toppings, and in our fine ingredients; Fedora 17 includes both over- and under-the-bun improvements that show off the power and flexibility of the advancing state of free (range) software." See the announcement for an overview of new features in this release.

Full Story (comments: 21)

MINIX 3.2.0

MINIX 3.2.0 has been released. From the project's home page: "MINIX 3 is a free, open-source, operating system designed to be highly reliable, flexible, and secure. It is based on a tiny microkernel running in kernel mode with the rest of the operating system running as a collection of isolated, protected, processes in user mode."

Comments (none posted)

openSUSE ARM plans

Andrew Wafaa posted plans for "full ARM support" in openSUSE 12.2 to the opensuse-arm mailing list. "ARMv7 is the focus. This isn't necessarily news, but we just wanted to reiterate the message. Also the ARMv5 builds are going to have their priorities in the OBS lowered to minimise the build power consumption. We're not deleting them,  just not focusing on them; if someone *really* wants v5 builds then best you roll your sleeves up and get stuck  in, we will provide support, guidance and encouragement." The Pandaboard has been chosen as the reference platform, but not to the exclusion of others, and XFCE will be the reference desktop. (Thanks to Trevor Woerner.)

Comments (none posted)

Scientific Linux 4.x End Of Life

Scientific Linux 4.x has reached its end of support. "Anyone still running production workloads on Scientific Linux 4 should be aware that after today [February 29] no updates of any kind will be published. Because of this, we hope everyone has completed their migration to Scientific Linux 5 or Scientific Linux 6 by now."

Full Story (comments: none)

Stevland project bridges the accessibility gap for web kiosks

64 Studio has announced Stevland, a new GNU/Linux distribution designed to make web kiosks more accessible to people with disabilities. "Stevland is a GNU/Linux distribution, based on Ubuntu, designed so that people with disabilities can enjoy access to the Internet, regardless of their level of computer knowledge. It includes a wizard designed to help computer users set their accessibility preferences. This wizard uses large buttons and text, together with audio description and key press feedback, so that users with vision impairments, hearing impairments or mobility impairments can set up the kiosk for their individual needs, without having to know anything about system administration."

Full Story (comments: none)

Tizen Software Platform Gains Momentum

The Tizen Association and the Linux Foundation have announced the beta release of the Tizen platform source code and Software Development Kit. "As announced today by the Tizen Technical Steering Group on tizen.org, developers are encouraged to start working with the new features and functionality of the Tizen beta source code and SDK and provide feedback to help improve the platform during the final stages of its development."

Comments (none posted)

Distribution News

Fedora

The bidding process for the next FUDCon in North America is now OPEN.

The Fedora Project is collecting bids for the next FUDCon (Fedora Users and Developers Conference) in North America. The process will close on March 23, 2012 and a decision will be made in early April. "Important to note: FUDCon for North America should occur between December 1, 2012 and February 28, 2013, though ideally in December or January, as February is the last month of the fiscal year, and, well, expense reports need to get in, or Daddy Shadowman gets sad. The North American FUDCon has approximately a $20k (USD) budget."

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Linux Mint – the taste of success (LinuxUser & Developer)

LinuxUser & Developer has an interview with Clement Lefebvre, the creator of Linux Mint. "The vast majority of features and improvements which make it to each Linux Mint release come from ideas and feedback contributed to us by the community. Linux Mint is easy to use, it’s comfortable and it packs some of the most advanced features available in desktop Linux nowadays, but the main thing about it is that it brings to people what they need, what they want and what they ask for."

Comments (1 posted)

How Red Hat killed its core product—and became a billion-dollar business (ars technica)

Here's a look at Red Hat's corporate history in ars technica. "Red Hat Linux was much like today's Fedora, releasing new versions quickly to get the bleeding-edge technology out to users. But new versions and patches could break old applications, and there was no ecosystem of software and hardware vendors supporting applications running on Red Hat. With RHEL, Red Hat gives the enterprise what it wants: a stable lifecycle and roadmap, and a more careful system for inserting patches without breaking application compatibility. That model has certainly proven its worth."

Comments (46 posted)

SUSE Linux moves to 3.0 kernel with SP2 (Register)

The Register looks at the SUSE Linux Enterprise Server 11 Service Pack 2 release, which brings a lot of new software with it. "[T]he snapshotting features of btrfs as well as the scalability are why SUSE Linux is recommending it as the root file system for SLES 11 SP2. The company has encapsulated some of these btrfs features into a tool called Snapper, which integrates with the SLES update and Yast management tool and allows system admins to take a system snapshot before they make important changes to the system and then instantly roll them back if something goes snafu."

Comments (23 posted)

Page editor: Rebecca Sobol

Development

Speeding up D-Bus

By Jake Edge
February 29, 2012

The D-Bus interprocess communication (IPC) mechanism is used extensively by Linux desktop environments and applications, but it suffers from less-than-optimal performance. While that problem may not be so noticeable on desktop-class systems, it can be a real issue for smaller and embedded devices. Over the years there have been a number of attempts to add functionality to the kernel to increase D-Bus performance, but none have passed muster. A recent proposal to add multicast functionality to Unix sockets is another attempt at solving that problem.

D-Bus currently relies on a daemon process to authenticate processes and deliver messages that it receives over Unix sockets. Part of the performance problem is caused by the user-space daemon, which means that messages need two trips through the kernel on their way to the destination (once on the way to the daemon and another on the way to the receiver). It also requires waking up the daemon and an "extra" transition to and from kernel mode. Putting D-Bus message handling directly into the kernel would eliminate the need to involve the daemon at all. That would eliminate one of the transitions and one of the copies, which would improve performance.

If all of the D-Bus messages were simply between pairs of processes, Unix sockets could potentially be used directly between them. But there is more to D-Bus than that. Processes can register for certain kinds of events they wish to receive (USB devices being attached, a new song playing, or battery status changes, for example), so a single message may need to be multicast to multiple receivers. That is part of what the daemon mediates.
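
That registration step is what libdbus calls a "match rule": a client hands the daemon a string describing the signals it wants, and the daemon then fans matching messages out to every subscriber. Below is a minimal sketch of what that looks like today, entirely in user space; the interface name is only an example.

    /*
     * Minimal libdbus sketch: subscribe to a class of signals and print
     * them as they arrive.  The interface name is only an example.
     * Build with:  gcc sig.c $(pkg-config --cflags --libs dbus-1)
     */
    #include <dbus/dbus.h>
    #include <stdio.h>

    int main(void)
    {
        DBusError err;
        DBusConnection *conn;

        dbus_error_init(&err);
        conn = dbus_bus_get(DBUS_BUS_SYSTEM, &err);
        if (!conn) {
            fprintf(stderr, "connect failed: %s\n", err.message);
            return 1;
        }

        /* The daemon stores this rule and forwards matching signals to us */
        dbus_bus_add_match(conn,
            "type='signal',interface='org.freedesktop.UDisks'", &err);
        dbus_connection_flush(conn);

        while (dbus_connection_read_write(conn, -1)) {
            DBusMessage *msg;

            while ((msg = dbus_connection_pop_message(conn)) != NULL) {
                if (dbus_message_get_type(msg) == DBUS_MESSAGE_TYPE_SIGNAL)
                    printf("signal: %s.%s\n",
                           dbus_message_get_interface(msg),
                           dbus_message_get_member(msg));
                dbus_message_unref(msg);
            }
        }
        return 0;
    }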

Earlier efforts to add an AF_DBUS socket type (and associated kdbus module) to handle D-Bus messages in the kernel weren't successful because kernel hackers were not willing to add the complexity of D-Bus routing. The most recent proposal was posted by Javier Martinez Canillas based on work from Alban Créquy, who also proposed the AF_DBUS feature. It adds multicasting support to Unix (i.e. AF_UNIX) sockets, instead, while using packet filters so that receivers only get the messages they are interested in. That way, the routing is strictly handled via multicast plus existing kernel infrastructure.

As described in Rodrigo Moya's blog posting, there are a number of reasons that a D-Bus optimization can't use the existing mechanisms in the kernel. Netlink sockets would seem to be one plausible alternative, and there is support for multicasting, but D-Bus requires fully reliable delivery even if the receiver's queue is full. In that case, netlink sockets just drop packets, while D-Bus needs the sender to block until the receiver processes some of its messages. In addition, netlink sockets do not guarantee the ordering of multicast messages that D-Bus requires.
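
The delivery problem is visible in how a netlink multicast receiver is written: it joins a group with a setsockopt() call and, if its receive queue overflows, it simply gets ENOBUFS back from recv() with no way to recover the lost messages. Here is a sketch, using the kernel uevent family as a convenient example:

    /*
     * Sketch of a netlink multicast listener, using the kernel uevent
     * family as an example.  The ENOBUFS case is the silent message
     * loss that makes netlink unsuitable for D-Bus.
     */
    #include <errno.h>
    #include <linux/netlink.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef SOL_NETLINK
    #define SOL_NETLINK 270		/* not in all C library headers */
    #endif

    int main(void)
    {
        struct sockaddr_nl addr;
        unsigned int group = 1;		/* the uevent multicast group */
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_KOBJECT_UEVENT);

        memset(&addr, 0, sizeof(addr));
        addr.nl_family = AF_NETLINK;
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        /* Join the group; the kernel copies every message to each member */
        setsockopt(fd, SOL_NETLINK, NETLINK_ADD_MEMBERSHIP,
                   &group, sizeof(group));

        for (;;) {
            char buf[4096];
            ssize_t len = recv(fd, buf, sizeof(buf), 0);

            if (len < 0 && errno == ENOBUFS) {
                /* Receive queue overflowed: messages were dropped */
                fprintf(stderr, "lost an unknown number of messages\n");
                continue;
            }
            if (len <= 0)
                break;
            printf("%zd-byte message\n", len);
        }
        close(fd);
        return 0;
    }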

Another option would be to use UDP multicast, but Moya (and Canillas) seem skeptical that it will perform as well as Unix socket multicast. There is also a problem for devices that do not have a network card, because the lo loopback network device does not support multicast. Moya also notes that a UDP-based solution suffers from the same packet loss and ordering guarantee problems that come with netlink sockets.

So, that left Créquy and others at Collabora (including Moya, Canillas, and others) to try a different approach. Créquy outlines the multicast approach on his blog. Essentially, both SOCK_DGRAM and SOCK_SEQPACKET socket types can create and join multicast groups which will then forward all traffic to each member of the group. Datagram multicast allows any process that knows the group address to join, while seqpacket multicast (which is connection-oriented like a SOCK_STREAM but enforces message boundaries) allows the group creator to decide whether to allow a particular group member at accept() time.

As Moya described, a client would still connect to the D-Bus daemon for authentication, and would then be added to the seqpacket multicast group for the bus. The daemon would also attach a packet filter that would eliminate any of the messages that the client should not receive. One of the patches in the set implements the ability for the daemon to attach a filter to the remote endpoint, so that it would be in control of which messages a client can see.
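
Socket filters themselves are existing infrastructure: a classic BPF program can be attached to a socket with the SO_ATTACH_FILTER socket option, and the kernel then runs it over incoming messages. What the patch set adds is the ability for one end (the daemon) to attach the filter to the other end of the connection. The sketch below shows only the existing, local form, with a trivial accept-everything program standing in for a real D-Bus match filter:

    /*
     * Sketch: attach a classic BPF socket filter with SO_ATTACH_FILTER.
     * The one-instruction program accepts up to 8192 bytes of every
     * message; a real D-Bus filter would inspect the message header
     * against the client's match rules instead.
     */
    #include <linux/filter.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int sv[2];
        struct sock_filter insns[] = {
            { BPF_RET | BPF_K, 0, 0, 8192 },	/* accept (up to 8192 bytes) */
        };
        struct sock_fprog prog = {
            .len	= sizeof(insns) / sizeof(insns[0]),
            .filter	= insns,
        };

        socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);
        if (setsockopt(sv[0], SOL_SOCKET, SO_ATTACH_FILTER,
                       &prog, sizeof(prog)))
            perror("SO_ATTACH_FILTER");
        /* Datagrams arriving on sv[0] now pass through the filter first */
        return 0;
    }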

The idea is interesting, but so far comments on the netdev mailing list have been light. Kernel network maintainer David Miller is skeptical that the proposal is better than having the daemon just use UDP:

My first impression is that I'm amazed at how much complicated new code you have to add to support groups of receivers of AF_UNIX messages.

I can't see how this is better than doing multicast over ipv4 using UDP or something like that, code which we have already and has been tested for decades.

Canillas responded by listing some of the reasons that UDP multicast would not serve their purposes, but admitted that no performance numbers had yet been gathered. Miller said that he will await those numbers before reviewing the proposal further, noting: "In many cases TCP/UDP over loopback is actually faster than AF_UNIX."

Even if UDP has the needed performance, some solution would need to be found for the packet loss and ordering issues. Blocking senders due to inattentive receivers may be a hard sell, however, as it seems like it could lead to denial of service problems, no matter which socket type is used. But it is clear that there is a lot of interest in better D-Bus performance. In fact, it goes well beyond just D-Bus as "fast" IPC mechanisms are regularly proposed for the kernel. It's unclear whether multicasting for Unix sockets is suitable for that, but finding a way to speed up D-Bus (and perhaps other IPC-using programs) is definitely on some folks' radar.

Comments (13 posted)

Brief items

Quote of the week

To be fair, it also omits one of SystemD's greatest strengths: Roman numeral-compliant project naming. System D is obviously way better than System V.
-- Steve Langasek

Comments (1 posted)

Netmap for Linux

Netmap is a framework for packet generation and capture from user space; it claims to be efficient to a point where it can saturate a 10Gb line with minimal system load. It is available under the BSD license. A version for Linux is now available alongside the FreeBSD port. "This is a preliminary version supporting the ixgbe and e1000/e1000e driver. Patches for other devices (igb, r8169, forcedeth) are untested and probably not working yet. Netmap relies on a kernel module (netmap_lin.ko) and slightly modified device drivers. Userspace programs can use the native API (documented in netmap.4) or a libpcap emulation library."
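
One attraction of the libpcap emulation library is that an ordinary pcap program needs no source changes; something like the minimal capture loop below would, presumably, just be relinked against the emulation library to get netmap-speed capture. Nothing in this sketch is netmap-specific, and the interface name is only an example.

    /*
     * Minimal libpcap capture loop.  "eth0" is just an example
     * interface.  Build with:  gcc cap.c -lpcap
     */
    #include <pcap/pcap.h>
    #include <stdio.h>

    static void handler(u_char *user, const struct pcap_pkthdr *h,
                        const u_char *bytes)
    {
        (void)user;
        (void)bytes;
        printf("packet: %u bytes\n", h->len);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);

        if (!p) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }
        pcap_loop(p, -1, handler, NULL);	/* capture until interrupted */
        pcap_close(p);
        return 0;
    }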

Comments (13 posted)

nntpgit initial release

Nntpgit is an NNTP server that tracks branches in a git repository and makes commits available as "articles" accessible in a news reader. The initial release of this program (written in Python3) is available for brave users who do not fear young and irresponsible software.

Full Story (comments: 3)

Release candidates for Python 2.6.8, 2.7.3, 3.1.5, and 3.2.3

Release candidates for Python versions 2.6.8, 2.7.3, 3.1.5, and 3.2.3 are available. "The main impetus for these releases is fixing a security issue in Python's hash based types, dict and set, as described below. Python 2.7.3 and 3.2.3 include the security patch and the normal set of bug fixes. Since Python 2.6 and 3.1 are maintained only for security issues, 2.6.8 and 3.1.5 contain only various security patches."

Full Story (comments: 1)

A set of PostgreSQL updates

The PostgreSQL project has announced the release of versions 9.1.3, 9.0.7, 8.4.11 and 8.3.18. They contain a number of fixes, some of which are security-related. "Users of pg_dump, users of SSL certificates for validation or users of triggers using SECURITY DEFINER should upgrade their installations immediately. All other database administrators are urged to upgrade your version of PostgreSQL at the next scheduled downtime."

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Why Linux Is a Model Citizen of Quality Code (Linux.com)

Linux.com reports on the latest defect statistics from Coverity. "With 6,849,378 lines of Linux 2.6 code scanned, 4,261 outstanding defects were detected and 1,283 were fixed in 2011. The defect density of Linux 2.6 is .62, compared to .20 for PHP 5.3 and .21 for PostgreSQL 9.1. Keep in mind that the codebase for PHP 5.3 — 537,871 lines of code — is a fraction of that of Linux 2.6, and PostgreSQL 9.1 has 1,105,634 lines of code."
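
For those unfamiliar with the metric: assuming the usual defects-per-thousand-lines definition of "defect density", the quoted figures check out:

    \[ \frac{4261}{6{,}849{,}378 / 1000} \approx 0.62 \text{ defects per thousand lines} \]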

Comments (54 posted)

Page editor: Jonathan Corbet

Announcements

Brief items

Collabora and Fluendo announce GStreamer partnership

Collabora Ltd. and Fluendo S.A. will invest in promoting the GStreamer multimedia framework through the creation of a cross-platform software development kit (SDK) targeting desktop and server platforms like Linux, Windows, and Mac OS X, with leading mobile platforms such as Android to be included soon. "For 2012, the objective of the partnership between Fluendo and Collabora is to ease GStreamer adoption by providing a full-featured, well tested and high quality multimedia sub-system for desktop, server and embedded systems."

Full Story (comments: none)

The Document Foundation, SUSE and Intel

The Document Foundation (TDF) has announced the availability of LibreOffice for Windows from SUSE in the Intel AppUp Center. The AppUp Center is an online repository designed for Intel processor-based devices. Intel is also becoming a member of the TDF Advisory Board. "LibreOffice for Windows from SUSE is available in Intel AppUp(TM) Center as a special, five-language version featuring English, German, French, Spanish and Italian. As a validated Intel AppUp Center app, LibreOffice for Windows from SUSE features a new, smooth, silent installation flow and improved uninstallation cleanup."

Full Story (comments: none)

EFF: New 'HTTPS Everywhere' Version

The Electronic Frontier Foundation (EFF) has released the 2.0 version of HTTPS Everywhere for the Firefox browser. "The "Decentralized SSL Observatory" is an optional feature that detects encryption weaknesses and notifies users when they are visiting a website with a security vulnerability – flagging potential risk for sites that are vulnerable to eavesdropping or "man in the middle" attacks."

Full Story (comments: 15)

FSFE launches Campaign For A Free Android System

The Free Software Foundation Europe (FSFE) is launching its "Free Your Android!" campaign. "The lack of software freedom on smartphones and tablets is becoming increasingly problematic. Many apps spy on their users without their knowledge (Carrier IQ) and transmit personal data such as address books on the iPhone. Other devices are completely locked down, prevent users from uninstalling certain apps, or just do not receive updates. "Free Your Android!" promotes versions of Android optimised for user control, and an alternative market that only provides free-as-in-freedom apps. It also invites people to contribute to various initiatives and to identify essential apps that still have no free alternatives."

Full Story (comments: none)

Mozilla to ship Boot to Gecko devices in 2012

Mozilla has sent out a press release describing its announcements at Mobile World Congress. "In a joint press conference, Telefónica revealed their intention to work with us to deliver the very first open Web devices in 2012. These devices, architected entirely on the Web and built based on an HTML5 stack with powerful Web APIs, will mean significant advances in speed and cost reduction for mobile devices in the future. Attendees at the press conference were able to see a sneak preview of what is possible with HTML5 open Web technologies powering entire mobile device functions and experiences." There is also a teaser of sorts at openwebdevice.com.

Comments (14 posted)

The Software Freedom Conservancy and the Community for Open Source Microfinance Assume Leadership of the Mifos project

The Software Freedom Conservancy and the Community for Open Source Microfinance (COSM) have announced parallel efforts to assume stewardship of Mifos. "“Mifos provides the software infrastructure for our community's efforts to lift people out poverty,” added Edward Cable, co-founder of COSM and Mifos Community Manager, who has also been a member of the Mifos team since 2007. “We look forward to working side by side with the Conservancy to advance this technology and help our community enable microfinance institutions to further extend their outreach to the poorest of the poor”."

Full Story (comments: none)

Articles of interest

If Android is a "stolen product," then so was the iPhone (ars technica)

Ars technica digs into the history of touchscreen interfaces and phones to show that Apple (and Android) built on the shoulders of giants. In particular, prior art is shown for many of the "innovations" claimed by Apple, like slide-to-unlock and the pinch gesture. "By February 2006, [Jeff] Han brought a number of these ideas together to create a suite of multitouch applications that he presented in a now-famous TED talk. He showed off a photo-viewing application that used the "pinch" gesture to re-size and rotate photographs; it included an on-screen keyboard for labeling photos. He also demonstrated an interactive map that allowed the user to pan, rotate, and zoom with dragging and pinching gestures similar to those used on modern smartphones."

Comments (32 posted)

Transparency Launches as Linux of Drug Development (Xconomy)

Xconomy takes a look at Transparency Life Sciences, which is trying to apply the open source development model to drug development. "[Tomasz] Sablinski hopes the population of registered users will balloon in the coming weeks, when Transparency begins reaching out to patient advocacy groups in the disease areas it’s working in: multiple sclerosis, peripheral vascular disease, and inflammatory bowel disease. 'What we’re trying to do is channel this tremendous Web energy into helping us create experiments that will answer the right questions,' Sablinski says."

Comments (3 posted)

New Books

The Developer's Code--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released "The Developer's Code" by Ka Wai Cheung.

Full Story (comments: none)

Machine Learning for Hackers--New from O'Reilly Media

O'Reilly Media has released "Machine Learning for Hackers" by Drew Conway and John Myles White.

Full Story (comments: none)

Calls for Presentations

Tizen Conference Call for papers and registration

The Tizen conference will take place May 7-9, 2012, in San Francisco, California. Registration and the call for papers are open. The CFP deadline is March 8.

Comments (none posted)

GUADEC 2012 Call for Presentations

The 2012 GNOME Users And Developers European Conference (GUADEC) will take place July 26-29, in A Coruña, Spain. The call for papers is open until April 14. "For the 2012 edition, GUADEC comes to A Coruña in Spain, a city that has already seen several successful GNOME-related hackfests in the past three years. The conference will be held from July 26th to 29th at the Faculty of Computer Science of the University of A Coruña, with additional extra days afterwards for hackfests and meetings."

Full Story (comments: none)

2012 Linux Plumbers Conference

The 2012 edition of the Linux Plumbers Conference will take place August 29-31, in San Diego, California. The call for microconferences is open. "These microconferences are working sessions that are roughly a half day in length, each focused on a specific aspect of the “plumbing” in the Linux system."

Comments (1 posted)

Upcoming Events

Events: March 1, 2012 to April 30, 2012

The following event listing is taken from the LWN.net Calendar.

February 27-March 2: ConFoo Web Techno Conference 2012 -- Montreal, Canada
March 2-4: BSP2012 - Moenchengladbach -- Mönchengladbach, Germany
March 2-4: Debian BSP in Cambridge -- Cambridge, UK
March 5-7: 14. German Perl Workshop -- Erlangen, Germany
March 6-10: CeBIT 2012 -- Hannover, Germany
March 7-15: PyCon 2012 -- Santa Clara, CA, USA
March 10-11: Debian BSP in Perth -- Perth, Australia
March 10-11: Open Source Days 2012 -- Copenhagen, Denmark
March 16-17: Clojure/West -- San Jose, CA, USA
March 17-18: Chemnitz Linux Days -- Chemnitz, Germany
March 23-24: Cascadia IT Conference (LOPSA regional conference) -- Seattle, WA, USA
March 24-25: LibrePlanet 2012 -- Boston, MA, USA
March 26-April 1: Wireless Battle of the Mesh (V5) -- Athens, Greece
March 26-29: EclipseCon 2012 -- Washington D.C., USA
March 28: PGDay Austin 2012 -- Austin, TX, USA
March 28-29: Palmetto Open Source Software Conference 2012 -- Columbia, South Carolina, USA
March 29: Program your own open source system-on-a-chip (OpenRISC) -- London, UK
March 30: PGDay DC 2012 -- Sterling, VA, USA
April 2: PGDay NYC 2012 -- New York, NY, USA
April 3-5: LF Collaboration Summit -- San Francisco, CA, USA
April 5-6: Android Open -- San Francisco, CA, USA
April 10-12: Percona Live: MySQL Conference and Expo 2012 -- Santa Clara, CA, United States
April 12-13: European LLVM Conference -- London, UK
April 12-19: SuperCollider Symposium -- London, UK
April 12-15: Linux Audio Conference 2012 -- Stanford, CA, USA
April 13: Drizzle Day -- Santa Clara, CA, USA
April 16-18: OpenStack "Folsom" Design Summit -- San Francisco, CA, USA
April 17-19: Workshop on Real-time, Embedded and Enterprise-Scale Time-Critical Systems -- Paris, France
April 19-20: OpenStack Conference -- San Francisco, CA, USA
April 21: international Openmobility conference 2012 -- Prague, Czech Republic
April 23-25: Luster User Group -- Austin, Tx, USA
April 25-28: Evergreen International Conference 2012 -- Indianapolis, Indiana
April 27-29: Penguicon -- Dearborn, MI, USA
April 28: Linuxdays Graz 2012 -- Graz, Austria
April 28-29: LinuxFest Northwest 2012 -- Bellingham, WA, USA

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds