
LWN.net Weekly Edition for February 17, 2011

The Ada Initiative takes a different approach

By Jake Edge
February 16, 2011

The gender imbalance in the free software world is largely mirrored in the related "open technology and culture" communities. Various efforts have been made over the years to rebalance things, with varying degrees of success. The newly formed Ada Initiative is taking a different tack than those previous efforts: raising money to support full-time staff, along with various projects, rather than going the traditional all-volunteer route.

Valerie Aurora and Mary Gardiner, who are longtime advocates and organizers for "women in open source" projects, launched the Ada Initiative (TAI) on February 7 to "concentrate on focused, direct action programs, including recruitment and training for women, education for community members, and working with companies and projects to improve their outreach to women". While the first steps for the initiative are somewhat bureaucratic—filling out paperwork to put the organization on a sound legal footing along with raising the funds that it needs—TAI has some concrete plans for projects that it will be working on.

At the top of the priority list, according to Aurora, is a survey that will measure the participation of women in the open technology and culture communities. This would be something of an update to the FLOSSPOLS survey that was done in 2006. TAI is working on a methodology for the survey, so that it can be repeated over time to gauge progress. The survey is meant to answer a very fundamental question, Aurora said: "How bad is the problem, and is what we are doing making things better? If we can't answer these questions, we can't do a good job."

Another project in the works is "First Patch Week", which will be an effort to pair companies and projects with female developers to help get the new developers over the first hurdle in joining a development community: submitting their first patch. The idea is that the existing community supplies mentors who have been trained by TAI to bring these new developers along, and it will be beneficial to both sides: "Participating in First Patch Week is an excellent opportunity to get new developers working on your project (with the potential of hiring them later on, of course)." Like the survey, First Patch Week is going to take some time to get up and running, but once past the organizational set-up phase, TAI intends to put in "several months of full time effort" to find the right projects and train mentors.

So far, the response to the initiative has been "amazing", Aurora said, with inquiries from "enormous international corporations" as well as community organizations and individuals. TAI is in discussions with multiple sponsors, but it is really looking for more than money:

At this point, we are focusing on sponsors who want to do more than write a check: donate engineer time, help organize meetings, run scholarships, give us advice on fundraising, or otherwise help us with things money can't buy.

Linux Australia is the first TAI sponsor, and is providing some general sponsorship money that Aurora described as a "do the right thing" sponsorship. Because the organization is so small, general sponsorships, rather than those focused on a specific project, are what it is looking for. There's still plenty of room to become a sponsor, but "if your organization would like to be a founding member of the Ada Initiative, now is the time to be talking to us."

Discussions on the supporters mailing list have focused on individual contributions. While that is not the kind of funding TAI is looking for in the long term, it would help with the start-up process, so there will be some means of doing that (possibly through a Kickstarter campaign) coming soon. But there are ways to help beyond just the financial:

The best way to support the Ada Initiative right now is to encourage other people in your organization to support us. Right now we have people helping by writing checks, but also by offering meeting space, travel funding, pro bono legal advice, event planning, and the like.

If you want to help, you should also sign up for one of our myriad announcement channels - Twitter, blog, etc. - and we will make announcements as we have opportunities for people to contribute.

It is clear from the FAQ that TAI hopes that fundraising will provide the financial resources to allow the organization to dig into projects that are difficult or impossible for all-volunteer organizations to take on. By providing salaries to its employees (eventually, anyway), those people can concentrate solely on the projects, rather than having to work on them in "evening and weekend" time. It is a different style than that taken by existing organizations, such as LinuxChix and AussieChix, but one that TAI believes will be beneficial to the whole ecosystem, as Aurora pointed out:

In general, our theory is that the majority of people in open technology and culture really want women to be involved and welcome - they just don't know how to do it. Our goal is to give these people the information and opportunity to accomplish this. Whenever we do a project, the project itself is just the first step. Documenting what we did and teaching other people to reproduce it is just as important.

The announcement was met with an "excited and supportive" reception, which, along with the sponsors that seem to be lining up, should bode well for TAI. According to Aurora, the initiative expects to be fully funded and working full-time on its projects by July. That means we should start seeing concrete results from those efforts in the latter half of the year. Gardiner and Aurora created TAI because it was "the right thing" to do, Aurora said, and they have been pleasantly surprised with the reaction from the rest of the open technology and culture communities:

What we didn't realize was the intensity of frustrated desire that many people have about helping women in open technology and culture. People desperately want to do something about the injustice and imbalance they see around themselves every day in the tech community. We're finally giving people an outlet for all that energy.

The Ada Initiative—named for Countess Ada Lovelace, "the world's first woman open source programmer"—is a very interesting experiment. It will not only provide ways to increase the participation of women in free software and related fields, which is a worthwhile goal in itself, but it may also provide an example of how to fund organizations focused on other specific initiatives within our communities.

There are a number of similar kinds of organizations in our community, the foundations for Linux, GNOME, and Apache for example, but those tend to be larger, umbrella organizations, whereas TAI is tightly focused on a well-defined, existing problem. There are certainly other technical and social problems in our communities that might benefit from a similar approach. More women in open technology and culture would be a fabulous outcome from this experiment, and finding more ways to fund interesting projects would just be icing on the cake.

Comments (56 posted)

FOSDEM: Icing the robot

By Jonathan Corbet
February 11, 2011
Anybody who looks at an Android system knows that, while Android is certainly based on the Linux kernel, it is not a traditional Linux system by any stretch. But Android is free software; might it be possible to create a more "normal" Android while preserving the aspects that make Android interesting? Developers Mario Torre and David Fu think so; they also plan to soon have the code to back it up. Their well-attended FOSDEM talk covered why they would want to do such a thing and how they plan to get there.

Mario and David are annoyed that Android does not run on a normal Linux system, on any other operating system, or on any architecture except ARM (though they did note the in-progress x86 port). They like their Android applications and want to be able to run them on ordinary systems. To get there, they have developed a plan of decoupling the various parts of an Android system so that they can be replaced. Then they will implement whatever pieces are needed using ordinary Java and OpenJDK; that includes implementing a Dalvik virtual machine (VM) in Java and/or running Dalvik as a standalone application. The result will be IcedRobot - an Android implementation built with ordinary Java that can run on standard operating systems.

Why would one get into a project like this? As Mario put it: they like Google TV and want to run it on a desktop system. It might be nice to dispense with GNOME shell or Unity altogether and run in a pure Android environment. Or, on a traditional desktop, one could run interesting Android applications as "desklets." There is, they said, some potential commercial value for the Dalvik virtual machine which has been liberated from the custom Android kernel and libraries. They mentioned that a Dalvik VM running inside a normal Java VM might take the wind out of the sails of Oracle's lawsuit; since it would obviously be a pure Java application, Oracle's patent claims might not apply. And, they said, it's "time to do something crazy" now that the task of liberating Java is finally complete.

IcedRobot comes down to three separate projects aimed at different use cases. The first of these is gnudroid, which can be thought of as the IcedRobot "micro edition." For this incarnation, there is no interest in running on desktop systems. Gnudroid dispenses with the special Android kernel and the "bionic" libc replacement as well, going back to using standard system components. The Dalvik VM runs as a standalone application on such systems; the end result is something which is quite similar to standard Android in terms of functionality. The developers are removing "meaningless" code from the system - a move which, they say, cuts out 70% of the code. (Details on what is "meaningless" were not provided, though one assumes that removing the custom kernel is a big part of the total.) A new set of build scripts has been written, and the whole thing has been put into a Mercurial repository - they are evidently more comfortable with Mercurial than with git.

The next component, called Daneel, is a Dalvik interpreter written in pure Java. It's only an interpreter at the outset; they acknowledged that it may be necessary to add a just-in-time compiler in the future. This is the piece that, they think, might serve as a workaround for any Oracle patents which might otherwise be applicable. It is, they said, "a bridge between the worlds" of the Dalvik VM and pure Java systems.

Finally, GNUBishop is the "IcedRobot standard edition." It would be made up of three parts - a browser plugin, a desktop application framework, and a full standalone operating system. It replaces the Dalvik runtime entirely, using OpenJDK for the runtime system and Daneel as the core virtual machine. The plugin would allow running Android applications within a browser; most of the popular browsers are targeted. The application framework, meanwhile, would allow the installation of Android applications on a normal desktop system. Linux systems are clearly targeted here, but the developers also have Mac OS and Windows systems in mind - and even QNX. The full operating system would be a Linux distribution built around the Android system.

This work is a volunteer effort for now, but Mario and David would appear to have some commercial goals in mind as well. They discussed the idea of the "GNU AppBazaar," which would be an IcedRobot equivalent to the Android Market. Evidently 10% of all proceeds from the AppBazaar will be sent to the Free Software Foundation. Also planned is "GNU AdNonSense," an advertising system for IcedRobot applications. They were quite firm that any such ads would be completely untargeted and that privacy is an important feature of this system. So no per-user information would be collected, and there will be no way for advertisers to target their ads to specific users. There was some talk of aiming IcedRobot at the automotive market, where, evidently, the developers see a fair amount of opportunity.

The current state of the code is not at all clear; it will, they said, be posted on IcedRobot.org soon, but, as of this writing, that site does not yet exist. From this weblog posting it seems that the process of decoupling Dalvik from the Android kernel is not yet complete; in the talk they said that the replacement of bionic is also an ongoing task. But there are apparently a number of developers working on the project, and they have that wild look in their eyes that suggests they may have the drive to see it through. The IcedRobot may yet walk among us.

Comments (37 posted)

PostgreSQL, OpenSSL, and the GPL

By Jake Edge
February 16, 2011

The OpenSSL license, which is BSD-style with an advertising clause, has been a source of problems in the past because it is rather unclear whether projects using it can also include GPL-licensed code. Most distributions seem to be comfortable that OpenSSL can be considered a "system library", so that linking to it does not require OpenSSL to have a GPL-compatible license, but the Free Software Foundation (FSF) and, unsurprisingly, Debian are not on board with that interpretation. This licensing issue recently reared its head again in a thread on the pgsql-hackers (PostgreSQL development) mailing list.

For command-line-oriented programs, the GNU readline library, which provides various kinds of command-line editing, is a common addition. But readline is licensed under the GPL (rather than the LGPL), which means that programs which use it must have a compatible license; PostgreSQL's BSD-ish permissive license certainly qualifies. But the OpenSSL license puts additional restrictions on its users and is thus not compatible with the GPL. Whether that is a real problem in practice depends on how you interpret the GPL and whether OpenSSL qualifies for the system library exception.

Debian has chosen a fairly hardline stance on the matter, which is evidently in line with the FSF's interpretation, so it switched to the BSD-licensed Editline (aka libedit) library instead of readline. PostgreSQL supports libedit as a readline alternative, so making the switch is straightforward. Unfortunately, a bug in libedit means that Debian PostgreSQL users can't input multi-byte characters into the psql command-line tool when using Unicode locales.

For the PostgreSQL project, it is something of a "rock and a hard place" problem. The OpenSSL code works well, and is fairly tightly—perhaps too tightly—integrated. There are two obvious alternatives, though: GnuTLS and Mozilla's Network Security Services (NSS). Switching to either of those would obviate the readline problem because their licenses do not contain the problematic advertising clause.

There have been efforts to switch PostgreSQL to use GnuTLS, as described in Greg Smith's nice overview of the history of the problem, but they didn't pass muster due to the size and intrusiveness of the patch. Part of the problem is that psql is too closely tied to OpenSSL as Martijn van Oosterhout, who developed the GnuTLS support, describes:

I spent some time a while back making PostgreSQL work with GnuTLS. The actual SSL bit is trivial. The GnuTLS interface actually made sense whereas the OpenSSL one is opaque (at least, I've never seen any structure in it). The GnuTLS interface was designed in the modern era and it shows.

The problems are primarily that psql exposes in various ways that it uses OpenSSL and does it in ways that are hard to support backward [compatibly]. So for GnuTLS support you need to handle all those bits too.

Another route to fixing the problem might be for either the readline or the OpenSSL license to change, but that is not a very likely outcome. Some GPL-licensed code has added an explicit "OpenSSL exception", but it is pretty implausible to expect the FSF to do so for readline—it has long seen that library as a way to move more projects to GPL-compatible licenses. OpenSSL is either happy with its license or is unable to change it as Stephen Frost points out in the thread:

aiui [as I understand it], the problem here is actually a former OpenSSL hacker who has no interest (and, in fact, a positive interest against) in changing the OpenSSL licensing. Most of the current OpenSSL hackers don't have an issue with the change (again, aiui).

Robert Haas recommends revisiting the GnuTLS support for the PostgreSQL 9.2 release, but in the meantime there are some Debian users who cannot easily use psql. It goes beyond just Debian, though, because Ubuntu will be picking up the PostgreSQL+libedit version for its next release. That spreads the problem further, as Joshua D. Drake, who started the whole thread, notes: "As popular as Debian is, the 'user' population is squarely in Ubuntu world and that has some serious public implications as a whole."

Instead of GnuTLS, NSS could be used and has one major advantage: Federal Information Processing Standard (FIPS) 140-2 certification. FIPS 140-2 is a US government standard for encryption that is sometimes required by companies and organizations when adopting products that contain encryption. OpenSSL has been FIPS 140-2 certified, as has NSS, but GnuTLS has not been. For that reason, there is talk of making PostgreSQL support NSS rather than GnuTLS.

The Fedora project is also looking at NSS as part of an effort to consolidate the cryptography libraries used by the project. For a number of reasons, including FIPS certification and some features missing from GnuTLS (notably S/MIME), NSS is the direction Fedora chose. One would guess that the GPL-incompatible license for OpenSSL played a role in eliminating it from consideration.

On the other hand, Fedora does ship various tools with both readline and OpenSSL, including PostgreSQL. It would seem that Fedora (and possibly Red Hat's lawyers) are relying on a belief that OpenSSL is distributed as a system library, as Fedora engineering manager Tom "spot" Callaway said in 2008 and again in 2009. The project (and other distributions) may also be relying on the near-zero probability that the FSF will ever make a serious effort to stop the distribution of PostgreSQL using readline.

For Debian, though, that's not enough. Another Debian bug report contained more discussion of the problem, and a workaround discovered by Andreas Barth:

If calling psql as
    LD_PRELOAD=/lib/libreadline.so.5 psql
everything works as normal.

That's a bit of an ugly hack, and no one seems very happy about it, but the plan is to add the LD_PRELOAD (if libreadline is available) into the psql wrapper that is shipped in the postgresql-client-common package. Martin Pitt sums it up this way:

Technically, this is a bit fragile, of course, as there might be some subtle ABI differences which lead to crashes. However, the preloading workaround already makes the situation so much better than before, so IMHO it's better than the previous status quo.

I don't really like this situation, and personally I'd rather move back to libreadline until OpenSSL or readline or PostgreSQL threatens Debian with a legal case for license violation (I daresay that the chances of this happening are very close to zero..). But oh well..
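The wrapper plan described above can be sketched in a few lines of shell. This is only an illustration of the logic, not the actual postgresql-client-common code; the function name `psql_cmdline` and the hard-coded library path are hypothetical:

```shell
# Sketch of the planned wrapper behavior: preload libreadline over
# libedit when it is available, otherwise run psql unmodified.
psql_cmdline() {
    # $1: path to the readline shared object to try preloading
    if [ -e "$1" ]; then
        printf 'LD_PRELOAD=%s psql\n' "$1"
    else
        printf 'psql\n'
    fi
}

# Prints one of the two command lines, depending on whether the
# library exists on this system.
psql_cmdline /lib/libreadline.so.5
```

As Pitt notes, this works only because libedit deliberately mimics readline's ABI; any divergence between the two libraries could surface as a crash.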

This kind of licensing clash occurs with some frequency, and the OpenSSL license is known to be problematic—at least for projects that use GPL code. The advertising requirement, which is something of a throwback to the early days of the BSD license, makes OpenSSL increasingly isolated. Distributions and other projects are likely to continue to search for, and find, alternatives, if only to reduce the licensing murkiness and associated questions from developers and users. It is unfortunate that an ego-stroking clause or two in the license of a useful library may reduce its usage but, as always, free software will find a way to work around these kinds of problems and move on.

Comments (95 posted)

Page editor: Jonathan Corbet

Security

Bluepot: A honeypot for Bluetooth attacks

February 16, 2011

This article was contributed by Nathan Willis

Servers and PCs get the lion's share of security attention, so it is refreshing to occasionally find a security tool addressing other areas of the ubiquitous computing landscape. One such tool is Bluepot, a GPLv3-licensed honeypot for Bluetooth attacks originally written as a school project by developer Andrew Smith.

A "honeypot" is security slang for a trap designed to lure in attackers by masquerading as a vulnerable system. Generally speaking, a honeypot is used to catch attackers before they reach a genuine network resource (either to shut them down or to report them), but honeypots can also be used as purely research devices — helpful tools to profile the current vulnerability landscape. In Bluetooth attack preparedness, setting up an attractive honeypot probably means pretending to be a phone model with known exploits, known or weak PINs, or other enticing properties.

Bluepot is written in Java and distributed as a JAR file, although, despite the language choice, for the moment it runs only on Linux. This is because Smith designed the application to support the use of multiple Bluetooth adapters simultaneously, which is a feature that Windows cannot handle. The current release is version 0.1, from December 29, 2010. From the Subversion logs, it appears that the bulk of the code was written in the spring of 2010, with a cleanup phase preceding its public release in December. Smith announced the release on his blog, on which he regularly writes about honeypot development.

To get started, you must first install the Bluetooth development libraries for your distribution (presumably so that Bluepot can use the libraries' lower-level Bluetooth utilities to manipulate the adapter's hardware settings). Debian and Ubuntu title the package libbluetooth-dev, while Fedora and Red Hat name it bluez-libs-devel, and openSUSE calls it bluez-devel. You must also have one or more BlueZ-supported Bluetooth adapters. With the dependencies taken care of, simply unpack the Bluepot tarball and launch Bluepot-0.1.jar with root privileges; root is required in order to change adapter settings, and Bluepot will refuse to run without it.
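The per-distribution package names above can be captured in a small helper, followed by the setup commands; the tarball name below is an assumption based on the 0.1 release, and the helper itself is purely illustrative:

```shell
# Map a distribution family to the Bluetooth development package named
# in the article (assumed helper, not part of Bluepot itself).
bt_dev_pkg() {
    case "$1" in
        debian|ubuntu)  echo libbluetooth-dev ;;
        fedora|redhat)  echo bluez-libs-devel ;;
        opensuse)       echo bluez-devel ;;
        *)              echo "unknown distribution: $1" >&2; return 1 ;;
    esac
}

# Then, for example, on Debian or Ubuntu:
#   sudo apt-get install "$(bt_dev_pkg debian)"
#   tar xzf bluepot-0.1.tar.gz          # assumed tarball name
#   sudo java -jar Bluepot-0.1.jar      # root needed for adapter settings
```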

Normally, your Bluetooth adapter advertises a public name, set through the GNOME or KDE Bluetooth configuration tool, and a "computer" major device class. Bluepot allows you to advertise each adapter on your system with a different name, major device class, and minor device class. Historically, lower-level devices such as low-end cell phones, printers, and headsets have had most of the Bluetooth security holes exploited in the wild, particularly because few consumers update the firmware of such products. Thus, to make your honeypot the most attractive to would-be attackers, you may wish to set its name to an older-model Nokia phone and its device class to phone/cellular. Alternatively, Bluepot can randomly alter the advertised name and device class of each adapter, which is probably wise if you want to take a longer look at the attackers in your surroundings.
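The same re-advertising can be done by hand with the standard BlueZ hciconfig subcommands, which gives a feel for what Bluepot is doing with each adapter. This sketch is not Bluepot's code; the adapter name, phone name, and class-of-device code are illustrative values, and the function emits the commands rather than running them so it is safe to try without hardware or root:

```shell
# Emit the hciconfig commands that would make an adapter advertise
# itself as an old phone (hypothetical helper; values are examples).
masquerade_cmds() {
    # $1: adapter (e.g. hci0), $2: advertised name, $3: class-of-device code
    printf 'hciconfig %s name "%s"\n' "$1" "$2"
    printf 'hciconfig %s class %s\n' "$1" "$3"
    printf 'hciconfig %s piscan\n' "$1"   # make the adapter discoverable
}

masquerade_cmds hci0 "Nokia 6310i" 0x500204
```

Piping the output to a root shell would apply the settings for real.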

Attacks

Bluepot runs its adapters in discoverable mode, accepting all incoming connection requests and transfers. It tracks the OBEX (Object Exchange) protocol used to directly transfer files between devices, the RFCOMM (Radio Frequency Communication) protocol used for serial communication, and the L2CAP (Logical Link Control and Adaptation Protocol) used for transmission control.

The simplest Bluetooth attack is called bluejacking. In spite of the seeming connection to "hijacking," bluejacking is simply sending an unauthorized message or file transfer to another device, using OBEX. For the most part, modern phones and printers now refuse to accept incoming file transfers without explicit user authorization, but there are older models that still accept files from previously-paired devices, and some phones that automatically accept vCards (or any other file payload with the .vcf extension) in the interest of friendly business-card-like information exchange.

Cracking tools may allow an attacker to brute-force the four-digit numeric PIN used to initially pair new devices, which potentially allows for an attack vector to get around the previously-paired-device limitation. According to the specification, Bluetooth PINs can be up to 128 bits long; consumer electronics manufacturers tend to use 4-6 numeric digits to make them easier to remember — which also makes them far easier to brute-force. Even worse, a significant percentage of non-computer devices use easily guessed PINs like 0000 or 1234.
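The arithmetic behind that weakness is easy to check. A quick sketch (pin_space is a hypothetical helper, not taken from any Bluetooth cracking tool):

```shell
# Number of possible all-numeric PINs of a given length. A 4-digit PIN
# has only 10,000 possibilities, which is trivially enumerable; the
# spec's full 128-bit space (about 3.4e38) is far beyond both shell
# integer arithmetic and any practical brute-force attempt.
pin_space() {
    # $1: number of numeric digits; computes 10^$1 portably
    n=1
    i=0
    while [ "$i" -lt "$1" ]; do
        n=$((n * 10))
        i=$((i + 1))
    done
    echo "$n"
}

pin_space 4   # 10000
pin_space 6   # 1000000
```

Even at a slow rate of a few pairing attempts per second, the entire 4-digit space can be exhausted in under an hour.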

A far more serious exploit goes by the memorable name bluesnarfing; this attack involves remotely reading files from another device: address books, SIM contacts, photos, saved text messages and emails, etc. As with bluejacking, it works over OBEX, although it is more complex, because the remote device must authorize file browsing. The weak-PIN problem is a potential issue here, too, although most devices use encryption, and there are fewer devices that accept any form of incoming file browsing requests without explicit user authorization.

The most serious attack is referred to as bluebugging, which amounts to remotely taking over control of the target device, using it to place or route calls, send SMS or MMS messages, or consume data services. This is typically done by exploiting the Bluetooth stack in order to do a privilege escalation. In addition to these phone-centric attacks, there is an array of potential exploits not centered around cell phone usage, including uploading malware to Bluetooth devices, and hijacking or snooping audio connections.

Bluepot should be able to track and log all of these connections. In its configuration tab, you can specify a directory in which to store any files uploaded by attackers, and you can customize the OBEX and RFCOMM response messages sent, in order to better masquerade as a specific device.

Testing

News and blog coverage of bluejacking and bluesnarfing peaked in the mid-2000s, at which point there were a number of common cell phones on the market with known vulnerabilities. Most of the media coverage of the phenomenon that I read involved attackers lying in wait for victims in high-traffic public locations such as mass transit points. Since I did not expect to find such nefarious behavior on display in the non-public-transport-served area where I live, I opted to test Bluepot at home instead, by using a pair of machines and a variety of Internet-provided tools.

That itself proved to be a challenge, since most of the publicly-available pen-testing tools date from the mid-2000s as well, and BlueZ, the Linux Bluetooth stack, has undergone a number of revisions since then. The Bluesnarfer tool, for example, is apparently written for BlueZ prior to the version 4.0 release, which changed a number of the setup utilities. Others, like Blooover, are written for Java MIDP-powered phones.

Nevertheless, I was able to test and verify Bluepot's ability to falsely advertise my desktop's USB Bluetooth adapter as a phone, a printer, a network access point, and several other devices, and to safely intercept OBEX file transfers. Along the way, I think I discovered what I would have to call a bug in the GNOME Bluetooth stack, namely that every Linux machine that I tested with aggressively caches the advertised names and device classes of the Bluetooth devices that it discovers when scanning for nearby connections — even when I could verify that a name change had taken place on the Bluepot machine (with hciconfig), it took a reboot of the attacker machines to pick up the updated information.

Along those lines, though, one thing I was not able to do was browse files on the Bluepot machine. That is a feature I was expecting in a honeypot application — to see which files attackers requested, and potentially to feed them bogus data in response. It is possible that Bluepot supports this and I simply could not get it to work — sadly, BlueZ 4.x on Linux is almost completely undocumented. It has improved considerably in the past two or three years, but vague and cryptic error messages (such as "Unable to find service record" in response to a failed OBEX file transfer to a paired device) are still the norm.

I had better luck with the audio device exploit tester carwhisperer, which is designed to inject a harmless audio message to un-secured car hands-free devices, and to intercept and record audio from them. Naturally there was no audio to record when using Bluepot to simulate a hands-free device, but Bluepot tracked and logged the connections admirably.

[Bluepot main screen]

Bluepot has some basic diagnostic tools, allowing you to chart protocol traffic and file downloads over time, and to view the session logs sorted by adapter and attacker (for each attacker, it logs the Bluetooth device address). One area in which Bluepot falls short, however, is in saving these session logs: it logs its internal status to the logs directory of the unpacked tar archive, but this only includes startup, adapter initialization, and shutdown messages. It apparently attempts to log attack data with the log4j Java library, but through some misconfiguration, fails to do so, and the log settings are not configurable in the user interface. Thus, if you want to save session data, you will have to cut-and-paste information from the GUI's log tab into an external editor.

Smith is pretty open about Bluepot's feature set and limitations on the project site and on his blog; the basic framework is there to collect Bluetooth attack data, and, through multiple-adapter support and device randomization, to do so with little likelihood of the honeypot being discovered for what it is. It might be more powerful to masquerade as other Bluetooth addresses, or to provide some more interactive honeypot-like features (such as dummy file content), but it is still a nice starting point, and admirably simple to get started using. I don't expect to catch bluebugging criminals at my local Starbucks, but it will be tempting to take Bluepot with me to the next free software conference I attend, just to see what turns up in the hallway track.

Comments (1 posted)

Brief items

Security quotes of the week

From: Greg
To: Jussi
Subject: Re: need to ssh into rootkit
yes jussi thanks

did you reset the user greg or?

-------------------------------------

From: Jussi
To: Greg
Subject: Re: need to ssh into rootkit
nope. your account is named as hoglund
-- "Anonymous" does some social engineering (as reported by ars technica)

Security isn't just a tax on the honest; it's a very expensive tax on the honest. It's the most expensive tax we pay, regardless of the country we live in. If people were angels, just think of the savings!
-- Bruce Schneier

In my own private-sector security industry work, I observed a pattern: the higher the stakes, the worse the security. "Worse" usually means "more easily resolved with known techniques". I evaluated a wide range of applications and platforms, and almost invariably found that the most important systems — those managing life, health, and money — were poorly engineered. By contrast, small startups doing something interesting but not (yet) critical would sometimes have very well-engineered systems, with entire classes of vulnerability designed away, minimal feature creep, and solid development practices reducing the risk of accidental implementation flaws.
-- Chris Palmer in the EFF's Deeplinks blog

Comments (2 posted)

New vulnerabilities

abcm2ps: multiple unspecified vulnerabilities

Package(s):abcm2ps CVE #(s):CVE-2010-3441
Created:February 15, 2011 Updated:November 21, 2011
Description: From the Fedora advisory:

Abcm2ps v5.9.12: Multiple unspecified security vulnerabilities

Abcm2ps v5.9.13: More multiple unspecified security vulnerabilities

Alerts:
Gentoo 201111-12 abcm2ps 2011-11-20
Fedora FEDORA-2011-1092 abcm2ps 2011-02-05

Comments (none posted)

cgiirc: cross-site scripting

Package(s):cgiirc CVE #(s):CVE-2011-0050
Created:February 10, 2011 Updated:February 16, 2011
Description:

From the Debian advisory:

Michael Brooks (Sitewatch) discovered a reflective XSS flaw in cgiirc, a web based IRC client, which could lead to the execution of arbitrary javascript.

Alerts:
Debian DSA-2158-1 cgiirc 2011-02-09

Comments (none posted)

chrome/chromium: multiple vulnerabilities

Package(s):chrome chromium CVE #(s):CVE-2011-0777 CVE-2011-0778 CVE-2011-0783 CVE-2011-0983 CVE-2011-0981 CVE-2011-0984 CVE-2011-0985
Created:February 16, 2011 Updated:August 23, 2011
Description: The Google chrome and chromium browsers prior to chrome 9.0.597.84 contain a number of vulnerabilities with denial of service or "unspecified impact" consequences.
Alerts:
Ubuntu USN-1195-1 webkit 2011-08-23
SUSE SUSE-SR:2011:009 mailman, openssl, tgt, rsync, vsftpd, libzip1/libzip-devel, otrs, libtiff, kdelibs4, libwebkit, libpython2_6-1_0, perl, pure-ftpd, collectd, vino, aaa_base, exim 2011-05-17
openSUSE openSUSE-SU-2011:0482-1 webkit 2011-05-13
Debian DSA-2188-1 webkit 2011-03-10
Fedora FEDORA-2011-1224 webkitgtk 2011-02-09
Debian DSA-2166-1 chromium-browser 2011-02-16

Comments (none posted)

chromium: multiple vulnerabilities

Package(s):chromium-browser CVE #(s):
Created:February 14, 2011 Updated:February 16, 2011
Description: Version 9.0.597.94 contains an updated version of Flash player (10.2), along with several security fixes.
Alerts:
Pardus 2011-27 chromium-browser 2011-02-12

Comments (none posted)

ffmpeg: multiple vulnerabilities

Package(s):ffmpeg mplayer CVE #(s):CVE-2010-3429 CVE-2010-4704 CVE-2010-4705
Created:February 16, 2011 Updated:September 12, 2011
Description: The ffmpeg library suffers from integer overflow and "arbitrary offset dereference" vulnerabilities which can be exploited via hostile flic and Vorbis files.
Alerts:
Gentoo 201310-13 mplayer 2013-10-25
Gentoo 201310-12 ffmpeg 2013-10-25
Debian DSA-2306-1 ffmpeg 2011-09-11
Mandriva MDVSA-2011:114 blender 2011-07-18
Mandriva MDVSA-2011:112 blender 2011-07-18
Ubuntu USN-1104-1 ffmpeg 2011-04-04
Mandriva MDVSA-2011:062 ffmpeg 2011-04-01
Mandriva MDVSA-2011:061 ffmpeg 2011-04-01
Mandriva MDVSA-2011:060 ffmpeg 2011-04-01
Mandriva MDVSA-2011:089 mplayer 2011-05-16
Mandriva MDVSA-2011:088 mplayer 2011-05-16
Debian DSA-2165-1 ffmpeg-debian 2011-02-16

Comments (none posted)

flash-player: multiple vulnerabilities

Package(s):flash-player CVE #(s):CVE-2011-0558 CVE-2011-0559 CVE-2011-0560 CVE-2011-0561 CVE-2011-0571 CVE-2011-0572 CVE-2011-0573 CVE-2011-0574 CVE-2011-0575 CVE-2011-0577 CVE-2011-0578 CVE-2011-0607 CVE-2011-0608
Created:February 10, 2011 Updated:March 22, 2011
Description:

From the Red Hat advisory:

Multiple security flaws were found in the way flash-plugin displayed certain SWF content. An attacker could use these flaws to create a specially-crafted SWF file that would cause flash-plugin to crash or, potentially, execute arbitrary code when the victim loaded a page containing the specially-crafted SWF content. (CVE-2011-0558, CVE-2011-0559, CVE-2011-0560, CVE-2011-0561, CVE-2011-0571, CVE-2011-0572, CVE-2011-0573, CVE-2011-0574, CVE-2011-0575, CVE-2011-0577, CVE-2011-0578, CVE-2011-0607, CVE-2011-0608)

Alerts:
Gentoo 201110-11 adobe-flash 2011-10-13
Red Hat RHSA-2011:0368-01 flash-plugin 2011-03-21
SUSE SUSE-SA:2011:011 acroread 2011-03-07
openSUSE openSUSE-SU-2011:0156-1 acroread 2011-03-07
SUSE SUSE-SA:2011:009 flash-player 2011-02-14
Red Hat RHSA-2011:0206-01 flash-plugin 2011-02-09
openSUSE openSUSE-SU-2011:0109-1 flash-player 2011-02-10

Comments (none posted)

italc: remote system breach

Package(s):italc CVE #(s):CVE-2011-0724
Created:February 11, 2011 Updated:February 16, 2011
Description: From the Ubuntu advisory:

Stéphane Graber discovered that the iTALC private keys shipped with the Edubuntu Live DVD were not correctly regenerated once Edubuntu was installed. If an iTALC client was installed with the vulnerable keys, a remote attacker could gain control of the system. Only systems using keys from the Edubuntu Live DVD were affected.

Alerts:
Ubuntu USN-1061-1 italc 2011-02-11

Comments (none posted)

java: denial of service

Package(s):java-1.6.0-openjdk CVE #(s):CVE-2010-4476
Created:February 11, 2011 Updated:July 22, 2011
Description: From the Red Hat advisory:

A denial of service flaw was found in the way certain strings were converted to Double objects. A remote attacker could use this flaw to cause Java-based applications to hang, for instance if they parse Double values in a specially-crafted HTTP request.

Alerts:
Gentoo 201406-32 icedtea-bin 2014-06-29
Gentoo 201111-02 sun-jdk 2011-11-05
SUSE SUSE-SU-2011:0823-1 IBM Java 2011-07-22
SUSE SUSE-SR:2011:008 java-1_6_0-ibm, java-1_5_0-ibm, java-1_4_2-ibm, postfix, dhcp6, dhcpcd, mono-addon-bytefx-data-mysql/bytefx-data-mysql, dbus-1, libtiff/libtiff-devel, cifs-mount/libnetapi-devel, rubygem-sqlite3, gnutls, libpolkit0, udisks 2011-05-03
CentOS CESA-2011:0336 tomcat5 2011-04-14
CentOS CESA-2011:0214 java-1.6.0-openjdk 2011-04-14
Mandriva MDVSA-2011:054 java-1.6.0-openjdk 2011-03-27
SUSE SUSE-SA:2011:014 java-1_6_0-ibm,java-1_5_0-ibm,java-1_4_2-ibm 2011-03-22
SUSE SUSE-SA:2011:024 java-1_4_2-ibm 2011-05-13
Ubuntu USN-1079-3 openjdk-6b18 2011-03-17
Ubuntu USN-1079-2 openjdk-6b18 2011-03-15
Red Hat RHSA-2011:0336-01 tomcat5 2011-03-09
Red Hat RHSA-2011:0335-01 tomcat6 2011-03-09
Ubuntu USN-1079-1 openjdk-6 2011-03-01
Red Hat RHSA-2011:0290-01 java-1.6.0-ibm 2011-02-22
Red Hat RHSA-2011:0291-01 java-1.5.0-ibm 2011-02-22
Red Hat RHSA-2011:0292-01 java-1.4.2-ibm 2011-02-22
SUSE SUSE-SA:2011:010 java-1_6_0-sun 2011-02-22
openSUSE openSUSE-SU-2011:0126-1 java-1_6_0-sun 2011-02-22
Red Hat RHSA-2011:0282-01 java-1.6.0-sun 2011-02-17
Debian DSA-2161-2 openjdk-6 2011-02-14
Debian DSA-2161-1 openjdk-6 2011-02-13
Fedora FEDORA-2011-1231 java-1.6.0-openjdk 2011-02-10
Fedora FEDORA-2011-1263 java-1.6.0-openjdk 2011-02-10
Red Hat RHSA-2011:0214-01 java-1.6.0-openjdk 2011-02-10

Comments (none posted)

kernel: information disclosure

Package(s):kernel CVE #(s):CVE-2010-4655
Created:February 16, 2011 Updated:July 6, 2011
Description: An initialization flaw in the ethtool ioctl() handler could disclose information to a local user with the CAP_NET_ADMIN capability.
Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
Ubuntu USN-1202-1 linux-ti-omap4 2011-09-13
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
Debian DSA-2264-1 linux-2.6 2011-06-18
Scientific Linux SL-kern-20110216 kernel 2011-02-16
Ubuntu USN-1146-1 kernel 2011-06-09
CentOS CESA-2011:0303 kernel 2011-04-14
Red Hat RHSA-2011:0421-01 kernel 2011-04-07
SUSE SUSE-SA:2011:015 kernel 2011-03-24
Red Hat RHSA-2011:0330-01 kernel-rt 2011-03-10
Red Hat RHSA-2011:0303-01 kernel 2011-03-01
Red Hat RHSA-2011:0263-01 kernel 2011-02-16

Comments (none posted)

nbd: remote code execution

Package(s):nbd CVE #(s):CVE-2011-0530
Created:February 16, 2011 Updated:June 26, 2012
Description: The developers of the nbd block device server managed to reintroduce CVE-2005-3534 - a buffer overflow enabling code execution by a remote attacker.
Alerts:
Gentoo 201206-35 nbd 2012-06-25
Ubuntu USN-1155-1 nbd 2011-06-21
SUSE SUSE-SR:2011:007 NetworkManager, OpenOffice_org, apache2-slms, dbus-1-glib, dhcp/dhcpcd/dhcp6, freetype2, kbd, krb5, libcgroup, libmodplug, libvirt, mailman, moonlight-plugin, nbd, openldap2, pure-ftpd, python-feedparser, rsyslog, telepathy-gabble, wireshark 2011-04-19
openSUSE openSUSE-SU-2011:0193-2 nbd 2011-04-18
SUSE SUSE-SR:2011:005 hplip, perl, subversion, t1lib, bind, tomcat5, tomcat6, avahi, gimp, aaa_base, build, libtiff, krb5, nbd, clamav, aaa_base, flash-player, pango, openssl, subversion, postgresql, logwatch, libxml2, quagga, fuse, util-linux 2011-04-01
Debian DSA-2183-1 nbd 2011-03-04
Fedora FEDORA-2011-1108 nbd 2011-02-05
Fedora FEDORA-2011-1097 nbd 2011-02-05

Comments (none posted)

openssh: hash collision attacks

Package(s):openssh CVE #(s):CVE-2011-0539
Created:February 14, 2011 Updated:February 16, 2011
Description: From the Pardus advisory:

The key_certify function in usr.bin/ssh/key.c in OpenSSH 5.6 and 5.7, when generating legacy certificates using the -t command-line option in ssh-keygen, does not initialize the nonce field, which might allow remote attackers to obtain sensitive stack memory contents or make it easier to conduct hash collision attacks.

Alerts:
Pardus 2011-40 openssh 2011-02-14

Comments (none posted)

openssl: denial of service

Package(s):openssl CVE #(s):CVE-2011-0014
Created:February 11, 2011 Updated:May 19, 2011
Description: From the openssl advisory:

Incorrectly formatted ClientHello handshake messages could cause OpenSSL to parse past the end of the message.

This issue applies to the following versions:
1) OpenSSL 0.9.8h through 0.9.8q
2) OpenSSL 1.0.0 through 1.0.0c

The parsing function in question is already used on arbitrary data so no additional vulnerabilities are expected to be uncovered by this. However, an attacker may be able to cause a crash (denial of service) by triggering invalid memory accesses.

Alerts:
SUSE SUSE-SU-403 openSSL 2012-01-05
Gentoo 201110-01 openssl 2011-10-09
SUSE SUSE-SR:2011:005 hplip, perl, subversion, t1lib, bind, tomcat5, tomcat6, avahi, gimp, aaa_base, build, libtiff, krb5, nbd, clamav, aaa_base, flash-player, pango, openssl, subversion, postgresql, logwatch, libxml2, quagga, fuse, util-linux 2011-04-01
openSUSE openSUSE-SU-403 openssl 2011-03-28
Fedora FEDORA-2011-5876 mingw32-openssl 2011-04-23
Fedora FEDORA-2011-5865 mingw32-openssl 2011-04-23
Fedora FEDORA-2011-1255 openssl 2011-02-10
Ubuntu USN-1064-1 openssl 2011-02-15
Mandriva MDVSA-2011:028 openssl 2011-02-15
Fedora FEDORA-2011-1273 openssl 2011-02-10
Debian DSA-2162-1 openssl 2011-02-14
Slackware SSA:2011-041-04 openssl 2011-02-11
Red Hat RHSA-2011:0677-01 openssl 2011-05-19

Comments (none posted)

pam: multiple vulnerabilities

Package(s):pam CVE #(s):CVE-2010-3430 CVE-2010-3431 CVE-2010-4706
Created:February 14, 2011 Updated:May 31, 2011
Description: From the Pardus advisory:

The privilege-dropping implementation in the (1) pam_env and (2) pam_mail modules in Linux-PAM (aka pam) 1.1.2 does not perform the required setfsgid and setgroups system calls, which might allow local users to obtain sensitive information by leveraging unintended group permissions, as demonstrated by a symlink attack on the .pam_environment file in a user's home directory. NOTE: this vulnerability exists because of an incomplete fix for CVE-2010-3435. (CVE-2010-3430)

The privilege-dropping implementation in the (1) pam_env and (2) pam_mail modules in Linux-PAM (aka pam) 1.1.2 does not check the return value of the setfsuid system call, which might allow local users to obtain sensitive information by leveraging an unintended uid, as demonstrated by a symlink attack on the .pam_environment file in a user's home directory. NOTE: this vulnerability exists because of an incomplete fix for CVE-2010-3435. (CVE-2010-3431)

The pam_sm_close_session function in pam_xauth.c in the pam_xauth module in Linux-PAM (aka pam) 1.1.2 and earlier does not properly handle a failure to determine a certain target uid, which might allow local users to delete unintended files by executing a program that relies on the pam_xauth PAM check. (CVE-2010-4706)

Alerts:
Gentoo 201206-31 pam 2012-06-25
Ubuntu USN-1140-2 pam 2011-05-31
Ubuntu USN-1140-1 pam 2011-05-30
Pardus 2011-41 pam 2011-02-14

Comments (none posted)

patch: arbitrary file creation

Package(s):patch CVE #(s):CVE-2010-4651
Created:February 14, 2011 Updated:September 14, 2012
Description: From the Pardus advisory:

It was discovered that the patch utility allowed '..' in path names which could allow an attacker to create arbitrary files using a specially-crafted patch file.

Alerts:
Ubuntu USN-2651-1 patch 2015-06-22
Slackware SSA:2012-257-02 patch 2012-09-13
Fedora FEDORA-2011-1269 patch 2011-02-10
Fedora FEDORA-2011-1272 patch 2011-02-10
Pardus 2011-28 patch 2011-02-12

Comments (none posted)

php: multiple vulnerabilities

Package(s):mod_php php-cli php-common CVE #(s):CVE-2010-4697 CVE-2010-4698
Created:February 10, 2011 Updated:May 5, 2011
Description:

From the Pardus advisory:

CVE-2010-4697: Use-after-free vulnerability in the Zend engine in PHP before 5.2.15 and 5.3.x before 5.3.4 might allow context-dependent attackers to cause a denial of service (heap memory corruption) or have unspecified other impact via vectors related to use of __set, __get, __isset, and __unset methods on objects accessed by a reference.

CVE-2010-4698: Stack-based buffer overflow in the GD extension in PHP before 5.2.15 and 5.3.x before 5.3.4 allows context-dependent attackers to cause a denial of service (application crash) via vectors related to the imagepstext function and invalid anti-aliasing.

Alerts:
Debian DSA-2408-1 php5 2012-02-13
Gentoo 201110-06 php 2011-10-10
SUSE SUSE-SR:2011:006 apache2-mod_php5/php5, cobbler, evince, gdm, kdelibs4, otrs, quagga 2011-04-05
openSUSE openSUSE-SU-2011:0276-1 php5 2011-04-01
Ubuntu USN-1126-2 php5 2011-05-05
Ubuntu USN-1126-1 php5 2011-04-29
Pardus 2011-26 mod_php php-cli php-common 2011-02-09

Comments (none posted)

php: multiple vulnerabilities

Package(s):mod_php php-cli php-common CVE #(s):CVE-2011-0752 CVE-2011-0753 CVE-2011-0755
Created:February 14, 2011 Updated:April 5, 2011
Description: From the Pardus advisory:

The extract function in PHP before 5.2.15 does not prevent use of the EXTR_OVERWRITE parameter to overwrite (1) the GLOBALS superglobal array and (2) the this variable, which allows context-dependent attackers to bypass intended access restrictions by modifying data structures that were not intended to depend on external input. (CVE-2011-0752)

Race condition in the PCNTL extension in PHP before 5.3.4, when a user-defined signal handler exists, might allow context-dependent attackers to cause a denial of service (memory corruption) via a large number of concurrent signals. (CVE-2011-0753)

Integer overflow in the mt_rand function in PHP before 5.3.4 might make it easier for context-dependent attackers to predict the return values by leveraging a script's use of a large max parameter, as demonstrated by a value that exceeds mt_getrandmax. (CVE-2011-0755)

Alerts:
Gentoo 201110-06 php 2011-10-10
SUSE SUSE-SR:2011:006 apache2-mod_php5/php5, cobbler, evince, gdm, kdelibs4, otrs, quagga 2011-04-05
openSUSE openSUSE-SU-2011:0276-1 php5 2011-04-01
Pardus 2011-35 mod_php php-cli php-common 2011-02-12

Comments (none posted)

phpmyadmin: multiple vulnerabilities

Package(s):phpmyadmin CVE #(s):CVE-2011-0986 CVE-2011-0987
Created:February 14, 2011 Updated:February 25, 2011
Description: From the Mandriva advisory:

When the files README, ChangeLog or LICENSE have been removed from their original place (possibly by the distributor), the scripts used to display these files can show their full path, leading to possible further attacks (CVE-2011-0986).

It was possible to create a bookmark which would be executed unintentionally by other users (CVE-2011-0987).

Alerts:
Gentoo 201201-01 phpmyadmin 2012-01-04
Fedora FEDORA-2011-1373 phpMyAdmin 2011-02-13
Fedora FEDORA-2011-1408 phpMyAdmin 2011-02-13
Debian DSA-2167-1 phpmyadmin 2011-02-16
Mandriva MDVSA-2011:026 phpmyadmin 2011-02-14

Comments (none posted)

poppler: arbitrary command execution

Package(s):poppler CVE #(s):CVE-2010-4653
Created:February 14, 2011 Updated:February 16, 2011
Description: From the Pardus advisory:

Due to an integer overflow when parsing CharCodes for fonts and a failure to check the return value of a memory allocation, it is possible to trigger writes to a narrow range of offsets from a NULL pointer.

Alerts:
Gentoo 201310-03 poppler 2013-10-06
Pardus 2011-44 poppler 2011-02-14

Comments (none posted)

python-django: multiple vulnerabilities

Package(s):python-django CVE #(s):CVE-2011-0696 CVE-2011-0697
Created:February 14, 2011 Updated:October 5, 2011
Description: From the Debian advisory:

For several reasons the internal CSRF protection was not used to validate ajax requests in the past. However, it was discovered that this exception can be exploited with a combination of browser plugins and redirects and thus is not sufficient. (CVE-2011-0696)

It was discovered that the file upload form is prone to cross-site scripting attacks via the file name. (CVE-2011-0697)

Alerts:
Fedora FEDORA-2011-12481 Django 2011-09-10
Debian DSA-2163-2 dajaxice 2011-03-01
Fedora FEDORA-2011-1261 Django 2011-02-10
Fedora FEDORA-2011-1235 Django 2011-02-10
Mandriva MDVSA-2011:031 python-django 2011-02-18
Ubuntu USN-1066-1 python-django 2011-02-17
Pardus 2011-45 Django 2011-02-14
Debian DSA-2163-1 python-django 2011-02-14

Comments (none posted)

qemu-kvm: session hijack

Package(s):qemu-kvm CVE #(s):CVE-2011-0011
Created:February 15, 2011 Updated:May 2, 2011
Description: From the Ubuntu advisory:

Neil Wilson discovered that if VNC passwords were blank in QEMU configurations, access to VNC sessions was allowed without a password instead of being disabled. A remote attacker could connect to running VNC sessions of QEMU and directly control the system. By default, QEMU does not start VNC sessions.

Alerts:
Debian DSA-2230-1 qemu-kvm 2011-05-01
Red Hat RHSA-2011:0345-01 qemu-kvm 2011-03-10
Ubuntu USN-1063-1 qemu-kvm 2011-02-14

Comments (none posted)

shadow: privilege escalation

Package(s):shadow CVE #(s):CVE-2011-0721
Created:February 16, 2011 Updated:March 28, 2011
Description: The chfn and chsh utilities fail to properly sanitize user input, allowing the injection of newlines into the password file; that, in turn, allows the addition of arbitrary entries.
Alerts:
Gentoo 201412-09 racer-bin, fmod, PEAR-Mail, lvm2, gnucash, xine-lib, lastfmplayer, webkit-gtk, shadow, PEAR-PEAR, unixODBC, resource-agents, mrouted, rsync, xmlsec, xrdb, vino, oprofile, syslog-ng, sflowtool, gdm, libsoup, ca-certificates, gitolite, qt-creator 2014-12-11
Slackware SSA:2011-086-03 shadow 2011-03-28
Pardus 2011-47 shadow 2011-02-21
Debian DSA-2164-1 shadow 2011-02-16
Ubuntu USN-1065-1 shadow 2011-02-15

Comments (none posted)

tomcat: multiple vulnerabilities

Package(s):tomcat6 CVE #(s):CVE-2010-3718 CVE-2011-0013 CVE-2011-0534
Created:February 14, 2011 Updated:October 20, 2011
Description: From the Debian advisory:

It was discovered that the SecurityManager insufficiently restricted the working directory. (CVE-2010-3718)

It was discovered that the HTML manager interface is affected by cross-site scripting. (CVE-2011-0013)

It was discovered that the NIO connector performs insufficient validation of the HTTP headers, which could lead to denial of service. (CVE-2011-0534)

Alerts:
Gentoo 201206-24 tomcat 2012-06-24
Oracle ELSA-2012-0474 tomcat5 2012-04-12
CentOS CESA-2011:1845 tomcat5 2011-12-20
Oracle ELSA-2011-1845 tomcat5 2011-12-20
Scientific Linux SL-tomc-20111220 tomcat5 2011-12-20
Red Hat RHSA-2011:1845-01 tomcat5 2011-12-20
Fedora FEDORA-2011-13457 tomcat6 2011-09-29
SUSE SUSE-SR:2011:005 hplip, perl, subversion, t1lib, bind, tomcat5, tomcat6, avahi, gimp, aaa_base, build, libtiff, krb5, nbd, clamav, aaa_base, flash-player, pango, openssl, subversion, postgresql, logwatch, libxml2, quagga, fuse, util-linux 2011-04-01
Ubuntu USN-1097-1 tomcat6 2011-03-29
Red Hat RHSA-2011:0791-01 tomcat6 2011-05-19
Red Hat RHSA-2011:0335-01 tomcat6 2011-03-09
openSUSE openSUSE-SU-2011:0146-1 tomcat6 2011-03-02
Mandriva MDVSA-2011:030 tomcat5 2011-02-18
Debian DSA-2160-1 tomcat6 2011-02-13

Comments (none posted)

vlc: arbitrary command execution

Package(s):vlc CVE #(s):CVE-2011-0531
Created:February 11, 2011 Updated:April 7, 2011
Description: From the CVE entry:

demux/mkv/mkv.hpp in the MKV demuxer plugin in VideoLAN VLC media player 1.1.6.1 and earlier allows remote attackers to cause a denial of service (crash) and execute arbitrary commands via a crafted MKV (WebM or Matroska) file that triggers memory corruption, related to "class mismatching" and the MKV_IS_ID macro.

Alerts:
Gentoo 201411-01 vlc 2014-11-05
Debian DSA-2211-1 vlc 2011-04-06
Pardus 2011-39 vlc vlc-firefox 2011-02-14
Debian DSA-2159-1 vlc 2011-02-10

Comments (none posted)

vlc: arbitrary code execution

Package(s):vlc vlc-firefox CVE #(s):CVE-2011-0021
Created:February 14, 2011 Updated:February 16, 2011
Description: From the Pardus advisory:

Multiple heap-based buffer overflows in cdg.c in the CDG decoder in VideoLAN VLC Media Player before 1.1.6 allow remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code via a crafted CDG video.

Alerts:
Gentoo 201411-01 vlc 2014-11-05
Pardus 2011-39 vlc vlc-firefox 2011-02-14

Comments (none posted)

wireshark: denial of service

Package(s):wireshark CVE #(s):CVE-2011-0538
Created:February 14, 2011 Updated:April 19, 2011
Description: From the Pardus advisory:

Wireshark 1.5.0, 1.4.3, and earlier frees an uninitialized pointer during processing of a .pcap file in the pcap-ng format, which allows remote attackers to cause a denial of service (memory corruption) or possibly have unspecified other impact via a malformed file.

Alerts:
Gentoo 201110-02 wireshark 2011-10-09
SUSE SUSE-SR:2011:007 NetworkManager, OpenOffice_org, apache2-slms, dbus-1-glib, dhcp/dhcpcd/dhcp6, freetype2, kbd, krb5, libcgroup, libmodplug, libvirt, mailman, moonlight-plugin, nbd, openldap2, pure-ftpd, python-feedparser, rsyslog, telepathy-gabble, wireshark 2011-04-19
CentOS CESA-2011:0370 wireshark 2011-04-14
Debian DSA-2201-1 wireshark 2011-03-23
CentOS CESA-2011:0370 wireshark 2011-03-22
Red Hat RHSA-2011:0370-01 wireshark 2011-03-21
Red Hat RHSA-2011:0369-01 wireshark 2011-03-21
Fedora FEDORA-2011-2620 wireshark 2011-03-04
Fedora FEDORA-2011-2632 wireshark 2011-03-04
Mandriva MDVSA-2011:044 wireshark 2011-03-08
Pardus 2011-43 wireshark 2011-02-14

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.38-rc5, released on February 15. The patch volume is dropping (a bit) as this kernel stabilizes, so there is not a lot in the way of new features, but there are some important bug fixes here. Details can be found in the full changelog.

Stable updates: the 2.6.32.29 (115 patches), 2.6.36.4 (176 patches), and 2.6.37.1 (272 patches!) updates are currently in the review process; these updates can be expected on or after February 18.

Comments (none posted)

Quotes of the week

So never _ever_ mark anything "deprecated". If you want to get rid of something, get rid of it and fix the callers. Don't say "somebody else should get rid of it, because it's deprecated".

And yes, next time this discussion comes up, I _will_ remove that piece-of-sh*t. It's a disease. It's just a stupid way to say "somebody else should deal with this problem". It's a way to make excuses. It's crap. It was a mistake to ever take any of that to begin with.

-- Linus Torvalds

Hey, if that's what it takes to get __deprecated removed i'll bring it up tomorrow!!
-- Ingo Molnar

Comments (7 posted)

Remnant: The Proc Connector and Socket Filters

Scott James Remnant has posted a surprisingly detailed description of how to use the process connector to get process events from the kernel, combined with use of socket filters to reduce the information flow. "As I mentioned before, the proc connector is built on top of the generic connector and that itself is on top of netlink so sending that subscription message also involves embedding a message, inside a message, inside a message. If you understood Christopher Nolan's Inception, you should do just fine."

Comments (10 posted)

The MD roadmap

By Jonathan Corbet
February 16, 2011
Users of the MD (multiple disk or RAID) subsystem in Linux may be interested in the MD roadmap posted by maintainer Neil Brown. It discusses a number of things he has planned for MD in quite a bit of detail; as Neil put it:

A particular need I am finding for this road map is to make explicit the required ordering and interdependence of certain tasks. Hopefully that will make it easier to address them in an appropriate order, and mean that I waste less time saying "this is too hard, I might go read some email instead".

There are a lot of enhancements in the pipeline. A bad block log would allow RAID arrays to continue functioning in the presence of bad blocks without needing to immediately eject the offending drive. There is a variant on "hot replace" which would allow a new drive to be inserted before removing the old one, thus allowing the array to continue with a full complement of drives while the new one is being populated. Tracking of areas which are known not to contain useful data would reduce synchronization costs. A number of proposed enhancements to the "reshape" functionality would make it more robust and flexible and allow operations to be undone. A number of other changes are contemplated as well; see Neil's post for the full list.

Comments (4 posted)

CFS bandwidth control

By Jonathan Corbet
February 16, 2011
The CFS scheduler does its best to divide the available CPU time between contending processes, keeping the CPU utilization of each about the same. The scheduler will not, however, insist on equal utilization when there is free CPU time available; rather than let the CPU go idle, the scheduler will give any left-over time to processes which can make use of it. This approach makes sense; there is little point in throttling runnable processes when nobody else wants the CPU anyway.

Except that, sometimes, that's exactly what a system administrator may want to do. Limiting the maximum share of CPU time that a process (or group of processes) may consume can be desirable if those processes belong to a customer who has only paid for a certain amount of CPU time or in situations where it is necessary to provide strict resource-use isolation between processes. The CFS scheduler cannot limit CPU use in that manner, but the CFS bandwidth control patches, posted by Paul Turner, may change that situation.

This patch adds a couple of new control files to the CPU control group mechanism: cpu.cfs_period_us defines the period over which the group's CPU usage is to be regulated, and cpu.cfs_quota_us controls how much CPU time is available to the group over that period. With these two knobs, the administrator can easily limit a group to a certain amount of CPU time and also control the granularity with which that limit is enforced.

Paul's patch is not the only one aimed at solving this problem; the CFS hard limits patch set from Bharata B Rao provides nearly identical functionality. The implementation is different, though; the hard limits patch tries to reuse some of the bandwidth-limiting code from the realtime scheduler to impose the limits. Paul has expressed concerns about the overhead of using this code and how well it will work in situations where the CPU is almost fully subscribed. These concerns appear to have carried the day - there has not been a hard limits patch posted since early 2010. So the CFS bandwidth control patches look like the form this functionality will take in the mainline.

Comments (3 posted)

Kernel development news

Go's memory management, ulimit -v, and RSS control

By Jonathan Corbet
February 15, 2011
Many years ago, your editor ported a borrowed copy of the original BSD vi editor to VMS; after all, using EDT was the sort of activity that lost its charm relatively quickly. DEC's implementation of C for VMS wasn't too bad, so most of the port went reasonably well, but there was one hitch: the vi code assumed that two calls to sbrk() would return virtually contiguous chunks of memory. That was true on early BSD systems, but not on VMS. Your editor, being a fan of elegant solutions to programming problems, solved this one by simply allocating a massive array at the beginning, thus ensuring that the second sbrk() call would never happen. Needless to say, this "fix" was never sent back upstream (the VMS uucp port hadn't been done yet in any case) and has long since vanished from memory.

That said, your editor was recently amused by this message on the golang-dev list indicating that the developers of the Go language have adopted a solution of equal elegance. Go has memory management and garbage collection built into it; the developers believe that this feature is crucial, even in a systems-level programming language. From the FAQ:

One of the biggest sources of bookkeeping in systems programs is memory management. We feel it's critical to eliminate that programmer overhead, and advances in garbage collection technology in the last few years give us confidence that we can implement it with low enough overhead and no significant latency.

In the process of trying to reach that goal of "low enough overhead and no significant latency," the Go developers have made some simplifying assumptions, one of which is that the memory being managed for a running application comes from a single, virtually-contiguous address range. Such assumptions can run into the same problem your editor hit with vi - other code can allocate pieces in the middle of the range - so the Go developers adopted the same solution: they simply allocate all the memory they think they might need (they figured, reasonably, that 16GB should suffice on a 64-bit system) at startup time.

That sounds like a bit of a hack, but an effort has been made to make things work well. The memory is allocated with an mmap() call, using PROT_NONE as the protection parameter. This call is meant to reserve the range without actually instantiating any of the memory; when a piece of that range is actually used by the application, the protection is changed to make it readable and writable. At that point, a page fault on the pages in question will cause real memory to be allocated. Thus, while this mmap() call will bloat the virtual address size of the process, it should not actually consume much more memory until the running program actually needs it.

This mechanism works fine on the developers' machines, but it runs into trouble in the real world. It is not uncommon for users to use ulimit -v to limit the amount of virtual memory available to any given process; the purpose is to keep applications from getting too large and causing the entire system to thrash. When users go to the trouble to set such limits, they tend, for some reason, to choose numbers rather smaller than 16GB. Go applications will fail to run in such an environment, even though their memory use is usually far below the limit that the user set. The problem is that ulimit -v does not restrict memory use; it restricts the maximum virtual address space size, which is a very different thing.

One might argue that, given what users typically want to do with ulimit -v, it might make more sense to have it restrict resident set size instead of virtual address space size. Making that change now would be an ABI change, though; it would also make Linux inconsistent with the behavior of other Unix-like systems. Restricting resident set size is also simply harder than restricting the virtual address space size. But even if this change could be made, it would not help current users of Go applications, who may not update their kernels for a long time.

One might also argue that the Go developers should dump the continuous-heap assumption and implement a data structure which allows allocated memory to be scattered throughout the virtual address space. Such a change also appears not to be in the cards, though; evidently that assumption makes enough things easy (and fast) that they are unwilling to drop it. So some other kind of solution will need to be found. According to the original message, that solution will be to shift allocations for Go programs (on 64-bit systems) up to a range of memory starting at 0xf800000000. No memory will be allocated until it is needed; the runtime will simply assume that nobody else will take pieces of that range in between allocations. Should that assumption prove false, the application will die messily.

For now, that assumption is good; the Linux kernel will not hand out memory in that range unless the application asks for it explicitly. As with many things that just happen to work, though, this kind of scheme could break at any time in the future. Kernel policy could change, the C library might begin doing surprising things, etc. That is always the hazard of relying on accidental, undocumented behavior. For now, though, it solves the problem and allows Go programs to run on systems where users have restricted virtual address space sizes.

It's worth considering what a longer-term solution might look like. If one assumes that Go will continue to need a large, virtually-contiguous heap, then we need to find a way to make that possible. On 64-bit systems, it should be possible; there is a lot of address space available, and the cost of reserving unused address space should be small. The problem is that ulimit -v is not doing exactly what users are hoping for; it regulates the maximum amount of virtual memory an application can use, but it has relatively little effect on how much physical memory an application consumes. It would be nice if there were a mechanism which controlled actual memory use - resident set sizes - instead.

As it turns out, we have such a mechanism in the memory controller. Even better, this controller can manage whole groups of processes, meaning that an application cannot increase its effective memory limit by forking. The memory controller is somewhat resource-intensive to use (though work is being done to reduce its footprint) and, like other control group-based mechanisms, it's not set up to "just work" by default. With a bit of work, though, the memory controller could replace ulimit -v and do a better job as well. With a suitably configured controller running, a Go process could run without limits on address space size and still be prevented from driving the system into thrashing. That seems like a more elegant solution, somehow.
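For illustration, such a configuration with the cgroup v1 memory controller might look like the fragment below; the mount point and group name are assumptions, and the details vary between distributions:

```shell
# Hypothetical sketch: cap a group of processes at 512MB of actual memory
# use, rather than capping their address space as ulimit -v does.
mount -t cgroup -o memory memcg /sys/fs/cgroup/memory
mkdir /sys/fs/cgroup/memory/goapps
echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/memory/goapps/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/goapps/tasks   # this shell and its children
```

A Go process placed in that group could map as much virtual address space as it liked; only its actual memory use would be constrained.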

Comments (13 posted)

Security modules and ioctl()

By Jonathan Corbet
February 16, 2011
The ioctl() system call has a bad reputation for a number of reasons, most of which are related to the fact that every implemented command is, in essence, a new system call. There is no way to effectively control what is done in ioctl(), and, for many obscure drivers, no way to really even know what is going on without digging through a lot of old code. So it's not surprising that code adding new ioctl() commands tends to be scrutinized heavily. Recently it turned out that there's another reason to be nervous about ioctl() - it doesn't play well with security modules, and SELinux has been treating it incorrectly for the last couple of years.

SELinux works by matching a specific access attempt against the permissions granted to the calling process. For system calls like write(), the type of access is obvious - the process is attempting to write to an object. With ioctl(), things are not quite so clear. In past times, SELinux would attempt to deal with ioctl() calls by looking at the specific command to figure out what the process was actually trying to do; a FIBMAP command, for example (which reads a map of a file's block locations) would be allowed to proceed if the calling process had the permission to read the file's attributes.

There are a couple of problems with this approach, starting with the fact that the number of possible ioctl() commands is huge. Even without getting into obscure commands implemented by a single driver, trying to enumerate them all and determine their effects is a road to madness. But it gets worse, in that the intended behavior of a given command may not match what a specific driver actually does in response to that command. So the only way to really know what an ioctl() command will do is to figure out what driver is behind the call, and to have some knowledge of what each driver does. Simply creating this capability is not a task for sane people; maintaining it would not be a task for anybody wanting to remain sane. So security module developers were looking for a better way.

They thought they had found one when somebody realized that the command codes used by ioctl() implementations are not random numbers. They are, instead, a carefully-crafted 32-bit quantity which includes an 8-bit "type" field (approximately identifying the driver implementing the command), a driver-specific command code, a pair of read/write bits, and a size field. Using the read/write bits seemed like a great way to figure out what sort of access the ioctl() call needed without actually understanding the command. Thus, a patch to SELinux was merged for 2.6.27 which ripped out the command recognition and simply used the read/write bits in the command code to determine whether a specific call should be allowed or not.

That change remained for well over two years until Eric Paris noticed that, in fact, it made no sense at all. Most ioctl() calls involve the passing of a data structure into or out of the kernel; that structure describes the operation to be performed or holds data returned from the kernel - or both. The size field in the command code is the size of this structure, and the permission bits describe how the structure will be accessed by the kernel. Together, that information can be used by the core ioctl() code to determine whether the calling process has the proper access rights to the memory behind the pointer passed to the kernel.

What those bits do not do, as Eric pointed out, is say anything about what the ioctl() call will do to the object identified by the file descriptor passed to the kernel. A call passing read-only data to the kernel may reformat a disk, while a call with writable data may just be querying hardware information. So using those bits to determine whether the call should proceed is unlikely to yield good results. It's an observation which seems obvious when spelled out in this way, but none of the developers working on security noticed the problem at the time.

So that code has to go - but, as of this writing, it has not been changed in the mainline kernel. There is a simple reason for that: nobody really knows what sort of logic should replace it. As discussed above, using the read/write bits is not the right approach, but simply enumerating command codes along with their expected behavior is not a feasible solution either. So something else needs to be devised, but it's not clear what that will be.

Stephen Smalley pointed out one approach which was posted back in 2005. That patch required drivers (and other code implementing ioctl()) to provide a special table associating each command code with the permissions required to execute the command. The obvious objections were raised at that time: changing every driver in the system would be a pain, ioctl() implementations are already messy enough as it is, the tables would not be maintained as the driver changed, and so on. The idea was eventually dropped. Bringing it back now seems unlikely to make anybody popular, but there is probably no other way to truly track what every ioctl() command is actually doing. That knowledge resides exclusively in the implementing code, so, if we want to make use of that knowledge elsewhere, it needs to be exported somehow.

Of course, the alternative is to conclude that (1) ioctl() is a pain, and (2) security modules are a pain. Perhaps it's better to just give up and hope that discretionary access controls, along with whatever checks may be built into the driver itself, will be enough. That is, essentially, the solution we have now.

Comments (8 posted)

Hierarchical group I/O scheduling

By Jonathan Corbet
February 15, 2011
There has recently been much attention paid to the group CPU scheduling feature built into the Linux kernel. Using group scheduling, it is possible to ensure that some groups of processes get a fair share of the CPU without being crowded out by a rather larger number of CPU-intensive processes in a different group. Linux has supported this feature for some years, but it has languished in relative obscurity; it is only with recent efforts to make group scheduling "just work" that it has started to come into wider use. As it happens, the kernel has a very similar feature for managing access to block I/O devices which is also, arguably, underused. I/O group scheduling, though, is not as completely implemented as its CPU counterpart; some ongoing work may change that situation.

The "completely fair queueing" (CFQ) I/O scheduler tries to divide the available bandwidth on any given device fairly between the processes which are contending for that device. "Bandwidth" is measured not in the number of bytes transferred, but the amount of time that each process gets to submit requests to the queue; in this way, the code tries to penalize processes which create seek-heavy I/O patterns. (There is also a mode based solely on the number of I/O operations submitted, but your editor suspects it sees relatively little use). The CFQ scheduler also supports group scheduling, but in an incomplete way.

[Group hierarchy]

Imagine the group hierarchy shown on the right; here we have three control groups (plus the default root group), and four processes running within those groups. If every process were contending fully for the available I/O bandwidth, and they all had the same I/O priority, one would expect that bandwidth to be split equally between P0, Group1, and Group2; thus P0 should get twice as much I/O bandwidth as either P1 or P3. If more processes were to be added to the root, they should be able to take I/O bandwidth at the expense of the processes in the other control groups. Similarly, the creation of new control groups underneath Group1 should not affect anybody outside of that branch of the hierarchy. In current kernels, though, that is not how things work.
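If CFQ implemented the full hierarchy, configuring that kind of split would be a matter of proportional weights in the blkio controller. A hypothetical sketch (cgroup v1 paths, invented group names, process IDs left symbolic):

```shell
# Equal weights: each group competes as a single entity against
# processes, such as P0, that remain in the root group.
mkdir /sys/fs/cgroup/blkio/group1 /sys/fs/cgroup/blkio/group2
echo 500 > /sys/fs/cgroup/blkio/group1/blkio.weight
echo 500 > /sys/fs/cgroup/blkio/group2/blkio.weight
echo $P1_PID > /sys/fs/cgroup/blkio/group1/tasks
echo $P2_PID > /sys/fs/cgroup/blkio/group1/tasks
```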

With the current implementation of CFQ group scheduling, the above hierarchy is transformed into something that looks like this:

[No Hierarchy]

The CFQ group scheduler currently treats all groups - including the root group - as being equal, at the same level in the hierarchy. Every group is a top-level group. This level of grouping will be adequate for a number of situations, but there will be other users who want the full hierarchical model. That is why control groups were made to be hierarchical in the first place, after all.

The hierarchical CFQ group scheduling patch set from Gui Jianfeng aims to make that feature available. These patches introduce a new cfq_entity structure which is used for the scheduling of both processes and groups; it is clearly modeled after the sched_entity structure used in the CPU scheduling code. With this in place, the I/O scheduler can just give bandwidth to the top-level cfq_entity which has run up the least "vdisktime" so far; if that entity happens to be a group, the scheduling code drops down a level and repeats the process. Sooner or later, the entity which is scheduled for I/O will be an actual process, and the scheduler can start dispatching I/O requests.

This patch set is on its fourth revision; the previous iterations have led to significant changes. A few things still appear to need fixing, but this work seems to be getting closer to being ready.

One thing is worth bearing in mind: there are two I/O bandwidth controllers in contemporary Linux kernels: the proportional bandwidth controller (built into the CFQ scheduler) and the throttling controller built into the block layer. The group scheduling changes only apply to the proportional bandwidth controller. Arguably there is less need for full group scheduling with the throttling controller, which puts absolute caps on the bandwidth available to specific processes.

Controlling I/O bandwidth has a lot of applications; providing some isolation between customers on a shared hosting service is an obvious example. But this feature may yet prove to have value on the desktop as well; many interactivity problems come down to contention for I/O bandwidth. Anybody who has tried to start an office suite while simultaneously copying a video image on the same drive understands how bad it can be. If the group I/O scheduling feature can be made to "just work" like the group CPU scheduling, we will have taken another step toward a truly responsive Linux desktop.

Comments (1 posted)

Patches and updates

Kernel trees

Architecture-specific

Build system

Core kernel code

Device drivers

Documentation

Filesystems and block I/O

Memory management

Security-related

Benchmarks and bugs

Miscellaneous

Page editor: Jonathan Corbet

Distributions

First look at Ubuntu "Natty" and the state of Unity

February 14, 2011

This article was contributed by Joe 'Zonker' Brockmeier.

Ubuntu's 11.04 release ("Natty Narwhal") is going to be an important inflection point for the project, and for Canonical. The company is banking on its users, and potential users, embracing a user interface (Unity) that differs significantly from that of the previous Ubuntu release, as well as from other familiar desktop UIs. Further, the target release date is less than three months away and significant chunks of the Unity interface are still unfinished. The second alpha release on February 3 shows promise, but there is significant work left to be done.

[Unity]

The most interesting, or at least most visible, change is in the shift to Unity. Canonical began work on Unity during the 10.10 cycle for the Ubuntu Netbook Remix. Despite the less-than-exuberant reception for Unity on 10.10, where some vendors opted to remain on 10.04 for netbooks, Canonical decided to push ahead and make Unity the default shell in 11.04 rather than adopting GNOME Shell from GNOME 3.0.

Why has Canonical chosen to take this route instead of GNOME Shell? In part because of differing visions for the desktop. Ubuntu developer Jorge Castro pointed to different ideas, for example, about Application Indicators. While GNOME Shell and Unity have some similarities, the projects also diverge significantly. Unity was initially designed to use the new GNOME window manager (Mutter), but Mark Shuttleworth has said that Canonical was unhappy with its performance, which has led to using Compiz instead. There were also problems with getting the Zeitgeist data engine fully integrated with upstream GNOME.

The second alpha lives up to the alpha name. You don't expect an alpha to be ready for prime time, but this one has more bugs than expected for an Ubuntu development release, due to Unity development and some major shifts in underlying packages. Specifically, the alpha was pushed out very shortly after the transition to X Server 1.10 and the rest of the X.org stack, which breaks the proprietary Nvidia and ATI drivers and causes a few bugs when using the Intel drivers as well.

Booting the standard desktop ISO to install or test 11.04 alpha 2 on many systems (or under VirtualBox or VMware) is unlikely to result in much joy due to the changes in the X.org stack. This, however, is likely to be resolved by the time that the third alpha ships in March. For determined developers and testers, it is possible to get a working install. Users who have been running the first Natty alpha will escape the problems in the transition, as the upgrade won't replace the affected X packages. I was also able to upgrade a system running Ubuntu 10.10 in place to Natty without problems, though it required manually installing the Nouveau driver to be able to use the default Unity interface. Unity no longer depends on Mutter (as it did in 10.10), and instead uses Compiz.

Unity's UI consists of the Launcher on the left-hand side of the screen, a Panel at the top of the screen, and a Home button (also referred to as the Big Freaking Button) on the extreme left of the panel. The BFB brings up (or should bring up) the Dash (dashboard) with applications and a search bar that allows the user to search the system for applications, files, etc. In this alpha, however, it simply brings up a blank Dash that's approximately the size of a netbook screen. Castro said that it will eventually be re-sizable so users can expand it to fit the whole screen or just part of the screen at their preference.

The Launcher holds icons or items, which can be for individual applications (such as Firefox) or "Places." What's a Place? One example is the Application place which should display the most used applications as a top row and then all installed applications grouped by category, or displayed alphabetically. But the hope is that developers will create Places that are much more specialized. Castro described it to me as "like a Firefox special search on steroids." Eventually, Castro says, developers should be able to create Places for all manner of things — one example would be an IMDB "place" that would allow users to search IMDB via a launcher and see results in an overlay from the Launcher.

The top panel implements a global application menu that works with most applications. This means that instead of displaying the standard "File, Edit, View," etc. menu items in each window, they are displayed in the Panel. This works with standard GNOME and Qt applications, but there are some outliers — like Firefox, LibreOffice, and Eclipse to name just three — that don't use GTK or Qt menuing. For Firefox (and Thunderbird) this is being implemented as an extension by Chris Coulson that should be ready in time for 11.04. However, it seems likely that there will be at least some percentage of applications that will not quite fit in the standard Unity UI for some time.

Whether the switch to a global application menu is preferable or not is left as an exercise for the reader. The per-window menu mode is deeply ingrained for many of us, so even when the menu works properly for all applications it's going to take some getting used to. Having it implemented for most, but not all, applications is likely to irritate many users.

[Workspace viewer]

Unity also has a workspace switcher that allows users to view all workspaces in a tiled view, move applications back and forth between workspaces, or switch between them. This is not dissimilar to the way that GNOME Shell works, or Spaces in Mac OS X.

Overall, the release (if you can get it running) is usable but not entirely stable. A helpful tip: if Unity crashes but the desktop session remains open, you can restart and refresh Unity with unity --refresh. You have to run this from a terminal emulator, though, as Unity does not yet implement a run dialog that can be invoked with Alt-F2. Castro said that they're likely to use the GNOME Completion-Run Utility, but it hasn't been decided yet.

Though not yet implemented in the alpha, by the time 11.04 ships, there will be an API in place for applications to have a progress meter and/or number on the launcher. If you've used an iOS or Android device, you've probably seen something similar with the application icons on those devices. Castro says that the idea is to stop cluttering the system tray with application-specific notifications and move them to the application icons, keeping system-level notifications and controls (such as the sound volume or network indicators) in the system tray. A mockup can be found on Castro's post about the libunity library. One might wonder what happens to applications that implement these features when they run on other distributions without libunity. Castro says that they'll still run fine without any problems, though without the notifications.

What if you don't have supported 3D hardware? Natty will fall back to the standard GNOME 2.32 interface, even though Canonical is working on a 2D Unity interface based on Qt for Ubuntu on ARM. Why not default to Unity 2D for the x86/AMD64 releases of 11.04 as well? The primary issue is the space required for the Qt libraries on the installation CD; the plan now is to make room for them in time for Ubuntu 11.10.

Users won't be seeing an option for GNOME 3.0 in 11.04, either. In fact, they won't even see the option in the Software Center. The decision was made in mid-January and announced by Sebastien Bacher on the ubuntu-desktop list, where Bacher said "we don't feel integrating GNOME3 with a high quality level in Ubuntu is a job which can be done in one cycle and we prefer to delay it to be default next cycle."

Specifically, Bacher says that "it's not really possible to bring some updated components or [software] in without bringing the GNOME3 desktop" which left the desktop team to decide whether to switch to GNOME 3 in the 11.04 cycle. The decision ultimately was to remain on GNOME 2.32, which is the basis for Ubuntu's 2D fallback. There's also the small matter that GNOME 3.0 would probably not be ready in time for the feature freeze for 11.04 toward the end of February. At any rate, users will need to seek out a Personal Package Archive (PPA) for GNOME 3.0 on 11.04 if they prefer that interface. Castro did indicate that Ubuntu was open to making available an Ubuntu-based release with GNOME 3.0 at some point if there were contributors interested in doing the work.

For contributors interested in working on Unity, there's plenty of room. The project has a collection of small bugs and projects under the "bitesize" label that should be a good option for new contributors. It should be noted, however, that even "bite-sized" patches require agreement to Canonical's contributor agreement, which is less than universally loved by free and open source software developers.

Though buggy and incomplete, the implementation of Unity as it stands now looks interesting. It's unlikely to appeal to GNOME 2.x stalwarts, but it's unclear whether GNOME 3.0 will either. It's an interface that may appeal to non-Linux users, if Canonical can find hardware partners to ship it pre-installed.

Comments (60 posted)

Brief items

Distribution quotes of the week

Actually power users install Gentoo from memory, it's really not much more, than untarring two tarfiles, editing a few configs, making the kernel and installing the bootloader.
-- Antoni Grzymala

To cut a long story short - lots of people who use centos dont understand what the project is about, what we do, why we do it and how they can help. On the other hand, we also seem unable to hold people's attention ( and i mean people at large, not just the centos community ) in order to get them thinking about the project ( and not the distro, remember project != distro, needs of the hour are trivial, needs for the project to sustain and exist are more important ).
-- Karanbir Singh

Comments (1 posted)

CyanogenMod 7 release candidates available

A set of release candidates for CyanogenMod 7 - a rebuilt and enhanced version of the Android "Gingerbread" release - has been announced. "These are builds that are feature-complete and fairly well tested, but still have some minor tweaking that needs done. You should find them stable for everyday use though!" This terse changelog gives some sense of what's included in this release.

Comments (25 posted)

The first Mageia alpha is available

The Mageia project has announced the availability of the first alpha release of its Mandriva fork. By all indications, this release will be on the rough side; it seems to be, as much as anything, a celebration of the fact that the project's build and distribution systems are now up and running. "We know that this release may not impress you that much, nor will it bring anything revolutionary for the moment and this is not one of our goals yet; as we first plan to have a rock solid factory and system."

Comments (10 posted)

Mandriva 2011 Alpha 1

Mandriva has released the first Alpha of Mandriva 2011. "As promised some weeks ago, the Mandriva 2011 Alpha1 is following the lead of Mandriva 2011 Technical Preview, sorting out some of the issues we noticed in it." There are a number of updated packages, including kernel 2.6.37, GCC 4.5.2 (with plugins enabled by default), systemd 17 (enabled and activated by default), and more.

Comments (none posted)

Oracle Linux 6 released

Oracle Linux 6 was first released for customers with an Oracle Linux support subscription. Then RPMs were made available on the public yum server and DVD images were also published. "Oracle Linux 6 is free to download, install and use. The full release notes are here..." Oracle Linux 6 comes with the "Unbreakable Enterprise Kernel" by default. A Red Hat compatible kernel built from RHEL source is also available.

Comments (none posted)

Community Support Expands for Red Hat Enterprise Linux 6

The Extra Packages for Enterprise Linux (EPEL) project has announced the release of EPEL 6. "A community project, EPEL 6 is a collection of open source projects packaged specifically for Red Hat Enterprise Linux 6, which was released in November 2010, and other compatible systems. These supplementary applications, tools and libraries are maintained and supported by volunteers for the convenience and advancement of the community. Though EPEL is under the umbrella of the Fedora Project, it is not commercially supported by Red Hat."

Comments (none posted)

Distribution News

Debian GNU/Linux

Debian volatile replaced by new updates suite

The Debian Volatile archive was discontinued with the release of Debian 6.0 ("Squeeze"). "It is replaced by the suite squeeze-updates on the official mirrors. Its management will move to the Debian Release Team, who already manage regular updates to Debian stable and oldstable."

Full Story (comments: none)

Fedora

Planet Edited launches

Planet Edited is a blog aggregator for Fedora-related content. "The adjective edited came from the fact that this planet will be maintained and edited by a group of people (the editors), that will make sure appropriate and relevant content gets posted."

Full Story (comments: none)

Ubuntu family

Natty Schedule Adjustments

Ubuntu's Natty Narwhal (11.04) will have a second beta instead of a release candidate. "After reviewing the plans at the end of this release, it was felt that a release candidate release on April 21st showing up just before the easter holiday would be a bit late." Beta 2 is scheduled for April 14.

Full Story (comments: none)

Get ready for Ubuntu Developer Week

Ubuntu Developer Week will be happening February 28 - March 4 in #ubuntu-classroom on irc.freenode.net. There will be speakers and sessions on Getting Started with Ubuntu Development, Rocking with Unity, and several other topics.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Getting Started with MeeGo (Linux.com)

Over at Linux.com, Nathan Willis looks at how to get involved with MeeGo, either by working on the core of the distribution itself or by writing applications for the platform. "You can also use the community OBS to host a personal or team package repository, which makes it easy to distribute your work to testers and end users. The community OBS will power the MeeGo Garage, a third-party application repository for community-created, open source applications. It has not launched yet, but it is a direct descendant of the successful Maemo Garage project. You certainly are not required to use the community OBS or MeeGo garage, however: you can distribute your applications individually, or through third-party repositories or 'app stores.'"

Comments (30 posted)

Banshee Amazon Store disabled in Ubuntu 11.04 by Canonical (Network World)

Over at Network World, Joe "Zonker" Brockmeier describes a change to the default music store for Banshee in Ubuntu 11.04. "Banshee is a media/music player for Linux that has support for purchasing music via Amazon MP3. The revenues have always gone directly to the GNOME Foundation. Historically, the default music player in Ubuntu has been Rhythmbox, but that's changing in 11.04 to Banshee. The problem, at least as Canonical seems to see it? Amazon MP3 support in Banshee competes with Ubuntu's own offering, Ubuntu One — which also has support for purchasing music. The alternative? Canonical offered to leave the Amazon Store on by default, but take a 75% cut by changing the affiliate code and then passing a paltry 25% on to GNOME. The good news is that Canonical conferred with the Banshee team and gave them the option, so they elected to disable the store by default. Users can re-enable it if they are aware of the Amazon store, but defaults are powerful: Many users may never even realize that the Amazon store is an option."

Comments (87 posted)

Page editor: Rebecca Sobol

Development

FOSDEM: Configuration management

February 16, 2011

This article was contributed by Koen Vervloesem

The 2011 FOSDEM conference had a Configuration and Systems Management developer room on its second day. This first meeting about configuration management and automation with open source tools was organized by the people from Puppet Labs and had a focus on Puppet, but other tools like Chef and Cfengine were also discussed.

Configuration management is about establishing and maintaining consistency of a system throughout its life. For software, this means that the system has to track and control all configuration changes, which can be the contents of files in /etc, the installation of specific packages, file permissions, users, and so on. Having a configuration management tool for your systems is useful in a lot of ways: you can automatically repair a system's configuration after a failure, you can easily reproduce a specific configuration on another system, you can audit changes, and, if you pair the configuration management system with a version control system like Git, you can always return to a known-good configuration if things go wrong. Where configuration management systems really shine is when you have a large number of systems networked together: by automating the configuration, you save the system administrator's time and you're sure that all systems are configured consistently.

The big three configuration management systems for Linux are Puppet (used by Red Hat, Citrix, and the Los Alamos National Laboratory), Chef (used by Engine Yard, 37signals, and Scribd), and Cfengine 3 (used by Facebook, AMD, and the Joint Australia Tsunami Warning Centre). Puppet and Chef are broadly similar in architecture, but Puppet has a language designed specifically for the task of describing resources, while Chef uses the general-purpose programming language Ruby to configure resources. Also, Chef seems to be more aimed at developers who want to deploy their web applications, and it doesn't support as many platforms as Puppet does. Cfengine is the grandfather of these configuration management systems (with Cfengine 3 being a total rewrite); one of its advantages is a lower memory footprint and higher performance than Puppet and Chef, but in recent years its popularity has declined. Other configuration management systems represented in the developer room were FusionInventory, GLPI, and OPSI.

A meta-distribution

In his case study about Linux system engineering in air traffic control, Stefan Schimanski showed how scalable Puppet really is and how it can guarantee reliable mass deployment of the Linux-based, mission-critical applications needed in air traffic control centers. Air traffic is growing yearly, so the number of computer systems that have to handle these flights is also growing, as is the workload for the system administrators. Moreover, the systems really need 24/7/365 high availability: if they go down for 30 minutes, air traffic control has a really big problem. For example, if a computer in a control center freezes, the operator is essentially blind.

These strong requirements coupled with the growing number of servers mean that air traffic control centers need automatic installations of every system with minimal downtime and fast rollbacks. Moreover, all informal requirements documents, described by non-technical people, should be converted into formal specifications of the configuration of the system, to be able to standardize the systems and make their configuration reproducible. Therefore, Schimanski rethought his system engineering approach in 2010 and turned to Puppet.

One thing that Puppet makes easy is distinguishing between the abstract requirements and the concrete implementation. For each node, the system administrator can define how the node has to be configured in an abstract way, e.g. by including classes for a desktop node, a server node, a webserver node, and so on. By reading these node definitions, you can easily see what the node is supposed to be doing, without having to bother with the concrete implementation, which is written in separate files for these classes. For example, the webserver class installs and configures Apache and also includes the configuration of the server class. Moreover, according to Schimanski a good Puppet configuration introduces traceability, which is essential in that kind of environment: "If someone asks where requirement #91 of the requirements document is implemented, it's easy to point out the Puppet code that implements this."
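A minimal sketch of that layering might look like the following; all of the class and node names are invented for illustration and are not from Schimanski's actual configuration:

```puppet
# Abstract node definition: says *what* the node is, not *how*.
node 'www1.example.com' {
  include webserver
}

# Concrete implementation, kept in separate class files.
class server {
  include ntp     # baseline services every server gets
  include ssh
}

class webserver {
  include server  # a webserver is also a generic server

  package { 'httpd':
    ensure => installed,
  }
  service { 'httpd':
    ensure  => running,
    enable  => true,
    require => Package['httpd'],
  }
}
```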

Another interesting idea that Schimanski introduced in his talk was the concept of a meta-distribution: the air traffic control systems are implemented as SUSE Linux Enterprise and Red Hat Enterprise Linux servers, but the Linux distribution itself is completely interchangeable. The AutoYaST or Kickstart files of the installation are minimal, and almost all configuration is done in the form of Puppet modules, e.g. for NTP and other services. The result is a heavily customized enterprise Linux distribution, but all these customizations are documented in a completely formal way. Schimanski explains the rationale behind this approach:

We don't want to depend on one operating system, so if, hypothetically, Novell stops the development of SUSE Linux Enterprise, we could migrate our systems to Red Hat Enterprise Linux or even Ubuntu Server in only four days without redoing all the configuration work.

To a certain degree, Puppet modules can be written in an operating system independent way. There are always some minor differences, such as where the distribution puts its configuration files, but this can be abstracted away with variables that get their value (e.g. the file path) depending on the operating system. Of course you have to check these little things before migrating to another operating system, so it's not effortless, but according to Schimanski, Puppet makes migrating a lot easier.
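As a hypothetical sketch, such an abstraction can be built with a selector on Puppet's $operatingsystem fact; the package names and file paths below are the usual ones for the Red Hat and Debian families, but they are assumptions to be verified rather than details from the talk:

```puppet
class webserver::params {
  # Pick distribution-specific values once; the rest of the
  # module only ever refers to these variables.
  $apache_package = $operatingsystem ? {
    'RedHat' => 'httpd',
    'CentOS' => 'httpd',
    'Debian' => 'apache2',
    'Ubuntu' => 'apache2',
  }
  $apache_config = $operatingsystem ? {
    'RedHat' => '/etc/httpd/conf/httpd.conf',
    'CentOS' => '/etc/httpd/conf/httpd.conf',
    'Debian' => '/etc/apache2/apache2.conf',
    'Ubuntu' => '/etc/apache2/apache2.conf',
  }
}
```

A module written this way only needs its parameters class checked and extended when a new distribution is brought in.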

The Puppet ecosystem

The talks also showed that a nice ecosystem of tools is developing around Puppet. For example, Henrik Lindberg gave a demo of Geppetto, a new Eclipse-based project developing tools to simplify the process of authoring and using Puppet manifests and modules. The near-term objectives of the project are flattening the learning curve for new Puppet users, supporting best practices, and encouraging the sharing of Puppet modules. Under the hood, Geppetto has a grammar for the Puppet DSL (Domain Specific Language), written with Xtext. Thanks to Xtext, that automatically yields an Eclipse editor which knows the Puppet language, offers syntax coloring, code completion, and code folding, and flags syntax errors and warnings. Moreover, when creating a Puppet module you can enter metadata and choose dependencies; at the end, you can export the module to a zip file that can be uploaded to the Puppet Forge. The Geppetto integrated development environment can be downloaded as a stand-alone product for Linux, Windows, or Mac OS X, or as a separate plug-in for Eclipse.

Another rising star in the Puppet ecosystem is Foreman, presented by its creator Ohad Levy, who joined the ranks of Red Hat in August 2010 as a principal software engineer in its cloud team. The project is now a year and a half old and has 20 contributors; according to Levy, Foreman will at some point be part of Red Hat's cloud portfolio. Foreman integrates with Puppet and acts as a web-based dashboard for it, providing real-time information about the status of hosts based on Puppet reports, statistics, and so on. Moreover, Foreman takes care of the low-level details of setting up machines and installing the Puppet client on them, until Puppet is able to take over the configuration defined in your Puppet modules. It even supports creating virtual machines using the libvirt API, with RHEV-M and Amazon EC2 support in the works. The largest installation managed by Foreman that Levy knows about runs 4000 active hosts. This is clearly a project to watch, as it is backed by Red Hat and has the potential to make managing an environment with Puppet a lot easier.

Configuration management is not only useful for system administrators installing servers, but also for developers setting up their development environments. Gareth Rushgrove talked about using configuration management tools to get new employees up and running quickly with a development virtual machine. Especially interesting was his coverage of Vagrant, a tool for automated virtual machine creation on top of Oracle's VirtualBox. By automatically provisioning the virtual environments with Puppet or Chef, developers can get a complete development environment up and running in no time. Users can configure Vagrant to forward ports to the host machine, set up shared folders, and so on. It is also possible to package an environment in a distributable box; rebuilding a complete environment from scratch, or tearing it down when you're done, takes a single command. Normally users start by downloading a base box to use with Vagrant (the default one is Ubuntu Lucid Lynx), but they can also build their own base box with a tool like VeeWee.

Lessons for disaster recovery

While Puppet was clearly the most visible configuration management system at FOSDEM, it was not the only one. Joshua Timberman, Sr. Technical Evangelist at Opscode (the creators of Chef), gave a short "Chef 101" talk, followed by an overview of how to use Chef to deploy applications with nothing but the source code repository and data about the application configuration. Traditionally, one deploys applications with tools like tar, rsync, and (in the Ruby world) cap deploy, but what do you then do about the server configuration needed for web servers, load balancers, and database servers? Timberman showed how web applications, along with their corresponding servers, can easily be deployed using various server roles configured in Chef cookbooks. The Chef server itself is a lightweight Ruby on Rails application; the largest Chef deployment that Timberman knows about has 5000 nodes checking in to the Chef server every 30 minutes.

The first talk of the day came from Nicolas Charles and Jonathan Clarke, who presented their use of Cfengine at their company, Normation, focusing on their experiences with disaster recovery. All of their services (web, email, Git repository, Redmine, ...) were running on one hosted server. It used a three-disk RAID 5 array, with daily backups, separate virtual machines for each service, and all services automatically installed and configured using Cfengine 3.

When two hard drives failed simultaneously, they first thought the repair would be easy, since they had backups and a configuration management system. It turned out, however, that they had forgotten a few things. For example, they had neither automated nor backed up the configuration of the virtual machines, so those had to be re-created manually. And after watching all the services come back online with the right configuration thanks to Cfengine 3, they still had to restore the backups by hand, only to discover that a couple of files were missing. The three big lessons: describe your virtualization setup in your configuration management system, tie your configuration management system into your backup tool, and always test your backups.

The system administrator as glue

The best quote summarizing the "don't reinvent the wheel" approach of configuration management came from Levy's talk: "Automate as many processes as possible, using best practices where available, and act as the glue between the gaps." In this regard, it is interesting to know that anyone can share their Chef "cookbooks" (packages of "recipes") on cookbooks.opscode.com, and Puppet users can share their Puppet modules on the Puppet Forge. This is great for new users, who can study the modules of other users and reuse them in their own infrastructure. Your author had already automated some of the services on his home network with Puppet, and this configuration management track at FOSDEM was inspiring enough to continue that approach and decrease the amount of glue in his network.

Comments (7 posted)

Brief items

Quotes of the week

I wonder what you are supposed to do with end-users who insist on mailing you personally, with blindingly obvious suggestions for improvement, and who when you politely point out that there is no shortage of good ideas only developer time (which they are wasting right now), and can they go to the discuss list, instead reply with yet another set of time wasting waffle; sigh.
-- Michael Meeks

Let's discuss a real world scenario. As you know we like to help The Department of Homeland Security impede the plans of American travelers. Just the other day, I saw a security guard discover day old sushimi in a tourist's pocket. He confiscated the tuna fish and ate it immediately. That's when I thought of using Go to screen travellers who had recently eaten sushi. This will help security guards fish out spoiled tuna to eat, which will surely lead to indigestion problems and subsequent longer processing times.
-- Charles Thompson

This is fascinating turn of events for C# developers as Nokia will make WP7 more relevant in the marketplace, making C# the lingua-franca of all major mobile operating systems. This astute chart explains why I am basking in joy.
-- Miguel de Icaza

Unequivocally, Qt is not dead. This morning we heard top Nokia executives like CTO Rich Green talk about Qt and the future. Qt will continue to live on through Symbian, MeeGo and the non-mobile Qt industries and platforms.
-- Aron Kozak

Comments (8 posted)

GNU Guile 2.0.0 released

GNU Guile is an implementation of the Lisp-like Scheme language; version 2.0.0 has been released. The interpreter has been reimplemented as a compiler and a virtual machine, yielding a significant performance improvement; other changes include ECMAScript and Emacs Lisp support, a new debugger, support for "hygienic macros," Unicode support, a new dynamic foreign function interface (for calling code implemented in C), a better garbage collector, and more. See the announcement (click below) for details.

Full Story (comments: 8)

GParted 0.8.0 Released

Version 0.8.0 of the GParted partition table editor is available. The main change in this release appears to be a mechanism to look for lost partitions on a device and recover them.

Full Story (comments: none)

GTK+ 3.0.0 released

The developers of the GTK+ toolkit have just celebrated the 3.0.0 release; GNOME 3.0 has just gotten that much closer. Needless to say, a lot has changed: use of Cairo throughout for drawing, updated input device handling, better theming, better application support, and more. See the announcement (click below) for more information, or see the FAQ or the migration guide.

Full Story (comments: 8)

OpenShot 1.3.0 released

Version 1.3.0 of the OpenShot video editor is available. The project's web site and the release notes are rather terse on what this release brings: "Version 1.3.0 brings with it lots of bug fixes, a new user interface theme (called Fresh), stock icons, video upload support for YouTube and Vimeo, new 3D animations (Snow, Lens Flare, Particle Effects), and more timeline and interface animations." Some more information, with screen shots, can be found in this Ubuntu Vibes article.

Comments (9 posted)

Parrot 3.1.0 "Budgerigar" Released

Version 3.1.0 of the Parrot multi-language virtual machine has been released. Changes include improved garbage collection performance, working Ruby support, and IPv6 support.

Full Story (comments: none)

TileMill 0.1.4

TileMill is "a tool for cartographers to quickly and easily design maps for the web using custom data. It is built on the powerful open-source map rendering library Mapnik - the same software OpenStreetMap and MapQuest use to make some of their maps. TileMill is not intended to be a general-purpose cartography tool, but rather focuses on streamlining and simplifying a narrow set of use cases." See this weblog entry for an introduction to what TileMill can do and a bunch of screenshots.

Comments (none posted)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Getting Your Feet Wet with Blender: A Short Guide to Understanding Blender (Linux.com)

Nathan Willis strives for a basic understanding of the 3D content creation suite, Blender. "Blender's toolbox provides multiple ways to construct objects - assembling them out of primitive solids, extruding and transforming meshes, drawing shapes with 3D bezier curves, even "sculpting" existing parts as if they were clay. Step one is getting familiar with Blender's modes - because the screen itself is two-dimensional, the app has to offer a separate mode for moving and manipulating the models within the scene, and for moving and manipulating the faces, edges, and vertices of the objects. Otherwise, there is no clear way to distinguish between clicking the cursor on an object and clicking the cursor on the face of the object."

Comments (2 posted)

Peters: Learning to write JavaScript

Now that she has started working at Mozilla, Stormy Peters decided she needed to write a web application. To that end, she started learning JavaScript. "Trouble shooting JavaScript was not always easy. If I was getting someone started with JavaScript, I'd set up their development environment and explain the tools first. Firebug, the Firefox Console and alerts ended up being my friends. Before I do more JavaScript development, I'll explore some more debugging tools." While the post is (obviously) JavaScript-specific, the approach she took could also be applied to learning other scripting languages.

Comments (14 posted)

Page editor: Jonathan Corbet

Announcements

Brief items

MPEG LA Announces Call for Patents Essential to VP8 Video Codec

The MPEG Licensing Authority (LA) is trying to form a patent pool around patents that apply to the VP8 video codec, which is part of Google's WebM open web media push. "In order to participate in the creation of, and determine licensing terms for, a joint VP8 patent license, any party that believes it has patents that are essential to the VP8 video codec specification is invited to submit them for a determination of their essentiality by MPEG LA's patent evaluators. At least one essential patent is necessary to participate in the process, and initial submissions should be made by March 18, 2011. Although only issued patents will be included in the license, in order to participate in the license development process, patent applications with claims that their owners believe are essential to the specification and likely to issue in a patent also may be submitted."

Comments (59 posted)

Canonical announces a component catalog for Linux

Canonical has announced the release of a component catalog that lists Linux-compatible devices. "With this database, corporate buyers can specify the design of their Ubuntu desktops or servers from manufacturers much more efficiently. Individuals can be sure that the key components of the machine they are considering will work with their preferred Ubuntu or Linux distribution. The PC and server industry will also have a simple single source to publicize the work that they do in certifying Linux components and making that knowledge freely available." This looks to be a great resource, but it does not seem to make any distinction between free and binary-only driver support.

Comments (17 posted)

LiMo Foundation Unveils LiMo 4

LiMo Foundation has announced the launch of LiMo 4. "LiMo 4 makes extensive use of best of breed technologies from leading open source projects. LiMo's Open Source Policy also promotes strong bilateral engagement with these projects in the interests of maintenance efficiency and market access for future open source innovation. It is planned that LiMo 4 code will become available for public download from July 2011."

Comments (7 posted)

LibreOffice Community starts 50,000 Euro challenge for setting-up its foundation

The community around LibreOffice has announced its fifty-thousand Euro challenge for setting-up The Document Foundation as a legal entity. "The race for funds is open until March 21st 2011, which marks the beginning of Spring in the northern hemisphere. All users - especially enterprises - are invited to donate to the capital stock of the future foundation."

Full Story (comments: none)

Articles of interest

Decentralizing the Internet So Big Brother Can't Find You (NY Times)

The New York Times looks at the Freedom Box Foundation. There will be little new here for most LWN readers, but it's nice to see the effort getting wider attention. "Mr. Moglen said that if he could raise 'slightly north of $500,000,' Freedom Box 1.0 would be ready in one year."

Comments (13 posted)

Open Source Hardware Definition 1.0 published (The H)

The H takes a look at version 1.0 of the Open Source Hardware Definition. "The definition, based in part on the OSI's Open Source Definition, covers the requirements for availability of documentation, necessary software and optional attribution. It also covers what attributes the licence used should have such as not being specific to a product, not restricting other hardware or software and being technology neutral."

Comments (none posted)

Linux Supercomputer is a Contestant on Jeopardy (LinuxPlanet)

LinuxPlanet covers the IBM Watson game playing supercomputer. "The IBM Watson supercomputer runs on 10 racks of IBM POWER 750 Servers that can be powered by a number of operating systems including IBM's own AIX Unix operating system as well as Linux. IBM chose Linux and more specifically, Novell's SUSE Linux Enterprise Server (SLES) as the underlying operating system for Watson."

Comments (5 posted)

What Nokia's Windows move means for Open Source (ZDNet)

Steven J. Vaughan-Nichols talks with some of Nokia's open-source partners about MeeGo. "In particular, although Nokia has said it will continue to support MeeGo, Intel, Nokia's chief MeeGo partner was not pleased. In a statement Intel said: "While we are disappointed with Nokia's decision, Intel is not blinking on MeeGo. We remain committed and welcome Nokia's continued contribution to MeeGo open source.""

Comments (54 posted)

Education and Certification

Linux Professional Institute develops Academic Program in Malaysia

The Linux Professional Institute (LPI) has announced a new initiative to promote Linux training within colleges and universities in Malaysia. "This initiative by LPI affiliate, LPI-Asia Pacific will enable post-secondary academic programs in Malaysia to adopt LPI training as part of their regular IT curriculum. LPI-APAC is working with the Department of Higher Education of the Ministry of Higher Education of Malaysia in introducing this program to both private and public educational institutions within Malaysia."

Full Story (comments: none)

Calls for Presentations

LAC2011: Paper deadline coming closer

There are only a few days left to submit a paper for the 2011 Linux Audio Conference in Maynooth, Ireland (May 6-8, 2011). The submission deadline is February 20.

Full Story (comments: none)

Upcoming Events

SCALE Update: UpSCALE Talks/Birds of a Feather

The 9th annual Southern California Linux Expo has announced the schedule for its UpSCALE talks, to be held on February 25. There are still slots available for Birds of a Feather sessions to be held on February 25-26.

Full Story (comments: none)

Linux Foundation End User Summit Program Announced

The Linux Foundation has announced the program for its End User Summit which takes place March 1-2, 2011, in Jersey City, New Jersey.

Full Story (comments: none)

Record-breaking submissions to PyCon 2011

The PyCon team has reported a record number of submissions for PyCon 2011 (March 9-17 in Atlanta, Georgia). Click below for links to the lists of talks, tutorials, sprints and keynotes.

Full Story (comments: none)

The Android Builders Summit announced

The Linux Foundation has announced the first Android Builders Summit, to be held April 13 and 14 in San Francisco, immediately after the Embedded Linux Conference. "Android is expanding to an increasing number of industry segments in addition to smart phones and tablets. There is a need for the ecosystem of builders to collaborate on a common solution for existing limitations and desired features across all of these device categories."

Comments (1 posted)

Events: February 24, 2011 to April 25, 2011

The following event listing is taken from the LWN.net Calendar.

February 25: Build an Open Source Cloud (Los Angeles, CA, USA)
February 25-27: Southern California Linux Expo (Los Angeles, CA, USA)
February 25: Ubucon (Los Angeles, CA, USA)
February 26: Open Source Software in Education (Los Angeles, CA, USA)
March 1-2: Linux Foundation End User Summit 2011 (Jersey City, NJ, USA)
March 5: Open Source Days 2011 Community Edition (Copenhagen, Denmark)
March 7-10: Drupalcon Chicago (Chicago, IL, USA)
March 9-11: ConFoo Conference (Montreal, Canada)
March 9-11: conf.kde.in 2011 (Bangalore, India)
March 11-13: PyCon 2011 (Atlanta, Georgia, USA)
March 19: Open Source Conference Oita 2011 (Oita, Japan)
March 19-20: Chemnitzer Linux-Tage (Chemnitz, Germany)
March 19: OpenStreetMap Foundation Japan Mappers Symposium (Tokyo, Japan)
March 21-22: Embedded Technology Conference 2011 (San Jose, Costa Rica)
March 22-24: OMG Workshop on Real-time, Embedded and Enterprise-Scale Time-Critical Systems (Washington, DC, USA)
March 22-25: Frühjahrsfachgespräch (Weimar, Germany)
March 22-24: UKUUG Spring 2011 Conference (Leeds, UK)
March 22-25: PgEast PostgreSQL Conference (New York City, NY, USA)
March 23-25: Palmetto Open Source Software Conference (Columbia, SC, USA)
March 26: 10. Augsburger Linux-Infotag 2011 (Augsburg, Germany)
March 28 - April 1: GNOME 3.0 Bangalore Hackfest | GNOME.ASIA SUMMIT 2011 (Bangalore, India)
March 28: Perth Linux User Group Quiz Night (Perth, Australia)
March 29-30: NASA Open Source Summit (Mountain View, CA, USA)
April 1-3: Flourish Conference 2011! (Chicago, IL, USA)
April 2-3: Workshop on GCC Research Opportunities (Chamonix, France)
April 2: Texas Linux Fest 2011 (Austin, Texas, USA)
April 4-5: Camp KDE 2011 (San Francisco, CA, USA)
April 4-6: SugarCon ’11 (San Francisco, CA, USA)
April 4-6: Selenium Conference (San Francisco, CA, USA)
April 6-8: 5th Annual Linux Foundation Collaboration Summit (San Francisco, CA, USA)
April 8-9: Hack'n Rio (Rio de Janeiro, Brazil)
April 9: Linuxwochen Österreich - Graz (Graz, Austria)
April 9: Festival Latinoamericano de Instalación de Software Libre
April 11-14: O'Reilly MySQL Conference & Expo (Santa Clara, CA, USA)
April 11-13: 2011 Embedded Linux Conference (San Francisco, CA, USA)
April 13-14: 2011 Android Builders Summit (San Francisco, CA, USA)
April 16: Open Source Conference Kansai/Kobe 2011 (Kobe, Japan)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds